\section{Introduction}
Quasar absorption systems have historically been divided largely into
three physically distinct categories: (i) absorption systems with
strong metal lines arising in or near intervening galaxies, (ii) weak
{\rm H}~{\sc i}\ systems in the Ly$\alpha$\ forest that come from the intergalactic
medium, and (iii) intrinsic systems that are physically related to the
quasars, including associated and broad absorption line systems.
Metal absorption systems usually contain {\rm H}~{\sc i}\ lines with relatively
large column densities, including two sub-categories: damped Lyman
alpha (DLA) systems and Lyman limit systems (LLSs), with {\rm H}~{\sc i}\ column
densities of $\log$($N_{\rm H\;I}$/[cm$^{-2}$]) $>$ 20.2 and 17.16,
respectively. Deep imaging observations around quasars have provided
evidence that the metal absorption systems are often produced in
intervening galaxies. Galaxies have been detected that can explain
\ion{Mg}{2} absorption lines (e.g., Bergeron \& Boiss\'e 1991), and
\ion{C}{4} absorption lines (e.g., Chen, Lanzetta, \& Webb 2001).
On the other hand, nearly all {\rm H}~{\sc i}\ lines have smaller column densities
($\log N_{HI}$\ $\leq$ 15) than those found in gas that shows
strong metal lines. The number of {\rm H}~{\sc i}\ lines per unit $z$ increases
with redshift (Peterson 1978; Weymann et al. 1998a; Bechtold 1994),
because the intergalactic medium is denser and less ionized at higher
$z$. The Ly$\alpha$\ absorption lines are produced in intergalactic clouds
(e.g., Sargent et al. 1980; Melott 1980), which are the higher density
regions of the intergalactic medium (IGM). The Ly$\alpha$\ lines are
broadened by Hubble flow (e.g., Rauch 1998; Kim et al. 2002a) as well
as by Doppler broadening from the gas temperature.
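For reference (a standard relation, not specific to our data set), for
purely thermal broadening the Doppler parameter of an {\rm H}~{\sc i}\ line is
$$b_{T} = \left(\frac{2k_{B}T}{m_{\rm H}}\right)^{1/2} \simeq 12.9
\left(\frac{T}{10^{4}~{\rm K}}\right)^{1/2}~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi} ,$$
so a purely thermal line with $b$ = 30~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ corresponds to
$T$ $\sim$ 5$\times$$10^{4}$~K.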
Misawa et al. (2004) presented a study of {\rm H}~{\sc i}\ absorption lines seen
towards 40 quasars in spectra from the Keck HIRES spectrograph. In a
departure from prior work, they considered the {\rm H}~{\sc i}\ lines without
reference to the metal lines. They classified the {\rm H}~{\sc i}\ lines as either
high density lines (HDLs), which are or lie near strong {\rm H}~{\sc i}\ lines
that are probably related to galaxies, or low density lines (LDLs),
which are far from any strong {\rm H}~{\sc i}\ lines and are more likely to be far
from galaxies and hence in the IGM.
Following Misawa et al. (2004), we define HDLs as all {\rm H}~{\sc i}\ lines
within $\pm$ 200~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of a line with $15$ $<$ $\log N_{HI}$\ $<$
19~cm$^{-2}$. We define LDLs as lines with 12 $<$ $\log N_{HI}$\ $<$ 15 which are
within 200 -- 1000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of a line with $15 < $ $\log N_{HI}$\ $<$ 19~cm$^{-2}$.
This last velocity constraint is intended to make the LDL a
``control'' sample for the HDLs, where the two samples come from
similar redshifts and regions of the spectra with similar signal to
noise.
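These selection rules amount to a simple classification algorithm. As
an illustration only (a hypothetical helper, not part of our actual
reduction pipeline), the following Python sketch tags each fitted line
given its column density and velocity:
\begin{verbatim}
import numpy as np

def classify_lines(log_N, v):
    # Tag each H I line as 'HDL', 'LDL', or None (unclassified).
    # log_N : log10(N_HI / cm^-2) of each fitted line
    # v     : line velocities (km/s) on a common scale
    log_N, v = np.asarray(log_N), np.asarray(v)
    strong = (log_N > 15) & (log_N < 19)   # lines that define HDL regions
    tags = [None] * len(log_N)
    for i in range(len(log_N)):
        if strong[i]:
            tags[i] = 'HDL'                # strong lines are HDLs themselves
        elif strong.any():
            dv = np.min(np.abs(v[i] - v[strong]))
            if dv <= 200:
                tags[i] = 'HDL'            # within +-200 km/s of a strong line
            elif dv <= 1000 and 12 < log_N[i] < 15:
                tags[i] = 'LDL'            # 200--1000 km/s "control" sample
    return tags
\end{verbatim}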
Misawa et al. (2004) discovered that the HDLs have smaller Doppler
parameters ($b$-values) for a given column density than the LDLs, and
they also found the same effect in a hydrodynamic simulation with 2.7
kpc cells. Misawa et al. (2004) suggested that the LDLs are cool or
shock-heated diffuse intergalactic gas, and that the HDLs are cooler
dense gas near to galaxies.
Misawa et al. (2004) fit all the accessible transitions in the {\rm H}~{\sc i}\
Lyman series to help de-blend {\rm H}~{\sc i}\ lines. Their main sample comprised
86 {\rm H}~{\sc i}\ absorption systems each with $\log N_{HI}$\ $>$ 15 cm$^{-2}$. They also
fit all {\rm H}~{\sc i}\ lines within $\pm$1,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of these {\rm H}~{\sc i}\ lines, to
give a total sample of 1339 {\rm H}~{\sc i}\ lines, including the 86 lines. This
is the only large sample in which multiple Lyman series lines are fit
together, although this method has been used on individual systems and
small samples (Songaila et al. 1994; Tytler, Fan, \& Burles 1996;
Wampler et al. 1996; Carswell et al. 1996; Songaila et al. 1997;
Burles \& Tytler 1998a, 1998b; Burles, Kirkman, \& Tytler 1999; Kirkman
et al. 2000; O'Meara et al. 2001; Kirkman et al. 2003; Kim et
al. 2002b; Janknecht et al. 2006).
In this paper, we present measurements of the absorption lines that
Misawa et al. (2004) have analyzed. We give a detailed description of
each absorption system and we summarize new results. The paper is
organized as follows: In \S\ref{sec:data}, we give descriptions of the
data and the line fitting. The results of our statistical analyses are
presented in \S\ref{sec:results}. We discuss our results in
\S\ref{sec:discussion}, and summarize them in \S\ref{sec:summary}. In
the Appendix, we describe the properties and we give velocity plots
for each {\rm H}~{\sc i}\ system. We use a cosmology in which $H_{0}=72$ {\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}
Mpc$^{-1}$, $\Omega_{m}$ = 0.3, and $\Omega_{\Lambda}$ = 0.7.
\section{Spectra and Line Fitting\label{sec:data}}
The 40 quasars in our sample have either DLA systems or LLSs, and they
were observed as a part of a survey for measurements of the deuterium
to hydrogen (D/H) abundance ratio. A detailed description of the
observations and data reduction is presented in Misawa et
al. (2004). We caution that our sample was biased in subtle ways by
the selection of LLS and DLAs that seemed more likely to show D, i.e.
those with simpler velocity structure.
We list the 40 quasars in Table~1. Column (1) is the quasar name, (2)
the emission redshift. Columns (3) and (4) are the optical magnitude
in the $V$ and $R$ bands. The lower and upper wavelength limits of the
spectra are presented in columns (5) and (6). Column (7) gives the S/N
ratio at the center of the spectrum. The same data set was also used in
Misawa et al. (2007) in a study of the quasar intrinsic absorption
lines.
We will discuss only {\rm H}~{\sc i}\ lines with $\log N_{HI}$\ $>$ 15 cm$^{-2}$\ and other
weaker {\rm H}~{\sc i}\ lines within $\pm$1,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of these strong {\rm H}~{\sc i}\
lines. We selected this velocity range since it is large enough to include
the conspicuous clustering of strong metal lines. Indeed such strong
metal lines are normally confined to an interval of $ <400$~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ even
for DLA systems (Lu et al. 1996b).
Here we briefly review the line detection and fitting procedures that
we discussed in more detail in Misawa et al. (2004). We began by
searching the literature for {\rm H}~{\sc i}\ lines with $\log N_{HI}$\ $>$ 15,
including the DLA and LLS catalogues (Sargent, Steidel, \& Boksenberg
1989, hereafter SSB; Lanzetta 1991; Tytler 1982), and metal absorption
systems (P\'eroux et al. 2001; Storrie-Lombardi et al. 1996;
Petitjean, Rauch, \& Carswell 1994; Lu et al. 1993; Steidel \& Sargent
1992; Lanzetta et al. 1991; Barthel, Tytler, \& Thomson 1990; Steidel
1990a,b; Sargent, Boksenberg, \& Steidel 1988, hereafter SBS; SSB). We
also searched for them ourselves. If more than one strong {\rm H}~{\sc i}\ line was
detected in a single 2000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ velocity window, we took the position
of the {\rm H}~{\sc i}\ line with the largest column density (hereafter the ``main
component'') as the system center. We found 86 {\rm H}~{\sc i}\ systems with $\log N_{HI}$\
$>$ 15, at 2.1 $<$ {\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}\ $<$ 4.0, in 31 of the 40 quasars. Figure~1
of Misawa et al. (2004) gives the velocity plot of one of these
systems, and below we give the rest.
We give parameters describing these 86 systems in Table~2.
Successive columns list (1) the name of the quasar; (2) the redshift
of the main component, that with the largest column density; (3) the
{\rm H}~{\sc i}\ column density of the main component $N_{1}$; (4) $N_{2}$, the
second largest {\rm H}~{\sc i}\ column density within $\pm 1000$ {\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of the main
component; (5) the ratio of $N_{2}$ to $N_{1}$; (6) -- (9) the S/N
ratios at Ly$\alpha$, Ly$\beta$, Ly$\epsilon$, and Ly10; (10) the number of lines in the
$\pm$ 1,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ window; (11) the number of {\rm H}~{\sc i}\ lines classified as
HDLs (described later); (12) comments on the {\rm H}~{\sc i}\ system; (13)
references. We will call this list sample S0 (Table~3).
When we were fitting the lines, we rejected narrow lines with Doppler
parameter of $b$ $<$ 4.8 {\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}, which corresponds to the resolution of
our spectra. We also identify all lines with $b$ $<$ 15 {\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ as
possible metal lines (called \ion{M}{1} in Tables) because {\rm H}~{\sc i}\ lines
with this narrow width are rare (e.g., Hu et al. 1995, hereafter H95;
Lu et al. 1996a, hereafter L96; Kirkman \& Tytler 1997a, hereafter
KT97). If there was more than one way to fit the lines, we chose the
fit with the fewest lines. If the model did not give good fits to all
the Lyman series lines, we adopted the model that best fit the lower
order lines, where the S/N ratio is best. For {\rm H}~{\sc i}\ lines with column
densities of $\log N_{HI}$\ $\geq$ 16.6 the Lyman continuum optical depth is
$\tau$ $\geq$ 0.25. For these systems we checked that the residual flux
at the Lyman limit was consistent with the fitted column densities. Our
fitting method could readily overestimate the Doppler parameter but not
the column density. Once the fitting model was chosen, we used
$\chi^{2}$ minimization in a code
written by David Kirkman, to get the best fit parameters of {\rm H}~{\sc i}\
column density ($\log N_{HI}$), Doppler parameter ($b$), and absorption
redshift ($z$). The internal errors are typically
$\sigma$($\log N_{HI}$)=0.09~cm$^{-2}$, $\sigma$($b$)=2.1~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi} , and
$\sigma$($z$)=2.5$\times$$10^{-5}$.
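To illustrate this step, the sketch below fits a single, unblended
Ly$\alpha$\ line by $\chi^{2}$ minimization. It is only a toy stand-in
(not the code written by David Kirkman used for the actual fits); the
atomic constants are standard, while the noise level and initial
guesses are arbitrary assumptions:
\begin{verbatim}
import numpy as np
from scipy.special import wofz
from scipy.optimize import curve_fit

LAM0  = 1215.67     # Ly-alpha rest wavelength (Angstrom)
F_OSC = 0.4164      # oscillator strength
GAMMA = 6.265e8     # damping constant (s^-1)

def voigt_tau(vel, logN, b):
    # Optical depth at velocity vel (km/s) from line center for an
    # H I Ly-alpha line with column density 10**logN cm^-2 and
    # Doppler parameter b (km/s).
    a = GAMMA * (LAM0 * 1e-8) / (4 * np.pi * b * 1e5)
    H = wofz(vel / b + 1j * a).real          # Voigt-Hjerting function
    tau0 = 1.497e-2 * 10**logN * F_OSC * (LAM0 * 1e-8) / (b * 1e5)
    return tau0 * H

def model_flux(vel, logN, b, v0):
    return np.exp(-voigt_tau(vel - v0, logN, b))

# toy data: a logN = 14.0, b = 25 km/s line observed at S/N ~ 50
vel = np.linspace(-150.0, 150.0, 300)
rng = np.random.default_rng(1)
flux = model_flux(vel, 14.0, 25.0, 0.0) + rng.normal(0.0, 0.02, vel.size)
popt, pcov = curve_fit(model_flux, vel, flux, p0=[13.5, 20.0, 5.0])
print("logN, b, v0 =", popt, "+/-", np.sqrt(np.diag(pcov)))
\end{verbatim}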
We prepared a sample S1 that is a sub-sample of S0 including only 973
{\rm H}~{\sc i}\ lines and 61 {\rm H}~{\sc i}\ systems with $\log N_{HI}$\ $>$ 15. S1 excludes 25
systems with difficulties such as (i) poor fitting due to gaps in the
echelle formatted spectra, (ii) poor fitting due to strong DLA wings
(i.e., $\log N_{HI}$\ $>$ 19), (iii) close proximity in redshift to the
background quasars (i.e., within 1,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of the emission
redshift), and (iv) overlapping with other {\rm H}~{\sc i}\ systems. The S/N
ratios of the spectra are at least S/N $\simeq$ 11 per 2.1~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ pixel
and the mean value is S/N $\simeq$ 47 for Ly$\alpha$\ lines. Among these 61
{\rm H}~{\sc i}\ systems, three systems may be physically associated with the
quasars based on the partial coverage analysis for the corresponding
metal absorption lines (Misawa et al. 2007). However, we keep these
systems in the S1 sample, because we still cannot reject the idea that
they are intervening systems.
We give detailed descriptions of all the lines that we fit in the
Appendix. We also give velocity plots of the first five Lyman
transitions, Ly$\alpha$, Ly$\beta$, Ly$\gamma$, Ly$\delta$, and Ly$\epsilon$.
\section{Results\label{sec:results}}
We investigate the properties of line parameters such as the column
density, Doppler parameter, and the clustering properties of the {\rm H}~{\sc i}\
lines. Since this sample contains not only {\rm H}~{\sc i}\ lines originating in
the intergalactic diffuse gas clouds (i.e., LDLs), but also {\rm H}~{\sc i}\ lines
produced by intervening galaxies (i.e., HDLs), we also attempted to
separate the {\rm H}~{\sc i}\ lines into HDLs and LDLs based on their clustering
trend (Misawa et al. 2004).
Our analysis is similar to that of previous studies (e.g., H95; L96;
KT97), but with three key differences: (i) earlier studies used all
{\rm H}~{\sc i}\ lines detected in the quasar spectra, whereas we use only {\rm H}~{\sc i}\
lines within $\pm$ 1,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of the main components with $\log N_{HI}$\
$\geq$ 15, (ii) our sample contains a number of strong lines ($\log N_{HI}$\
$\geq$ 15) in addition to weak {\rm H}~{\sc i}\ lines ($\log N_{HI}$\ $<$ 15), and (iii)
our sample covers a wide redshift range: 2.0 $\leq$ {\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}\ $\leq$
4.0. The redshift distributions of the 86 and 61 {\rm H}~{\sc i}\ systems in
samples S0 and S1 are shown in Figure~1.
\subsection{Sub-Samples for the Statistical Analysis}
For further investigation, we prepared several sub-samples as
follows. It is known that the comoving number density of
low-ionization lines, such as {\rm H}~{\sc i}\ lines, decreases in the vicinity of
quasars (Carswell et al. 1982; Murdoch et al. 1986; Tytler 1987). This
trend is known as the ``proximity effect'', and is probably caused by
the enhanced UV flux from the quasar towards which the absorption is
seen. We separate the 61 {\rm H}~{\sc i}\ systems (sample S1) into sub-samples
S2a (the velocity difference from the quasar, $\Delta v$ $>$
5,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}) and S2b ($\Delta v$ $<$ 5,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}). We have already
removed from S1 all {\rm H}~{\sc i}\ systems within 1,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of the quasars to
avoid {\rm H}~{\sc i}\ systems from the quasar host galaxies.
H95 emphasized that the line detection limit is almost wholly
determined by the line blending (or blanketing), and not by the S/N
ratio of the spectrum. In order to confirm whether the distribution of
line parameters is affected by the quality of the spectrum, we made
two overlapping samples from S1 using the S/N ratio of each spectrum
in the Ly$\alpha$\ region: S3a (S/N $\geq$ 40), and S3b (S/N $\geq$
70). These sub-samples include 34 ($\sim$ 60\%) and 17 ($\sim$ 30\%)
of the 61 {\rm H}~{\sc i}\ systems of the sample S1.
Sample S1 covers a broader range of redshifts, $2.0 < z < 4.0$, when
compared with previous studies: $2.55 < z < 3.19$ for H95, $3.43 < z <
4.13$ for L96, and $2.43 < z < 3.05$ for KT97. When we investigate the
redshift evolution of {\rm H}~{\sc i}\ absorbers, we also divide S1 into two
sub-samples: S4a ($z < 2.9$) and S4b ($z \geq 2.9$). Here the two
sub-samples have nearly the same number of {\rm H}~{\sc i}\ systems.
Finally, we made sub-samples according to the column densities of {\rm H}~{\sc i}\
lines, as the distributional trends of strong and weak {\rm H}~{\sc i}\ lines are
very different (see Figure~2 in Misawa et al. 2004); the {\rm H}~{\sc i}\ lines
with relatively large column densities tend to cluster around the main
components, while the number of weak {\rm H}~{\sc i}\ lines decreases near the
center of {\rm H}~{\sc i}\ systems because of line blanketing. Since one of our
interests is to determine the boundary value of column density between
HDLs and LDLs (although other parameters may be necessary to separate
them), we separate the 973 {\rm H}~{\sc i}\ lines into eight sub-samples according
to their column densities. We use boundary values of $\log N$ = 13,
14, 15, and 16, where sub-sample S5$_{ab}$ contains {\rm H}~{\sc i}\ lines whose
column densities satisfy $a$ $<$ $\log$($N_{\rm H\;I}$/[cm$^{-2}$]) $<$ $b$.
In Table~3 we summarize these 16 sub-samples. Here we should emphasize
that S5$_{1213}$, S5$_{1214}$, S5$_{1215}$, S5$_{1216}$, S5$_{1319}$,
S5$_{1419}$, S5$_{1519}$, and S5$_{1619}$, are defined line by line,
while S2a, S2b, S3a, S3b, S4a, and S4b are defined system by
system.
\subsection{Physical Properties of {\rm H}~{\sc i}\ Absorbers}
For each sub-sample prepared in the last section, we perform
statistical analysis, including analysis of the column density
distribution, Doppler parameter distribution, and line clustering
properties. The samples used here contain both HDLs and LDLs.
\subsubsection{Column Density Distribution}
The column density distribution of {\rm H}~{\sc i}\ lines, $dn(N)/dN$, is usually
fit with a power law (Carswell et al. 1984; Tytler 1987; Petitjean et
al. 1993; H95),
\begin{eqnarray}
\log\left(\frac{{\rm d}n(N)}{{\rm d}N}\right) = - \beta
\hspace{1mm} \log N + A,
\label{eqn:1}
\end{eqnarray}
where the index $\beta$ was estimated to be 1.46 (H95), 1.55 (L96),
and 1.5 (KT97) using only weaker {\rm H}~{\sc i}\ lines with $\log N_{HI}$\ = 12 --
14.5. Janknecht et al. (2006) found $\beta$ = 1.60$\pm$0.03 at lower
redshift ($z$ $<$ 1.9). We analyzed the column density distributions,
per unit redshift and unit column density, of our seven
sub-samples (S1, S2a, S2b, S3a, S3b, S4a, and S4b). Figure~2 shows the
result for sub-sample S1. Since the turn-over of the distribution
around $\log N_{HI}$\ = 12.5 is probably due to line blending and/or
blanketing as described later in \S\ 4, we fit the column density
distributions only for $\log N_{HI}$\ $>$ 13. The best-fitting parameters,
$\beta$ and $A$, as well as the redshift bandpass, $\Delta z$, of each
sub-sample are summarized in Table~4, along with the past results from
KT97 and Petitjean et al. (1993).
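A minimal sketch of such a fit (our actual analysis uses different
binning and a proper error treatment) is:
\begin{verbatim}
import numpy as np

def fit_column_density_slope(logN, dz, bins=np.arange(13.0, 17.5, 0.5)):
    # Least-squares estimate of (beta, A) in
    # log(dn/dN) = -beta * logN + A, from the column densities logN of
    # all lines found over a total redshift path dz.
    counts, edges = np.histogram(logN, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    dN = 10.0**edges[1:] - 10.0**edges[:-1]   # bin widths in linear N
    good = counts > 0
    y = np.log10(counts[good] / (dN[good] * dz))
    slope, A = np.polyfit(centers[good], y, 1)
    return -slope, A                          # beta, A
\end{verbatim}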
The indices that we find, $\beta$ = 1.40$\pm$0.03, are slightly
smaller than the values in past results, $\beta$ = 1.46 -- 1.55,
which means that our sample favors strong {\rm H}~{\sc i}\ lines. But we expect
this type of trend. Our samples contain not only LDLs but also HDLs,
and since we cover only the velocity regions within $\pm$ 1,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\
of the main components, we have a strong excess of strong {\rm H}~{\sc i}\
lines. We see the column density distribution does not change with the
velocity distance from the quasars (S2a and S2b) or with the S/N ratio
(S3a and S3b), but it is weakly affected by redshift (S4a and S4b). We
also applied the Kolmogorov-Smirnov (K-S) test to the sub-samples. The
results in Table~5 show that we cannot rule out the hypothesis that
they are random samplings from the same population.
\subsubsection{Doppler Parameter Distribution}
The distribution of the Doppler parameters of {\rm H}~{\sc i}\ lines has been
approximated by a truncated Gaussian distribution (H95; L96),
\begin{equation} \frac{{\rm d}n(b)}{{\rm d}b} = \left\{
\begin{array}{ll} A \;
\exp \left[ \frac{-(b-b_{0})^{2}}{2 \sigma_{b}^{2}} \right] & b \geq
b_{min}\\
0 & b < b_{min}
\end{array}
\right.
\label{eqn:2}
\end{equation}
where $b_{0}$ and $\sigma_{b}$ are the mean and the dispersion of the $b$
distribution and $b_{min}$ is the minimum $b$ value for an {\rm H}~{\sc i}\ line.
There are two origins of line broadening: thermal broadening
($b_{T}$), and micro-turbulence ($b_{tur}$). The total amount of
broadening is given by $b = \sqrt{b_{T}^{2} + b_{tur}^{2}}$.
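For illustration, the following sketch draws $b$-values from the
truncated Gaussian of equation (\ref{eqn:2}) by rejection sampling
(H95-like parameters are assumed as defaults) and combines thermal and
turbulent broadening in quadrature:
\begin{verbatim}
import numpy as np

def sample_b(n, b0=28.0, sigma_b=10.0, b_min=20.0, seed=None):
    # Draw n Doppler parameters from the truncated Gaussian of eq. (2)
    # by rejection sampling.
    rng = np.random.default_rng(seed)
    out = np.empty(0)
    while out.size < n:
        draw = rng.normal(b0, sigma_b, n)
        out = np.concatenate([out, draw[draw >= b_min]])
    return out[:n]

def total_b(b_thermal, b_turb):
    # Combine thermal and turbulent broadening: b = sqrt(bT^2 + btur^2).
    return np.hypot(b_thermal, b_turb)
\end{verbatim}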
In order to determine the correct Doppler parameter, we have to
individually resolve and fit {\rm H}~{\sc i}\ lines using Voigt profiles. However,
most weak {\rm H}~{\sc i}\ lines disappear from the observed spectrum due to line
blending and blanketing, especially near strong lines. To derive the
intrinsic (as opposed to observed) distribution of the Doppler
parameter, previous authors (H95; L96; KT97) have generated
artificial {\rm H}~{\sc i}\ lines (as input data) with distributions characterized
by a Gaussian, and used them for comparison with the observed
distributions. KT97 noted that the $b$-value distribution of lines
seen in simulated spectra resembles the distribution of the input
data, except for two differences: (i) an excess of lines with large
$b$-values is seen in the recovered data, compared to the input data,
which is probably produced by line blending, and (ii) lines with very
small $b$-values ($b$ $<$ 15~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}) are found, which are probably data
defects or noise, as they are not present in the input
data. Nonetheless, the $b$-value distribution of the input and
recovered data resemble each other between $b$ = 20~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ and $b$ =
60~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}.
We would ideally like to determine the real distribution of Doppler
parameters; however, the only way to do this is to perform
simulations, and compare the recovered Doppler parameter distribution
with the observed distribution. Such simulations are expensive to
perform, so in this work we simply compared the distribution in our
sample with the past results of H95 and L96 at $b$ = 20 -- 60~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}. We
analyzed the observed distributions of Doppler parameters for 15
sub-samples, and compared them with the results in H95 ($b_{0}$ =
28~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}, $\sigma_{b}$ = 10~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}, and $b_{min}$ = 20~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}) and L96
($b_{0}$ = 23~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}, $\sigma_{b}$ = 8~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}, and $b_{min}$ = 15~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}),
where the parameters are the inputs used for artificial spectra that
reproduce the observed distribution.
We see an excess of {\rm H}~{\sc i}\ lines with large $b$-values of $b$ $>$
50~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ in all of the sub-samples, while we have no lines with $b$
$\leq$ 15~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ because we classify such lines as metal
lines. All sub-samples except for S5$_{1213}$ have relatively large
$b$-values, and their distributions are closer to H95's distribution
than to L96's. In contrast, the distribution of S5$_{1213}$, containing
only {\rm H}~{\sc i}\ lines with small column densities, $\log N_{HI}$\ $<$ 13,
resembles L96's distribution. We plot in Figure~3 the Doppler
parameter distributions of sub-samples S1 and S5$_{1213}$. We also
applied a K-S test to the sub-samples (S2a, S2b, S3a, S3b, S4a, and
S4b). The results are listed in Table~6. The probability that the
distributions of sub-samples S2a ($\Delta v$({\ifmmode{z_{em}}\else{$z_{em}$}\fi} $-$ {\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}) $>$
5000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}) and S2b ($\Delta v$({\ifmmode{z_{em}}\else{$z_{em}$}\fi} $-$ {\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}) $\leq$ 5000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}) were
drawn from the same parent population is very small, $\sim$2.4~\%.
This result could suggest that the Doppler parameters of {\rm H}~{\sc i}\ lines
within 5000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of quasars are affected by UV flux from the quasars.
\subsection{HDLs and LDLs}
In the previous section, we carried out statistical analysis using
sub-samples containing both HDLs and LDLs together. Here, we repeat
these tests on the two samples separately.
Metal absorption lines seen in DLA systems or LLSs are strongly
clustered within several hundred {\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}, which suggests that they are
related to galaxies. In simulations, Dav\'e et al. (1999) also
noted that galaxies tend to lie near the dense regions that are
responsible for strong {\rm H}~{\sc i}\ lines. On the other hand, for weak {\rm H}~{\sc i}\
lines, no strong clustering is seen (e.g., Rauch et al. 1992; L96;
KT97), although some studies found only weak clustering trends (e.g.,
Webb 1987; H95; Cristiani et al. 1997).
As Misawa et al. (2004) found, lines with $\log N_{HI}$\ $\geq$ 15 show
strong clustering trends at $\Delta v$ $<$ 200~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}, while lines with
lower column densities cluster weakly at $\Delta v$ $<$ 100~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\
(Figure~4).
Misawa et al. (2004) defined HDLs as {\rm H}~{\sc i}\ lines with $15$ $<$ $\log N_{HI}$\
$<$ 19 and other weaker {\rm H}~{\sc i}\ lines within $\pm$ 200~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of those
stronger {\rm H}~{\sc i}\ lines. They then defined the LDLs as all other lines
with 12 $<$ $\log N_{HI}$\ $<$ 15. They chose $\log N_{HI}$\ = 15~cm$^{-2}$\ in the
definition because the two-point correlation was largest for a
sub-sample of {\rm H}~{\sc i}\ lines with 15 $<$ $\log N_{HI}$\ $<$ 19. We list the
number of HDLs and LDLs in the sub-samples in Table~7.
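A schematic two-point correlation estimator in velocity space (for
illustration only; the estimator actually used in Misawa et al. 2004
differs in detail) is:
\begin{verbatim}
import numpy as np

def xi_dv(v_lines, bins, n_random=1000, seed=0):
    # Crude two-point correlation function xi(dv) of line velocities,
    # from data (DD) and random (RR) pair counts.
    rng = np.random.default_rng(seed)
    def pair_seps(v):
        dv = np.abs(v[:, None] - v[None, :])
        return dv[np.triu_indices(len(v), k=1)]
    v_lines = np.asarray(v_lines, dtype=float)
    dd, _ = np.histogram(pair_seps(v_lines), bins=bins)
    v_rand = rng.uniform(v_lines.min(), v_lines.max(), n_random)
    rr, _ = np.histogram(pair_seps(v_rand), bins=bins)
    norm = n_random * (n_random - 1) / (len(v_lines) * (len(v_lines) - 1))
    return dd / np.maximum(rr, 1) * norm - 1.0
\end{verbatim}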
\subsubsection{Column Density Distribution}
In Table~8 we give the parameters that describe the column density
distributions of HDLs and LDLs for five sub-samples (S1, S2a, S2b,
S4a, and S4b). The most obvious result is that the HDLs have a smaller
index than the LDLs. We see the same result in Figure~3 of Petitjean
et al. (1993), and hence we now confirm this with the first large
sample to consider the sub-components of the HDLs.
The distributions of LDLs in samples S1, S2a, and S4b are
consistent with the previous result in KT97. On the other hand, the
power law indices for the LDLs of S2b ($\Delta v$ $\leq$ 5000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi})
and S4a ($z$ $<$ 2.9), $\beta$ = 1.90$\pm$0.16 and 1.71$\pm$0.06, are
larger than the values for the other sub-samples, $\beta \sim$ 1.52,
which means that LDLs at lower redshift or near the quasars tend to
have lower column densities compared with those at higher redshift or
far from the quasars. The change in the column density distribution
near the quasars may simply be a consequence of the enhanced UV
radiation.
\subsubsection{Doppler Parameter Distribution}
The Doppler parameter distributions of HDLs and LDLs for sub-samples
(S1, S2a, S2b, S4a, and S4b) are also investigated, and the results of
K-S tests applied to them are summarized in Table~9. The only
remarkable result is that the probability that the Doppler parameter
distributions of LDLs at $\Delta v$ $>$ 5,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ (S2a) and $\Delta
v$ $\leq$ 5,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ (S2b) from the quasars were drawn from the same
parent population is very small, $\sim$2.0\%. We show these two
distributions in Figure~5. In Figure~6 we see that the cumulative
distribution of the line $b$-values rises more slowly for the LDLs
near the quasars (at $\Delta v$ $\leq$ 5000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}), which means that
these lines near the quasars are broader by about 2 -- 3~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}. For
other pairs of the sub-samples, we could not rule out the hypothesis
that their parent populations are the same.
\section{Discussion\label{sec:discussion}}
In this section, we discuss our results, especially the fact that the
column density distribution changes with redshift and with the
velocity distance from the quasars. We also compare our results to
those at lower redshift ($z$ $<$ 0.4) from the literature. Finally,
we also briefly discuss the completeness of {\rm H}~{\sc i}\ lines in our 40
spectra.
\subsection{Redshift Evolution of {\rm H}~{\sc i}\ Absorbers}
In \S\ 3, we prepared two sub-samples, S4a and S4b, to compare the
physical properties of {\rm H}~{\sc i}\ lines at lower redshift ({\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}\ $<$ 2.9)
and at higher redshift ({\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}\ $\geq$ 2.9). We do not see a change in
the column density distribution in the sample as a whole, but once
they are separated into HDLs and LDLs, we notice that the index of the
column density distribution of LDLs at {\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}\ $<$ 2.9 ($\beta$ =
1.71$\pm$0.06) is clearly different from that of LDLs at {\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}\ $\geq$
2.9 ($\beta$ = 1.52$\pm$0.09). On the other hand, there was no
redshift evolution for HDLs. This trend, shown in Figure~7, means that
there is a deficit of relatively stronger LDLs (i.e., $\log N_{HI}$\ $\geq$
14.5) at lower redshift. One possible explanation is that at
lower redshift, more {\rm H}~{\sc i}\ lines with column densities just below
$\log N_{HI}$\ = 15 (i.e., relatively strong LDLs) might be associated with
HDLs. In other words, stronger (i.e., $\log N_{HI}$\ = 14.5 -- 15) LDLs move
to within 200~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of the nearest HDLs, and would then be classified
as HDLs, which is consistent with the trend expected in the
hierarchical clustering model (Figure~8).
As for the Doppler parameter distribution, we did not find any
remarkable redshift evolution in either HDLs or LDLs. L96 claimed
that there is a redshift evolution of the Doppler parameter between
$z$ = 2.8 and 3.7; the mean value of Doppler parameter at {\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}\ = 3.7
($b_{0}$ = 23~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}; L96) is smaller than the value at {\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}\ = 2.8
($b_{0}$ = 28~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}; H95). The corresponding value in KT97 ($b_{0} =
23$ {\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}) is, however, different from the result in H95. The
difference may be due to the different line fitting procedure used in
these studies; L96 and KT97 used the VPFIT software, while H95 used
different software. Especially important is how the authors chose to
treat blended lines. Alternatively, the difference could be related to the
different spectral resolutions, R=45000 (L96; KT97) and R=36000 (H95).
Janknecht et al. (2006) did not detect any evolution of the Doppler
parameter at $z$ = 0.5 -- 1.9. Our results, which are based on a
data set taken with one observational configuration and fit using the
same procedure, suggest that the Doppler parameter distribution of
{\rm H}~{\sc i}\ clouds does not evolve with redshift at $z$ = 2 -- 4.
\subsection{Proximity Effect near Quasars}
It has long been noted that the number of Ly$\alpha$\ lines decreases near
to the redshift of the quasars (Carswell et al. 1982; Murdoch et
al. 1986; Tytler 1987). This phenomenon is related to the local excess
of UV flux from the quasars. The proximity effect has been used to
evaluate the intensity of the background UV flux. Bajtlik et
al. (1988) first measured the mean intensity of the UV background,
$J_{\nu}$ = $10^{-21.0\pm0.5}$ (erg s$^{-1}$ cm$^{-2}$
Hz$^{-1}$ sr$^{-1}$) at the Lyman limit at 1.7 $<$ $z$ $<$ 3.8, by
estimating the distance from the quasar at which the quasar flux is
equal to the background UV flux. The typical radius is $\sim$ 5~Mpc in
physical scale, which corresponds to a velocity shift of $\Delta v$
$\sim$ 4,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ from the quasars. L96 also evaluated the background
UV intensity to be $J_{\nu}$ = $2\times10^{-22}$ (erg s$^{-1}$
cm$^{-2}$ Hz$^{-1}$ sr$^{-1}$) at $z$ $\sim$ 4.1 in the spectrum of
Q0000-26.
We see two differences in LDLs within 5,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of the quasars,
compared to those far from the quasars ($\Delta v$ $>$ 5,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}). We
see fewer strong LDLs, leading to a larger index for the column density
power law, $\beta$ = 1.90$\pm$0.16 (Figure~9). We also see that the
distribution of Doppler parameters differs from that of {\rm H}~{\sc i}\ lines
far from the quasars at the 98.8~\% confidence level. The lines near
the quasars tend to be broader (Figures~5 and 6),
although this is a tentative result because we have few lines near
the quasars.
These results could be accounted for by assuming a two-phase
structure: outer cold low-density regions and inner hot high-density
regions in which temperature is determined by the competition between
photoionization heating and adiabatic cooling. When gas is near to the
quasars, the outer regions become too highly ionized to show much
\ion{H}{1}, and only the inner hot regions would be observed in
\ion{H}{1}, which would increase the mean value of the Doppler
parameter. The increased ionization also decreases the total column
densities of \ion{H}{1} gas compared with gas far from the quasars
(Figure~10). As reported in past observations (e.g., Kim et
al. 2001; Misawa et al. 2004), $\log N_{HI}$\ and $b$({\rm H}~{\sc i}) have a positive
correlation for $\log N_{HI}$\ $<$ 15. This correlation was also
reproduced by hydrodynamical simulations (e.g., Zhang et al. 1997;
Misawa et al. 2004). These results suggest that high density regions
tend to have larger Doppler parameters, if the absorbers are not
optically thick. Dav\'e et al. (1999) also presented an interesting
plot in their Figure~11 that supports the existence of three kinds of
phases for {\rm H}~{\sc i}\ absorbers (diffuse, shocked, and condensed).
Among them, the diffuse phase, whose volume density is small (i.e.,
it corresponds to LDLs in our paper), has a positive correlation
between $\log N_{HI}$\ and $b$. On the other hand, an anti-correlation
between $\log N_{HI}$\ and $b$ is seen only for the condensed phase with
high volume density, which is probably associated with galaxies. The
shocked phase, probably consisting of shock-heated gas in galaxies,
does not show any remarkable correlation between these quantities. Thus, if we
assume all LDLs in our sample arise in the diffuse phase absorbers,
our scenario above could reproduce the difference between sub-samples
S2a and S2b.
\subsection{Comparison to {\rm H}~{\sc i}\ Absorbers at Lower Redshift}
The number density evolution of {\rm H}~{\sc i}\ absorbers (i.e., $dN/dz$
$\propto$ $(1+z)^{\gamma}$) has been known to slow dramatically at
$z$ $\sim$ 1.6, from a rapid high-$z$ evolution with $\gamma$ =
1.85$\pm$0.27 (Bechtold 1994) to a slow low-$z$ evolution with
$\gamma$ = 0.16$\pm$0.16 (Weymann et al. 1998b). Hydrodynamic
cosmological simulations suggest that this trend is due to the decline
in the extragalactic background radiation (e.g., Theuns,
Leonard, \& Efstathiou 1998). Thus, a comparison of {\rm H}~{\sc i}\ absorbers at
high-$z$ and in the local universe is another interesting topic.
In \S~3, we found that the column density distribution of LDLs at $z$
$<$ 2.9 ($\beta$ = 1.71$\pm$0.06) is steeper than that at $z$ $>$ 2.9
($\beta$ = 1.52$\pm$0.09). We proposed that this trend could be due to
hierarchical clustering. If the assembly of structure in the IGM
indeed dominates the column density distribution, we would expect to
find a steeper column density distribution at lower redshift, as
proposed in \S~4.1.
Using the {\it Hubble Space Telescope} ({\it HST}) and the {\it Far
Ultraviolet Spectroscopic Explorer} ({\it FUSE}), Penton, Stocke, \&
Shull (2004) and Lehner et al. (2007) estimated power-law indices to
be $\beta$ of 1.65$\pm$0.07 at 12.3 $\leq$ $\log N_{HI}$\ $\leq$ 14.5 and
1.76$\pm$0.06 at 13.2 $\leq$ $\log N_{HI}$\ $\leq$ 16.5 at $z$ $<$ 0.4,
respectively. On the other hand, Dav\'e \& Tripp (2001) found a
steeper distribution ($\beta$ = 2.04$\pm$0.23) at $z$ $<$ 0.3. The
latter, steeper distribution was also reproduced by hydrodynamical
simulations (e.g., Theuns et al. 1998). If we accept the steeper
result, the column density distribution would continue to steepen
toward lower redshift, which supports our idea that
hierarchical clustering could play a main role in the evolution seen
in the column density distribution, although extragalactic radiation
would also play a role.
Whether the absorption line width evolves with redshift is another
question that is still under debate, as mentioned in \S~4.1.
While most space-based ultraviolet observations could not
measure line widths by model fitting because of their limited spectral
resolution, Lehner et al. (2007) for the first time measured Doppler
parameters of {\rm H}~{\sc i}\ absorption lines accurately at lower redshift ($z$
$<$ 0.4), and investigated their distribution. By comparing to
the results at higher redshift, Lehner et al. (2007) found that
Doppler parameters increase monotonically from $z$ = 3.1 to
$\sim$0. Such a trend was not confirmed in past papers (e.g.,
Janknecht et al. 2006). The fraction of broad Ly$\alpha$\ absorbers
(BLAs; $b$ $\geq$ 40~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}) was also found to increase by a factor of
$\sim$3 from $z$ $\sim$ 3 to 0 (Lehner et al. 2007). Here, $b$ =
40~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ corresponds to a gas temperature of $T_{gas}$ $\sim$
$10^{5}$~K, which is the border between the cool photoionized absorbers
and the highly ionized warm-hot absorbers. These results suggest
that a large fraction of {\rm H}~{\sc i}\ absorbers at very low redshift (i.e.,
$z$ $<$ 0.4) are hotter and/or more kinematically disturbed than at
higher redshift (i.e., $z$ $>$ 2.0).
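The quoted correspondence follows from the standard thermal-broadening
relation: for purely thermal broadening $T_{gas} = m_{\rm H}b^{2}/2k_{B}$,
so $b$ = 40~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ gives $T_{gas}$ $\simeq$
$9.7\times10^{4}$~K $\sim$ $10^{5}$~K.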
In our sample, we do not see any clear difference in the mean/median
Doppler parameters at $z$ $\geq$ 2.9 ($b_{mean}$ = 31.0$\pm$10.0,
$b_{med}$ = 28.1) and $z$ $<$ 2.9 ($b_{mean}$ = 32.0$\pm$10.9,
$b_{med}$ = 29.6). Neither HDLs nor LDLs show any evolutionary
trend. These negative results could be because our optical data
cover only redshifts higher than $z$ $\sim$ 1.6, at which the
$dN/dz$ evolution changed dramatically. Similarly, the fraction of
BLAs ($f_{BLA}$ = 0.182 at $z$ $\geq$ 2.9 and 0.196 at $z$ $<$ 2.9)
shows only a marginal hint of evolution. However, these fractions
are consistent with the result from KT97 ($f_{BLA}$ = 0.179; Lehner et
al. 2007) at 2.43 $<$ $z$ $<$ 3.05, a redshift coverage similar to
that of our sample. Thus, the Doppler parameter could increase toward
lower redshift, but such a trend would be remarkable only if we trace
its distribution at very low redshift ($z$ $<$ 0.4) and compare it
to that at much higher redshift ($z$ $>$ 2).
As for the clustering of {\rm H}~{\sc i}\ absorption lines, we see very
similar properties at low and high redshift. As presented in
Figure~4, we found a strong clustering trend within $\Delta v$\ of
200~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ for {\rm H}~{\sc i}\ lines with $\log N_{HI}$\ between 15 and 19, while only a
weak correlation is seen for weaker {\rm H}~{\sc i}\ lines within $\Delta v$\ of
100~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}. Penton et al. (2004) presented very similar results: a
5$\sigma$ (7.2$\sigma$) excess within $\Delta v$\ of 190~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ (260~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}),
with only stronger {\rm H}~{\sc i}\ lines contributing to the clustering. Penton,
Stocke, \& Shull (2002) proposed that such clustering within several
hundred {\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ is due to clusters of galaxies. Similar kinematic
structures could exist both at high-$z$ and in the local universe.
\subsection{Completeness of {\rm H}~{\sc i}\ Line Sample}
For a statistical analysis, especially the number density analysis,
the completeness of the {\rm H}~{\sc i}\ line detection is influenced by the
detection limit of absorption lines (e.g., equivalent width or column
density). In this study, we have used the {\rm H}~{\sc i}\ lines detected in the
40 HIRES spectra that have various S/N ratios. The strong line sample
will have subtle biases arising from the selection of the quasars
because they were once thought to be good targets for the detection of
deuterium. For example, we avoided quasars with no LLSs, and we
avoided LLSs with previously known complex velocity
structure. Nevertheless, we confirmed that our sample is almost
complete for weak lines in the following way.
The minimum detectable equivalent width in the observed-frame,
$W_{min}$, can be estimated using the following relation,
\begin{equation}
U = \frac{W_{min}N_{C}}{\sigma(W_{min}N_{C})}
= \frac{W_{min}(S/N)}{(M_{L}^{2}M_{C}^{-1}+M_{L}-W_{min})^{1/2}},
\label{eqn:3}
\end{equation}
where $M_{L}$ and $M_{C}$ are the numbers of pixels over which the
equivalent width and the continuum level ($N_{C}$) are determined
(Young et al. 1979; Tytler et al. 1987). The value of ($S/N$) is the
S/N ratio per pixel. When we set $U \simeq W/\sigma(W) = 4$ (i.e., a
4$\sigma$ detection), equation (\ref{eqn:3}) can be solved to give,
\begin{eqnarray}
W_{min} & = & (S/N)^{-2}\{[64+16(S/N)^{2} \times \nonumber \\
& & (M_{L}+M_{L}^{2}/M_{C})]^{1/2}-8\} \times
\Delta \lambda \hspace{2mm} (\rm{\AA}),
\label{eqn:4}
\end{eqnarray}
where $\Delta \lambda$ is the wavelength width per pixel in angstroms
(Misawa et al. 2002). Here, we set $M_{L}$ to 2.5 times the FWHM of each
line, and $M_{C}$ to the full width of each echelle order. Once the
minimum rest-frame equivalent width, $W_{rest} [=W_{min}/(1+z)]$, has
been evaluated, it can be converted to the minimum column density by
choosing a specific Doppler parameter; the result is insensitive to
the choice on the linear part of the curve of growth. Among the 86
{\rm H}~{\sc i}\ systems in our data sample, the {\rm H}~{\sc i}\ system at {\ifmmode{z_{abs}}\else{$z_{abs}$}\fi}\ = 2.940 in
the spectrum of Q0249-2212 is located in the region with the lowest
S/N ratio (i.e., $S/N$ $\sim$ 11). This corresponds to a 4$\sigma$
detection limit of $\log N_{HI}$\ $\sim$ 12.3 for an isolated Ly$\alpha$\ line
with any Doppler parameter seen in our sample ($b$ = 15 -- 80~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}).
Thus, our sample is complete for {\rm H}~{\sc i}\ lines with $\log N_{HI}$\ $>$
12.3. Therefore, the bend in the column density distribution near
$\log N_{HI}$\ $\sim$ 13 is probably due to the line blending and
blanketing.
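For concreteness, the following sketch implements equation
(\ref{eqn:4}) (generalized to a $U\sigma$ detection) and the conversion
to a column density limit on the linear part of the curve of growth,
$N = 1.13\times10^{20}\,W_{rest}/(f\lambda^{2})$ with $W_{rest}$ and
$\lambda$ in \AA. The values of $M_{L}$ and $M_{C}$ below are assumed
for illustration, which is why the result ($\sim$12.5) only
approximates the $\sim$12.3 quoted above:
\begin{verbatim}
import numpy as np

LYA, F_LYA = 1215.67, 0.4164   # Ly-alpha wavelength (A), osc. strength

def w_min(snr, dlam, m_l, m_c, u=4.0):
    # Minimum detectable observed-frame equivalent width (A), eq. (4)
    # generalized to a u-sigma detection; snr is S/N per pixel, dlam
    # the pixel width (A), m_l and m_c the line/continuum pixel numbers.
    m = m_l + m_l**2 / m_c
    w_pix = (np.sqrt(u**4 + 4 * u**2 * snr**2 * m) - u**2) / (2 * snr**2)
    return w_pix * dlam

def logN_min(w_obs, z):
    # Column density limit on the linear part of the curve of growth.
    return np.log10(1.13e20 * (w_obs / (1 + z)) / (F_LYA * LYA**2))

# assumed values: z = 2.940, S/N = 11, 2.1 km/s pixels,
# M_L ~ 30 pixels (2.5 x FWHM), M_C ~ 2000 pixels (one echelle order)
z, snr = 2.940, 11.0
dlam = LYA * (1 + z) * 2.1 / 2.998e5
print(logN_min(w_min(snr, dlam, 30.0, 2000.0), z))   # ~12.5
\end{verbatim}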
\section{Summary\label{sec:summary}}
We present 40 high-resolution (FWHM = 8.0~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}) spectra obtained
with Keck+HIRES. Over a wide column density range (12 $<$ $\log N_{HI}$\
$<$ 19), we fit {\rm H}~{\sc i}\ lines with Voigt profiles using not only the Ly$\alpha$\ line
but also higher Lyman series lines such as Ly$\beta$\ and Ly$\gamma$, up to the Lyman
limit when possible. To investigate the detailed line properties, we
made several sub-samples that are separated according to the distance
from the quasar, redshift, the column density, and the S/N ratio of
the spectrum. We also classify them into HDLs (lines arising in or
near to intervening galaxies) and LDLs (lines not obviously near to
galaxies and hence more likely to be from the intergalactic diffuse
gas), based on the clustering properties. The main results are
summarized below:
\begin{enumerate}
\item
We present a database of {\rm H}~{\sc i}\ absorption lines with a wide column
density range (i.e., $\log N_{HI}$\ = 12--19) from a wide redshift range
(i.e., $z$ = 2--4).
\item
Our data sample is complete at $\log N_{HI}$\ $\geq$ 12.3 for 4$\sigma$
line detection. The turnover at $\log N_{HI}$\ $<$ 13 seen in the $\log N_{HI}$\
distribution is not due to the quality of our spectra but to
line blending and blanketing.
\item
The power-law index of the column density distribution of LDLs shows
evolution with redshift, from $\beta$ = 1.52$\pm$0.09 at $z$ $\geq$
2.9 to $\beta$ = 1.71$\pm$0.06 at $z$ $<$ 2.9. This trend could be
related to hierarchical clustering on cosmological timescales. No
evolution is seen for HDLs.
\item
Within 5,000~{\ifmmode{{\rm km~s}^{-1}}\else{km~s$^{-1}$}\fi}\ of the quasars, the power-law index of the column
density distribution for LDLs ($\beta$ = 1.90$\pm$0.16) is larger than
that far from the quasars ($\beta$ = 1.53$\pm$0.05). We also found a
hint (Figure~6) that the Doppler parameters are larger near the
quasars. These results could be due to the UV flux excess from the
quasars. We do not see any similar trend for the HDLs.
\item
We suggest that HDLs and LDLs are produced by physically different
phases of absorbers, because they show four key differences in
(i) clustering properties, (ii) redshift evolution, (iii) the proximity
effect, and (iv) the $\log N_{HI}$\ -- $b_{min}$ relation (see Misawa et
al. 2004).
\end{enumerate}
\acknowledgments We acknowledge support from NASA under grants
NAG5-6399, NAG5-10817, and NNG04GE73G, and from the National Science
Foundation under grant AST 04-07138. This work was also in part
supported by JSPS. The UCSD team was supported in part by NSF grant
AST 0507717 and by NASA grant NAG5-13113. We also thank the anonymous
referee for very useful comments and suggestions.
\section{Introduction}
\subsection{Review of the contents}
A well-established principle in the border area between algebraic topology and global analysis predicts that local geometric representations of primary topological invariants lead to
interesting secondary constructions \cite[Introduction]{bunke-2008}.
In the present paper, based on this principle, we construct an interesting secondary invariant for
string bordism classes and elements of the homotopy of the spectrum of topological modular forms.
We will employ the notion of geometric string
structures which was recently developed by Waldorf \cite{waldorf}.
Here is a short summary of our constructions. For all $m\ge 1$ we define an abelian group
$T_{2m}$ (Definition \ref{uzifefwfwefewfwef}) and construct a homomorphism (Definition \ref{uzduiwqdqwd})
\begin{equation}\label{bbb}
b^{an} :MString_{4m-1}\to T_{2m}
\end{equation}
using spectral invariants of Dirac operators and differential geometry.
It is a secondary version of the Witten genus.
Because of the appearance of spectral invariants, a direct evaluation of $b^{an}$ is complicated. But it can be analyzed on the subgroup
\begin{equation}\label{bbb333}A_{4m-1}:=\ker(MString_{4m-1}\stackrel{j}{\to} MSpin_{4m-1})\subseteq MString_{4m-1}\ .\end{equation}
The restriction of $b^{an}$ to $A_{4m-1}$ will be compared with homomorphisms
\begin{equation}\label{bbb1}
b^{geom},b^{top} :A_{4m-1}\to T_{2m}
\end{equation}
defined using differential geometry (\ref{w8e9fwefwfwefwf}), and homotopy theory (Definition \ref{uzidqwdqwdwqdwd}), respectively.
The constructions of the homomorphisms $b^{an}$ and $b^{geom}$ are very similar to the analytic and geometric description of the Adams $e$-invariant given in \cite{MR0397797}. Our first main result asserts that
\begin{equation}\label{uiwefwefwefwef6565}
b^{an}_{|A_{4m-1}}=b^{geom}=b^{top}\ .
\end{equation}
The first equality $b^{an}_{|A_{4m-1}}=b^{geom}$ is essentially a consequence of the Atiyah-Patodi-Singer index theorem \cite{MR0397797} and will be verified during the construction of $b^{geom}$ in Subsection \ref{ioqdqwd}. The second equality
$b^{geom}=b^{top}$, shown in Theorem \ref{e2e7uiwqdqwdwqd}, is deeper, and the resulting equality $b^{an}_{|A_{4m-1}}=b^{top}$ can be considered
as a secondary index theorem. In the proof we use ideas we learned from \cite{laures-cob}.
The analogy with the secondary index theorem for the $e$-invariant of Adams \cite{MR0159336} as formulated in \cite[Introduction]{bunke-2008} will be explained in Subsection \ref{uidqwdqwdqwdqwdee}.
We will give a complete and explicit homotopy theoretic calculation of $b^{top}$. It is based on the factorization (Proposition \ref{udiqwdqwdqwdq})
$$\xymatrix{MString_{4m-1}\ar[r]^\sigma&tmf_{4m-1}\ar[d]^{b^{tmf}}&\\A_{4m-1}\ar@{_{(}->}[u]\ar[r]^{b^{top}}&T_{2m}}
$$ and the explicit calculation of $b^{tmf}$ (Propositions \ref{uidodqwdqwd} and \ref{zwdqdwqd}) whose construction is similar to the one of $b^{top}$.
Here, $tmf$ denotes the connective spectrum of topological modular forms of Goerss-Hopkins-Miller and Lurie \cite{goerss1}, and $\sigma$ is induced by the $tmf$-valued Witten genus which has been constructed as an $E_\infty$-ring map $\sigma :MString\to tmf$ in \cite{ahr}.
Since the homotopy groups of string bordism
and $tmf$ are interesting and complicated, it is useful to relate these objects non-trivially to geometry or global analysis.
\bigskip
{\em Acknowledgements: The first author is very grateful to K. Waldorf for introducing him to the notion of a geometric string structure
and further discussion at earlier stages of this work. He further thanks
M. Hovey for very helpful remarks about the structure of the string bordism groups.
The charts of the spectral sequences have been compiled using the TEX-package of T. Bauer.
The homotopy groups of $MSpin$ have been calculated with the help of MAPLE.}
\subsection{Thom spectra}
In the remainder of this introduction we give a more detailed overview.
We first recall the definition of the Thom spectra appearing in the present paper.
Let \begin{equation}\label{uifwefwefwefewf}
\dots \to BO\langle 8 \rangle\to BO\langle 4\rangle\to BO\langle 2\rangle\to BO
\end{equation}
be the first stages of the Postnikov tower of the $H$-space $BO$. The classical names of these spaces are
$$BSO=BO\langle 2\rangle\ ,\quad BSpin=BO\langle 4\rangle\ ,\quad BString = BO\langle 8 \rangle\ .$$
There is a functor from the category of $H$-spaces over $BO$ to ring spectra which associates to a map $\xi:X\to BO$ the Thom spectrum $X^\xi$.
The Thom spectrum for the map $BO\langle i \rangle\to BO$ will be denoted by $MO\langle i\rangle$. In particular we will write
$$MSO=MO\langle 2\rangle\ ,\quad MSpin=MO\langle 4\rangle\ ,\quad MString = MO\langle 8 \rangle\ .$$ These spectra are related by morphisms of ring spectra
$$MString \stackrel{j}{\to} MSpin \to MSO\to MO\ .$$
\subsection{The Witten genus, $T_{2m}$, and $b^{geom}$}
For the purpose of this introduction, we explain the construction of $b^{geom}$ which is the simplest of the three maps in (\ref{uiwefwefwefwef6565}). In particular we want to motivate the Definition \ref{uzifefwfwefewfwef} of the group $T_{2m}$.
The map $b^{geom}$ is a secondary version of the Witten genus.
We let $KO$ denote the real $K$-theory spectrum. For a (ring) spectrum $X$ we let
$X[[q]]$ denote the (ring) spectrum which represents the homology theory
$X[[q]]_*(B):=X_*(B)[[q]]$ for all finite $CW$-complexes $B$.
The Witten genus is a morphism of ring spectra
$$R:MSpin\to KO[[q]]\ .$$
We refer to Subsection \ref{udqwidqwdqwdqwd444} for a description of this map as a transformation of homology theories.
For a spectrum $X$ we let $X_* :=\pi_*(X)$ denote the homotopy groups of $X$.
Via the Thom-Pontrjagin construction, elements in the homotopy of Thom spectra like $MSpin$ or $MString$ can be realized as bordism classes of manifolds with corresponding structures on the tangent bundles (see Subsection \ref{udidqwdqwdqwdqwdqwdqwd}).
Thus homotopy classes of $MString$ correspond to bordism classes of closed string manifolds $[M,\alpha^{top}]$, where a string manifold $(M,\alpha^{top})$ consists of a spin manifold $M$ together with the choice of a topological string structure $\alpha^{top}$, see Subsection \ref{udidqwdqwdqwdqwdqwdqwd} for details. The morphism $j:MString_*\to MSpin_*$ just forgets the topological string structure, i.e. it is given geometrically by $j([M,\alpha^{top}])=[M]$.
We now give a description of the Witten genus following the exposition \cite{MR1189136}.
Let $[M]\in MSpin_{4m}$ be a homotopy class represented by a closed $4m$-dimensional spin manifold $M$. We choose a connection $\nabla^{TM}$ on the tangent bundle $TM$ of $M$. In general,
if $\nabla^V$ is a connection on a real vector bundle $V\to B$ over some manifold $B$, then by $p_i(\nabla^{V})\in \Omega^{4i}(B)$ we denote the Chern-Weil representative of the $i$'th
Pontrjagin class $p_i(V)\in H^{4i}(B;\mathbb{Z})$. We set $\kappa_{m}:=1$ for even $m$, and $\kappa_{m}:=\frac{1}{2}$ in the case of odd $m$.
In (\ref{udiqwdqwdqwd3}) we define a power series
$$\Phi\in \mathbb{Q}[[q]][[p_1,p_2,\dots]]\ .$$ With this notation
we have the following local expression for the Witten genus
\begin{equation}\label{uiqdqwdqwdwqdwqd}
R([M])=\kappa_{m}
\int_M \Phi(p_1(\nabla^{TM}),p_2(\nabla^{TM}),\dots)\ ,
\end{equation}
which a priori is an element of $\mathbb{Q} [[q]]$.
By the Atiyah-Singer index theorem, the coefficients of this formal power series
can be identified with indices of twisted Dirac operators. More precisely, we have
$$R([M])=\kappa_m\sum_{n\ge 0} q^n{\tt index}(D_M\otimes R_n(TM))\ ,$$
where $R_n(TM)$ is a certain virtual bundle derived from the tangent bundle (see (\ref{zfquefhefwefwef})) for all $n\ge 0$, and $D_M\otimes R_n(TM)$ denotes the spin Dirac operator of $M$ twisted by
$R_n(TM)$. Since the indices are integers and even in the case of odd $m$ (because of an additional real symmetry) we see that
$$R([M])\in \mathbb{Z}[[q]]\ .$$
For a subring $R\subseteq \mathbb{C}$ we denote by $\mathcal{M}^R_{2m}$ the space of modular forms for $SL(2,\mathbb{Z})$ of weight $2m$ with a $q$-expansion in the subring $R[[q]]\subseteq \mathbb{C}[[q]]$.
Using the $q$-expansion we will identify $\mathcal{M}^R_{2m}$ with a sub-$R$-module of $R[[q]]$.
For more information on modular forms we refer to Subsection \ref{fzuefwefwefewf}.
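As a concrete illustration of integral $q$-expansions (standard
formulas, logically independent of the constructions in this paper),
the normalized Eisenstein series
$E_4=1+240\sum_{n\ge 1}\sigma_3(n)q^n\in \mathcal{M}^{\mathbb{Z}}_{4}$ and
$E_6=1-504\sum_{n\ge 1}\sigma_5(n)q^n\in \mathcal{M}^{\mathbb{Z}}_{6}$
can be expanded by a short computation:
\begin{verbatim}
from sympy import divisor_sigma

def eisenstein(k, prec=8):
    # q-expansion coefficients of the normalized Eisenstein series E_k
    # (k = 4 or 6) up to q^(prec-1), via divisor sums sigma_{k-1}(n).
    c = {4: 240, 6: -504}[k]
    return [1] + [c * divisor_sigma(n, k - 1) for n in range(1, prec)]

print(eisenstein(4))   # [1, 240, 2160, 6720, 17520, ...]
print(eisenstein(6))   # [1, -504, -16632, -122976, ...]
\end{verbatim}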
It is a crucial observation for our constructions that the Witten genus has the following factorization
$$\xymatrix{&&&\mathcal{M}^\mathbb{Z}_{2m}\ar@{_{(}->}[d]\\MString_{4m}\ar@{.>}[urrr]\ar[r]_j&MSpin_{4m}\ar[r]_R&KO[[q]]_{4m}\ar[r]_\cong&\mathbb{Z}[[q]]}\ .$$
The construction of the secondary invariant $b^{geom}$ starts with the local formula (an integral over characteristic forms) for the Witten genus given by the right-hand side of (\ref{uiqdqwdqwdwqdwqd}). It will be applied to a spin zero bordism $Z$ of a string manifold $(M,\alpha^{top})$ and gives, combined with a contribution from a geometric string structure $\alpha$, a formal power series in $\mathbb{R}[[q]]$.
In order to ensure independence of the choices of the geometry and the zero bordism we have to calculate modulo everything that comes from closed spin manifolds
and string zero bordisms.
This leads us to the definition of the following quotient group.
\begin{ddd}\label{uzifefwfwefewfwef}
The group $T_{2m}$
is defined by
$$T_{2m}:=\frac{\mathbb{R}[[q]]}{\mathbb{Z}[[q]]+\mathcal{M}_{2m}^\mathbb{R}}\ .$$
\end{ddd}
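For example, for $m=1$ there are no non-zero modular forms of weight $2$ for $SL(2,\mathbb{Z})$, so that $\mathcal{M}_{2}^{\mathbb{R}}=0$ and $T_{2}=\mathbb{R}[[q]]/\mathbb{Z}[[q]]$; this is the form in which $T_{2}$ appears in the three-manifold case of Section \ref{gfdhfdf}.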
We now turn to the construction of the secondary invariant $b^{geom}(M,\alpha^{top})$.
It is based on the existence of a spin zero bordism of the string manifold $(M,\alpha^{top})$. We therefore have to assume that
the class $[M,\alpha^{top}]\in MString_{4m-1}$ is in the subgroup
$A_{4m-1}\subseteq MString_{4m-1}$ defined in (\ref{bbb333}).
Let $(M,\alpha^{top})$ be a $(4m-1)$-dimensional string manifold such that $[M,\alpha^{top}]\in A_{4m-1}$.
Then we can choose a spin zero bordism $Z$ of $M$. Furthermore, we choose a connection $\nabla^{TZ}$ extending a connection $\nabla^{TM}$ on the tangent bundle $TM$ of $M$.
A topological string structure $\alpha^{top}$ is by definition a ``trivialization'' of the spin characteristic class $\frac{p_1}{2}(TM)\in H^4(M;\mathbb{Z})$. A geometric refinement $\alpha$ of $\alpha^{top}$ trivializes
this class on the form level, i.e. $\alpha$ gives rise to a form
$H_\alpha\in \Omega^{3}(M)$ such that $dH_\alpha=\frac{1}{2} p_1(\nabla^{TM})$
(see \cite{waldorf}).
This can be used to define a refinement
$\tilde p_1(\nabla^{TZ},\alpha)\in \Omega^4(Z)$ of $p_1(\nabla^{TZ})$
which vanishes on the boundary $M$ of $Z$ (see Subsection \ref{udqwidqwdqwdqwd}).
The value of $b^{geom}$ on the class $[M,\alpha^{top}]\in MString_{4m-1}$ is then given (compare Lemma \ref{zdqwud}) as the class
$$b^{geom}([M,\alpha^{top}]):=\left[\int_Z \Phi(\tilde p_1(\nabla^{TZ},\alpha),p_2(\nabla^{TZ}),\dots)\right]\in T_{2m}\ .$$
In contrast to $b^{geom}$, the analytic variant $b^{an}([M,\alpha^{top}])\in T_{2m}$
in Definition \ref{uiqdwqdqwdwqd} provides an intrinsic formula which does not depend on the choice of a spin zero bordism and is therefore defined on all of $MString_{4m-1}$. It involves spectral invariants of the twisted Dirac operators $D_M\otimes R_n(TM)$ for all $n\ge 0$.
In Lemma \ref{udidqwdqwd} we show the equality
$$b^{geom}=b^{an}_{|A_{4m-1}}\ .$$
The construction of all variants of $b$ is based on the interplay between the facts that $MString_{4m-1}$ is torsion and $KO[[q]]_{4m}\otimes \mathbb{Q}$ is non-trivial. This explains the restriction to dimensions of the form $4m-1$.
In order to detect elements of $MString_*$ in dimensions $0,1,2,4 \:\:\mbox{mod $8$}$ one can use the Witten genus $R\circ j:MString\to KO[[q]]$ directly.
\subsection{Calculations}
In Section \ref{gfdhfdf} we consider the case of three-manifolds, i.e. the case $m=1$.
In this case it suffices to consider the constant term of the formal power series
representing $b^{geom}([M,\alpha^{top}])$. This simplifies matters considerably and justifies
a separate discussion. We will see that
$$b^{geom}:MString_{3}\cong \mathbb{Z}/24 \mathbb{Z}\hookrightarrow T_2=\mathbb{R}[[q]]/\mathbb{Z}[[q]]$$ is injective.
As a side result we get the following analog for spin manifolds of Atiyah's canonical $2$-framings of oriented three-manifolds \cite{MR1046621}: {\em A $3$-dimensional connected closed spin manifold has a canonical
topological string structure (Definition \ref{uiqdwqdqwdw})}.
In higher dimensions the homotopy groups $MString_*$ are not fully understood. The following facts are known \cite{MR1455523}.
\begin{enumerate}
\item $MString_{k}$ is a finite group for $k\equiv 1,2,3\:\:\mbox{mod $4$}$.
\item The $p$-torsion of $MString_*$ is trivial for every prime $p\ge 5$.
\item The $3$-torsion of $MString_*$ is annihilated by multiplication with $3$.
\end{enumerate}
The spin bordism groups $MSpin_*$ are calculated additively in \cite{MR0190939} (see Section \ref{udiqdqwdwqdqwdwqdwqd} for a table of $MSpin_*$).
We will use the following facts:
\begin{enumerate}
\item $MSpin_k$ is a finite group for $k\equiv 1,2,3\:\:\mbox{mod $4$}$.
\item For all $k\ge 0$ the torsion $MSpin_{k,tors}\subseteq MSpin_k$ is a direct
sum of copies of $\mathbb{Z}/2\mathbb{Z}$.
\item $MSpin_{4m-1}=0$ for $m\le 9$.
\end{enumerate}
We get the following consequences for the subgroup
$A_{4m-1}\subseteq MString_{4m-1}$:
\begin{enumerate}
\item For all $m\ge 1$, the group $A_{4m-1}\subseteq MString_{4m-1}$ contains
all $3$-torsion elements.
\item For all $m\ge 1$ we have $2 MString_{4m-1}\subseteq A_{4m-1}$.
\item We have $ A_{4m-1}=MString_{4m-1}$ for $m\le 9$.
\end{enumerate}
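These follow since a class lies in $A_{4m-1}$ precisely if its underlying spin bordism class vanishes, and $MSpin_{4m-1}$ is a finite direct sum of copies of $\mathbb{Z}/2\mathbb{Z}$ by the facts above: the image of a $3$-torsion element and twice the image of any element vanish there, and the third assertion is immediate from $MSpin_{4m-1}=0$ for $m\le 9$.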
Because of our lack of knowledge of $A_{4m-1}$,
for an explicit description of $b^{top}$ Proposition \ref{udiqwdqwdqwdq} is a crucial observation.
It asserts that $b^{top}$ has a factorization
$$\xymatrix{MString_{4m-1}\ar[r]^\sigma&tmf_{4m-1}\ar[d]^{b^{tmf}}&\\A_{4m-1}\ar@{_{(}->}[u]\ar[r]^{b^{top}}&T_{2m}} \ .
$$
In contrast to $MString$, the homotopy groups of $tmf$ are known \cite{MR2508200}. In Propositions \ref{uidodqwdqwd} and \ref{zwdqdwqd} we obtain a complete calculation of $b^{tmf}$. In particular, the $3$-torsion of $tmf_{4m-1}$ is detected completely by $b^{tmf}$.
This calculation implies that $b^{top}$ is non-trivial in arbitrarily high dimensions.
Let us explain some examples.
\begin{itemize}
\item
There exists a $3$-torsion element
$x\in tmf_{27}$ which goes to the element with the name
$\nu\Delta\in tmf_{(3),27}$ under localization at $3$.
By Proposition \ref{zwdqdwqd} we know that $b^{tmf}(x)\not=0$.
By a result of Hopkins-Mahowald \cite[Theorem 6.25]{MR1989190} the $tmf$-valued Witten genus $$\sigma:MString_*\to tmf_*$$ is surjective. We can choose an element $y\in A_{27}=MString_{27}$ such that $\sigma(y)=x$.
We then have
$$ b^{top}(y)\stackrel{ \ref{udiqwdqwdqwdq}}{=}b^{tmf}(x)\not=0\ .$$
If $(M,\alpha)$ is a closed $27$-dimensional string manifold one can try to calculate the element
$\sigma([M,\alpha])\in tmf_{(3),27}= (\mathbb{Z}/3\mathbb{Z})\:\nu\Delta$. An answer in terms of characteristic numbers is given in Subsection \ref{ziddqwdqwdwq}.
Similar statements can be produced in higher dimensions using the $72$-periodicity of $tmf_{(3),*}$ given by the multiplication with $\Delta^3$.
\item
Let $x\in tmf_{192}$ be an element which goes to
$\Delta^8\in tmf_{(2),192}$ under localization at $2$. Then by Hopkins-Mahowald \cite[Theorem 6.25]{MR1989190} there exists a class $y\in MString_{192}$ such that $\sigma(y)=x$. Let $g\in A_3=MString_3$ be the generator (\ref{reuwrewrwer}).
Note that $yg\in A_{195}$. The element
$\sigma( yg)\in tmf_{195}$ goes to $\nu \Delta^8\in tmf_{(2),195}$ under localization at $2$.
By Proposition \ref{uidodqwdqwd} the order of $b^{top}(yg)=b^{tmf}(\nu\Delta^8)$ is $8$.
\end{itemize}
\subsection{Open problems}
We close with stating some open problems:
\begin{enumerate}
\item What is $A_{4m-1}\subseteq MString_{4m-1}$? For which $m$ do we have an equality?
\item What is the image $\sigma(A_{4m-1})\subseteq tmf_{4m-1}$? For which $m$ do we have an equality?
\item What is $b^{an}(x)\in T_{2m}$ for $x\in MString_{4m-1}\setminus A_{4m-1}$?
One could conjecture that
$$b^{an}=b^{tmf}\circ \sigma\ .$$
\end{enumerate}
\section{Three-manifolds}\label{gfdhfdf}
\subsection{A string bordism invariant in dimension $3$}\label{ziqdqwdwqd}
In the present subsection we give a geometric construction of a homomorphism
$$d:MString_{3}\to \mathbb{Z}/24\mathbb{Z}\ .$$
We will see in Corollary \ref{uqiduwqdqwdwqdd} that $d$ is an isomorphism.
It is known that $MSpin_3=0$. Let $M$ be a closed spin three-manifold.
Then we can find a spin zero bordism
$Z$. We choose a connection $\nabla^{TM}$ on $TM$. It naturally induces a connection on the $Spin(3)$-principal bundle given by the spin structure.
We furthermore choose an extension of $\nabla^{TM}$ to a connection $\nabla^{TZ}$.
Let us fix a topological string structure $\alpha^{top}$ on $M$.
Just as a principal bundle can be equipped with a connection, a topological string structure $\alpha^{top}$ on a spin bundle with connection can be refined to a geometric string structure $\alpha$. For details we refer to \cite{waldorf}.
A geometric string structure $\alpha$ on $M$ gives rise to a $3$-form $H_{\alpha}$
which satisfies $dH_\alpha=\frac{1}{2} p_1(\nabla^{TM})$ (this condition is non-vacuous in higher dimensions).
We can form the difference\footnote{This difference has also been considered in the recent (and independent) paper
\cite{redden}, where it is put in relation with the $e$-invariant of Adams.}
$$d_Z(M,\alpha):=\frac{1}{2} \int_Z p_1(\nabla^{TZ})-\int_MH_{\alpha}\in \mathbb{R}\ .$$
\begin{lem}\label{zudqwdqwdqwd}
The real number $d_Z(M,\alpha)$ is independent of the choice of connections and the geometric data of the string structure.
\end{lem}
{\it Proof.$\:\:\:\:$}
First observe that the difference does not depend on the extension of the connection to the bordism $Z$. Indeed, given two extensions $\nabla^{TZ},\nabla^{TZ,\prime}$ we form the closed manifold
$W:=Z\cup_M -Z$ with the induced connection $\nabla^{TW}$ which restricts to
$\nabla^{TZ}$ and $\nabla^{TZ,\prime}$ on the two obvious copies of $Z$ in $W$.
We then have
$$\int_Z p_1(\nabla^{TZ})-\int_Z p_1(\nabla^{TZ,\prime})=\int_W p_1(\nabla^{TW})=3\ {\tt sign}(W)\ .$$
But ${\tt sign}(W)=0$ because of the obvious orientation-reversing $\mathbb{Z}/2\mathbb{Z}$-symmetry of $W$.
Two connections on $TM$ can be joined by a connection $\nabla^{T(I\times M)}$ on the cylinder $I\times M$.
Similarly, two geometric string structures $\alpha$ and $\alpha^\prime$ refining the same underlying topological string structure can be connected by a geometric string structure $\tilde \alpha$ on $I\times M$.
Then we have by Stokes' theorem
\begin{equation}\label{zqwduqgwdqwdwqdwqd}
\frac{1}{2}\int_{I\times M}p_1(\nabla^{T(I\times M)})=\int_{I\times M} dH_{\tilde \alpha}=\int_M H_\alpha
-\int_M H_{\alpha^\prime}\ .\end{equation}
We choose $Z^\prime:=Z\cup_M (I\times M)$ as the zero bordism for the primed choices. Then
\begin{equation}\label{zqwduqgwdqwdwqdwqd1}\frac{1}{2}\int_{Z^\prime} p_1(\nabla^{TZ,\prime})-\frac{1}{2}\int_{Z} p_1(\nabla^{TZ})=\frac{1}{2}\int_{I\times M}p_1(\nabla^{T(I\times M)})\ .
\end{equation}
The assertion now follows from the combination of (\ref{zqwduqgwdqwdwqdwqd}) and (\ref{zqwduqgwdqwdwqdwqd1}).
Alternatively one could conclude the result from Lemma \ref{uefhiwefuwefw} below
and a continuity argument.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
As a consequence of Lemma \ref{zudqwdqwdqwd}, the integral $\int_{M} H_\alpha$ does not depend on the geometry of $\alpha$ so that we can also write it as
$\int_{M} H_{\alpha^{\tiny top}}$. Moreover we can write $d_Z(M,\alpha)=d_Z(M,\alpha^{top})$.
The set of topological string structures on $M$ is a torsor under $H^3(M;\mathbb{Z})\cong \mathbb{Z}$. We will write the action additively, see (\ref{ufiwefwefwefw}).
Note that (see \cite{waldorf})
\begin{equation}\label{uiqdqwdwqdqwdwd54545}
\int_{M} H_{\alpha^{\tiny top}+x}=\int_M H_{\alpha^{\tiny top}}+\langle x,[M]\rangle\ ,\quad \forall x\in H^3(M,\mathbb{Z})\ .
\end{equation}
Therefore
\begin{equation}\label{udiqdwqdqwd}
d_Z(M,\alpha^{top}+x)=d_Z(M,\alpha^{top})-\langle x,[M]\rangle\ .
\end{equation}
\begin{lem}\label{uefhiwefuwefw}
We have $d_Z(M,\alpha)\in \mathbb{Z}$.
\end{lem}
{\it Proof.$\:\:\:\:$}
We let $\widehat{H\mathbb{Z}}^*$ denote the differential integral cohomology functor.
It has first been introduced by Cheeger-Simons \cite{MR827262}. For an axiomatic picture see \cite{bunke-2009}. Differential integral cohomology is the home for geometric refinements of Chern and Pontrjagin classes.
Let $$\frac{\hat p_1}{2}(\nabla^{TM})\in \widehat{H\mathbb{Z}}^4(M)$$ denote the
lift introduced in \cite{MR827262} of the spin characteristic class $\frac{p_1}{2}(TM)$ (see also \cite[Sec. 4.2]{bunke-20095}). Its integral over $M$ is an element of $\mathbb{R}/\mathbb{Z}$. By the bordism formula for the evaluation of
differential cohomology classes we have the equality
$$[\frac{1}{2}\int_Z p_1(\nabla^{TZ})]=\int_M \frac{\hat{p}_1}{2}(\nabla^{TM})$$
in $\mathbb{R}/\mathbb{Z}$.
We also know (see e.g. \cite[Sec. 4.2]{bunke-20095}) that
$$a(H_\alpha)= \frac{\hat{p}_1}{2}(\nabla^{TM})\ ,$$
where $a:\Omega^3(M)/{\tt im}(d)\to \widehat{H\mathbb{Z}}^4(M)$ is one of the structure maps
of differential cohomology.
It follows that
$$[d_Z(M,\alpha^{top})]=[\frac{1}{2}\int_Z p_1(\nabla^{TZ})-\int_MH_{\alpha}]=0\in \mathbb{R}/\mathbb{Z}\ .$$
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
By the specialization of Hirzebruch's signature theorem to the four-dimensional case, the
signature of a closed oriented $4$-dimensional spin manifold $W$ is given by
\begin{equation}\label{uidqwdqwdww}
{\tt sign}(W)=\frac{1}{3}\int_W p_1(\nabla^{TW})\ .
\end{equation}
Similarly, by the specialization of the Atiyah-Singer index theorem the index of the spin Dirac operator of a four-dimensional spin manifold $W$ is given by
\begin{equation}\label{uidqwdqwdww1}{\tt index}(D_W)=-\frac{1}{24}\int_W p_1(\nabla^{TW})\ . \end{equation}
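As a consistency check of the normalizations, consider a $K3$ surface: it is spin with ${\tt sign}(K3)=-16$, so (\ref{uidqwdqwdww}) gives $\int_{K3} p_1(\nabla^{TK3})=-48$, and (\ref{uidqwdqwdww1}) then yields ${\tt index}(D_{K3})=2$.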
In dimensions $8m+4$ the spin Dirac operator has an additional quaternionic symmetry which forces its index to be even.
Together with (\ref{uidqwdqwdww1}) it follows that
$$ \frac{1}{2}\int_W p_1(\nabla^{TW})=-12\ {\tt index}(D_W)\equiv 0 \:\:\:\:\mbox{mod $24$}\ .$$
If $Z^\prime$ is a second spin zero bordism of $M$, then $W:=Z\cup_M -Z^\prime$ is a closed spin four-manifold and
$$d_Z(M,\alpha^{top})-d_{Z^\prime}(M,\alpha^{top})=\frac{1}{2}\int_W p_1(\nabla^{TW})\equiv 0\:\:\:\:\mbox{mod $24$}\ .$$
We conclude that the class
$$d(M,\alpha^{top}):=[d_Z(M,\alpha^{top})]\in \mathbb{Z}/24\mathbb{Z}$$
is independent of the choice of the zero bordism $Z$.
\begin{lem}\label{uiqodqwdqwdqwd}
$d(M,\alpha^{top})\in \mathbb{Z}/24\mathbb{Z}$ is a string bordism invariant.
\end{lem}
{\it Proof.$\:\:\:\:$}
If $\alpha$ has an extension to a geometric string structure $\tilde \alpha$ over $Z$, then by Stokes' theorem
$$
\frac{1}{2}\int_Z p_1(\nabla^{TZ})=\int_Z dH_{\tilde \alpha}=
\int_M H_\alpha\ .
$$
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
It is now clear that $$d:MString_3\to \mathbb{Z}/24\mathbb{Z}\ ,\quad [ M,\alpha^{top}]\mapsto d(M,\alpha^{top})$$ is well-defined and a homomorphism.
\subsection{A generator for $MString_3$}
In this subsection we construct a specific generator $g\in MString_3$
such that $d(g)=[1]\in \mathbb{Z}/24\mathbb{Z}$. It will be used in later calculations.
Let $S$ denote the sphere spectrum, and let $\epsilon:S\to MString$ be the unit of the ring spectrum $MString$. It is known \cite[Thm 2.2.1]{MR1455523} that
$\epsilon:S_3\to MString_3$ is an isomorphism so that
$MString_3\cong S_3\cong \mathbb{Z}/24\mathbb{Z}$.
We consider the sphere $S^3\subset \mathbb{R}^4$. As the boundary of the disc $D^4\subset \mathbb{R}^4$ it has a preferred orientation, spin structure and string structure $\alpha^{top}$. We define
\begin{equation}\label{reuwrewrwer}
g:=[S^3,\alpha^{top}-{\tt or}_{S^3}] \in MString_3\ ,
\end{equation}
where ${\tt or}_{S^3}\in H^3(S^3;\mathbb{Z})$ is the orientation class of $S^3$.
It follows immediately from (\ref{udiqdwqdqwd}) that
$d(g)=[1]\in \mathbb{Z}/24\mathbb{Z}$. Since the order of $d(g)$ is $24$ we see that $g\in MString_3$ is a generator.
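Spelled out, $d(S^3,\alpha^{top})=0$ since $\alpha^{top}$ extends over the string zero bordism $D^4$ (as in the proof of Lemma \ref{uiqodqwdqwdqwd}), and then (\ref{udiqdwqdqwd}) gives
$$d(g)=d(S^3,\alpha^{top}-{\tt or}_{S^3})=d(S^3,\alpha^{top})+\langle {\tt or}_{S^3},[S^3]\rangle=[1]\ .$$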
\begin{kor}\label{uqiduwqdqwdwqdd}
The homomorphism
$$d:MString_3\to \mathbb{Z}/24\mathbb{Z}\ ,\quad [M,\alpha^{top}]\mapsto d(M,\alpha^{top})$$
is an isomorphism.
\end{kor}
We will systematically extend this construction to higher dimensions in Section \ref{uiudwdqwdqwdd}.
\subsection{Atiyah's canonical $2$-framing}\label{uidqwdqwdqw}
In this subsection we discuss, as a side aspect of the main topic of the present paper, an analogue for spin manifolds of an observation by Atiyah \cite{MR1046621} saying that oriented three-manifolds have canonical $2$-framings.
We will show that every closed connected spin three-manifold has a canonical string structure.
Let us first explain the result of Atiyah \cite{MR1046621}. For all $n\ge 2$ the horizontal composition in the following diagram of Lie groups
$$\xymatrix{&&Spin(2n)\ar[d]\\SO(n)\ar@{.>}[urr]\ar[r]_/-.9em/{{\tt diag}}&SO(n)\times SO(n)\ar[r]&SO(2n)}
$$
has a unique lift indicated by the dotted arrow. This implies that
the double $2V:=V\oplus V$ of an $n$-dimensional real oriented vector bundle $V$ has a canonical spin structure.
A $2$-framing of a closed oriented three-manifold $M$ is by definition a
spin-trivialization of the double $2TM$ of its tangent bundle.
Atiyah now considers an oriented zero bordism $Z$ of $M$.
The spin bundle $2TZ$ is trivialized by the $2$-framing $\alpha$ at the boundary $\partial Z\cong M$. This trivialization refines the spin characteristic class
$\frac{p_1}{2}(2TZ)\in H^4(Z;\mathbb{Z})$ to a relative cohomology class
$$\frac{p_1}{2}(2TZ,\alpha)\in H_c^4(Z,M;\mathbb{Z})\ .$$
Atiyah then observes that
$$\sigma(\alpha):=3{\tt sign}(Z)- \langle \frac{p_1}{2}(2TZ,\alpha),[Z,M]\rangle\in \mathbb{Z}$$ does not depend on the oriented zero bordism
$Z$. Furthermore, by changing $\alpha$, we can alter the right-hand side by any integer.
It follows that there is a unique $2$-framing $\alpha_0$ such that
$\sigma(\alpha_0)=0$. This is the canonical $2$-framing of $M$.
We now define the canonical topological string structure of a three-dimensional closed connected spin manifold $M$. Let $\alpha^{top}$ be any topological string structure on $M$. The combination
$$\sigma(M,\alpha^{top}):=3{\tt sign}(Z)- 2 d_Z(M,\alpha^{top})\in \mathbb{Z}$$
is independent of the choice of the spin zero bordism $Z$ of $M$
(by the same argument as in \cite{MR1046621} which employs additivity of the signature and the formula (\ref{uidqwdqwdww})).
Its class
$$\sigma(M):=[\sigma(M,\alpha^{top})]=[3{\tt sign}(Z)-2d_Z(M,\alpha^{top})]=[{\tt sign}(Z)]\in \mathbb{Z}/2\mathbb{Z}$$ is independent of the choice of the string structure $\alpha^{top}$ as well.
We will see from the examples below that both possible values do occur.
\begin{prop}\label{uwifwefwef}
A closed connected spin three-manifold $M$ has
a unique topological string structure $\alpha^{top}_0$ characterized by
$\sigma(M,\alpha_0^{top})\in \{0,1\}$.
\end{prop}
\begin{ddd}\label{uiqdwqdqwdw}
The topological string structure uniquely characterized in Proposition \ref{uwifwefwef}
will be called the canonical string structure.
\end{ddd}
\subsection{Examples}\label{zuqdqwd}
In this subsection we calculate two examples which show that both possible values of the invariant $\sigma(M)\in \mathbb{Z}/2\mathbb{Z}$, and thus of $\sigma(M,\alpha^{top}_0)\in \{0,1\}$, of a connected three-dimensional spin manifold really do occur.
\begin{enumerate}
\item The three-dimensional compact group $SO(3)$ is diffeomorphic to $\mathbb{R}\P^3$. We thus have
$$H^*(SO(3);\mathbb{Z}/2\mathbb{Z})\cong \mathbb{Z}/2\mathbb{Z}[x]/(x^4)\ ,\quad |x|=1\ .$$ The manifold $SO(3)$ fits into a principal bundle $SO(2)\to SO(3)\stackrel{\pi}{\to} S^2$. The circle $SO(2)$ acts by the adjoint representation on the quotient of Lie algebras $so(3)/so(2)$. We fix a basis element of the Lie algebra $so(2)$.
It determines a complex structure on $so(3)/so(2)$ by requiring that it rotates counterclockwise.
It furthermore induces a fundamental vector field on $SO(3)$ which trivializes the vertical
bundle $T^v\pi$. The vertical bundle therefore gets a spin and a topological string structure $\alpha_{T^v\pi}^{top}$. This spin structure induces the bounding spin structure on all fibers of $\pi$.
Furthermore, using the identification $TS^2\cong SO(3)\times_{SO(2)}so(3)/so(2)$, we fix a complex structure on $TS^2$. This complex structure determines the orientation of $S^2$.
Since $H^1(S^2;\mathbb{Z}/2\mathbb{Z})=0$ and $w_2(TS^2)=0$ it can be refined to a spin structure in a unique way. Furthermore, there exists a unique
topological string structure $\alpha_{S^2}^{top}$ on the spin manifold $S^2$.
Up to homotopy there is a unique splitting
\begin{equation}\label{udiqdwqdwqd}
TSO(3)\cong T^v\pi\oplus \pi^*TS^2 .
\end{equation}
The spin structures of the summands induce a spin structure $s_0$ on $SO(3)$.
Since $H^{1}(SO(3);\mathbb{Z}/2\mathbb{Z})\cong \mathbb{Z}/2\mathbb{Z}$ there is actually a second spin structure $s_1$.
Since the restriction map $H^{1}(SO(3);\mathbb{Z}/2\mathbb{Z})\to H^{1}(\pi^{-1}(x);\mathbb{Z}/2\mathbb{Z})$ is an isomorphism for all $x\in S^2$, the spin structure $s_1$ induces the non-bounding spin structure on every fiber of $\pi$.
We continue with the spin structure $s_0$ and choose the topological string structure $\alpha^{top}$ such that (\ref{udiqdwqdwqd}) becomes an isomorphism
of string bundles.
The trivialization of $T^v\pi$ induces a geometric string structure $\alpha_{T^v\pi}$ refining $\alpha^{top}_{T^v\pi}$ with $H_{\alpha_{T^v\pi}}=0$. We further choose a geometric string structure $\alpha_{S^2}$ which refines $\alpha^{top}_{S^2}$.
We have $H_{\alpha_{S^2}}=0$ since $S^2$ is two-dimensional. We choose the geometric string structure $\alpha$ refining $\alpha^{top}$ as the sum of $\alpha_{T^v\pi}$ and $\pi^*\alpha_{S^2}$. Then we have $H_\alpha=0$.
The action of $SO(2)$ on $so(3)/so(2)$ fixes a metric and induces an action on the unit disc $D^2$.
If we fix the point $1\in D^2$, then we can identify $SO(2)$ with the boundary of $D^2$, the orbit of $1$. In this way we get an identification $SO(3)\cong \partial Z$ with the boundary of the four-dimensional manifold
$$Z:=SO(3)\times_{SO(2)} D^2\ .$$
The $D^2$-bundle $q:Z\to S^2$ exhibits this manifold as a fiberwise zero bordism of $SO(3)$.
It can be identified with the unit-disc bundle in the tangent bundle $TS^2$.
The vertical bundle of $q$ is therefore given by $T^vq\cong q^*TS^2$. Up to homotopy
we have a unique decomposition
$$TZ\cong T^vq\oplus q^*TS^2\cong q^*(TS^2\oplus TS^2)\ .$$
It gives $TZ$ a complex structure and therefore an orientation. Since the projection
$q:Z\to S^2$ is a homotopy equivalence and $H^1(S^2;\mathbb{Z}/2\mathbb{Z})=0$ we see that the spin structure on $TZ$ induced by the spin structure on $TS^2$ and this decomposition is the unique one. The restriction of the spin structure on $T^vq$ to each fiber of $\pi$
is obviously the bounding one. It follows that the restriction of the spin structure on $Z$ to its boundary is the spin structure $s_0$ fixed above. Hence $Z$ is a spin zero bordism of $SO(3)$.
We have identified $SO(3)$ with the unit sphere bundle in $TS^2$. This unit sphere bundle
is the same as the orthonormal complex frame bundle. Therefore the complex bundle
$\pi^*TS^2\cong T^vq_{|SO(3)}$ is canonically trivialized.
We can choose a complex connection $\nabla^{T^vq}$ which is compatible
with this trivialization, and we define
$\nabla^{TZ}$ as the sum of $\nabla^{T^vq}$ and $q^*\nabla^{TS^2}$, where
$\nabla^{TS^2}$ is any complex connection of $TS^2$.
Then we have
$$p_1(\nabla^{TZ})=-c_2(\nabla^{TZ})=-c_1(\nabla^{T^vq})\wedge q^*c_1(\nabla^{TS^2})\ .$$
By a standard calculation\footnote{Without any explicit calculation of integrals this can be seen as follows.
We consider the complex line bundle $D^2\times \mathbb{C}\to D^2$ with the trivialization
over $S^1$ given by the tautological section $s:S^1\to \mathbb{C}$ on the boundary. Using this
trivialization we glue it with the trivial bundle $\bar D^2\times \mathbb{C}$, where $\bar D^2$ has the opposite complex structure, in order to get a holomorphic bundle $L\to S^2\cong \mathbb{C}\P^1$. We choose a connection on
$D^2\times \mathbb{C}$ which is compatible with the trivialization at the boundary. This connection can then be extended to $\nabla^L$ as a trivial connection across $\bar D^2$.
The section $z:D^2\to D^2\times \mathbb{C}$ extends by $1$ to a holomorphic section of $L$.
Furthermore, the constant section $1:D^2\to D^2\times \mathbb{C}$ can be extended by the holomorphic function $\bar z$ to $\bar D^2$.
It is easy to see that these two generate the two-dimensional space of holomorphic sections
of $L\to \mathbb{C}\P^1$. By the Riemann-Roch theorem
$2=1+\int_{\mathbb{C}\P^1} c_1(L)=1+\int_{D^2} c_1(\nabla^{L})$ and therefore
$\int_{D^2} c_1(\nabla^{L})=1$.
} we get
$\int_{Z/S^2} c_1(\nabla^{T^vq})=1$.
It follows that
$$\int_{Z}p_1(\nabla^{TZ})=-\int_{S^2}c_1(\nabla^{TS^2})=-2\ .$$
Since $H_\alpha=0$ this gives $d_Z(SO(3),\alpha^{top})=\frac{1}{2}\cdot(-2)-0=-1$.
Using again that $q:Z\to S^2$ is a homotopy equivalence we
see that $H_2(Z;\mathbb{Z})\cong \mathbb{Z}$ is generated by the orientation of $S^2$.
The self intersection of this class, which geometrically can be considered as the zero section of $TS^2$, is the Euler characteristic $\chi(S^2)=2$. It is positive so that
${\tt sign}(Z)=1$.
We conclude that
$$\sigma(SO(3),\alpha^{top})=3\,{\tt sign}(Z)-2d_Z(SO(3),\alpha^{top})=3+2=5\ .$$
It follows that $$\sigma(SO(3))=1\ .$$
Since $\sigma(SO(3),\alpha^{top}+x)=\sigma(SO(3),\alpha^{top})+2\langle x,[SO(3)]\rangle$ by (\ref{udiqdwqdqwd}), the canonical string structure of $SO(3)$ with the spin structure $s_0$ fixed above is given by $$\alpha_{0}^{top}=\alpha^{top}-2\ {\tt or}_{SO(3)}\ ,$$ for which $\sigma(SO(3),\alpha_0^{top})=5-4=1\in\{0,1\}$.
\item
We now consider the torus $\mathbb{T}^3\cong \mathbb{R}^3/\mathbb{Z}^3$.
This representation as a quotient fixes a trivialization of $T\mathbb{T}^3$
and therefore a spin and geometric string structure $\alpha$ with $H_\alpha=0$.
The induced spin structure on each circle subgroup of $\mathbb{T}^3$ is bounding.
We write $\mathbb{T}^3\cong S^1\times \mathbb{T}^2$ and can consider $S^1$ as the spin boundary of the disc $D^2$. Therefore
$Z:=D^2\times \mathbb{T}^2$ is a spin zero bordism of $\mathbb{T}^3$.
We choose $\nabla^{TZ}$ such that it respects this decomposition and is flat on the factor $\mathbb{T}^2$. Then $p_1(\nabla^{TZ})=0$.
It follows that $d_Z(\mathbb{T}^3,\alpha^{top})=0$. The intersection form
on $H_2(Z;\mathbb{Z})$ vanishes so that ${\tt sign}(Z)=0$. We conclude that
$$\sigma(\mathbb{T}^3)=0$$
for the spin structure chosen above, and that the canonical string structure of $\mathbb{T}^3$ with this spin structure is given by
$$\alpha_0^{top}=\alpha^{top}\ .$$
\end{enumerate}
\section{Invariants from the Witten genus}\label{uiudwdqwdqwdd}
\subsection{Introduction}
In Section \ref{gfdhfdf}, using geometric string structures, we have defined a homomorphism $$d:MString_3\to \mathbb{Z}/24\mathbb{Z}$$ which turned out to be an isomorphism.
In the present section, with the construction of the homomorphisms $b^{geom}:A_{4m-1}\to T_{2m}$, we generalize this to higher dimensions.
It is interesting to observe that using the index theorem of Atiyah-Patodi-Singer \cite{MR0397797} we can give an alternative expression for $d$ which does not involve the zero bordism $Z$. This parallels the treatment of the $e$-invariant of Adams given in \cite{MR0397797}.
We will actually work in the slightly different setting of index theory for manifolds with boundary which has been developed in \cite{MR2191484}, and which we will briefly review in Subsection \ref{eoiwewe}.
We consider a closed string three-manifold $(M,\alpha^{top})$ and a spin zero bordism $Z$.
Let us assume that $\nabla^{TM}$ is the Levi-Civita connection associated to a Riemannian metric on $M$, and that $\nabla^{TZ}$ is the Levi-Civita connection for an extension of that metric to $Z$ with a product structure.
These geometric structures turn the manifolds $M$ and $Z$ into geometric manifolds $\mathcal{M}$ and $\mathcal{Z}$. If we choose a (real) taming $\mathcal{M}_t$, then we get a boundary tamed manifold $\mathcal{Z}_{bt}$.
The index ${\tt index}(\mathcal{Z}_{bt})$ of the Fredholm operator associated to a boundary tamed manifold can be calculated by the formula \cite[Thm. 2.2.18]{MR2191484}.
It follows from the presence of a real structure that ${\tt index}(\mathcal{Z}_{bt})$ is even. In the case at hand we have
$${\tt index}(\mathcal{Z}_{bt})=-\frac{1}{24}\int_Z p_1(\nabla^{TZ}) + \eta (\mathcal{M}_t)\ ,$$
where $\eta(\mathcal{M}_t)$ is the eta-invariant of the tamed manifold $\mathcal{M}_t$, see Subsection \ref{eoiwewe}.
Therefore we have the following equality in $\mathbb{R}/\mathbb{Z}$:
$$[\frac{1}{2}\int_Z p_1(\nabla^{TZ})]=[12\eta(\mathcal{M}_t)]\ .$$
We furthermore choose a geometric refinement $\alpha$ of the topological string structure $\alpha^{top}$
based on the spin connection induced by $\nabla^{TM}$.
Then we get the expression
$$d(M,\alpha)=[12\eta (\mathcal{M}_t)-\int_M H_\alpha]\in \mathbb{Z}/24\mathbb{Z}$$
which is now intrinsic to $M$.
With the construction of $b^{an}:MString_{4m-1}\to T_{2m}$ (see Definition \ref{uiqdwqdqwdwqd} ) we generalize this
analytic formula to higher dimensions.
\subsection{Tamings and $\eta$-invariants}\label{eoiwewe}
In this subsection we recall the necessary language of local index theory for manifolds with boundary. The main reference for the set up is \cite{MR2191484} which specialized to manifolds with boundary can be considered as a variant of \cite{MR0397797}.
If $M$ is a $4m-1$-dimensional closed Riemannian spin manifold and $V\to M$ is a real vector bundle with metric $h^V$ and connection $\nabla^V$, then we can form the twisted Dirac operator $D_M\otimes V$ which acts on sections of the bundle $S(M)\otimes_\mathbb{R} V$, where $S(M)\to M$ is the spinor bundle. A taming of $D_M\otimes V$ is by definition a selfadjoint operator $Q$ acting on sections of $S(M)\otimes V$ and given by a smooth integral kernel such that $D_M\otimes V+Q$ is invertible. If $m$ is odd, then the spinor bundle has a quaternionic
structure. If the taming respects this quaternionic structure, then we call it a real taming. If $m$ is even, then $S(M)$ has a real structure, and a real taming should respect this structure. The obstruction against the existence of a real taming is
${\tt index}(D_M\otimes V)\in KO_{4m-1}$. Since $KO_{4m-1}=0$ for all $m\ge 1$ such real tamings always exist. Following the notation introduced in \cite{MR2191484} we will call a Riemannian spin manifold a geometric manifold $\mathcal{M}$, and we denote such a manifold with an additional choice of a geometric bundle $\mathbf{V}:=(V,h^V,\nabla^V)$ by $\mathcal{M}\otimes \mathbf{V}$. Finally, this data together with a choice of a real taming will be denoted by $(\mathcal{M}\otimes \mathbf{V})_t$ and called a tamed manifold.
The $\eta$-invariant
$\eta((\mathcal{M}\otimes \mathbf{V})_t)\in \mathbb{R}$
of a tamed manifold is defined
by
$$\eta((\mathcal{M}\otimes \mathbf{V})_t):=\frac{-1}{\sqrt{\pi}}\int_0^\infty {\tt Tr}\left( (D_M\otimes V+Q)e^{-t^2(D_M\otimes V+Q)^2}\right)dt\ .$$
These definitions can be extended to graded or virtual bundles $V$ in a natural way.
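As an illustration of the definition, if the operator in question is already invertible one can take the taming $Q=0$. Computing formally eigenvalue by eigenvalue with $\int_0^\infty \lambda e^{-t^2\lambda^2}dt=\frac{\sqrt{\pi}}{2}\,{\tt sign}(\lambda)$, the integral above regularizes $-\frac{1}{2}\sum_\lambda {\tt sign}(\lambda)$, i.e. up to normalization the classical $\eta$-invariant of \cite{MR0397797}.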
\subsection{The Witten genus}\label{udqwidqwdqwdqwd444}
In this subsection we recall the Witten genus $R:MSpin\to KO[[q]]$
and introduce several related formal power series.
We use the correspondence between an even formal power series
$\phi\in A[[x^2]]$ and a power series
$K_\phi\in A[[p_1,p_2,\dots]]$ given by
$$\prod_{i=1}^\infty \phi(x_i)=K_\phi(p_1,p_2,\dots)\ , \quad
\sum_{i\ge 0} p_i=\prod_{i=1}^\infty (1+x_i^2)\ ,$$
where $A$ is some commutative $\mathbb{Q}$-algebra, e.g. $A=\mathbb{Q}[[q]]$ in the example which follows.
We let (see \cite[Ch. 6.3]{MR1189136})
$$\phi_W(x,q)\in \mathbb{Q}[[q]][[x]]$$ be given by
\begin{eqnarray}
\phi_W(x,q)&:=&\frac{\frac{x}{2}}{\sinh(\frac{x}{2})} \prod_{n=1}^\infty\frac{(1-q^n)^2}{(1-q^ne^x)(1-q^ne^{-x})}\nonumber\\
&=&\exp\left[\sum_{k=2}^\infty \frac{2}{(2k)!}G_{2k} x^{2k}\right] e^{G_2(q)x^2}\ .\label{ujkwefef6e7}
\end{eqnarray}
Here
$$G_{2k}:=-\frac{B_{2k}}{4k}+\sum_{n=1}^\infty \sigma_{2k-1}(n)q^n\in \mathbb{Q}[[q]]$$
with the Bernoulli numbers $B_{2k}$ and the sum $\sigma_{2k-1}(n)$ of the $(2k-1)$-th powers of the positive divisors of $n$.
For $k\ge 2$ the power series
$G_{2k}$ is the $q$-expansion of a modular form of weight $2k$. We have $G_{2k}\in \mathcal{M}_{2k}^\mathbb{Q}$, since the constant term of the formal power series $G_{2k}$ is rational, while all higher terms are integral.
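For concreteness, the first two cases read
$$G_2=-\frac{1}{24}+q+3q^2+4q^3+\dots\ ,\qquad G_4=\frac{1}{240}+q+9q^2+28q^3+\dots\ ,$$
using $B_2=\frac{1}{6}$ and $B_4=-\frac{1}{30}$.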
We will also need the series
$$\theta_W(x,q):=\exp\left[\sum_{k=2}^\infty \frac{2}{(2k)!}G_{2k} x^{2k}\right]\in \mathbb{Q}[[q]][[x]]\ .$$
We define
\begin{equation}\label{udiqwdqwdqwd3}
\Phi:=K_{\phi_W}\in \mathbb{Q}[[q]][[p_1,p_2,\dots]]\ ,\quad \Theta:=K_{\theta_W}\in \mathbb{Q}[[q]][[p_1,p_2,\dots]]\ .
\end{equation}
For $k\ge 1$ we let
$$N_{2k}(p_1,\dots)=\sum_{j=1}^\infty x_j^{2k}\in \mathbb{Q}[[p_1,\dots]]$$
be the Newton polynomials.
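In low degrees the Newton identities give, e.g.,
$$N_2=p_1\ ,\quad N_4=p_1^2-2p_2\ ,\quad N_6=p_1^3-3p_1p_2+3p_3\ .$$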
Then we can write
$$\Phi(p_1,\dots)=\exp\left[\sum_{k=2}^\infty \frac{2}{(2k)!}G_{2k} N_{2k}(p_1,\dots )\right]e^{G_2p_1}\in \mathbb{Q}[[q]][[p_1,\dots]]\ .$$
We define
$$\tilde \Phi(p_1,\dots):=\exp\left[\sum_{k=2}^\infty \frac{2}{(2k)!}G_{2k} N_{2k}(p_1,\dots )\right] \sum_{j=1}^\infty \frac{G_2^j p_1^{j-1}}{j!}\in \mathbb{Q}[[q]][[p_1,\dots]] \ .$$
More systematically,
\begin{equation}\label{udiqdqwdqwdqwd}
\tilde \Phi =\Theta \frac{e^{G_2p_1}-1}{p_1}\ .
\end{equation}
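Since $\Theta=1+\frac{G_4}{12}(p_1^2-2p_2)+\dots$ starts with $1$, the lowest-order terms are
$$\tilde \Phi=G_2+\frac{G_2^2}{2}\,p_1+(\mbox{terms of form degree $\ge 8$})\ .$$
In particular, the form-degree-zero part is $\tilde \Phi_{[0]}=G_2$; on a three-manifold this is the only part that contributes, which will be used in the calculation of $b^{geom}(g)$ below.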
We now turn to the definition of the Witten genus.
For a real $l$-dimensional vector bundle $V\to B$ we define the formal power series of virtual bundles
\begin{equation}\label{zfquefhefwefwef}
R(V):=\sum_{n\ge 0} q^n R_n(V)= \prod_{k\ge 1} (1-q^k)^{2l}\bigotimes_{k\ge 1} Sym_{q^k}(V\otimes_\mathbb{R} \mathbb{C})\ ,
\end{equation}
where
$$Sym_q(V):=\bigoplus_{n\ge 0} q^n Sym^n(V)$$
is the generating power series of the symmetric powers $Sym^n(V)$ of $V$.
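Expanding (\ref{zfquefhefwefwef}) in powers of $q$ identifies the first twisting bundles explicitly: since $\prod_{k\ge 1}(1-q^k)^{2l}=1-2lq+O(q^2)$ and $\bigotimes_{k\ge 1}Sym_{q^k}(V\otimes_\mathbb{R}\mathbb{C})=1+q\,(V\otimes_\mathbb{R}\mathbb{C})+O(q^2)$, we get
$$R_0(V)=\mathbb{C}\ ,\qquad R_1(V)=V\otimes_\mathbb{R}\mathbb{C}-\mathbb{C}^{2l}$$
as virtual bundles.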
For a manifold $Z$ we then have
\begin{equation}\label{zduzwtduqwdwd}\hat{\mathbf{A}}(TZ)\cup {\mathbf{ch}}(R(TZ)) = \Phi(p_1(TZ),\dots)\ .\end{equation}
If $\nabla^{TZ}$ is a connection on $TZ$, then we will write
$$\Phi(\nabla^{TZ}):=\Phi(p_1(\nabla^{TZ}),\dots)\in \Omega^{4*}(Z)[[q]]\ ,$$
and we will use a similar convention for $\tilde \Phi$ and $\Theta$.
The homomorphism
$$R:MSpin_{4m}\to \mathbb{Z}[[q]]\cong KO[[q]]_{4m}$$
is given by
\begin{equation}
\label{udiqdqwdqwdqwdw}R([M])=\kappa_m {\tt index}(D_M\otimes R(TM))=\kappa_m \langle \Phi(p_1(TM),p_2(TM),\dots),[M]\rangle
\end{equation}
(see \cite{MR1189136}).
In the case of odd $m$ the factor $\kappa_m=\frac{1}{2}$ takes care of the factor $2$ on the lower horizontal arrow of the commutative diagram
$$\xymatrix{KO_{4m}\ar[d]^{\cong}\ar[r]&K_{4m}\ar[d]^\cong\\
\mathbb{Z}\ar[r]^2&\mathbb{Z}}\ ,$$
since we take the index of the twisted Dirac operator as a complex operator.
The above construction can be refined to a multiplicative natural transformation of homology theories $$R:MSpin\to KO[[q]]\ .$$
Let $f:M\to X$ be a map from a closed $l$-dimensional spin manifold $M$ to a space $X$ representing the bordism class $[M,f]\in MSpin_l(X)$.
If we choose a Riemannian metric on $M$, and $V\to M$ is a real vector bundle with connection, then we can define
the real $K$-homology class
$[D_M\otimes V]\in KO_{l}(M)$ of the Dirac operator of $M$ twisted by $V$.
It is independent of the choice of the geometric structures.
The value of $$R:MSpin_l(X)\to KO[[q]]_l(X)$$ on $[M,f]\in MSpin_l(X)$ is then given by
$$R([M,f]):=\sum_{n\ge 0} q^n f_*[D_M\otimes R_n(TM)]\in KO_l(X)[[q]]=KO[[q]]_l(X)\ .$$
The formal power series defined in (\ref{zfquefhefwefwef}) is multiplicative in the sense that $R(V\oplus W)=R(V)\otimes R(W)$. This easily implies that the transformation $R$
is multiplicative.
\subsection{A string bordism invariant in dimension $4m-1$}\label{dzuidwqdqwd}
Let $m\ge 1$ and consider a $4m-1$-dimensional closed string manifold $(M,\alpha^{top})$. We choose a Riemannian metric on $M$ and get a geometric manifold $\mathcal{M}$. The Riemannian metric gives a Levi-Civita connection $\nabla^{TM}$
which together with the trivial connection on the trivial bundle $\Theta_\mathbb{R}=M\times \mathbb{R}$ induces a connection on the virtual bundles $R_n(TM\oplus \Theta_\mathbb{R})$ for all $n\ge 0$.
We choose real tamings $(\mathcal{M}\otimes R_n(TM\oplus \Theta_\mathbb{R}))_t$ for all $n\ge 0$.
Finally we choose a geometric refinement $\alpha$ of the topological string structure $\alpha^{top}$. It gives rise to a form $H_\alpha\in \Omega^3(M)$ satisfying
$dH_\alpha=\frac{1}{2}p_1(\nabla^{TM})$.
\begin{ddd}\label{uiqdwqdqwdwqd}
We define the formal power series
\begin{equation}\label{formal}\tilde b^{an}(\mathcal{M},\alpha,t):= 2\kappa_m\int_M H_{\alpha}\wedge \tilde \Phi(\nabla^{TM})+\kappa_m\sum_{n\ge 0} q^n \eta((\mathcal{M}\otimes R_n(TM\oplus \Theta_\mathbb{R}))_t) \in \mathbb{R}[[q]] \ ,\end{equation}
where
$$\kappa_m=\left\{\begin{array}{cc}
1&m\equiv 0(2)\\
\frac{1}{2}&m\equiv 1(2)
\end{array}\right.\ ,$$
and we choose tamings which are compatible with the real structure.
\end{ddd}
The last entry in the list of variables of $\tilde b^{an}$ indicates the dependence of the formal power series on the choice of the tamings.
Recall the Definition \ref{uzifefwfwefewfwef} of the group $T_{2m}$.
\begin{ddd}\label{uzduiwqdqwd}
We let $$ b^{an}(M,\alpha^{top}):=[\tilde b^{an}(\mathcal{M},\alpha,t)]\in T_{2m}$$ denote the class in $T_{2m}$ represented by the formal power series $\tilde b^{an}(\mathcal{M},\alpha,t)$.
\end{ddd}
This notation is justified by the following Lemma.
\begin{lem}
The class $b^{an}(M,\alpha^{top})\in T_{2m}$ is an invariant of the string bordism class of the $4m-1$-dimensional string manifold $(M,\alpha^{top})$. In particular, it is independent of the choice of geometric structures and the taming involved in the definition of $\tilde b^{an}(\mathcal{M},\alpha,t)$.
\end{lem}
{\it Proof.$\:\:\:\:$}
Let $(Z,\tilde \alpha)$ be a string bordism between
$(M,\alpha)$ and $(M^\prime,\alpha^\prime)$.
Then we choose a Riemannian metric on $Z$ which extends the given metrics on $M$ and $M^\prime$ with product structures. We have decompositions of geometric bundles $TZ_{|M}\cong TM\oplus \Theta_{\mathbb{R}}$ and $TZ_{|M^\prime}\cong TM^\prime\oplus \Theta_{\mathbb{R}}$, where the trivial summands correspond to the normal bundle.
Therefore,
for all $n\ge 0$ the geometric bundles
$R_n(TZ)$ extend the geometric bundles $R_n(TM\oplus \Theta_\mathbb{R})$ and $R_n(TM^\prime\oplus \Theta_\mathbb{R})$. The tamings $(\mathcal{M}\otimes R_n(TM\oplus \Theta_\mathbb{R}))_t$ and $(\mathcal{M}^\prime\otimes R_n(TM^\prime\oplus \Theta_\mathbb{R}))_{t^\prime}$ induce a boundary taming
$(\mathcal{Z}\otimes R_n(TZ))_{bt}$.
The index theorem \cite[Thm. 2.2.18]{MR2191484} for boundary tamed manifolds gives
\begin{eqnarray*}\lefteqn{
{\tt index}(\mathcal{Z}_{bt}\otimes R_n(TZ))}&&\\&&=\int_{Z} \hat{\mathbf{A}}(\nabla^{TZ})\wedge {\mathbf{ch}}(\nabla^{R_n(TZ)}) + \eta((\mathcal{M}^\prime\otimes R_n(TM^\prime\oplus \Theta_\mathbb{R}))_{t^\prime})-\eta((\mathcal{M}\otimes R_n(TM\oplus \Theta_\mathbb{R}))_t)\ .
\end{eqnarray*}
If $m$ is odd, then ${\tt index}((\mathcal{Z}\otimes R_n(TZ))_{bt})$ is even. Using that (see (\ref{zduzwtduqwdwd}))
\begin{equation}\label{uidqwdqwdqwd}
\sum_{n\ge 0} q^n \hat{\mathbf{A}}(\nabla^{TZ})\wedge {\mathbf{ch}}(\nabla^{R_n(TZ)})=\Phi(\nabla^{TZ})\ ,
\end{equation}
we get the following equality in $T_{2m}$:
\begin{eqnarray}\lefteqn{\hspace{-2cm}
\left[\kappa_m\sum_{n\ge 0}q^n\big(\eta((\mathcal{M}^\prime\otimes R_n(TM^\prime\oplus \mathbb{R}))_{t^\prime})-\eta((\mathcal{M}\otimes R_n(TM\oplus \mathbb{R}))_t)\big)\right] }&& \nonumber\\&=&
[-\kappa_m\int_Z \Phi(\nabla^{TZ})] \hspace{4cm}\label{klwefwef}\end{eqnarray}
Using Stokes' theorem, (\ref{udiqdqwdqwdqwd}), and $d H_{\tilde \alpha}=\frac{1}{2} p_1(\nabla^{TZ})$ we calculate
\begin{eqnarray}
2\int_{M^\prime} H_{\alpha^\prime}\wedge \tilde \Phi(\nabla^{TM^\prime}) -
2\int_{M } H_{\alpha }\wedge \tilde \Phi(\nabla^{TM})&=&\int_Z 2dH_{\tilde \alpha} \wedge
\tilde \Phi(\nabla^{TZ})\nonumber\\
&=&\int_Z \Theta(\nabla^{TZ}) e^{G_2 p_1(\nabla^{TZ})}-\int_Z\Theta(\nabla^{TZ})\nonumber\\
&=&\int_Z \Phi(\nabla^{TZ})-\int_Z\Theta(\nabla^{TZ})\label{uihzdqwieerqweqwe}\ .
\end{eqnarray}
Note that $\int_Z\Theta(\nabla^{TZ})\in \mathcal{M}^\mathbb{R}_{2m}$ so that
\begin{equation}\label{uwdqwdqwd}\left[2\kappa_m\int_{M^\prime} H_{\alpha^\prime}\wedge \tilde \Phi(\nabla^{TM^\prime}) -
2\kappa_m\int_{M } H_{\alpha }\wedge \tilde \Phi(\nabla^{TM})\right]=
\left[\kappa_m\int_Z \Phi(\nabla^{TZ})\right]
\end{equation}
in $T_{2m}$.
If we combine
(\ref{uwdqwdqwd}) and (\ref{klwefwef}), then we get
$b^{an}(M,\alpha)=b^{an}(M^\prime,\alpha^\prime)$.
The independence of $b^{an}(M,\alpha^{top})$ from the geometric structures and tamings
follows from the bordism invariance since we can connect two choices of geometric structures and tamings by corresponding structures on
a cylinder $Z=[0,1]\times M$ which provides a bordism.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
The invariant $b^{an}(M,\alpha^{top})$ is clearly additive under disjoint unions of string manifolds.
We have therefore defined a homomorphism
\begin{equation}\label{udiudwqdqwdqwdwqd}
b^{an}:MString_{4m-1}\to T_{2m}\ ,\quad [M,\alpha^{top}]\mapsto b^{an}(M,\alpha^{top})\ .
\end{equation}
The main goal of the present paper is to understand the homotopy theoretic meaning
of this global analytic construction.
\subsection{A geometric expression}\label{ioqdqwd}
Since the construction of $b^{an}$ involves spectral invariants of Dirac operators,
a direct evaluation of $b^{an}$ is complicated. Our analysis of $b^{an}$ relies on the comparison with a homotopy theoretic version $b^{top}:A_{4m-1}\to T_{2m}$. The comparison between
$b^{an}_{|A_{4m-1}}$ and $b^{top}$ is achieved via an intermediate construction
$$b^{geom}:A_{4m-1}\to T_{2m}$$
using differential geometry, which we present in this subsection.
We consider a closed string manifold $(M,\alpha^{top})$ of dimension $4m-1$ which represents an element $[M,\alpha^{top}]\in A_{4m-1}\subseteq MString_{4m-1}$. Then we can choose a spin zero bordism $Z$ of $M$. We choose a connection $\nabla^{TZ}$ on $TZ$ and let $\nabla^{TM}$ be its restriction to $M\cong \partial Z$.
Furthermore, we let $\alpha$ be a geometric string structure which refines
$\alpha^{top}$ based on the spin connection on $TM$ induced by $\nabla^{TM}$.
Then we can define the formal power series
\begin{equation}\label{w8e9fwefwfwefwf}
\tilde b^{geom}(M,\alpha,\nabla^{TZ}):=2\kappa_m \int_M H_\alpha\wedge \tilde \Phi(\nabla^{TM})-\kappa_m \int_Z \Phi(\nabla^{TZ}) \in \mathbb{R}[[q]]\ .\end{equation}
\begin{lem}\label{udidqwdqwd}
In $T_{2m}$ we have the equality
$$[\tilde b^{geom}(M,\alpha,\nabla^{TZ})]=b^{an}(M,\alpha^{top})\ .$$
\end{lem}
{\it Proof.$\:\:\:\:$}
We first observe that $[\tilde b^{geom}(M,\alpha,\nabla^{TZ})]$ does not depend on the choice of the
connection $\nabla^{TZ}$ and the corresponding geometric refinement $\alpha$ of $\alpha^{top}$.
Let $\nabla^{TZ}_i$ and $\alpha_i$, $i=0,1$, be two choices of geometric structures. Then we can find a connection $\nabla^{TW}$ on $W:=[0,1]\times Z$
which restricts to $\nabla^{TZ}_i$ on $\{i\}\times Z$. Furthermore we can find a string structure $\tilde \alpha$ on $V:=[0,1]\times M$ which connects the string structures $\alpha_i$, $i=0,1$. From
(\ref{uihzdqwieerqweqwe})
we get
$$
2\int_M H_{\alpha_1}\wedge \tilde \Phi(\nabla^{TM}_1)- 2 \int_M H_{\alpha_0}\wedge \tilde \Phi(\nabla_0^{TM}) = \int_{V} \Phi(\nabla^{TV}) -\int_{V}\Theta (\nabla^{TV})\ .
$$
The last term belongs to $\mathcal{M}_{2m}^\mathbb{R}$ so that
\begin{equation}\label{d77uqdidqwdwqd}
\left[2\kappa_m \int_M H_{\alpha_1}\wedge \tilde \Phi(\nabla^{TM}_1)-2\kappa_m \int_M H_{\alpha_0}\wedge \tilde \Phi(\nabla_0^{TM})\right]=\left[\kappa_m\int_{V} \Phi(\nabla^{TV})\right]
\end{equation}
in $T_{2m}$.
Again by Stokes' theorem we have
\begin{eqnarray}
0&=&\int_{W} d\Phi(\nabla^{TW})\nonumber
=\int_{\partial W} \Phi(\nabla^{TW})\\
&=&\int_{Z}\Phi(\nabla^{TZ}_1)-\int_{Z}\Phi(\nabla^{TZ}_0)-\int_{V}\Phi(\nabla^{TV})\ .\label{zgsuzqs}
\end{eqnarray}
If we combine (\ref{zgsuzqs}) and (\ref{d77uqdidqwdwqd}), then we get
$[\tilde b^{geom}(M,\alpha_1,\nabla^{TZ}_1)]=[\tilde b^{geom}(M,\alpha_0,\nabla^{TZ}_0)]$.
We now choose a Riemannian metric on $Z$ with product structure near the boundary which extends the metric of $M$. Then we get the geometric manifold $\mathcal{Z}$. The taming $(\mathcal{M}\otimes R_n(TM\oplus \Theta_\mathbb{R}))_t$ induces a boundary taming $(\mathcal{Z}\otimes R_n(TZ))_{bt}$, and by the index formula
$$\kappa_m\int_{Z}\hat{\mathbf{A}}(\nabla^{TZ})\wedge {\mathbf{ch}}(\nabla^{R_n(TZ)})+\kappa_m\eta((\mathcal{M}\otimes R_n(TM\oplus \mathbb{R}))_t)=\kappa_m{\tt index}((\mathcal{Z}\otimes R_n(TZ))_{bt})\in \mathbb{Z}$$
for all $n\ge 0$.
Using (\ref{uidqwdqwdqwd}) we get the equality of classes
$$\left[-\kappa_m \int_Z \Phi(\nabla^{TZ})\right]=\left[\kappa_m\sum_{n\ge 0}q^n \eta((\mathcal{M}\otimes R_n(TM\oplus \mathbb{R}))_t)\right]$$
in $T_{2m}$. Combining this with (\ref{formal}) and (\ref{w8e9fwefwfwefwf})
we conclude that
$$[\tilde b^{geom}(M,\alpha,\nabla^{TZ})]=[\tilde b^{an}(\mathcal{M},\alpha,t)]=b^{an}(M,\alpha^{top})\ .$$
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
As a consequence of Lemma \ref{udidqwdqwd} we have constructed a homomorphism
$$b^{geom}:A_{4m-1}\to T_{2m}$$ which maps $[M,\alpha^{top}]\in A_{4m-1}$ to the class in $T_{2m}$ represented by the formal power series
$b^{geom}(M,\alpha,\nabla^{TZ})$ for some choices of the spin zero bordism $Z$ and the geometric structures. Of course, we have the equality
$$b_{|A_{4m-1}}^{an}=b^{geom}\ .$$
\subsection{A topological expression}\label{udqwidqwdqwdqwd}
In this subsection we retain the assumptions of Subsection \ref{ioqdqwd}
and transform the geometric expression $[\tilde b^{geom}(M,\alpha,\nabla^{TZ})]=b^{geom}(M,\alpha^{top})$ into a purely topological
formula. Let $[0,1)\times M\hookrightarrow
Z$ be a collar and consider a function
$\chi\in C^\infty_c[0,1)$ with $\chi\equiv 1$ near $0$.
Using the normal variable of the collar this function can be transported to the collar. Its extension by zero gives a smooth function on $Z$ which we will also denote by $\chi\in C^\infty(Z)$.
Let $p:[0,1)\times M\to M$ denote the projection.
The form $d(\chi p^* H_\alpha)$
can be considered as a closed form on $Z$ supported near $M$.
We now define the closed form
$$\tilde p_1(\nabla^{TZ},\alpha):=p_1(\nabla^{TZ})-2d(\chi p^*H_\alpha)\in \Omega^4_c(Z)\ .$$
It represents a relative cohomology class \begin{equation}\label{uidquwdqwdqwdqwdqwdqwdq}
\tilde p_1(TZ,\alpha^{top}):=[\tilde p_1(\nabla^{TZ},\alpha)]\in H^4(Z,M;\mathbb{R})\ .
\end{equation}
We define
$$\hat b^{geom}(M,\alpha^{top},Z):=\langle -\kappa_m \tilde \Phi(TZ)\cup \tilde p_1(TZ,\alpha^{top}), [Z,M]\rangle\in \mathbb{R}[[q]]\ .$$
\begin{lem}\label{zdqwud}
In $T_{2m}$ we have the equality
$$b^{geom}(M,\alpha^{top})=[\hat b^{geom}(M,\alpha^{top},Z)]\ .$$
\end{lem}
{\it Proof.$\:\:\:\:$}
We write
\begin{eqnarray*}
\Theta(\nabla^{TZ}) &=&\Phi(\nabla^{TZ}) -\tilde \Phi(\nabla^{TZ}) \wedge p_1(\nabla^{TZ}) \\&=&\Phi(\nabla^{TZ}) -\tilde \Phi(\nabla^{TZ}) \wedge \tilde p_1 -2\tilde \Phi(\nabla^{TZ}) \wedge d(\chi p^*H_\alpha)\\&=&\Phi(\nabla^{TZ}) -\tilde \Phi(\nabla^{TZ}) \wedge \tilde p_1 -2d (\tilde \Phi(\nabla^{TZ}) \wedge \chi p^*H_\alpha)\ .
\end{eqnarray*}
Since $\int_Z \Theta(\nabla^{TZ})\in \mathcal{M}_{2m}^\mathbb{R}$ and
$$\int_{Z} 2d (\tilde \Phi (\nabla^{TZ}) \wedge \chi p^*H_\alpha)=2\int_M \tilde \Phi(\nabla^{TM}) \wedge H_\alpha$$
we conclude that
$$[\tilde b^{geom}(M,\alpha,\nabla^{TZ})]=[- \kappa_m\int_Z \tilde \Phi(\nabla^{TZ})\wedge \tilde p_1]=[\hat b^{geom}(M,\alpha^{top},Z)]$$
in $T_{2m}$.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
\subsection{Calculation in the case $m=1$}
In the present subsection we explain the relation between the three-dimensional constructions of Section \ref{gfdhfdf} and the higher-dimensional theory of the present section.
We have $\kappa_1=\frac{1}{2}$. In (\ref{reuwrewrwer}) we have fixed a generator
$g\in MString_3$ in the form
$$g=[S^3,\alpha^{top}-{\tt or}_{S^3}]=[S^3,\alpha^{top}-{\tt or}_{S^3}]-[S^3,\alpha^{top}]\ ,$$
where $\alpha^{top}$ is the topological string structure coming from the representation of $S^3$ as the boundary of the disc $D^4\subset \mathbb{R}^4$ with its natural topological string structure. The second equality holds since $(S^3,\alpha^{top})$ bounds the string manifold $D^4$, so that $[S^3,\alpha^{top}]=0$.
We use formula (\ref{w8e9fwefwfwefwf}) in order to calculate $b^{geom}(g)\in T_2$.
Because of the structure of the representative of $g$ as a difference, the contribution of the zero bordism $Z$ drops out. In the three-dimensional case only the
cohomological-degree $0$ part $\tilde \Phi_{[0]}=G_2$ of $\tilde \Phi$ contributes. Using (\ref{uiqdqwdwqdqwdwd54545}) we get
$$b^{geom}(g)=b^{geom}([S^3,\alpha^{top}-{\tt or}_{S^3}])-b^{geom}([S^3,\alpha^{top}]) =\left[-2\kappa_1G_2\langle {\tt or}_{S^3},[S^3]\rangle\right] =\left[-G_2\right] \ .$$
Note that
$$G_2=-\frac{1}{24}+ q+\dots\ .$$
The higher terms of the $q$-expansion are all integral.
This implies
\begin{equation}\label{dzuwdzquwd}
b^{geom}(g)= [\frac{1}{24}]\in T_2=(\mathbb{R}/\mathbb{Z})[[q]]\ .
\end{equation}
Therefore $b^{geom}(g)$ has order $24$, and we see that
$b^{geom}:MString_3\to T_2$ is injective.
This shows in particular that the homomorphism $b^{geom}=b^{an}$
is non-trivial at least in some examples.
Let $j:\mathbb{Z}/24\mathbb{Z}\to \mathbb{R}/\mathbb{Z}$ be the inclusion such that
$j([1]):=[\frac{1}{24}]$. Then we get the following relation between the three-dimensional construction $d$ as in Corollary \ref{uqiduwqdqwdwqdd}
and $b^{geom}$:
$$\xymatrix{MString_3\ar[r]^d\ar[d]^{b^{geom}}&\mathbb{Z}/24\mathbb{Z}\ar[d]^j\\T_2\ar[r]^{p_0}&\mathbb{R}/\mathbb{Z}}\ ,
$$
where $p_0:T_2=(\mathbb{R}/\mathbb{Z})[[q]]\to \mathbb{R}/\mathbb{Z}$ takes the constant coefficient.
\section{The secondary index theorem}
\subsection{Construction of $b^{top}$}\label{qdwqdwqd}
In this subsection for all $m\ge 1$ we give a homotopy theoretic construction of a homomorphism
$$b^{top}:A_{4m-1}\to T_{2m}\ .$$
We let $ko\to KO$ denote a connective cover of the real $K$-theory spectrum.
Then we get an induced connective cover
$$c:ko[[q]]\to KO[[q]]\ .$$
We let $tmf$ denote the connective spectrum of topological modular forms
constructed by Goerss, Hopkins and Miller \cite{MR1989190}, \cite{MR2125040}. It fits into the following commutative diagram
of multiplicative transformations
$$\xymatrix{&&ko[[q]]\ar[d]^c\\MString\ar[d]^j \ar[r]^\sigma&tmf\ar[r]^W\ar@{.>}[ur]^w&KO[[q]]\\MSpin\ar@{..>}@/^0.5cm/[rruu]^r\ar[rru]_{R}&}\ .$$
The map $\sigma$ will be called the $tmf$-valued Witten genus. The $tmf$-valued Witten genus and the factorization
$W\circ \sigma=R\circ j:MString \to KO[[q]]$ (for $R$ see Subsection \ref{udqwidqwdqwdqwd444}) have been constructed by Ando, Hopkins, Rezk \cite{ahr}. Since $tmf$ is connective we have a unique factorization
of $W$ over a ring map $w:tmf\to ko[[q]]$.
Similarly, since $MSpin$ is connective, we have a unique ring map $r$ which lifts $R$.
For an abelian group $G$ we let $MG$ denote the associated Moore spectrum. For a spectrum $X$
we write $XG:=X\wedge MG$. The inclusion $\mathbb{Z}\to \mathbb{Q}$ induces a map $M\mathbb{Z}\to M\mathbb{Q}$, and we get an exact triangle
$$ M\mathbb{Z}\to M\mathbb{Q} \to M\mathbb{Q}/\mathbb{Z}\to \Sigma M\mathbb{Z}\ $$in the stable homotopy category.
By smashing with $X$ it induces an exact triangle
$$X\to X\mathbb{Q} \to X\mathbb{Q}/\mathbb{Z}\to \Sigma X$$
which is
functorial in $X$.
For a spectrum $X$ we write $X^{l}:=\Sigma^l X$ for all $l\in \mathbb{Z}$.
We define the spectra $F$ and $G$ to fit into exact triangles
$$G^{-1}\to tmf\stackrel{w}{\to} ko[[q]]\to G$$
and
$$F^{-1}\stackrel{\nu}{\to} MString \stackrel{j}{\to} MSpin\to F\ .$$
We choose a map $\bar\sigma:G\to F$ such that the following diagram becomes a morphism of distinguished triangles
$$\xymatrix{F^{-1}\ar[r]^(.4){\nu}\ar[d]^{-\Sigma^{-1}(\bar\sigma)}&MString\ar[d]^\sigma\ar[r]^j&MSpin\ar[d]^r\ar[r]&F\ar[d]^{\bar\sigma}\\ G^{-1}\ar[r]&tmf\ar[r]^w&ko[[q]]\ar[r]&G}\ .$$
We now consider the following commutative diagram:
\begin{equation}\label{udqidqwdqwdqwd}
\xymatrix{&&&MSpin\mathbb{Q}^{-1}\ar[rr]^{-\Sigma^{-1}(r_\mathbb{Q})}\ar[dd]&&ko[[q]]\mathbb{Q}^{-1}\ar[dd]&\\
&&MSpin^{-1}\ar[ur]\ar[rr]^/-2em/{-\Sigma^{-1}(r)}\ar[dd]&&ko[[q]]^{-1}\ar[dd]\ar[ur]&&\\
&&&F\mathbb{Q}^{-1}\ar[rr]^/-2em/{-\Sigma^{-1}(\bar \sigma_\mathbb{Q})}\ar[dd]&&G\mathbb{Q}^{-1}\ar[dd]&&\\
&&F^{-1}\ar[rr]_/-2em/{-\Sigma^{-1}(\bar\sigma)}\ar[ur]\ar[dd]\ar@/^0.0cm/@{..>}[rrru]^a&&G^{-1}\ar[dd]\ar[ur]_b&&\\
&&&MString\mathbb{Q}\ar[rr]^/-2em/{\sigma_\mathbb{Q}}\ar[dd]&&tmf\mathbb{Q}\ar[dd]&\\
S^{4m-1}\ar@{.>}@/_2.2cm/[ddrr]^0\ar[ddrr]\ar@{~>}@/^3cm/[uuuuurrr]_\omega\ar@{-->}@/_0.3cm/[uuurrr]_/1em/{F_{Z_\mathbb{Q}}}\ar@{-->}@/^0.3cm/[urrr]_/+1.6em/{(M,\alpha^{top})_\mathbb{Q}}\ar@{..>}[rr]_{(M,\alpha^{top})}\ar@{..>}[uurr]^{F_Z}&&MString\ar[ur]\ar[rr]^/-2em/{\sigma}\ar[dd]^j&&tmf\ar[ru]\ar[dd]^/-2em/w&&\\
&\ar@{::>}[dl]_Z&&MSpin\mathbb{Q}\ar[rr]^/-2em/{r_\mathbb{Q}}&&ko[[q]]\mathbb{Q}&\\
&&MSpin\ar[ur]\ar[rr]^r&&ko[[q]]\ar[ru]&&}\ .\end{equation}
The maps $b, \sigma_\mathbb{Q}$ and $\bar \sigma_\mathbb{Q}$ are the smash products of ${\tt id}_ {G^{-1}},\sigma $ and $ \bar \sigma$ with the canonical map $S=M\mathbb{Z}\to M\mathbb{Q}$, respectively. The map $a$ is defined as the composition $a:=-b\circ \Sigma^{-1}(\bar \sigma)$.
Note that $a$ does not depend on the choice of $\bar \sigma$ because by construction
$$\xymatrix{F^{-1}\ar[r]^/-0.5em/\nu\ar[d]^{a}&MString\ar[d]\ar[r]^j&MSpin\ar[d]^r\ar[r]&F\ar[d]^{-\Sigma(a)}\\ G\mathbb{Q}^{-1}\ar[r]&tmf\mathbb{Q}\ar[r]^{w_\mathbb{Q}}&ko[[q]]\mathbb{Q}\ar[r]&G\mathbb{Q}}$$
is a morphism of triangles.
It determines $a$ up to elements coming from
$[MString,ko[[q]]\mathbb{Q}^{-1}]$. Since $MString$ is rationally even we have $[MString,ko[[q]]\mathbb{Q}^{-1}]=0$.
We calculate the homotopy groups of $G\mathbb{Q}$ from the exact sequence
$$\dots \to tmf\mathbb{Q}_j \to ko[[q]]\mathbb{Q}_j \to G\mathbb{Q}_{j}\to tmf\mathbb{Q}_{j+1} \to ko[[q]]\mathbb{Q}_{j+1} \to \dots\ .$$
In particular, since $tmf$ is rationally even, we get the short exact sequences
$$0\to tmf\mathbb{Q}_{4m}\to ko[[q]]\mathbb{Q}_{4m} \to G\mathbb{Q}_{4m}\to 0\ .$$
For $m\ge 0$ we have $ko[[q]]_{4m}\cong \mathbb{Z}[[q]]$, so that $ko[[q]]\mathbb{Q}_{4m}\cong \mathbb{Z}[[q]]\otimes\mathbb{Q}\subset \mathbb{Q}[[q]]$. Note that
$$\mathcal{M}^\mathbb{Q}_{2m}\subset \mathbb{Z}[[q]]\otimes\mathbb{Q}\subset \mathbb{Q}[[q]]\ ,$$
and by \cite{MR1989190} the subgroup $\mathcal{M}^\mathbb{Q}_{2m}$ coincides with the image of
$$w_\mathbb{Q}:tmf\mathbb{Q}_{4m}\to ko[[q]]\mathbb{Q}_{4m}\ .$$ We therefore
get an identification
$$G\mathbb{Q}_{4m}\cong \frac{\mathbb{Z}[[q]]\otimes \mathbb{Q}}{\mathcal{M}^\mathbb{Q}_{2m}}\ .$$
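For instance, for $m=1$ we get $G\mathbb{Q}_{4}\cong \mathbb{Z}[[q]]\otimes\mathbb{Q}$ since $\mathcal{M}^\mathbb{Q}_{2}=0$, while for $m=2$ the one-dimensional space $\mathcal{M}^\mathbb{Q}_{4}$ is spanned by the Eisenstein series $G_4$.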
Let $(M,\alpha^{top})$ be a closed string manifold of dimension $4m-1$.
The associated string bordism class $[M,\alpha^{top}]\in MString_{4m-1}$ is represented by a map
$(M,\alpha^{top}):S^{4m-1}\to MString$ as indicated in diagram (\ref{udqidqwdqwdqwd}).
We assume that $$[M,\alpha^{top}]\in A_{4m-1}\subseteq MString_{4m-1}\ .$$
Then we can choose a spin zero bordism
$Z$ of $M$.
This zero bordism can be interpreted as a zero homotopy $Z$
of the composition $$S^{4m-1}\stackrel{(M,\alpha^{top})}{\longrightarrow} MString\stackrel{j}{\to} MSpin$$ indicated in (\ref{udqidqwdqwdqwd}). It thus gives rise to a map
$F_Z:S^{4m-1}\to F^{-1}$ representing the
element $[F_Z]\in F_{4m}$.
We have
$$a([F_Z])\in G\mathbb{Q}_{4m}\cong \frac{\mathbb{Z}[[q]]\otimes \mathbb{Q}}{\mathcal{M}^\mathbb{Q}_{2m}}\ .$$
Note that
$[F_Z]$ is determined by $[M,\alpha^{top}]\in MString_{4m-1}$ up to elements in the image of $MSpin_{4m}\to F_{4m}$.
By a diagram chase we see that these go into the image of
$ko[[q]]_{4m}\to G\mathbb{Q}_{4m}$, i.e. of
$$\mathbb{Z}[[q]]\to \frac{\mathbb{Z}[[q]]\otimes \mathbb{Q}}{\mathcal{M}^\mathbb{Q}_{2m}}\ .$$
The natural chain of inclusions
$$\mathbb{Z}[[q]]\otimes \mathbb{Q}\to \mathbb{Q}[[q]]\to \mathbb{R}[[q]]$$
induces an inclusion
$$\frac{\mathbb{Z}[[q]]\otimes \mathbb{Q}}{\mathbb{Z}[[q]]+\mathcal{M}^\mathbb{Q}_{2m}}\to \frac{\mathbb{R}[[q]]}{\mathbb{Z}[[q]]+\mathcal{M}^\mathbb{R}_{2m}}=T_{2m}\ .$$
In the following we will use these inclusions implicitly.
By construction, the class $[a([F_Z])]\in T_{2m}$ only depends on the class $[M,\alpha^{top}]\in A_{4m-1}$. Moreover, it depends on this bordism class additively thus justifying the following definition.
\begin{ddd}\label{uzidqwdqwdwqdwd}
We define the homomorphism
\begin{equation}\label{uidqwdqwd}b^{top}:A_{4m-1}\to T_{2m}\ ,\quad [M,\alpha^{top}]\mapsto[a([F_Z])] \ .\end{equation}
\end{ddd}
\subsection{The index theorem}\label{uidqwdqwdqwdqwdee}
In this subsection we show the secondary index theorem $b^{top}=b^{geom}$.
It is very similar to the secondary index theorem $e^{an}=e^{top}$ for the $e$-invariant stated in \cite[Introduction]{bunke-2008}. Let us explain the analogy in greater detail.
In the case of the $e$-invariant the primary invariant of a map $x:S^{2m-1}\to S$ was induced by the unit $S\to MU$. The $e$-invariant was obtained from a zero homotopy of the composition $S^{2m-1}\stackrel{x}{\to} S\to MU$, seen through the eyes of $K$-theory.
The construction can be visualized in the diagram
$$\xymatrix{&\overline{MU}^{-1}\ar[d]\ar[r]&K\mathbb{Q}/\mathbb{Z}^{-1}\\S^{2m-1}\ar@{-->}[urr]_/2em/{e(x)}\ar@{.>}[ur]\ar[dr]^0\ar[r]^x&S\ar[d]&\\
&MU&}\ .$$
Here $\overline{MU}$ is the cofiber of the unit, the dotted arrow is induced by the zero homotopy, and the map $\overline{MU}\to K\mathbb{Q}/\mathbb{Z}$
is explained in \cite{bunke-2008}.
In the present case the primary invariant for a map $x:S^k\to MString$ is induced by the map $j:MString\to MSpin$.
The secondary invariant $b^{top}$ measures a zero homotopy of the composition
$S^{4m-1}\to MString\to MSpin$ (which exists since we assume that $x\in A_{4m-1}$) by means of the Witten genus $r$ with values in $ko[[q]]$.
The visualization is
$$\xymatrix{&F^{-1}\ar[d]\ar[r]^a&G\mathbb{Q}^{-1}\\S^{4m-1}\ar@{-->}[urr]_/2em/{\hat b^{top}(x)}\ar@{.>}[ur]^{F_Z}\ar[dr]^0\ar[r]^x&MString\ar[d]&\\
&MSpin&}\ ,$$
where the class
$$\hat b^{top}(x)=a\circ F_Z\in G\mathbb{Q}_{4m}\cong \frac{\mathbb{Z}[[q]]\otimes \mathbb{Q}}{\mathcal{M}^\mathbb{Q}_{2m}}$$ represents $b^{top}(x)\in T_{2m}$.
The reason that we cannot go directly to $ko[[q]]\mathbb{Q}/\mathbb{Z}$ is that, in contrast to $S_{2m}\otimes\mathbb{Q}=0$ for $m\ge 1$ in the case of the $e$-invariant, in our situation we have $MString_{4m}\otimes \mathbb{Q}\not=0$ in general.
\begin{theorem}\label{e2e7uiwqdqwdwqd}
For all $m\ge 1$ we have the following equality of homomorphisms
$$b^{geom}=b^{top}:A_{4m-1}\to T_{2m}\ .$$
\end{theorem}
{\it Proof.$\:\:\:\:$}
We let $(M,\alpha^{top}):S^{4m-1}\to MString$ be a map
which represents a class $[M,\alpha^{top}]\in A_{4m-1}$.
It produces a closed $4m-1$-dimensional string manifold, also denoted by $(M,\alpha^{top})$, via the Thom-Pontrjagin construction and Lemma \ref{udiqdqwdwqd}.
Since
$MString$ is rationally even the element
$[F_{Z_\mathbb{Q}}]\in F\mathbb{Q}_{4m}$ maps to zero in $MString\mathbb{Q}_{4m}$.
This implies the existence of a lift $\omega:S^{4m-1}\to MSpin\mathbb{Q}^{-1}$ as indicated in (\ref{udqidqwdqwdqwd}).
By a diagram chase we see that the formal power series
$-\Sigma^{-1}(r_\mathbb{Q})(\omega)\in ko[[q]]\mathbb{Q}_{4m}\cong \mathbb{Z}[[q]]\otimes \mathbb{Q}$ represents
$b^{top}([M,\alpha^{top}])$.
Let us give the geometric interpretation of $\omega$.
Since $MString\mathbb{Q}_{4m-1}=0$ there exists a $4m$-dimensional string manifold $(V,\beta^{top})$ and a natural number $L\in \mathbb{N}$ such that
$L (M,\alpha^{top})=\partial (V,\beta^{top})$, where by $L (M,\alpha^{top})$ we denote the disjoint union of $L$ copies of $(M,\alpha^{top})$.
We define a spin manifold $LZ\cup_M -V$, where
$-V$ denotes the spin manifold $V$ with the opposite orientation and spin structure. Then
$\omega=\frac{1}{L}[L Z\cup_M -V]$.
We choose Riemannian metrics on $Z$ and $V$ with product structures which glue nicely.
Then we have associated Levi-Civita connections on the tangent bundles of the manifolds.
By (\ref{uiqdqwdqwdwqdwqd}) or (\ref{udiqdqwdqwdqwdw}) we have
$$-\Sigma^{-1}(r_\mathbb{Q})(\omega)=\frac{\kappa_m}{L}\int_{LZ\cup_M -V} \Phi(\nabla^{T(LZ\cup_M -V)})\in \mathbb{Z}[[q]]\otimes\mathbb{Q}\cong ko[[q]]\mathbb{Q}_{4m}\ .$$
Using a geometric refinement $\beta$ of the string structure $ \beta^{top}$
with restriction $\alpha$ to $M$
we can write by (\ref{udiqdqwdqwdqwd})
$$ \Phi(\nabla^{TV})=2d H_\beta\wedge \tilde \Phi(\nabla^{TV})+
\Theta(\nabla^{TV})\ .$$
By Stokes' theorem we get
\begin{eqnarray*}-\Sigma^{-1}(
r_\mathbb{Q})(\omega)&=&\frac{\kappa_m}{L}\int_{LZ\cup_M -V} \Phi(\nabla^{T(LZ\cup_M -V)})\\&=&\kappa_m\int_Z \Phi(\nabla^{TZ})-2\kappa_m\int_MH_\alpha\wedge \tilde \Phi(\nabla^{TM})-\frac{\kappa_m}{L} \int_V\Theta(\nabla^{TV})\ .
\end{eqnarray*}
The last integral belongs to $\mathcal{M}^\mathbb{R}_{2m}$, and the first two terms together equal $\tilde b^{geom}(M,\alpha,\nabla^{TZ})$, compare with (\ref{w8e9fwefwfwefwf}).
This shows that
$b^{top}([M,\alpha])=b^{geom}([M,\alpha])$.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
\subsection{Factorization over $tmf$}\label{uiffwefwefewf}
In this subsection we show that
$$b^{top}:A_{4m-1}\to T_{2m}$$ defined in Definition \ref{uzidqwdqwdwqdwd} admits a factorization over a homomorphism
$$b^{tmf}:tmf_{4m-1 } \to T_{2m}\ .$$
For the construction of $b^{tmf}$ we consider the following diagram of vertical and horizontal fiber sequences
\begin{equation}\label{diagr2}
\xymatrix{&&&tmf\mathbb{Q}^{-1}\ar[d]\\
&&\:\:\:\:\:ko[[q]]^{-1}\ar[d]\ar[r]&\:\:{}^{\hat x_\mathbb{Q}}\:\:ko[[q]]\mathbb{Q}^{-1}\ar[d]\\&
\:\:\:\: G\mathbb{Q}/\mathbb{Z}^{-2}\ar[r]\ar[d]&\:\:{}^{\tilde x}\:\:G^{-1}\ar[r]\ar[d]&\:\:{}^{\tilde x_\mathbb{Q}}\:\:G\mathbb{Q}^{-1} \ar[d]\\&
\:\:{}^{x_{\mathbb{Q}/\mathbb{Z}}}\:\: tmf\mathbb{Q}/\mathbb{Z}^{-1}\ar[d]^{\bar w}\ar[r]&\:\:{}^x\:\:tmf\ar[d]^w\ar[r]&\:\:\:{}^{x_\mathbb{Q}}\:\: tmf\mathbb{Q}\ar[d]\\\:\:{}^z\:\: ko[[q]]\mathbb{Q}^{-1}\ar[r]&\:\:{}^{\bar w(x_{\mathbb{Q}/\mathbb{Z}})}\:\: ko[[q]]\mathbb{Q}/\mathbb{Z}^{-1}\ar[r]&\:\:\:\:\: ko[[q]]\ar[r]&\:\:\:\:\:ko[[q]]\mathbb{Q}}\ .\end{equation}
The script-size symbols denote elements which will be chased through the diagram during the following discussion.
Let $x\in tmf_{4m-1}$. We are going to construct $b^{tmf}(x)\in T_{2m}$ by a diagram chase.
Since $ko[[q]]_{4m-1}=0$ we can choose a lift
$\tilde x\in G_{4m}$. Its image $\tilde x_\mathbb{Q}\in G\mathbb{Q}_{4m}$ maps to $x_\mathbb{Q}=0$, since $tmf\mathbb{Q}_{4m-1}=0$. Therefore we can further choose a lift $\hat x_\mathbb{Q}\in ko[[q]]\mathbb{Q}_{4m}\cong \mathbb{Z}[[q]]\otimes \mathbb{Q}$ which represents
by definition $b^{tmf}(x)\in T_{2m}$.
Let us check that
$b^{tmf}(x)$ is well-defined independently of the choice of the lifts. The lift
$\tilde x$ is defined uniquely up to elements
in the image $ko[[q]]_{4m}\to G_{4m}$. These are mapped to integral power series in
$ko[[q]]\mathbb{Q}_{4m}\cong \mathbb{Z}[[q]]\otimes \mathbb{Q}$. The lift $\hat x_\mathbb{Q}$ is well-defined up to elements in the image of $tmf\mathbb{Q}_{4m}\to ko[[q]]\mathbb{Q}_{4m}$, i.e. the space of power series $\mathcal{M}_{2m}^\mathbb{Q}$.
Since in $T_{2m}$ integral and modular power series are factored out,
it follows that $b^{tmf}(x)$ is independent of the choices.
\begin{prop}\label{udiqwdqwdqwdq}
For every $m\ge 1$
we have a factorization
$$\xymatrix{MString_{4m-1}\ar[r]^/0.5em/\sigma&tmf_{4m-1}\ar[d]^{b^{tmf}}&\\A_{4m-1}\ar@{_{(}->}[u]\ar[r]^{b^{top}}&T_{2m}}\ .
$$
\end{prop}
{\it Proof.$\:\:\:\:$}
We chase in the diagrams (\ref{udqidqwdqwdqwd}) and (\ref{diagr2}).
Let $y\in A_{4m-1}\subseteq MString_{4m-1}$ and $x=\sigma(y)\in tmf_{4m-1}$.
Then we choose a lift $y_F\in F_{4m}$. The element $\tilde x:=-\Sigma^{-1}(\bar \sigma)(y_F)\in G_{4m}$ can serve as a lift of $x$ in the construction of $b^{tmf}$. The image
$\tilde y_\mathbb{Q}\in F\mathbb{Q}_{4m}$ of $y_F$ can further be lifted to
$\hat y_\mathbb{Q}\in MSpin\mathbb{Q}^{-1}_{4m}$. On the one hand the element $-\Sigma^{-1}(r_\mathbb{Q})(\hat y_\mathbb{Q})\in ko[[q]]\mathbb{Q}_{4m}\cong \mathbb{Z}[[q]]\otimes \mathbb{Q}$
by construction represents $b^{top}(y)$. On the other hand, it can serve
as a lift $\hat x_\mathbb{Q}$ in the construction of $b^{tmf}(x)$ and thus
represents $b^{tmf}(x)$.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
Using the diagram (\ref{diagr2}), we can give the following alternative description of $b^{tmf}(x)$ which will be employed in the Adams spectral sequence calculations in the Subsections \ref{uwiefwefwefwefewfwfef} and \ref{qwddwqdqwdoiuqwdoiwqd} below. Since
$x\in tmf_{4m-1}$ maps to $x_\mathbb{Q}=0$ we can choose a lift $x_{\mathbb{Q}/\mathbb{Z}}\in tmf\mathbb{Q}/\mathbb{Z}_{4m}$.
We let $\bar w(x_{\mathbb{Q}/\mathbb{Z}})\in ko[[q]]\mathbb{Q}/\mathbb{Z}_{4m}$ be its image under the map $\bar w:=w\wedge {\tt id}_{M\mathbb{Q}/\mathbb{Z}}:tmf\mathbb{Q}/\mathbb{Z}\to ko[[q]]\mathbb{Q}/\mathbb{Z}$ induced by $w $. Note that there is a natural projection map
$p:ko[[q]]\mathbb{Q}/\mathbb{Z}_{4m}\cong \mathbb{Z}[[q]]\otimes \mathbb{Q}/\mathbb{Z} \to T_{2m}$.
\begin{lem}\label{duqwiduqwdqwdqwd}
We have the equality
$$-p(\bar w({x_{\mathbb{Q}/\mathbb{Z}}}))=b^{tmf}(x)\ .$$
\end{lem}
{\it Proof.$\:\:\:\:$}
We can lift $\bar w(x_{\mathbb{Q}/\mathbb{Z}})$
to an element $z\in ko[[q]]\mathbb{Q}_{4m}$.
By a diagram chase we see that
$$z+\hat x_\mathbb{Q}\in {\tt im}(tmf\mathbb{Q}_{4m}\to ko[[q]]\mathbb{Q}_{4m})+{\tt im}(ko[[q]]_{4m}\to ko[[q]]\mathbb{Q}_{4m})
$$
so that $\hat x_\mathbb{Q}$ and $-z$ represent the same element in $T_{2m}$. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent
\section{Some details about Adams spectral sequences}
In this section we collect some facts and small results about
the generalized Adams spectral sequence in connection with
$\mathbb{Q}/\mathbb{Z}$-versions of spectra. This material will be used in the analysis of the invariant $b^{tmf}:tmf_{4m-1}\to T_{2m}$ in later sections.
\subsection{The $E_1$-term and the cobar complex}
Let us recall some details of the construction of the generalized Adams spectral sequence
${}^Y E_*^{*,*}(X)$ for a spectrum $X$ and a commutative ring spectrum $Y$ such that $Y_*Y$ is a flat $Y_*$-module.
As a first step we form the Adams resolution of the sphere spectrum
$S$ \cite[Def. 2.2.10]{ravenel}
$$\xymatrix{S\ar[d]&\bar Y^{-1}\ar[d]\ar[l]&\bar Y^{-1}\wedge \bar Y^{-1}\ar[l]\ar[d]&\bar Y^{-1}\wedge \bar Y^{-1}\wedge \bar Y^{-1}\ar[l]\ar[d]&\dots \ar[l]\\Y\ar@{.>}[ur]& Y\wedge \bar Y^{-1}\ar@{.>}[ur]& Y\wedge\bar Y^{-1}\wedge\bar Y^{-1}\ar@{.>}[ur]&Y\wedge \bar Y^{-1}\wedge \bar Y^{-1}\wedge \bar Y^{-1}\ar@{.>}[ur]&}\ .$$
Then we obtain the standard $Y$-based Adams resolution of $X$ by smashing this resolution of $S$ with $X$.
Note that $Y_*X$ is a comodule over the Hopf algebroid $(Y_*,Y_*Y)$.
It gives rise to the reduced cobar complex
$\bar C^*(Y_*X)$, see \cite[A.1.2.11]{ravenel}. There is a decomposition $\overline{Y_*Y}\oplus Y_*\cong Y_*Y$ of $Y_*$-modules (actually there are two such decompositions
induced by the left or right unit of the Hopf algebroid), where $\overline{Y_*Y}\subset Y_*Y$ is the kernel of the counit. Hence the reduced cobar complex is a subcomplex of the cobar complex
$C^*(Y_{*}X)\cong Y_*Y^{\otimes_{Y_*} *}\otimes_{Y_*} Y_*X$.
According to \cite[Prop. 2.2.11]{ravenel} the $E_1$-term of the generalized Adams spectral sequence is given by
$$({}^Y E_1^{*,*}(X),d_1)\cong (\bar C^*(Y_{*}X),d^{cobar})\ .$$
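In the lowest cobar degree this complex is completely explicit: for $m\in Y_*X$ one has
$$d^{cobar}(m)=\psi(m)-1\otimes m\in \overline{Y_*Y}\otimes_{Y_*}Y_*X\ ,$$
where $\psi$ denotes the comodule structure map. Hence ${}^YE_2^{0,*}(X)$ consists exactly of the primitive elements of $Y_*X$; we will use this observation when identifying ${}^YE_2^{0,4m}(tmf)$ with modular forms in Section \ref{fzuefwefwefewf}.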
\subsection{Application to $tmf$}\label{zudwzdqwd}
In this subsection we recall the relevant example of a certain ring spectrum $Y$
introduced by Ravenel \cite{MR737778}.
The $K$-theory functor $K^0$ is represented by the $H$-space $\mathbb{Z}\times BU$. We have an equivalence
$\Omega (\mathbb{Z}\times BU)\cong \Omega BU\cong U$.
This $H$-space represents the functor $K^{-1}$. Hence
$K^{-2}$ is represented by the $H$-space $\Omega U$.
The Bott periodicity transformation $K^{-2}\to K^0$ is represented by a map of $H$-spaces
$\Omega U\to \mathbb{Z}\times BU$. By restriction we get a
map of $H$-spaces
\begin{equation}\label{mmm1}\Omega U(4)\to \mathbb{Z}\times BU\end{equation}
To a map $\xi:X\to \mathbb{Z}\times BU$ we can associate a Thom spectrum $X^\xi$. If $\xi$ is a map of $H$-spaces, then $X^\xi$ is a ring spectrum which is commutative if $\xi$ deloops twice.
It is known that the map (\ref{mmm1}) is a two-fold loop map and we
let $Y$ be the associated commutative ring spectrum.
The spectrum $tmf$ has the characterizing property that the cobar complex
$C^*(Y_{*}tmf)$ is the cobar complex of the Weierstrass Hopf algebroid \cite[14.5]{rezk}.
In particular every group in the complex $C^*(Y_{*}tmf)$ is torsion-free.
The same is true for the reduced cobar complex, and thus for the $E_1$-terms ${}^Y E_1^{*,*}(tmf_{(p)})$ of the $p$-localization $tmf_{(p)}$ of $tmf$ for all primes $p$.
\subsection{$\mathbb{Q}/\mathbb{Z}$-theory}
For abelian groups $A,B$ we denote
$A*B:=Tor_1^\mathbb{Z}(A,B)$. One can identify
\begin{equation}\label{uidqwdqwdqwd55345345}A*\mathbb{Q}/\mathbb{Z}\cong A_{tors}\subseteq A\end{equation}
in a canonical way, where
$A_{tors}\subseteq A$ denotes the torsion subgroup of $A$.
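The identification (\ref{uidqwdqwdqwd55345345}) comes from the long exact $Tor$-sequence associated with $0\to \mathbb{Z}\to \mathbb{Q}\to \mathbb{Q}/\mathbb{Z}\to 0$: since $Tor_1^\mathbb{Z}(A,\mathbb{Q})=0$ it specializes to an exact sequence
$$0\to A*\mathbb{Q}/\mathbb{Z}\to A\to A\otimes \mathbb{Q}\ ,$$
and the kernel of the map $A\to A\otimes \mathbb{Q}$ is precisely $A_{tors}$.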
For a spectrum $X$ we have a functorial short exact sequence
\begin{equation}\label{ttrrz}0\to X_*\otimes \mathbb{Q}/\mathbb{Z} \to X\mathbb{Q}/\mathbb{Z}_*\to X_{*-1}* \mathbb{Q}/\mathbb{Z} \to 0\ .\end{equation}
The $Y$-based Adams resolution of $X\mathbb{Q}/\mathbb{Z}$ is obtained from the $Y$-based Adams resolution
of $X$ by smashing with the Moore spectrum $M\mathbb{Q}/\mathbb{Z}$.
Therefore we get short exact sequences
$$0\to {}^Y E^{s,t}_1(X)\otimes \mathbb{Q}/\mathbb{Z}\to {}^Y E^{s,t}_1(X\mathbb{Q}/\mathbb{Z})\to {}^Y E_1^{s,t-1}(X)*\mathbb{Q}/\mathbb{Z}\to 0\ .$$
The differentials of the $E_1$-term of the $Y$-based generalized Adams spectral sequence are induced by maps of spectra defined before smashing with $M\mathbb{Q}/\mathbb{Z}$. Hence, from the functoriality of (\ref{ttrrz}) in $X$, we get a short exact sequence of complexes whose cohomology groups form long exact sequences involving constituents of the $E_2$-terms of the generalized Adams spectral sequences of $X$ and $X\mathbb{Q}/\mathbb{Z}$.
Now we assume that ${}^Y E_1^{*,*}(X)$ is torsion-free. Then
we have an isomorphism of complexes $${}^Y E^{*,*}_1(X)\otimes \mathbb{Q}/\mathbb{Z}\cong {}^Y E^{*,*}_1(X\mathbb{Q}/\mathbb{Z})\ .$$
From the universal coefficient formula we deduce exact sequences
$$0\to {}^Y E_2^{s,t}(X)\otimes \mathbb{Q}/\mathbb{Z}\to {}^Y E_2^{s,t}(X\mathbb{Q}/\mathbb{Z})\stackrel{\partial}{\to} {}^Y E_2^{s+1,t}(X)* \mathbb{Q}/\mathbb{Z}\to 0\ .$$
Now we consider the distinguished triangle in the stable homotopy category
$$ X\to X\mathbb{Q}\to X\mathbb{Q}/\mathbb{Z}\stackrel{h}{\to} \Sigma X \ .$$
If $Y_*(h)=0$, then by the geometric boundary theorem \cite[Thm. 2.3.4]{ravenel} for $r\ge 2$ there exist connecting maps
$$\delta_r:{}^Y E^{s,t}_r(X\mathbb{Q}/\mathbb{Z})\to {}^Y E_r^{s+1,t}(X)$$
which are compatible with the differentials of the spectral sequences. Furthermore they are filtered versions of $$h_*:X\mathbb{Q}/\mathbb{Z}_{*+1}\to X_*\ .$$
Assume again that ${}^Y E_1^{s,t}(X)$ is torsion-free for all $s,t$. The case $s=0$
yields $Y_*(h)=0$ so that the geometric boundary theorem applies.
In this way, using (\ref{uidqwdqwdqwd55345345}), we can compare the two maps
$$\xymatrix{{}^Y E_2^{s,t}(X\mathbb{Q}/\mathbb{Z})\ar[r]^{\partial}\ar@/_0.6cm/[rr]_{\delta_2}&{}^Y E_2^{s+1,t}(X)_{tors}\ar[r]^i& {}^Y E_2^{s+1,t}(X)}\ ,$$
where $i$ denotes the canonical embedding.
\begin{lem}
We have $i\circ \partial=\delta_2$.
\end{lem}
{\it Proof.$\:\:\:\:$}
Let $(A,\Gamma)$ be a Hopf algebroid such that $\Gamma$ is a flat $A$-module, and let
\begin{equation}\label{uzdiwdqwdqwd}
0\to M\to N\to Q\to 0
\end{equation} be an exact sequence of $(A,\Gamma)$-comodules. Then the associated sequence of reduced cobar complexes $$0\to \bar C^*(M)\to \bar C^*(N)\to \bar C^*(Q)\to 0$$ is exact, too. The boundary operator $${\tt Ext}^s_{(A,\Gamma)}(A,Q)\to {\tt Ext}^{s+1}_{(A,\Gamma)}(A,M)$$ in the associated long exact sequence
is the ${\tt Ext}_{(A,\Gamma)}$-multiplication
with the extension class of the sequence (\ref{uzdiwdqwdqwd}) in
${\tt Ext}^1_{(A,\Gamma)}(Q,M)$.
We now apply this to the exact complex
\begin{equation}\label{udiqwdqwd}
0\to Y_*X\to Y_*X\mathbb{Q}\to Y_*X\mathbb{Q}/\mathbb{Z}\to 0
\end{equation}
of $Y_*Y$-comodules.
The Adams $E_1$-terms give the exact sequences of reduced cobar complexes.
The map $i\circ \partial$ is the boundary operator of this sequence. By
\cite[Thm 2.3.4 (a)]{ravenel} the map $\delta_2$ is given by multiplication with the extension class of (\ref{udiqwdqwd}) in ${\tt Ext}^1_{(A,\Gamma)}(Q,M)$.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
\subsection{Application to $tmf$}\label{udiqwdwqd}
In this short subsection we specialize the facts obtained above to the case of $tmf$.
We let $Y:=\Omega U(4)^\xi$ be as in Subsection \ref{zudwzdqwd}. Since
${}^Y E_1^{s,t}(tmf_{(p)})$ is torsion-free for all $s,t$ we get a short exact sequence
$$0\to {}^Y E^{s,t}_2(tmf_{(p)})\otimes \mathbb{Q}/\mathbb{Z}\to {}^Y E_2^{s,t}(tmf_{(p)}\mathbb{Q}/\mathbb{Z})\stackrel{\partial}{\to} {}^Y E_2^{s+1,t}(tmf_{(p)})_{tors}\to 0\ .$$
Since $\partial$ is induced by $\delta_2$ it is compatible with the differentials of the spectral sequences and is a filtered version of
$$tmf_{(p)}\mathbb{Q}/\mathbb{Z}_{t-s}\to tmf_{(p),t-s-1}\ .$$
\subsection{Application to $ko$}\label{uiqdqwdqwdqwd234324234}
As before we let $ko$ denote the connective real $K$-theory spectrum.
Replacing $U(4)$ by $U(2)$ in the construction of the spectrum $Y$ in Subsection \ref{zudwzdqwd}
one gets a commutative ring spectrum $Y^\prime$. The Hopf algebroid $(Y^\prime_*,Y^\prime_*Y^\prime)$ is flat. It can be used to calculate
$ko_*$. The Hopf algebroid
\begin{equation}\label{ifowfwef}
(Y^\prime_*ko,Y^\prime_*Y^\prime\otimes_{Y^\prime_*}Y^\prime_*ko)\cong (\mathbb{Z}[b,c],\:\mathbb{Z}[b,c,r])
\end{equation}
with right unit
$$\left(\begin{array}{c}b\\c
\end{array}\right) \mapsto \left(\begin{array}{c}b+2r\\c+br+r^2
\end{array}\right)$$
has been identified in \cite{hill}. Furthermore, in this reference it has been shown that the Hopf algebroid (\ref{ifowfwef}) is equivalent to the Hopf algebroid
$$(\mathbb{Z}[b],\: \mathbb{Z}[b,r]/(r^2+2r))$$
with the right unit
$b\mapsto b+2r$. It can be used to calculate
the $E_2$-term of the generalized Adams spectral sequence
$${}^{Y^\prime} E^{s,t}_r(ko)\Rightarrow ko_{t-s}\ .$$
The inclusion $U(2)\to U(4)$ induces a map of ring spectra
$Y^\prime\to Y$, and therefore a morphism of spectral sequences
${}^{Y^\prime} E^{*,*}_*(ko)\to {}^{Y} E^{*,*}_*(ko)$.
One can check that this map is an isomorphism from the $E_2$-term on.
The $E_1$-term is the reduced cobar complex
of the $Y^\prime_*Y^\prime$-comodule $Y^\prime_*ko$.
It consists of torsion-free abelian groups. The same is true after base-change to $Y$.
Using the chain of invariant ideals
$$(2)\subseteq (2,b)\subseteq \mathbb{Z}[b]$$
and a routine calculation with Bockstein spectral sequences we obtain
$${}^YE_2^{*,*}(ko)\cong \mathbb{Z}[b,r]/(2r)\ ,\quad |b|=(0,4)\ ,\: |r|=(1,2)\ .$$
The equality $ko_3=0$ forces the differential $d_3b=r^3$
which determines $d_3$ algebraically. By sparseness, the spectral sequence degenerates
at $E_4$. Its $E_4=E_\infty$-term reproduces the familiar computation of $ko_*$.
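Let us make the differential explicit, since we will use the outcome below. Because of $2r=0$ the Leibniz rule gives
$$d_3(b^kr^j)=k\, b^{k-1}r^{j+3}\ ,$$
which vanishes exactly for even $k$. Hence in total degrees $t-s\le 8$ the $E_4$-term is generated by $1$, $r$, $r^2$, $2b$ and $b^2$, reproducing $ko_*\cong \mathbb{Z},\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/2\mathbb{Z},0,\mathbb{Z},0,0,0,\mathbb{Z}$ in this range. In particular the generator of $ko_4$ is detected by $2b$, in accordance with the normalization $[a_m]_2=2b^{2m-1}$ fixed below.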
The result is visualized in the following chart. \newpage
\centerline{\includegraphics[scale=0.8]{chart1.ps}}
\mbox{}\vspace{1cm}\\\centerline{\scriptsize${}^YE_r^{s,t}(ko)\Rightarrow ko_*$}\vspace{1cm}
The horizontal (resp. vertical) axis represents $t-s$ (resp. $s$).
Boxes indicate copies of $\mathbb{Z}$ and dots copies of $\mathbb{Z}/2\mathbb{Z}$. The $E_\infty$-term
is black. The $E_2$-term and the differentials are gray.
The element $b^2\in {}^YE^{0,8}_2(ko)$ is a periodicity generator of dimension $8$.
Now we turn to $ko\mathbb{Q}/\mathbb{Z}$.
We have a distinguished triangle
\begin{equation}\label{hdzqdwqdqwdqwd}
ko\to ko\mathbb{Q}\to ko\mathbb{Q}/\mathbb{Z}\stackrel{l}{\to} \Sigma ko
\end{equation}
in $ko$-modules.
Since $Y_*ko$ is torsion-free the map $Y_*(l)$ vanishes. The geometric boundary theorem applies and shows that
the second map in the exact sequence
$$0\to {}^Y E_2^{s,t}(ko)\otimes \mathbb{Q}/\mathbb{Z}\to {}^Y E^{s,t}_2(ko\mathbb{Q}/\mathbb{Z})\stackrel{\partial}{\to} {}^Y E_2^{s+1,t}(ko)_{tors}\to 0$$
is compatible with the differential $d_3$.
We get the following chart of ${}^Y E^{*,*}_*(ko\mathbb{Q}/\mathbb{Z})$.
\vspace{1cm}
\centerline{\includegraphics[scale=0.8]{chart2.ps}}
\vspace{1cm}
\centerline{\scriptsize${}^YE_r^{s,t}(ko\mathbb{Q}/\mathbb{Z})\Rightarrow ko\mathbb{Q}/\mathbb{Z}_*$}\vspace{1cm}
The circles indicate copies of $\mathbb{Q}/\mathbb{Z}$ whose elements can be written as $b^k\otimes [q]$ with $q\in \mathbb{Q}$ and $[q]\in \mathbb{Q}/\mathbb{Z}$. The dots indicate copies of $\mathbb{Z}/2\mathbb{Z}$.
All indicated differentials are forced by the compatibility of $\partial$ with $d_3$. The spectral sequence degenerates at $E_4=E_\infty$. Starting in dimension $2$, the $E_\infty$-term is $8$-periodic with periodicity generator $b^2$ acting via the module structure under ${}^YE^{*,*}_{\infty}(ko)$.
For our applications we are especially interested
in $ko\mathbb{Q}/\mathbb{Z}_{4m}$.
For all $m\ge 0$ we have
$${}^Y E_2^{0,4m}(ko\mathbb{Q}/\mathbb{Z})\cong \mathbb{Q}/\mathbb{Z}\ , \quad {}^Y E_2^{2,8m+6}(ko\mathbb{Q}/\mathbb{Z})\cong \mathbb{Z}/2\mathbb{Z}\ .$$
We let $s\in {}^Y E_2^{2,6}(ko\mathbb{Q}/\mathbb{Z})\cong \mathbb{Z}/2\mathbb{Z}$ be the non-trivial element. It is a permanent cycle.
There is an extension problem in dimensions $8m-4$. By the $8$-periodicity it suffices to solve it in dimension $4$.
The long exact sequence in homotopy groups associated with
the triangle (\ref{hdzqdwqdqwdqwd}) gives isomorphisms
$$ko\mathbb{Q}/\mathbb{Z}_{4}\cong ko_{4}\otimes \mathbb{Q}/\mathbb{Z}\cong \mathbb{Q}/\mathbb{Z}\ .$$
This solves the extension problem for $ko\mathbb{Q}/\mathbb{Z}_{4}$ as follows: The Adams filtration of
$ko\mathbb{Q}/\mathbb{Z}_{4}\cong \mathbb{Q}/\mathbb{Z}$ is given by
$\mathbb{Z}/2\mathbb{Z}\subset \mathbb{Q}/\mathbb{Z}$, where the subgroup is in filtration $\ge 2$.
We let $\vartheta\in {\tt Filt}^2 ko\mathbb{Q}/\mathbb{Z}_{4}$ denote the non-trivial element. It is detected by
$[\vartheta]_2=s$.
We now observe from the chart of the spectral sequence ${}^Y E_2^{*,*} (ko\mathbb{Q}/\mathbb{Z})$
that $${\tt Filt}^{3} ko\mathbb{Q}/\mathbb{Z}_{*}=0\ .$$
In the following we introduce some notation to write elements in $ko\mathbb{Q}/\mathbb{Z}_*$ and their
relatives in ${}^YE_\infty^{*,*}(ko\mathbb{Q}/\mathbb{Z})$.
Let $a_m\in ko_{8m-4}\cong \mathbb{Z}$ be the generator detected by the uniquely determined element $[a_m]_2=2b^{2m-1}$. For $q\in \mathbb{Q}$ we can then consider the class $[q a_m]\in ko\mathbb{Q}/\mathbb{Z}_{8m-4}\cong ko\mathbb{Q}_{8m-4}/ko_{8m-4}$.
If $2q\not\in \mathbb{Z}$, then this class is detected by $2b^{2m-1}\otimes [q]\in
{}^Y E_2^{0,8m-4}(ko\mathbb{Q}/\mathbb{Z})$. If $q=\frac{1}{2}$, then
$[q a_m]$ is detected by the unique element $[qa_m]_2=b^{2m-2}s\in {}^Y E_2^{2,8m-2}(ko\mathbb{Q}/\mathbb{Z})$.
If $a\in ko_{8m}\cong \mathbb{Z}$ is detected by $[a]_2=b^{2m}\in {}^YE^{0,8m}_2(ko)$,
then for $q\in \mathbb{Q}$ the class $[qa]\in ko\mathbb{Q}/\mathbb{Z}_{8m}$ is detected by
$b^{2m}\otimes [q]\in {}^Y E_2^{0,8m}(ko\mathbb{Q}/\mathbb{Z})$.
It is easy to see that for a spectrum $X\in \{ko,\ ko\mathbb{Q}/\mathbb{Z}\}$ and any prime $p$ we have
$${}^Y E_*^{*,*}(X_{(p)})\cong {}^Y E_*^{*,*}(X)\otimes \mathbb{Z}_{(p)}\ ,$$
where $X_{(p)}$ denotes the $p$-localization of the spectrum $X$.
The above spectral sequences can be localized at $p=2$ without any essential changes.
If localized at a prime $p\not=2$ they simplify considerably.
In this case the spectral sequences degenerate at the $E_2$-terms, and
$${}^Y E^{*,*}_2(ko_{(p)} )\cong \mathbb{Z}_{(p)}[b]\ ,$$
$${}^Y E_2^{0,4m}(ko_{(p)}\mathbb{Q}/\mathbb{Z})\cong \mathbb{Q}/\mathbb{Z}_{(p)}$$
and ${}^Y E_2^{s,t}(ko_{(p)}\mathbb{Q}/\mathbb{Z})$ vanishes for all other pairs $(s,t)$.
\section{Calculations at $2$}
\subsection{The result}\label{diwedqwdqwd}
In this section we analyze the localization at $2$
$$b^{tmf}:tmf_{(2),4m-1}\to T_{(2),2m}\ ,\quad m\ge 1\ ,$$
of the maps $b^{tmf}:tmf_{4m-1}\to T_{2m}$ constructed in Subsection \ref{uiffwefwefewf},
where $$T_{(2),2m}:=(T_{2m})_{(2)}:=T_{2m}\otimes \mathbb{Z}_{(2)}$$ is the localization of $T_{2m}$ at $2$.
In view of the surjectivity of the Witten genus
$\sigma:MString_{*}\to tmf_{*}$ (\cite[Theorem 6.25]{MR1989190}) and Proposition \ref{udiqwdqwdqwdq} the result implies
a non-triviality statement about the homomorphism $b^{top}:A_{4m-1}\to T_{2m}$
for an infinite number of $m\ge 1$.
In order to state the result we must name some elements of $tmf_{(2),*}$.
To this end we first recall some parts of the calculations of \cite{MR2508200} of the spectral sequence ${}^Y E_*^{*,*}(tmf_{(2)})$ and of $tmf_{(2),*}$.
Let $\Delta\in \mathcal{M}^\mathbb{Z}_{12}$ be the unique cusp form which is normalized such that $\Delta=q+\dots$.
There exists a unique element
$\Delta_2\in {}^Y E_2^{0,24}(tmf_{(2)})\cong \mathbb{Z}_{(2)}^2$
which corresponds to $\Delta$ under the identification (\ref{ghagdastdtztzqw}).
We know that $8\Delta_2$ is a permanent cycle.
The element $\Delta_2^8\in {}^Y E_2^{0,192}(tmf_{(2)})$ is a permanent cycle. It detects a unique element
$\Delta^8\in tmf_{(2),192}$ which is a periodicity generator of $tmf_{(2),*}$.
The map $\sigma : MString\to tmf$ induces an isomorphism $\sigma : MString_3\to tmf_3$.
The generator $g\in MString_3$ defined in (\ref{reuwrewrwer}) maps to a generator
$\nu:=\sigma(g)$ of $tmf_{(2),3}$, where we omit the localization map
$tmf\to tmf_{(2)}$ from notation. The order of $\nu$ is $8$; it belongs to filtration $1$ and is detected by a unique element $[\nu]_2\in {}^Y E_2^{1,4}(tmf_{(2)})$.
Let $\eta\in tmf_{1}$ be the image of the Hopf map in $S_1$ under the unit $S\to tmf$.
It is detected by $[\eta]_2\in {}^Y E_2^{1,2}(tmf_{(2)})$.
For $i\in \{14,38,74,110,134\}$ we let $[a_i]_2\in {}^Y E_2^{1,i}(tmf_{(2)})\cong \mathbb{Z}/2\mathbb{Z}$ denote generators of the corresponding groups.
The upper half of the fourth column of the following table displays all additive generators of ${}^Y E_\infty^{1,4m}(tmf_{(2)})\subseteq {}^YE_2^{1,4m}(tmf_{(2)})$ in dimensions $4m-1<195$. We fix elements of
$tmf_{(2),4m-1}$ detected by these generators and which are named in the second column.
These elements are unique up to elements of filtration $\ge 3$ (with the exception of $\nu$ which we have fixed above).
The third column lists the order of the elements in the second column. If we multiply the elements of order $8$ (or $4$, respectively) by $4$ (or $2$, respectively), then we get elements in filtration $\ge 3$.
These are detected by elements in ${}^YE_\infty^{3,4m}(tmf_{(2)})$ which do not appear separately in the table.
The lower half of the fourth column lists the remaining permanent cycles of ${}^Y E_2^{3,4m+3}(tmf_{(2)})$ in dimension $4m-1<192$.
The second column is a complete list of additive generators of
$${\tt Filt}^{ 1}tmf_{(2),4m-1}/{\tt Filt}^{ 4}tmf_{(2),4m-1}\ .$$
$$\begin{array}{|c|c|c|c|c|c|}\hline
m&name & {\tt ord} & {}^YE^{*,*}_2(tmf_{(2)}) &b^{tmf}(\dots) & c\in \{\dots\} \\ \hline\hline
1 &\nu &8 &[\nu]_2 &[\frac{3}{8}]& \\ \hline
7&2\nu \Delta&4 &2[\nu]_2\Delta_2&[\frac{c}{4}\Delta] &1,3\\\hline
13&\nu \Delta^2&8 &[\nu]_2\Delta^2_2&[\frac{c}{8}\Delta^2] & 1,5 \\\hline
25&\nu \Delta^4&8 &[\nu]_2\Delta^4_2&[\frac{c}{8}\Delta^4] & 1,5 \\\hline
31&2\nu \Delta^5&4 &2[\nu]_2\Delta^5_2&[\frac{c}{4}\Delta^5] & 1,3 \\\hline
37&\nu \Delta^6&8 &[\nu]_2\Delta^6_2&[\frac{c}{8}\Delta^6] &1,5 \\\hline\hline
4&\eta a_{14}&2&[\eta]_2 [a_{14}]_2&0&\\\hline
10&\eta a_{38}&2&[\eta]_2 [a_{38}]_2&0&\\\hline
19&\eta a_{74}&2&[\eta]_2 [a_{74}]_2&0&\\\hline
29&\eta a_{110}&2&[\eta]_2 [a_{110}]_2&0&\\\hline
34&\eta a_{134}&2&[\eta]_2 [a_{134}]_2&0&\\\hline
\end{array}\ .
$$
The table can be continued in the obvious $192$-periodic way using the multiplication by $\Delta^8$.
There are more elements in dimensions $4m-1$ which are products of
the listed elements with other elements in dimensions divisible by $4$. All these products
are of filtration $\ge 4$.
The main result of the present section is the following proposition.
\begin{prop}\label{uidodqwdqwd}
\begin{enumerate}
\item The values of $b^{tmf}$ on the elements listed in the second column are represented by the formal power series listed in the fifth column of the table.
\item The map $b^{tmf}$ annihilates all elements in filtration $\ge 4$.
\item\label{udiqwdhqwdqwdwqd} For all $m\ge 0$ with $[m]_{\mbox{mod $48$}}\in \{1,7,13,25,31,37\}$
the map $b^{tmf}$ induces an injective map
$$\bar b^{tmf}:{\tt Filt}^{ 1}tmf_{(2),4m-1}/{\tt Filt}^{4}tmf_{(2),4m-1}\to T_{(2),2m}\ .$$
\item For all $m\ge 0$ not covered by \ref{udiqwdhqwdqwdwqd}
the map $b^{tmf}:tmf_{(2),4m-1}\to T_{(2),2m}$ is trivial.
\end{enumerate}
\end{prop}
The remainder of the present section is devoted to the proof of this proposition.
\subsection{Transition to $\mathbb{Q}/\mathbb{Z}$-theory}\label{fuweifwefweftztwf}
From Subsection \ref{udiqwdwqd}
we have an exact sequence
\begin{equation}\label{u7diqwdqwd444}
0\to {}^Y E_2^{s,t}(tmf_{(2)})\otimes \mathbb{Q}/\mathbb{Z}\to {}^Y E_2^{s,t}(tmf_{(2)}\mathbb{Q}/\mathbb{Z})\stackrel{\partial}{\to} {}^Y E_2^{s+1,t}(tmf_{(2)})_{tors}\to 0
\end{equation}
for all $t,s\in \mathbb{Z}$.
The group
${}^Y E_2^{s,t}(tmf_{(2)})$ is finite for all $s\ge 1$ and $t\in \mathbb{Z}$.
This implies that
$ {}^Y E_2^{s,t}(tmf_{(2)})\otimes \mathbb{Q}/\mathbb{Z}=0$
for $s\ge 1$.
Note that
${}^Y E_2^{0,t}(tmf_{(2)})\otimes \mathbb{Q}/\mathbb{Z}$ is a divisible group.
There are no non-trivial maps from divisible groups to finite groups. Hence
the elements in the subgroup $ {}^Y E_2^{0,t}(tmf_{(2)})\otimes \mathbb{Q}/\mathbb{Z}\subseteq {}^Y E_2^{0,t}(tmf_{(2)}\mathbb{Q}/\mathbb{Z})$
are annihilated by all differentials, i.e. they are permanent cycles.
In particular we get an embedding
$$0\to {}^Y E_2^{0,t}(tmf_{(2)})\otimes \mathbb{Q}/\mathbb{Z}\to {}^Y E_\infty^{0,t}(tmf_{(2)}\mathbb{Q}/\mathbb{Z})\ .$$
By an inspection of the calculation \cite{MR2508200} we see
that the maps
$$\delta_r: {}^Y E_r^{s,4m}(tmf_{(2)}\mathbb{Q}/\mathbb{Z})\to {}^YE_r^{s+1,4m}(tmf_{(2)})$$
are isomorphisms for all $r\ge 2$ and $s\ge 1$.
Indeed, they map cycles to cycles.
Since the differentials of ${}^Y E_r^{*,*}(tmf_{(2)}\mathbb{Q}/\mathbb{Z})$ annihilate the subgroups ${}^Y E_2^{0,t}(tmf_{(2)})\otimes \mathbb{Q}/\mathbb{Z}$ and otherwise are induced by the differentials of
${}^Y E_r^{*,*}(tmf_{(2)})$
we further conclude that if $\delta_r(x)$ is a boundary, then so is
$x$.
We therefore have isomorphisms
$$ \delta_r: \frac{ {}^Y E_r^{0,4m}(tmf_{(2)}\mathbb{Q}/\mathbb{Z})}{{}^Y E_2^{0,4m}(tmf_{(2)})\otimes \mathbb{Q}/\mathbb{Z}}\stackrel{\sim}{\to} {}^YE_r^{1,4m}(tmf_{(2)})$$
for all $r\ge 2$ and $r=\infty$.
\subsection{The map to $ko[[q]]$}\label{uwiefwefwefwefewfwfef}
The morphism of ring spectra $w:tmf\to ko[[q]]$ induces a morphism $E_r(w)$ of $Y$-based generalized Adams spectral sequences.
Note that
${}^Y E_2^{*,*}(ko[[q]])\cong {}^Y E^{*,*}_2(ko)[[q]]$.
The spectra $ko[[q]]_{(2)}$, $tmf_{(2)}\mathbb{Q}/\mathbb{Z}$ and $ko[[q]]_{(2)}\mathbb{Q}/\mathbb{Z}$ are modules over $tmf_{(2)}$.
We get induced module structures on the spectral sequences.
Recall the element $\Delta_2\in {}^Y E_2^{0,24}(tmf_{(2)})$ introduced in Subsection \ref{diwedqwdqwd}.
We have
$$E_2(w)(\Delta_2)=b^{6} \Delta\in {}^Y E_2^{0,24}(ko[[q]]_{(2)})\cong {}^Y E_2^{0,24}(ko)[[q]]_{(2)}\ .$$
We let $\bar w=w\wedge {\tt id}_{M\mathbb{Q}/\mathbb{Z}}:tmf_{(2)}\mathbb{Q}/\mathbb{Z}\to ko[[q]]_{(2)}\mathbb{Q}/\mathbb{Z}$ denote the map induced by $w$.
Since $${\tt Filt}^{ 3}ko[[q]]_{(2)}\mathbb{Q}/\mathbb{Z}_*=0$$ it is clear that
$\bar w$ annihilates ${\tt Filt}^{3}tmf_{(2)}\mathbb{Q}/\mathbb{Z}_*$. It now follows from Lemma \ref{duqwiduqwdqwdqwd} that $b^{tmf}$ annihilates
${\tt Filt}^{4}tmf_{(2),*}$. This is Assertion 2 of Proposition \ref{uidodqwdqwd}.
Consequently, in dimensions $4m-1$ the map $b^{tmf}$ annihilates all elements except possibly those listed in the table in Subsection \ref{diwedqwdqwd}.
We now explain how we get the listed values of $b^{tmf}$.
Recall the generator $\nu\in tmf_{(2),3}$ of order $8$. We choose a lift
$\nu_{\mathbb{Q}/\mathbb{Z}}\in tmf_{(2)}\mathbb{Q}/\mathbb{Z}_{4}$ as in (\ref{diagr2}).
By (\ref{dzuwdzquwd}) we know that
$b^{top}(g)=[\frac{1}{24}]\in T_2= \mathbb{R}[[q]]/\mathbb{Z}[[q]]$. Therefore, using that
$[\frac{1}{24}]=[\frac{3}{8}]$ in $\mathbb{Q}/\mathbb{Z}_{(2)}$, we get by Lemma \ref{duqwiduqwdqwdqwd}
$$\bar w( \nu_{\mathbb{Q}/\mathbb{Z}})=[-\frac{3}{8}] \in ko[[q]]_{(2)}\mathbb{Q}/\mathbb{Z}_4\cong \mathbb{Z}_{(2)}[
[q]]\otimes \mathbb{Q}/\mathbb{Z}\ .$$
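The $2$-local congruence used here is the elementary identity
$$\frac{1}{24}-\frac{3}{8}=\frac{1-9}{24}=-\frac{1}{3}\in \mathbb{Z}_{(2)}\ ,$$
so that $[\frac{1}{24}]$ and $[\frac{3}{8}]$ indeed agree in $\mathbb{Q}/\mathbb{Z}_{(2)}$.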
From the discussion at the end of Subsection \ref{uiqdqwdqwdqwd234324234} it follows that
$$E_2(\bar w)([ \nu_{\mathbb{Q}/\mathbb{Z}}]_2)
=b\otimes [-\frac{3}{4}]\in E_2^{0,4}(ko[[q]]_{(2)}\mathbb{Q}/\mathbb{Z})\cong \mathbb{Z}[[q]]_{(2)}\otimes \mathbb{Q}/\mathbb{Z}\ .$$
Let $\lambda=2\nu \Delta\in {\tt Filt}^{1}tmf_{(2),27}$ be detected by $[\lambda]_2:=2[\nu]_2\Delta_2$.
Then we can choose a lift
$ \lambda_{\mathbb{Q}/\mathbb{Z}}\in tmf_{(2)}\mathbb{Q}/\mathbb{Z}_{27}$ which is detected by
$[\lambda_{\mathbb{Q}/\mathbb{Z}}]_2=2[ \nu_{\mathbb{Q}/\mathbb{Z}}]_2 \Delta_2$ (note that the lift $ \lambda_{\mathbb{Q}/\mathbb{Z}} $ is unique up to elements coming from $tmf_{(2)}\mathbb{Q}_{28}\cong \mathbb{Q}$).
This follows from the condition $\partial [ \lambda_{\mathbb{Q}/\mathbb{Z}}]_2=[\lambda]_2$\ ,
where $\partial$ is as in (\ref{u7diqwdqwd444}), and
$\partial (2[\nu_{\mathbb{Q}/\mathbb{Z}}]_2 \Delta_2)=2\Delta_2 \partial ([ \nu_{\mathbb{Q}/\mathbb{Z}}]_2)$.
We conclude that
$$E_2(\bar w)([\lambda_{\mathbb{Q}/\mathbb{Z}}]_2)=E_2(\bar w)([\nu_{\mathbb{Q}/\mathbb{Z}}]_2)\cdot
2\Delta_2=b^7\otimes [\frac{1}{2}\Delta]\in E_2^{0,28}(ko[[q]]_{(2)}\mathbb{Q}/\mathbb{Z})\ .$$
Lemma \ref{duqwiduqwdqwdqwd} implies
that
$$b^{tmf}(2\nu \Delta)=[\frac{c}{4}\Delta]\in T_{(2),14}\ ,$$
where $c=1$ or $3$. This indeterminacy is unavoidable since
$2\nu \Delta$ is only determined up to elements of higher filtration.
By a similar argument
$$b^{tmf}(\nu\Delta^2)=[\frac{c}{8}\Delta^2]\in T_{(2),26}$$
with $c\in \{3,7\}$. In a similar manner we get the value of $b^{tmf}$ on the remaining entries of the upper part of the second column.
Turning to the elements listed in the lower part of the second column one
shows $b^{tmf}(\eta a_i)=0$ as follows.
Note that $\eta a_i\in {\tt Filt}^3 tmf_{(2),i+1}$.
Therefore $b^{tmf}(\eta a_i)$ would be represented by an element in
${\tt Filt}^{2} ko[[q]]\mathbb{Q}/\mathbb{Z}_{i+2}$.
In all cases we have $i+2\equiv 0 \:\mbox{mod $8$}$.
But then
${\tt Filt}^2 ko[[q]]\mathbb{Q}/\mathbb{Z}_{i+2}=0$.
We thus have shown Assertions 1 and 4 of Proposition \ref{uidodqwdqwd}.
It remains to show Assertion 3. It suffices to show that
${\tt ord}(b^{tmf}(x))={\tt ord}(x)$ for the generators of the cyclic groups
${\tt Filt}^{1}tmf_{(2),4m-1}/{\tt Filt}^4tmf_{(2),4m-1}$ listed in the first column
of the table and their multiples by $\Delta^8$. The assertion immediately follows from Corollary \ref{zdwdqwdqwdwddqwdq}.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
\section{Calculations at $3$}
\subsection{The result}\label{diwedqwdqwd1}
Compared with the localization at $2$, the structure of the spectral sequence ${}^Y E^{*,*}_*(tmf_{(3)})$ and of $tmf_{(3),*}$
is much simpler. We again start by recalling
the calculations of \cite{MR2508200}.
There is a unique element $\Delta_2\in {}^Y E_2^{0,24}(tmf_{(3)})$
which corresponds to $\Delta$ under the identification (\ref{ghagdastdtztzqw}).
We know that
$3\Delta_2$ is a permanent cycle.
The element $\Delta_2^3\in {}^Y E_2^{0,72}(tmf_{(3)})$ is a permanent cycle and detects a unique element
denoted by $\Delta^3\in tmf_{(3),72}$ which is a periodicity generator.
The generator $g\in MString_3$ defined in (\ref{reuwrewrwer}) maps to a generator
$\nu:=\sigma(g)$ of $tmf_{(3),3}$ of order $3$ in filtration $1$ detected by $[\nu]_2\in {}^Y E_2^{1,4}(tmf_{(3)})$.
The following table gives the complete list of additive generators of $tmf_{(3),4m-1}$ for
$4m-1< 75 $. All these elements live in filtration one and are uniquely determined.
The table can be continued in the obvious $72$-periodic way using multiplication by $\Delta^3$.
$$\begin{array}{|c|c|c|c|c|}\hline
m&name & {\tt ord} &{}^YE^{*,*}_2(tmf_{(3)}) &b^{tmf}(\dots) \\ \hline\hline
1 &\nu &3 &[\nu]_2 &[\frac{2}{3}] \\ \hline
7&\nu \Delta&3 &[\nu]_2\Delta_2&[\frac{2}{3}\Delta]
\\\hline
\end{array}\ .
$$
The third column displays the order of the element. The last two columns display the elements in ${}^YE^{*,*}_2(tmf_{(3)})$ which detect the elements in the second column, and the value
of the invariant $b^{tmf}$ in $T_{(3),2m}$, expressed as a class of a rational formal power series.
\begin{prop}\label{zwdqdwqd}
\begin{enumerate}
\item The value of $b^{tmf}$ on the generator listed in the second column of the table is as indicated in the last column of the table.
\item For all $m\ge 1$ we have injections
$$b^{tmf}:tmf_{(3),4m-1}\to T_{(3),2m}\ .$$
\end{enumerate}
\end{prop}
The following two subsections are devoted to the proof of this proposition.
\subsection{Transition to $\mathbb{Q}/\mathbb{Z}$-theory}
By Subsection \ref{udiqwdwqd}
we have an exact sequence
\begin{equation}\label{u7diqwdqwd4443}
0\to {}^Y E_2^{s,t}(tmf_{(3)})\otimes \mathbb{Q}/\mathbb{Z}\to {}^Y E_2^{s,t}(tmf_{(3)}\mathbb{Q}/\mathbb{Z})\stackrel{\partial}{\to} {}^Y E_2^{s+1,t}(tmf_{(3)})_{tors}\to 0\ .
\end{equation}
As in Subsection \ref{fuweifwefweftztwf} we see that
the elements in the subgroup $ {}^Y E_2^{0,t}(tmf_{(3)})\otimes \mathbb{Q}/\mathbb{Z}\subseteq {}^Y E_2^{0,t}(tmf_{(3)}\mathbb{Q}/\mathbb{Z})$
are permanent cycles and
we get an embedding
$$0\to {}^Y E_2^{0,t}(tmf_{(3)})\otimes \mathbb{Q}/\mathbb{Z}\to {}^Y E_\infty^{0,t}(tmf_{(3)}\mathbb{Q}/\mathbb{Z})\ .$$
By an inspection of the calculation \cite{MR2508200} we see again
that the maps
$$\delta_r: {}^Y E_r^{s,4m}(tmf_{(3)}\mathbb{Q}/\mathbb{Z})\to {}^YE_r^{s+1,4m}(tmf_{(3)})$$
are isomorphisms for all $r\ge 2$ and $s\ge 1$, so that the maps
$$ \delta_r: \frac{ {}^Y E_r^{0,4m}(tmf_{(3)}\mathbb{Q}/\mathbb{Z})}{{}^Y E_2^{0,4m}(tmf_{(3)})\otimes \mathbb{Q}/\mathbb{Z}}\stackrel{\sim}{\to} {}^YE_r^{1,4m}(tmf_{(3)})\ $$
are isomorphisms for all $r\ge 2$ and $r=\infty$.
\subsection{The map to $ko[[q]]$}\label{qwddwqdqwdoiuqwdoiwqd}
The element $\Delta_2\in {}^Y E_2^{0,24}(tmf_{(3)})$ satisfies
$$E_2(w)(\Delta_2)=b^{6} \Delta\in {}^Y E_2^{0,24}(ko[[q]]_{(3)})\cong {}^Y E_2^{0,24}(ko)[[q]]_{(3)}\ .$$
Recall the generator $\nu\in tmf_{(3),3}$ of order $3$ and the lift
$ \nu_{\mathbb{Q}/\mathbb{Z}}\in tmf_{(3)}\mathbb{Q}/\mathbb{Z}_{4}$ as in (\ref{diagr2}).
By (\ref{dzuwdzquwd}) we know that
$b^{top}(g)=[\frac{1}{24}]\in T_2= \mathbb{R}[[q]]/\mathbb{Z}[[q]]$. Therefore, using that
$[\frac{1}{24}]=[-\frac{1}{3}]$ in $\mathbb{Q}/\mathbb{Z}_{(3)}$, we get
$$\bar w( \nu_{\mathbb{Q}/\mathbb{Z}})=[\frac{1}{3}] \in ko[[q]]_{(3)}\mathbb{Q}/\mathbb{Z}_4\cong \mathbb{Z}[[q]]_{(3)}\otimes \mathbb{Q}/\mathbb{Z}\ .$$
With Lemma \ref{duqwiduqwdqwdqwd} this gives $$b^{tmf}(\nu)=\left[\frac{2}{3}\right]\in T_{(3),2}\ .$$
Furthermore, we have
$$E_2(\bar w)([\nu_{\mathbb{Q}/\mathbb{Z}}]_2)
=b\otimes [\frac{2}{3}]\in E_2^{0,4}(ko[[q]]_{(3)}\mathbb{Q}/\mathbb{Z})\cong \mathbb{Z}[[q]]_{(3)}\otimes\mathbb{Q}/\mathbb{Z}\ .$$
Let $\lambda=\nu \Delta\in {\tt Filt}^{1}tmf_{(3),27}$ be detected by $[\lambda]_2:=[\nu]_2\Delta_2$.
Then we can choose a lift
$ \lambda_{\mathbb{Q}/\mathbb{Z}}\in tmf_{(3)}\mathbb{Q}/\mathbb{Z}_{27}$ which is detected by
$[ \lambda_{\mathbb{Q}/\mathbb{Z}}]_2=[\nu_{\mathbb{Q}/\mathbb{Z}}]_2 \Delta_2$ (note that the lift $ \lambda_{\mathbb{Q}/\mathbb{Z}}$ is unique up to elements coming from $tmf_{(3)}\mathbb{Q}_{28}\cong \mathbb{Q}$).
We conclude that
$$E_2(\bar w)([\lambda_{\mathbb{Q}/\mathbb{Z}}]_2)=E_2(\bar w)([\nu_{\mathbb{Q}/\mathbb{Z}}]_2)\cdot
\Delta_2=b^7\otimes [\frac{2}{3}\Delta]\in E_2^{0,28}(ko[[q]]_{(3)}\mathbb{Q}/\mathbb{Z})\ .$$
This gives $$b^{tmf}(\nu \Delta)=\left[\frac{2}{3}\Delta\right]\in T_{(3),14}\ .$$
This finishes the proof of Assertion 1 of Proposition \ref{zwdqdwqd}.
For Assertion 2.
it suffices to show that
${\tt ord}(b^{tmf}(x))={\tt ord}(x)$ for the generators of the cyclic groups
$tmf_{(3),4m-1}$ listed in the first column
of the table and their multiples by $\Delta^3$. The assertion immediately follows from Corollary \ref{zdwdqwdqwdwddqwdq}.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
\subsection{How to detect $\nu\Delta$?}\label{ziddqwdqwdwq}
The equality $b^{top}=b^{geom}$ can be used to do some explicit calculations of the tmf-valued Witten genus. In the following we discuss an example.
Let $(M,\alpha^{top})$ be a $27$-dimensional closed string manifold. It represents a string bordism class $[M,\alpha^{top}]\in MString_{27}$. Moreover,
$\sigma([M,\alpha^{top}])= c \Delta \nu\in tmf_{(3),27}$ for a uniquely determined
$c\in \mathbb{Z}/3\mathbb{Z}$.
The problem consists in the explicit calculation of the value $c$.
First observe that by the calculations in Subsection \ref{diwedqwdqwd1} we have an equality of classes
in $T_{(3),14}$
$$[b^{top}(M,\alpha^{top})]=[\frac{2c}{3} \Delta]\ .$$
Expand $b^{top}(M,\alpha^{top})=b_0+qb_1+\dots$.
Then we form
$$b^{top}(M,\alpha^{top})-b_0 E_{14}=(b_1-24 b_0)q+\dots \ . $$
Note that $3 (b^{top}(M,\alpha^{top})-b_0 E_{14})\in \mathbb{Z}[[q]]_{(3)}$.
We have
$$\frac{2}{3}\Delta= \frac{2}{3} q+\dots\ .$$
Therefore we get
$$c=-3(b_1-24 b_0) \: ({\tt mod\:} 3)\ .$$
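This recipe is easily mechanized. The following sketch (our own illustration; it assumes that the coefficients $b_0,b_1$ are available as exact rational numbers, e.g. from the index theoretic formula below) computes $c$ with exact arithmetic:
\begin{verbatim}
from fractions import Fraction

def detect_c(b0, b1):
    # c = -3*(b1 - 24*b0) mod 3; the integrality of
    # 3*(b^top - b0*E_14) guarantees that val is an integer
    val = -3 * (b1 - 24 * b0)
    assert val.denominator == 1
    return val.numerator % 3

# hypothetical example: b0 = 0, b1 = 2/3 gives c = 1,
# consistent with [b^top] = [(2c/3) Delta]
print(detect_c(Fraction(0), Fraction(2, 3)))   # -> 1
\end{verbatim}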
Let $Z$ be a spin zero bordism of $M$.
Then
specializing Lemma \ref{zdqwud} we get
\begin{eqnarray*}
c&=&-\langle\tilde p_1\cup \left({\frac {1967}{729}}\,{N_{{4}}}^{3}+{\frac {356}{243}}\,N_{{8}}{p_{{1}
}}^{2}+{\frac {2575}{2187}}\,N_{{6}}{p_{{1}}}^{3}\right.\\&&\left.+{\frac {152}{81}}\,
N_{{8}}N_{{4}}+{\frac {941}{729}}\,N_{{4}}p_{{1}}N_{{6}}+{\frac {6232
}{2187}}\,{p_{{1}}}^{6}\right.\\&&\left.+{\frac {898}{729}}\,N_{{4}}{p_{{1}}}^{4}+{
\frac {541}{243}}\,N_{{10}}p_{{1}}+{\frac {623}{729}}\,{N_{{4}}}^{2}{
p_{{1}}}^{2}\right.\\&&\left.+{\frac {457}{729}}\,N_{{12}}+{\frac {2398}{2187}}\,{N_{{
6}}}^{2}\right), [Z,M]\rangle {\tt mod} 3
\end{eqnarray*}
Here $N_{2i}:=N_{2i}(p_1,p_2,...)$ denotes the $i$'th Newton polynomial and we use the abbreviations $p_i:=p_i(TZ)$ and $\tilde p_1:=\tilde p_1(TZ,\alpha^{top})$ for the class defined in (\ref{uidquwdqwdqwdqwdqwdqwdq}).
It is a non-trivial integrality statement that the right-hand side is an integer (before taking its value modulo $3$).
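For the reader's convenience we recall how the Newton polynomials can be generated. With the convention that $N_{2i}(p_1,p_2,\dots)$ is the $i$-th power sum in the Pontrjagin roots, Newton's identity gives the recursion $N_{2i}=\sum_{j=1}^{i-1}(-1)^{j-1}p_j N_{2(i-j)}+(-1)^{i-1}i\,p_i$. The following sketch (purely illustrative) generates them symbolically:
\begin{verbatim}
import sympy

p = sympy.symbols('p1:9')   # Pontrjagin classes p1,...,p8

def N(i, cache={}):
    # i-th power sum in the Pontrjagin roots, i.e. N_{2i}
    if i not in cache:
        s = (-1)**(i - 1) * i * p[i - 1]
        for j in range(1, i):
            s += (-1)**(j - 1) * p[j - 1] * N(i - j)
        cache[i] = sympy.expand(s)
    return cache[i]

print(N(2))   # p1**2 - 2*p2
print(N(3))   # p1**3 - 3*p1*p2 + 3*p3
\end{verbatim}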
\section{Geometric and homotopy theoretic string bordism}\label{udidqwdqwdqwdqwdqwdqwd}
In the present paper we use a geometric picture of string bordism.
The goal of the present section is to give the geometric definition of string bordism and to relate it with the
homotopy theoretic picture.
This material is familiar to geometers and
we include it mainly as a reference.
We first recall some facts about string structures. The following diagram gives the unstable variation
of the Postnikov tower (\ref{uifwefwefwefewf}). For $n\ge 3$ we consider
\begin{equation}\label{uqiwdqwdqwdwd}
\xymatrix{&BString(n)\ar[d]&\\ X\ar[ddr]_{V}\ar@{.>}[ur]^{\alpha^{top}}\ar[r]_/-1em/{\xi}&BSpin(n)\ar[d]\ar[r]^{\frac{p_1}{2}}&K(\mathbb{Z},4)\\
&BSO(n)\ar[r]^{w_2}\ar[d]&K(\mathbb{Z}/2\mathbb{Z},2)\\&BO(n)\ar[r]^{w_1}&K(\mathbb{Z}/2\mathbb{Z},1)}\ .
\end{equation}
The space $BSO(n)$ is the homotopy fiber of the first Stiefel-Whitney class $w_1$. Similarly, $BSpin(n)$ and $BString(n)$ are the homotopy fibers of the second Stiefel-Whitney class $w_2$ and the spin refinement $\frac{p_1}{2}$ of the first Pontrjagin class, respectively.
Let $V\to X$ be an $n$-dimensional real vector bundle over a $CW$-complex $X$. We use the same symbol in order to denote the classifying map $V:X\to BO(n)$. An orientation and spin structure on $V$ is given by the choice of a lift $\xi:X\to BSpin(n)$. A spin bundle is by definition a pair $(V,\xi)$. We will usually drop the $\xi$ from the notation and refer to $V$ as a spin bundle.
\begin{ddd}
A topological string structure on a spin bundle $V$ is defined as a homotopy class of lifts $\alpha^{top}$. The pair $(V,\alpha^{top})$ is called a string bundle.
\end{ddd}
The set of topological string structures on the spin bundle $V$ is a torsor under $H^3(X;\mathbb{Z})$. We will write the action of $x\in H^3(X;\mathbb{Z})$ as
\begin{equation}\label{ufiwefwefwefw}
(x,\alpha^{top})\mapsto \alpha^{top}+x\ .
\end{equation}
In differential geometry one works with orientations, spin, or string structures on the (unstabilized)
tangent bundle, while in homotopy theory one considers those structures on the stable normal bundle.
In order to transfer structures back and forth between the tangent and stable normal bundles
the two-out-of-three principle (Corollary \ref{ioqwdqwdqwdwqd} below) is crucial.
In the following we describe an abstract setting for this principle which can inductively be applied to the layers of the tower (\ref{uqiwdqwdqwdwd}).
Assume that we have a sequence of maps
$(B(n)\to BO(n))_{n\ge n_0}$ together with homotopy commutative diagrams
$$\xymatrix{B(n)\times B(m)\ar[d]\ar[r]^{\kappa_B}&B(n+m)\ar[d]\\BO(n)\times BO(m)\ar[r]^/0.5em/{\oplus}&BO(n+m)}$$
satisfying the obvious associativity constraints. We further assume that the spaces $B(n)$ are $l-1$-connected for some integer $l$, and that we have
a collection of maps $(c_n:B(n)\to K(\pi,l))_{n\ge n_0}$
inducing isomorphisms $\pi_l(B(n))\stackrel{\sim}{\to}\pi$ for all $n\ge n_0$ and some fixed abelian group $\pi$.
We define the $l$-connected spaces $C(n)$ as homotopy pull-backs
$$\xymatrix{C(n)\ar[r]\ar[d]&\mbox{}*\ar[d]\\
B(n)\ar[r]^{c_n}&K(\pi,l)}\ .$$
Finally we assume that the collection of maps $(c_n)_{n\ge n_0}$ is additive in the sense that the lower squares of the following diagrams are homotopy commutative:
$$\xymatrix{C(n)\times C(m)\ar@{.>}[drr]^{\kappa_C}\ar[ddd]&&&\\&&C(n+m)\ar[ddd]&\\&K(\pi,l)\times K(\pi,l)\ar[rrd]^+&&\\B(n)\times B(m)\ar[rrd]^{\kappa_B}\ar[ru]^{c_n\times c_m}&&&K(\pi,l)\\&&B(n+m)\ar[ur]_{c_{n+m}}&}\ .
$$
Then we get unique (up to homotopy) lifts $\kappa_C$ of $\kappa_B$ indicated by the dotted arrow which again satisfy
an associativity constraint.
Let $X$ be a $CW$-complex. A real $n$-dimensional vector bundle $V\to X$ with a $B$-structure is a diagram
$$\xymatrix{&B(n)\ar[d]\\X\ar[ur]^\xi\ar[r]^V&BO(n)}\ ,$$
where $V$ is the classifying map of the vector bundle, and the lift $\xi$ defines the $B(n)$-structure.
The pair $(V,\xi)$ will be called a $B$-bundle.
A $C$-structure on a $B$-bundle $(V,\xi)$ is given by a further lift $\alpha$ as indicated in
$$\xymatrix{&C(n)\ar[d]\\&B(n)\ar[d]\\X\ar@{.>}[uur]^\alpha\ar[ur]^\xi\ar[r]^V&BO(n)}\ .$$
The set of isomorphism classes of $C$-structures on the $B$-bundle $(V,\xi)$
is in one-to-one correspondence with the set of homotopy classes of lifts $\alpha$ of $\xi$.
By obstruction theory this set is a torsor under
$H^l(X;\pi)$. If $x\in H^l(X;\pi)$, then we let $\alpha+x$ denote the result of the action of $x$ on $\alpha$.
The maps $\kappa_B $ allow us to define the sum of $B$-bundles
$$(V,\xi)\oplus (W,\zeta):=(V\oplus W,\kappa_B(\xi,\zeta))\ .$$
Furthermore, using the maps $\kappa_C$ we define
the sum of $B$-bundles with $C$-structures by
$$((V,\xi),\alpha)\oplus ((W,\zeta),\beta):=((V\oplus W,\kappa_B(\xi,\zeta)),\kappa_C(\alpha,\beta))\ .$$
One can check that
$\kappa_C$ is bilinear with respect to the action of $H^l(X;\pi)$, i.e.
we have
$$\kappa_C(\alpha+x,\beta)=\kappa_C(\alpha,\beta+x)=\kappa_C(\alpha,\beta)+x\ , \quad \forall x\in H^l(X;\pi)\ .$$
We can now formulate the two-out-of-three principle for $C$-structures on $B$-bundles.
We consider $B$-bundles $(V,\xi)$ and $(W,\zeta)$.
\begin{lem}
The choice of two $C$-structures out of $\alpha,\beta,\gamma$ on the $B$-bundles $$(V,\xi)\ ,\quad (W,\zeta)\ ,\quad (V\oplus W,\kappa_B(\xi,\zeta))$$ uniquely fixes
the third such that there is an isomorphism of $C$-bundles
$$((V,\xi),\alpha)\oplus((W,\zeta),\beta)\cong ((V\oplus W,\kappa_B(\xi,\zeta)),\gamma)\ .$$
\end{lem}
The characteristic classes which define the layers of the Postnikov tower (\ref{uqiwdqwdqwdwd})
are additive. Hence
the theory described above can be applied.
Recalling the stability statements $\pi_0(O(n))=\mathbb{Z}/2\mathbb{Z}$ for $n\ge 1$,
$\pi_1(SO(n))=\mathbb{Z}/2\mathbb{Z}$ for $n\ge 3$, and $\pi_3(Spin(n))=\mathbb{Z}$ for
$n=3$ and $n\ge 5$ we thus obtain the following.
\begin{kor}\label{ioqwdqwdqwdwqd}
We have a two-out-of-three principle for
\begin{itemize}
\item orientations on real vector bundles
\item spin-structures on oriented vector bundles of rank $\ge 3$ and
\item string structures on spin bundles of rank $\ge 3$.
\end{itemize}
\end{kor}
In contrast, this principle does not apply to (homotopy classes of) framings as is obvious from
$TS^2\oplus\mathbb{R}\simeq S^2\times\mathbb{R}^3$.
We now describe the geometric picture of string bordism.
A spin manifold $M$ is a manifold with a chosen orientation and spin structure on its tangent bundle $TM$.
\begin{ddd}\label{uidqwdqwdqwdqwd}
A string manifold $(M,\alpha^{top})$ is a spin manifold $M$ with a chosen topological string structure $\alpha^{top}$ on the spin bundle $TM$.
\end{ddd}
If $(Z,\beta^{top})$ is a string manifold with boundary $M$, then $M$
has an induced string structure $\alpha^{top}$ defined as follows.
The inner normal field induces a decomposition
\begin{equation}\label{udiqduqwdwqd}TZ_{|M}\cong TM\oplus \Theta_\mathbb{R}\ ,
\end{equation} where $\Theta_\mathbb{R}:= M\times \mathbb{R}$
denotes the trivial bundle. A trivial bundle has a canonical orientation, as well as a canonical spin structure, and a canonical string structure $can$. We define the orientation, spin structure, and string structure on $TM$ by the two-out-of-three principle such that (\ref{udiqduqwdwqd})
becomes a decomposition of structured bundles.
We can now define the $n$'th geometric string bordism homology group $\Omega^{String}_n(X)$ of a space $X$ in the usual manner by cycles and relations. A cycle for $\Omega^{String}_n(X)$ is a pair $((M,\alpha^{top}),f)$ of an $n$-dimensional closed string manifold $(M,\alpha^{top})$ and a map $f:M\to X$. The set of isomorphism classes of cycles forms a semigroup with respect to disjoint union. The group $\Omega^{String}_n(X)$ is then defined as the quotient of the group completion of this semigroup by the subgroup generated by string manifolds with maps to $X$ which are boundaries of $n+1$-dimensional compact string manifolds with maps to $X$. The remaining structures for the homology theory
(e.g. the suspension isomorphism) are defined in the standard way and will be neglected in the discussion below, too.
We have the following consequence of the Pontrjagin-Thom construction and the
two-out-of-three principle Corollary \ref{ioqwdqwdqwdwqd}.
\begin{lem}\label{udiqdqwdwqd}
There is a canonical isomorphism
$$\Omega^{String}_*(X)\cong MString_*(X)\ $$
for all $*\ge 3$.
\end{lem}
{\it Proof.$\:\:\:\:$}
The standard Thom-Pontrjagin construction produces an isomorphism between the homotopy theoretic string bordism and a version of the geometric string bordism
with topological string structures on stable normal bundles \cite[Theorem, Chapter II]{stongcob}. Using the two-out-of-three principle Corollary \ref{ioqwdqwdqwdwqd} we can go back and forth between topological string structures on stable normal bundles and on the (unstable) tangent bundle. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent
In the main body of the text, we will use the notation $MString_*(X)$ in order to denote both the geometric and the homotopy theoretic string bordism groups of $X$.
\section{Modular forms and the structure of $T_{2m}$}\label{fzuefwefwefewf}
\newcommand{{\mathfrak{M}}}{{\mathfrak{M}}}
\newcommand{{\mathfrak{W}}}{{\mathfrak{W}}}
\newcommand{{\mathfrak{C}}}{{\mathfrak{C}}}
The present section has two goals. One is to make the structure of the group $T_{2m}$ more explicit. In applications the important problem is to decide whether a formal power series
$f\in \mathbb{R}[[q]]$ represents a non-trivial element in $T_{2m}$. This can be solved using Lemma \ref{invariant}.
The other goal is to fix the isomorphism between the $E_2$-term ${}^YE_2^{0,4m}(tmf)$ of the $Y$-based generalized Adams spectral sequence for $tmf$ and the space of modular forms $\mathcal{M}^\mathbb{Z}_{2m}$. We have used this relation in order to name elements of $tmf_*$ explicitly.
In the following we recall some material from \cite{rezk}.
Let $A$ be a commutative ring. A generalized Weierstrass equation over $A$
is an equation of the form
\begin{equation}\label{fhzwfewfewf87weff}
Y^2Z+a_1XYZ+a_3YZ^2=X^3+a_2X^2Z+a_4XZ^2+a_6Z^3
\end{equation} with $a_i\in A$. We let
$$W_0:={\tt spec} \mathbb{Z}[a_1,a_2,a_3,a_4,a_6]$$ be the affine scheme over $\mathbb{Z}$ representing
Weierstrass equations.
We furthermore let
$$\xymatrix{\mathcal{C}\ar@{^{(}->}[r]\ar[dr]& W_0\times \P^2\ar[d]^{pr_{W_0}}\\ &W_0} $$
denote the universal Weierstrass curve defined by equation (\ref{fhzwfewfewf87weff}).
By $G\subseteq PGL_{3,\mathbb{Z}}$ we denote the subgroup scheme
of automorphisms of $\P^2_\mathbb{Z}$ which respect
generalized Weierstrass equations.
The group scheme $G$ is affine and acts on the universal Weierstrass curve $\mathcal{C}\to W_0$.
The scheme $W_1:=G\times W_0$ is the scheme of morphisms of the action groupoid $W_1\rightrightarrows W_0$
in the category of schemes.
It will be called the Weierstrass groupoid and is the spectrum of the so-called Weierstrass Hopf algebroid
$(\mathcal{O}_{W_0}(W_0),\mathcal{O}_{W_1}(W_1))$.
The stack represented by the Weierstrass groupoid will be denoted by ${\mathfrak{W}}$. The action of $G$ on the universal Weierstrass curve $\mathcal{C}$ gives an action groupoid
$G\times\mathcal{C}\rightrightarrows \mathcal{C}$, and by ${\mathfrak{C}}$ we denote the corresponding stack.
The $G$-equivariant projection $\mathcal{C}\to W_0$ induces a morphism of stacks
$\pi:{\mathfrak{C}}\to {\mathfrak{W}}$.
For every ring $A$ the point $[0:1:0]\in \P^2_A$ belongs to each Weierstrass curve and is fixed by every automorphism in $G(A)$. We therefore get a section
$$\xymatrix{{\mathfrak{C}}\ar[d]^\pi\\ {\mathfrak{W}}\ar@/^1cm/@{.>}[u]^s}\ .$$ The section $s$ lies in the smooth locus of $\pi$.
The pull-back of the relative cotangent bundle
$s^*T^*({\mathfrak{C}}/{\mathfrak{W}})\to {\mathfrak{W}}$ is thus a line bundle over ${\mathfrak{W}}$.
Its sheaf of sections will be denoted by $ \omega$.
We define an evenly graded Hopf algebroid $(A_*,\Gamma_*)$ whose underlying graded rings are given by
$$A_*:=\bigoplus_{n\ge 0} \omega^{2n}(W_0)\ ,\quad \Gamma_*:= \bigoplus_{n\ge 0} \omega^{2n} (W_1)\ ,$$
using the map $W_1\stackrel{s}{\to} W_0\to {\mathfrak{W}}$ in order to view
$W_1$ as a scheme over ${\mathfrak{W}}$. The structure maps of this Hopf algebroid are induced in the canonical way.
The spectrum $tmf$ is characterized by the property that the complex $({}^YE_1^{*,2*}(tmf),d_1)$ has a preferred bi-graded isomorphism to the cobar complex $(C^*(A_*),d^{cobar})$ of the Hopf-algebroid $(A_*,\Gamma_*)$. This isomorphism will be used
to name elements of ${}^YE_r^{*,*}(tmf_{(p)})$ explicitly.
Let $\bar {\mathfrak{M}}$ denote the Deligne-Rapoport compactification of the moduli stack of elliptic curves over $\mathbb{Z}$, i.e. the stack of generalized elliptic curves with irreducible geometric fibers, denoted $M_1$ in \cite{MR0337993}. Since generalized elliptic curves admit
Weierstrass equations (Zariski locally), there
is a unique (1-)morphism $i:\bar{\mathfrak{M}}\to{\mathfrak{W}}$
such that $i^*({\mathfrak{C}})$ is isomorphic to the universal curve over $\bar{\mathfrak{M}}$. One can check that $i$ is an open immersion.
The algebro-geometric definition of the space of
integral modular forms of weight $m$ is
$${}^{alg}\mathcal{M}_{m}^\mathbb{Z}:=H^0(\bar {\mathfrak{M}},i^* \omega^{m})\ .$$
Since ${\mathfrak{W}}$ is normal and
the codimension of ${\mathfrak{W}}\setminus i(\bar {\mathfrak{M}})$ is $\ge 2$ the inclusion $i:\bar {\mathfrak{M}} \to {\mathfrak{W}}$ induces an isomorphism
$$i^*:H^0( {\mathfrak{W}},\omega^{m})\to H^0(\bar{\mathfrak{M}},i^*\omega^{m})\ .$$
The following isomorphism will be used to identify elements in ${}^YE_2^{0,*}(tmf)$
with modular forms.
\begin{equation}\label{udiqdqwdqwdqwdqwd}
{}^{alg}\mathcal{M}_{2m}^\mathbb{Z} \stackrel{(i^*)^{-1}}{\to}
H^0({\mathfrak{W}},\omega^{2m}) \cong
H^0(C^*(A_*),d^{cobar})_{2m}
\cong {}^YE_2^{0,4m}(tmf)\ ,
\end{equation}
where $$H^0(C^*(A_*),d^{cobar})=\bigoplus_{m\ge 0} H^0(C^*(A_*),d^{cobar})_{2m}$$
is the decomposition of the cohomology by degree.
We now relate the algebraists' version ${}^{alg}\mathcal{M}^\mathbb{Z}_{2m}$ of modular forms with
our formal power series version $\mathcal{M}^\mathbb{Z}_{2m}\subset \mathbb{Z}[[q]]$.
In the affine coordinates $x:=X/Z$, $y:=Y/Z$ we can consider the $1$-form
$$\eta\in \Gamma(\mathcal{C}, T^*(\mathcal{C}/W_0))\ ,\quad \eta:=\frac{dx}{2y+a_1x+a_3}=\frac{dy}{3x^2+2a_2x+a_4-a_1y}\ .$$
For all $n\ge 0$ the section $s^*\eta^n\in \omega^n(W_0)$ is nowhere vanishing and
induces isomorphisms of groups
$$\eta^n:\mathcal{O}_{W_0}(W_0)\stackrel{\sim}{\to} \omega^n(W_0)\ .$$
The morphism $Tate:{\tt spec}\mathbb{Z}[[q]]\to W_0$ defined by
$$a_1\mapsto 1\ ,a_2\mapsto 0\ ,a_3\mapsto 0\ , a_4\mapsto B\ , a_6\mapsto C$$
with $B,C\in \mathbb{Z}[[q]]$ given by
\[ B:=-5\sum\limits_{n\ge 1}n^3\cdot\frac{q^n}{1-q^n}, C:=-\frac{1}{12}\sum\limits_{n\ge 1}(7n^5+5n^3)\cdot\frac{q^n}{1-q^n}\]
defines the Tate curve over $\mathbb{Z}[[q]]$.
It induces an injective map
$$r:{}^{alg}\mathcal{M}_{2m}^\mathbb{Z} \stackrel{(i^*)^{-1}}{\to } H^0({\mathfrak{W}},\omega^{2m}) \hookrightarrow
\omega^{2m}(W_0)\stackrel{(\eta^{2m})^{-1}}{\to}\mathcal{O}_{W_0}(W_0)\stackrel{Tate^*}{\to} \mathbb{Z}[[q]]\ .$$
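The $q$-expansions arising here are easily computed, since the $m$-th coefficient of $\sum_{n\ge 1}n^k\frac{q^n}{1-q^n}$ is the divisor sum $\sigma_k(m)$. The following sketch (ours, for illustration only) produces the first coefficients of $B$ and $C$ with exact arithmetic and checks that $C$ has integral coefficients in this range:
\begin{verbatim}
from fractions import Fraction

def sigma(k, m):
    # divisor sum sigma_k(m)
    return sum(d**k for d in range(1, m + 1) if m % d == 0)

M = 6
B = [-5 * sigma(3, m) for m in range(1, M)]
C = [-Fraction(7 * sigma(5, m) + 5 * sigma(3, m), 12)
     for m in range(1, M)]
assert all(c.denominator == 1 for c in C)
print(B)                      # [-5, -45, -140, -365, -630]
print([int(c) for c in C])    # [-1, -23, -154, -647, -1876]
\end{verbatim}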
The following Lemma is a consequence of the $q$-expansion principle, \cite[Theorem 1.6.1]{katzpadic}.
\begin{lem}
The map $r$ defined above induces
an isomorphism $r:{}^{alg}\mathcal{M}_{2m}^\mathbb{Z}\stackrel{\sim}{\to} \mathcal{M}_{2m}^\mathbb{Z}$.
\end{lem}
If we combine the $q$-expansion principle with (\ref{udiqdqwdqwdqwdqwd})
we get the identification
\begin{equation}\label{ghagdastdtztzqw}
{}^YE_2^{0,4m}(tmf)\cong \mathcal{M}_{2m}^\mathbb{Z}
\end{equation}
used in the present paper.
We now turn to the structure of the groups $T_{2m}$.
For $\nu\ge 0$ we denote by $p_\nu :\mathbb{R}[[q]]\to\mathbb{R}$ the projection onto the
$\nu$-th coefficient.
Let $m\ge 2$ and $k_m:=\dim_\mathbb{C} \mathcal{M}^\mathbb{C}_{2m}$ be the dimension of the space of modular forms of weight $2m$.
\begin{lem}\label{invariant}
\begin{itemize}
\item[i)] There exists a $\mathbb{Z}$-basis $f_0,\ldots, f_{k_m-1}\in \mathcal{M}^\mathbb{Z}_{2m}$ such that $p_i(f_j)=\delta_{i,j}$ for $0\leq i,j\leq k_m-1$.
\item[ii)] For a basis as in part i), the map
\[ \alpha: T_{2m}=\frac{\mathbb{R}[[q]]}{\mathbb{Z}[[q]]+\mathcal{M}^\mathbb{R}_{2m}}\longrightarrow \prod_{\nu\ge k_m}\left( \mathbb{R}/\mathbb{Z}\right)\ , \quad \, f\mapsto \left(\left[ p_\nu(f-\sum\limits_{i=0}^{k_m-1} p_i(f)f_i)\right]\right)_{\nu\ge k_m}\]
is well-defined and an isomorphism of abelian groups.
\end{itemize}
\end{lem}
{\it Proof.$\:\:\:\:$} It is easy to see that $i)$ implies $ii)$.
We now show $i)$.
There is an isomorphism $[ \bar {\mathfrak{M}}]\cong \P^1_\mathbb{Z}$ of the coarse moduli space $[ \bar {\mathfrak{M}}]$ of the Deligne-Rapoport compactification of the moduli stack of elliptic
curves $\bar {\mathfrak{M}}$, \cite[VI, Th\'eor\`eme 1.1]{MR0337993}. Let $\pi: \bar {\mathfrak{M}} \to [\bar {\mathfrak{M}}]$ denote the canonical projection.
One can check that the sheaf $\pi_*i^*\omega^{2m}$ is locally free of rank one.
Hence there exists a unique function
$n:\mathbb{N}_0\to \mathbb{Z}$ such that
$$\pi_*i^*\omega^{2m}\simeq\mathcal{O}(n(m)) \ .$$
The projection
$\pi:\bar {\mathfrak{M}}\to [\bar{\mathfrak{M}}]$ induces an isomorphism
$$\pi^*:H^0(\P^1_\mathbb{Z},\mathcal{O}(n(m)))\stackrel{\sim}{\to}H^0(\bar {\mathfrak{M}},i^* \omega^{2m}) \ .$$
The map
$$H^0(\P^1_\mathbb{Z},\mathcal{O}(n(m)))\stackrel{\pi^*}{\to }H^0(\bar {\mathfrak{M}},i^* \omega^{2m})={}^{alg}\mathcal{M}_{2m}^\mathbb{Z}\stackrel{r}{\to} \mathbb{Z}[[q]]$$ corresponds
to the Taylor expansion at a point $\infty\in \P^1_\mathbb{Z}$,
\cite[Chapter 8.11]{KatzMazur}.
It is now elementary to write down a basis of
$H^0(\P^1_\mathbb{Z},\mathcal{O}(n(m)))$ whose Taylor expansion at $\infty$
has the property analogous to $i)$.
\hspace*{\fill}$\Box$ \\[0.5cm]\noindent
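To illustrate part i) in the smallest interesting case, take $2m=12$, so that $k_m=2$. Writing $E_4=1+240q+\dots\in\mathcal{M}^\mathbb{Z}_4$ for the normalized Eisenstein series, the forms
$$f_0:=E_4^3-720\,\Delta=1+0\cdot q+\dots\ ,\qquad f_1:=\Delta=0+1\cdot q+\dots$$
form such a basis of $\mathcal{M}^\mathbb{Z}_{12}$, and the map $\alpha$ of part ii) simply reads off the classes of the coefficients of $q^\nu$ for $\nu\ge 2$ after subtracting $p_0(f)f_0+p_1(f)f_1$.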
The following corollary will be used later. Let $\Delta\in \mathcal{M}^{\mathbb{Z}}_{12}$ be the unique cusp form normalized such that $\Delta=q+\dots$.
\begin{kor}\label{zdwdqwdqwdwddqwdq}
For every $k\ge 0$ and integer $a\in \mathbb{N}$ the element
$\left[\frac{1}{a}\Delta^k \right]\in T_{12k+2}$
has order $a$.
\end{kor}
{\it Proof.$\:\:\:\:$}
The case $k=0$ is clear since $\mathcal{M}_{2}=0$. Let us now assume that $k\ge 1$.
We have $\dim \mathcal{M}_{12k+2}=k$. Since $p_\nu(\frac{1}{a}\Delta^k)=0$ for $\nu=0,\dots,k-1$
we have
$$ \alpha(\left[ \frac{1}{a}\Delta^k\right])=\left(\left[p_\nu(\frac{1}{a}\Delta^k)\right]\right)_{\nu\ge k} =\left( \left[ \frac{1}{a} \right], \left[ \frac{a_1}{a}\right],\ldots\right)\in(\mathbb{R}/\mathbb{Z})^\mathbb{N}$$
for suitable $a_1,\ldots\in\mathbb{Z}$.
Therefore
$\alpha(\left[ \frac{1}{a}\Delta^k\right])$ has order $a$. \hspace*{\fill}$\Box$ \\[0.5cm]\noindent
\section{Table of $MSpin_*$}\label{udiqdqwdwqdqwdwqdwqd}
Anderson-Brown-Peterson \cite{MR0190939} have obtained the following additive decomposition $$MSpin^{\hspace{0.2em} \hat{\mbox{}}}_2\cong \prod_{n(J)\:\mbox{even}} ko^{\hspace{0.2em} \hat{\mbox{}}}_2\langle n(J)\rangle \vee \prod_{1\neq n(J)\:\mbox{odd}} ko^{\hspace{0.2em} \hat{\mbox{}}}_2\langle n(J)-2\rangle \vee \prod_{i=0}^\infty \prod_{j=1}^{\dim(Z_i)} K(\mathbb{Z}/2\mathbb{Z},i)\ $$
of the $2$-completion of $MSpin$.
In the first two products the index $J$ runs over all unordered tuples
$J=(j_1,\dots,j_k)$ for $k\ge 0$ and $j_i\ge 2$, $n(J):=\sum_{i=1}^k j_i$, and $Z:=\bigoplus_{i=0}^\infty Z_i$ is some
$\mathbb{N}$-graded ${\mathbb{F}}_2$-vector space. Furthermore, $ko^{\hspace{0.2em} \hat{\mbox{}}}_2\langle k\rangle$ denotes the $2$-completed and $k$-connective cover of $ko$.
The Poincar\'e series given in \cite[Thm.1.11]{MR0190939} can be combined to
determine the dimensions $\dim(Z_i)$ for all $i\ge 0$. Using MAPLE and these formulas we have compiled
the following table.
Recalling that there is no odd torsion in $MSpin_*$, this table determines
$MSpin_*$ additively for $*\leq 127$. The final column separates off the
torsion which is accounted for by the $K(\mathbb{Z}/2\mathbb{Z},i)$'s, i.e. those Spin-manifolds which are detected by Stiefel-Whitney classes.
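As a cross-check of the rational column: $MSpin_*\otimes\mathbb{Q}$ is a polynomial ring with one generator in each degree divisible by $4$, so the entry in dimension $i$ equals the number of partitions of $i/4$. A minimal Python sketch (the standard partition-counting recursion; names are ours):
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, largest):
    # number of partitions of n into parts of size <= largest
    if n == 0:
        return 1
    return sum(p(n - k, k) for k in range(1, min(n, largest) + 1))

for i in (4, 8, 20, 40, 80, 120, 124):
    print(i, p(i // 4, i // 4))
# 4 1, 8 2, 20 7, 40 42, 80 627, 120 5604, 124 6842 -- matching the table
\end{verbatim}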
For the purpose of the present paper, note that the familiar structure
of $ko_*$ implies that $MSpin_{4m-1}$ is entirely accounted for by the
$K(\mathbb{Z}/2\mathbb{Z},i)$'s. In the smallest dimension, $MSpin_{39}\simeq\mathbb{Z}/2\mathbb{Z}$,
one can use computations of Mahowald/Gorbunov to show that $MString_{39}\to
MSpin_{39}$ is zero, i.e. $A_{39}=MSpin_{39}$; but for $m\ge 11$ such that
$MSpin_{4m-1}\neq 0$ (the table strongly suggests that this is in fact the case
for {\em all} $m\ge 11$) we cannot decide whether
or not $A_{4m-1}\subseteq MString_{4m-1}$ is an equality.
\newpage
\thispagestyle{empty}
\begin{tabular}{|c|c|c|c|}\hline $i$&$\dim_\mathbb{Q}(MSpin_i\otimes\mathbb{Q})$&$\dim_{\mathbb{F}_2}(MSpin_{i,tors})$& $ \dim_{\mathbb{F}_2} Z_i$
\\\hline 0&1&0&0
\\\hline 1&0&1&0
\\\hline 2&0&1&0
\\\hline 3&0&0&0
\\\hline 4&1&0&0
\\\hline 5&0&0&0
\\\hline 6&0&0&0
\\\hline 7&0&0&0
\\\hline 8&2&0&0
\\\hline 9&0&2&0
\\\hline10&0&3&0
\\\hline11&0&0&0
\\\hline12&3&0&0
\\\hline13&0&0&0
\\\hline14&0&0&0
\\\hline15&0&0&0
\\\hline16&5&0&0
\\\hline17&0&5&0
\\\hline18&0&7&0
\\\hline19&0&0&0
\\\hline20&7&1&1
\\\hline21&0&0&0
\\\hline22&0&1&1
\\\hline23&0&0&0
\\\hline24&11&0&0
\\\hline25&0&11&0
\\\hline26&0&15&0
\\\hline27&0&0&0
\\\hline28&15&2&2
\\\hline29&0&1&1
\\\hline30&0&3&3
\\\hline31&0&0&0
\\\hline32&22&1&1
\\\hline33&0&23&1
\\\hline34&0&31&1
\\\hline35&0&0&0
\\\hline36&30&6&6
\\\hline37&0&2&2
\\\hline38&0&7&7
\\\hline39&0&1&1
\\\hline40&42&4&4
\\\hline41&0&45&3
\\\hline42&0&60&4
\\\hline43&0&2&2
\\\hline
\end{tabular}
\newpage
\thispagestyle{empty}
\begin{tabular}{|c|c|c|c|}\hline $i$&$\dim_\mathbb{Q}(MSpin_i\otimes\mathbb{Q})$&$\dim_{\mathbb{F}_2}(MSpin_{i,tors})$& $ \dim_{\mathbb{F}_2} Z_i$
\\\hline 44& 56& 14& 14
\\\hline 45& 0& 6& 6
\\\hline 46& 0& 17& 17
\\\hline 47& 0& 4& 4
\\\hline 48& 77& 11& 11
\\\hline 49& 0& 86& 9
\\\hline 50& 0& 114& 13
\\\hline 51& 0& 7& 7
\\\hline 52& 101& 31& 31
\\\hline 53& 0& 15& 15
\\\hline 54& 0& 38& 38
\\\hline 55& 0& 13& 13
\\\hline 56& 135& 29& 29
\\\hline 57& 0& 159& 24
\\\hline 58& 0& 210& 34
\\\hline 59& 0& 22& 22
\\\hline 60& 176& 67& 67
\\\hline 61& 0& 38& 38
\\\hline 62& 0& 80& 80
\\\hline 63& 0& 36& 36
\\\hline 64& 231& 70& 70
\\\hline 65& 0& 290& 59
\\\hline 66& 0& 379& 82
\\\hline 67& 0& 58& 58
\\\hline 68& 297& 142& 142
\\\hline 69& 0& 90& 90
\\\hline 70& 0& 169& 169
\\\hline 71& 0& 92& 92
\\\hline 72& 385& 158& 158
\\\hline 73& 0& 521& 136
\\\hline 74& 0& 676& 186
\\\hline 75& 0& 143& 143
\\\hline 76& 490& 291& 291
\\\hline 77& 0& 205& 205
\\\hline 78& 0& 347& 347
\\\hline 79& 0& 219& 219
\\\hline 80& 627& 343& 343
\\\hline 81& 0& 931& 304
\\\hline 82& 0& 1196& 404
\\\hline 83& 0& 330& 330
\\\hline 84& 792& 589& 589
\\\hline 85& 0& 448& 448
\\\hline 86& 0& 698& 698
\\\hline 87& 0& 494& 494
\\\hline
\end{tabular}
\newpage
\thispagestyle{empty}
\begin{tabular}{|c|c|c|c|}\hline $i$&$\dim_\mathbb{Q}(MSpin_i\otimes\mathbb{Q})$&$\dim_{\mathbb{F}_2}(MSpin_{i,tors})$& $ \dim_{\mathbb{F}_2} Z_i$
\\\hline 88& 1002& 721& 721
\\\hline 89& 0& 1658& 656
\\\hline 90& 0& 2103& 848
\\\hline 91& 0& 729& 729
\\\hline 92& 1255& 1171& 1171
\\\hline 93& 0& 952& 952
\\\hline 94& 0& 1385& 1385
\\\hline 95& 0& 1068& 1068
\\\hline 96& 1575& 1472& 1472
\\\hline 97& 0& 2948& 1373
\\\hline 98& 0& 3689& 1731
\\\hline 99& 0& 1550& 1550
\\\hline 100& 1958& 2296& 2296
\\\hline 101& 0& 1967& 1967
\\\hline 102& 0& 2706& 2706
\\\hline 103& 0& 2233& 2233
\\\hline 104& 2436& 2941& 2941
\\\hline 105& 0& 5239& 2803
\\\hline 106& 0& 6461& 3451
\\\hline 107& 0& 3194& 3194
\\\hline 108& 3010& 4438& 4438
\\\hline 109& 0& 3969& 3969
\\\hline 110& 0& 5215& 5215
\\\hline 111& 0& 4539& 4539
\\\hline 112& 3718& 5760& 5760
\\\hline 113& 0& 9312& 5594
\\\hline 114& 0& 11311& 6746
\\\hline 115& 0& 6411& 6411
\\\hline 116& 4565& 8470& 8470
\\\hline 117& 0& 7839& 7839
\\\hline 118& 0& 9925& 9925
\\\hline 119& 0& 9005& 9005
\\\hline 120& 5604& 11086& 11086
\\\hline 121& 0& 16544& 10940
\\\hline 122& 0& 19796& 12954
\\\hline 123& 0& 12582& 12582
\\\hline 124& 6842& 15963& 15963
\\\hline 125& 0& 15193& 15193
\\\hline 126& 0& 18656& 18656
\\\hline 127& 0& 17493& 17493
\\\hline
\end{tabular}
\section{Introduction}
For a given graph $F$, the Tur\'an number $\ex(n,F)$ is the maximum number of edges in an $n$-vertex graph containing no copy of $F$. The generalized Tur\'an number was introduced in \cite{bollobas2008pentagons,erdos1962number} and was systematically studied by N. Alon and C. Shikhelman \cite{alon2016many,alon2018additive}, who introduced the function $\ex(n,H,\mathcal{F})$, the maximum number of copies of $H$ in an $n$-vertex, $\mathcal{F}$-free graph.
In the case $\mathcal{F}=\{F\}$, it is denoted in short by $\ex(n,H,F)$.
The earliest result of this type of extremal graph problem is due to Zykov \cite{zykov1949some} and independently by P. Erd\H os \cite{erdos1962number}, who determined $\ex(n,K_s,K_t)$ exactly for all $s$ and $t$. Subsequently, a variety of asymptotic and exact results were obtained. The most well-known of which is the determination of $\ex(n, C_5, C_3)$ by H. Hatami, J. Hladk\'y, D. Kr\' al', S. Norine, and A. Razborov \cite{hatami2013number} and independently by A. Grzesik \cite{grzesik2012maximum}.
For more results, we refer the reader to \cite {grzesik2018maximum, gyori2018maximum,bollobas2008pentagons,ergemlidze2019note,ergemlidze2018triangles}.
Let $f(n,H)$ be the maximum number of copies of $H$ in an $n$-vertex planar graph. Equivalently, such problems can be described as $\ex(n,H,\mathcal{F})$, where $\mathcal{F}$ is the family of subdivisions of $K_5$ and $K_{3,3}$ \cite{kuratowski1930probleme}. S. Hakimi and E.F. Schmeichel \cite{hakimi1979number} determined the exact value of $f(n,H)$ when $H$ is a cycle of length $3$ or $4$.
Moreover, they gave a conjecture for the exact value of $f(n, H)$ when $H$ is a cycle of length five. Later E. Gy\H{o}ri, A. Paulos, N. Salia, C. Tompkins, and O. Zamora confirmed it in \cite{gyHori2019maximum1}. In the same paper, the order of magnitude is also given for $f(n, H)$ when $H$ is a cycle of length more than $4$.
It is natural to ask for the value of $f(n, P_k)$, where $P_k$ is a path of length $k$ (here, the length of a path is the number of edges). Clearly $f(n, P_1)=3n-6$. Alon and Caro \cite{alon1984number} determined the exact value of $f(n, H)$ when $H$ is a complete bipartite graph in which the smaller class is of size $1$ or $2$; consequently, the value of $f(n, P_2)$ is determined. In particular, they showed that
\begin{theorem}\cite{alon1984number}
For $n\geq 4$, $f(n,P_2)=n^2+3n-16$.
\end{theorem}
Recently, E. Gy\H{o}ri, A. Paulos, N. Salia, C. Tompkins, and O. Zamora in \cite{gyHori2019maximum2} determined the exact value of $f(n, P_3)$. They proved the following results.
\begin{theorem}\cite{gyHori2019maximum2}\label{thm1}
We have,
\begin{align*}
f(n,P_3)= \begin{cases}
12, &\text{if $n=4$;}\\
147, &\text{if $n=7$;}\\
222, &\text{if $n=8$;}\\
7n^2-32n+27, &\text{if $n=5, 6$ and $n\geq 9$.}
\end{cases}
\end{align*}
\label{main}
\end{theorem}
The same authors in \cite{gyHori2020generalized} also gave the order of magnitude of $f(n, P_k)$.
\begin{theorem}\cite{gyHori2020generalized}
$f(n, P_k)= \Theta(n^{{\lfloor{\frac{k}{2}}\rfloor}+1})$.
\end{theorem}
In this paper we give an asymptotic value of $f(n, P_4)$, where $P_4$ is a path of length $4$.
\begin{theorem}\label{4}
$f(n, P_4)=n^3+O(n^2)$.
\end{theorem}
This bound is asymptotically best possible. Consider the maximal planar graph on $n$ vertices containing two vertices of degree $n-1$, as shown in Figure \ref{fig1}. It can be checked that this graph contains at least $n^3$ copies of $P_4$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.7]
\foreach \x in{1,2,3,4,5}{\draw[fill=black](0,\x)circle(3pt);};
\draw[fill=black](-3,0)circle(3pt);
\draw[fill=black](3,0)circle(3pt);
\foreach \x in{1,2,4}{\draw[thick](0,\x)--(0,\x+1);}
\draw (0,3.625)node{$\vdots$};
\foreach \x in{1,2,3,4,5}{\draw[thick](-3,0)--(0,\x)--(3,0);}
\draw[black,thick](-3,0)--(3,0);
\end{tikzpicture}
\caption{Extremal graph}
\label{fig1}
\end{figure}
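For moderate $n$, the count of copies of $P_4$ in this graph can be verified by brute force; the following Python sketch (our own encoding of the graph of Figure \ref{fig1}) prints the count and its ratio to $n^3$:
\begin{verbatim}
def extremal_graph(n):
    # two apex vertices 0, 1 joined to each other and to every
    # vertex of the path 2 - 3 - ... - (n-1), cf. Figure 1
    adj = {v: set() for v in range(n)}
    def add(u, v): adj[u].add(v); adj[v].add(u)
    add(0, 1)
    for v in range(2, n):
        add(0, v); add(1, v)
    for v in range(2, n - 1):
        add(v, v + 1)
    return adj

def count_P4(adj):
    # each path of length 4 arises from exactly two ordered walks
    def ext(path):
        if len(path) == 5: return 1
        return sum(ext(path + [w]) for w in adj[path[-1]] if w not in path)
    return sum(ext([v]) for v in adj) // 2

for n in (8, 12, 16, 20):
    c = count_P4(extremal_graph(n))
    print(n, c, round(c / n**3, 2))
\end{verbatim}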
Before we proceed to the proof of our result, we fix some notation. For a graph $G$, we write $V(G)$ and $E(G)$ for the vertex and edge sets of the graph, respectively. The number of paths of length $4$ in $G$ is denoted by $P_4(G)$. For a vertex $v\in V(G)$, the degree of $v$ is denoted by $d_G(v)$; we may omit the subscript and write simply $d(v)$ if the underlying graph is clear. For $u, w\in V(G)$, we denote the number of common neighbors of $u$ and $w$ by $d(u,w)$.
\section{Proof of Theorem \ref{4}}
For any graph $G$ and vertices $u$ and $v$ of $G$, it is easy to see that the number of paths of length $4$ in which $u$ and $v$ are the two vertices adjacent to the terminal vertices of the path is at most $d(u)d(u, v)d(v)$. Thus,
\begin{align*}
P_4(G)\leq \frac{1}{2}\sum_{u\in V(G)}\sum_{u\neq v\in V(G)}d(u)d(v)d(u, v).
\end{align*}
Notice that this bound is crude in the sense that the lower-order terms can be improved.
Since $d(u, v)\leq \min\{d(u), d(v)\}$, we have
\begin{align*}
P_4(G)\leq \frac{1}{2}\sum_{u\in V(G)}\sum_{u\neq v\in V(G)}d(u)d(v)\min\{d(u), d(v)\}.
\end{align*}
So if $(x_1, x_2, x_3, \dots, x_n)$ is the degree sequence of $G$, arranged in decreasing order, we have that
\begin{align*}
P_4(G) \leq \sum_{i=1}^{n-1}\sum_{i<j}x_ix_j^2.
\end{align*}
To prove Theorem \ref{4}, we need the following lemma.
\begin{lemma}\label{lm1}
Let $G$ be a planar graph on $n$ vertices and let $S\subseteq V(G)$ with $|S|=k$, where $3\leq k\leq n$. Then
$$\sum_{v\in S}d(v)\leq 2n+6k-16.$$
\end{lemma}
\begin{proof}
Let $G'$ be the graph induced by $S$. Since $G'$ is planar and $k\geq 3$, it has at most $3k-6$ edges, and hence $$\sum\limits_{v\in S}d_{G'}(v)\leq 6k-12.$$
Now we count the number of edges between the vertex sets $S$ and $S'=V(G)\setminus S$, say $e=e(S,S')$; that is, the number of edges in the planar bipartite graph with color classes $S$ and $S'$. Since the graph is bipartite, it is also triangle-free. Thus, each face uses at least $4$ edges and each edge is counted twice. Hence $4f\leq 2e$, where $f$ is the number of faces in the planar graph. Using this inequality and Euler's formula, $n+f=e+2$, we obtain $e=e(S,S')\leq 2n-4$. Therefore, \begin{displaymath}
\sum_{v\in S}d(v)=\sum\limits_{v\in S}d_{G'}(v)+e(S,S')\leq 2n+6k-16.\qedhere\end{displaymath}
\end{proof}
Given $n\geq 2$, we define the set
\begin{align*}
A_n = \bigg\{(x_1, x_2, x_3,\dots, x_n) \in \mathbb{Z}^n :~ & n\geq x_1\geq x_2 \geq \cdots \geq x_n \geq 0, \forall k \in \{3, \dots, n\}, \\& \sum_{i=1}^k x_i \leq 2n+6k-16 \ \mbox{ and } \sum_{i=1}^{n}x_i\leq 6n-12\bigg\}.
\end{align*}
Let $(x_1, x_2, \dots, x_n)$ be the degree sequence of an $n$-vertex planar graph $G$ in decreasing order. Since $\sum\limits_{v\in V(G)}d(v) = 2\abs{E(G)}\leq 6n-12$, by Lemma~\ref{lm1}, we have $(x_1, x_2, \dots, x_n)\in A_n$.
Consider the function $S_n:\mathbb{R}^n \to \mathbb{R}$ defined by $$S_n(x_1, x_2, \dots, x_n) = \sum_{i=1}^{n-1}\sum\limits_{i<j}x_ix_j^2;$$
then Theorem~\ref{4} will be a corollary of the following theorem.
\begin{theorem}
\label{bound}
For every $(x_1, x_2, \dots, x_n) \in A_n$, we have \begin{displaymath}
S_n(x_1, x_2, \dots, x_n) \leq n^3 +O(n^2).
\end{displaymath}
\end{theorem}
Before proving Theorem~\ref{bound} we need the following lemmas.
\begin{lemma}\label{lm4}
Let $(x_1, x_2, \dots, x_n)\in A_n$ be a point maximizing $S_n$ over $A_n$.
Then $x_1-x_2\leq 1$.
\end{lemma}
\begin{proof}
Suppose by contradiction that $x_1-x_2\geq 2$. Define the sequence $(y_1, y_2, \dots, y_n)\in A_n$ by $y_1=x_1-1$, $y_2=x_2+1$ and $y_i=x_i$ for all $i\neq 1,2$. Since the contributions of the terms involving $x_j$ with $j\geq 3$ cancel in the difference, we get
\begin{align*}
S_n(y_1, y_2, \dots, y_n)-S_n(x_1, x_2, \dots, x_n)&=(x_1-1)(x_2+1)^2-x_1x_2^2\\&=(x_1-1)(2x_2+1)-x_2^2.
\end{align*}
Since $x_1-1\geq x_2+1$, we have $(x_1-1)(2x_2+1)\geq(x_2+1)(2x_2+1)>x_2^2$, so $S_n(y_1,\dots,y_n)>S_n(x_1,\dots,x_n)$, which is a contradiction.
\end{proof}
\begin{lemma}\label{n}
Let $(x_1, x_2, \dots, x_n)\in A_n$ be a point maximizing $S_n$ over $A_n$.
If $x_1=n$, then $S_n(x_1, x_2, \dots, x_n)\leq n^3+O(n^2)$.
\end{lemma}
\begin{proof}
By Lemma \ref{lm4}, we have $x_2 \in \{n, n-1\}$.
Since $x_1+x_2+x_3\leq 2n+2$, we see $x_3\leq 3$.
Therefore,
\begin{align*}
S_n(x_1, x_2, \dots, x_n)&=\sum_{i=1}^{n-1}\sum\limits_{i<j}x_ix_j^2\leq x_1x_2^2+(x_1+x_2)\sum_{j=3}^{n}x_j^2+\sum_{3\leq i<j\leq n}x_ix_j^2\\&\leq n\cdot n^2+2n\cdot 9(n-2)+27\binom{n-2}{2}=n^3+O(n^2). \qedhere
\end{align*}
\end{proof}
\begin{lemma}\label{lm5}
Let $(x_1 ,x_2 ,\dots, x_n) \in A_n$. If $x_2\leq \frac{n}{18}$, then $S_n(x_1, x_2, \dots, x_n) \leq n^3+O(n^2)$.
\end{lemma}
\begin{proof}
\begin{align*}
S_n(x_1, x_2, \dots, x_n) &=\sum\limits_{i=1}^{n-1}\sum\limits_{i<j}x_ix_j^2\leq \sum\limits_{i=1}^{n-1}\sum\limits_{i<j}x_ix_jx_2=x_2\sum\limits_{i=1}^{n-1}\sum\limits_{i<j}x_ix_j\leq\frac{x_2}{2}\sum\limits_{i,j}x_ix_j\\&=\frac{x_2}{2}\sum\limits_{i}x_i\sum_{j}x_j=\frac{x_2}{2}\left(\sum\limits_{i}x_i\right)^2\leq\frac{x_2}{2}(6n-12)^2.
\end{align*}
Thus, if $x_2\leq \frac{n}{18}$, then $S_n(x_1, x_2, \dots, x_n)\leq n^3+O(n^2)$.
\end{proof}
We need the following claim to prove the lemma which follows.
\begin{claim}\label{cm1}
Suppose $(x_1, x_2, \dots, x_n) \in A_n$.
If $k$ is the smallest integer with $3\leq k\leq n$ such that $\sum\limits_{i=1}^{k}x_i=2n+6k-16$, then $x_{k+1}\leq 6$; if moreover $k\geq 4$, then $x_k\geq 7$.
\end{claim}
\begin{proof}
Suppose first that $k\geq 4$. Since $\sum\limits_{i=1}^{k} x_i = 2 n + 6k - 16$ and, by the minimality of $k$ and the definition of $A_n$, $\sum\limits_{i=1}^{k-1} x_i < 2 n + 6 (k - 1) - 16$, we obtain $2n + 6k - 16 < 2n + 6(k - 1) - 16 + x_k$, and thus $x_k \geq 7$. Now suppose that $x_{k+1} \geq 7$ (for any $k\geq 3$ as in the statement). Then $(2n+6k-16)+7 \leq \sum\limits_{i=1}^{k+1}x_i \leq 2n+6(k+1)-16$, which simplifies to $7 \leq 6$, a contradiction. Therefore, $x_{k+1} \leq 6$.
\end{proof}
\begin{lemma}
\label{cases}
Let $(x_1, x_2, \dots, x_n)\in A_n$ be a point maximizing $S_n$ over $A_n$.
One of the following must hold:
\begin{itemize}
\item[$(i)$] $x_1 = n$,
\item[$(ii)$] $x_2 \leq \frac{n}{18}$,
\item[$(iii)$] there exists a $k \leq 23328$ such that $x_i \leq 6$ for $i>k$.
\end{itemize}
\end{lemma}
\begin{proof}
Suppose that $(i)$ and $(ii)$ are false, that is $x_1<n$ and $x_2 > \frac{n}{18}$. Then we have to show that $(iii)$ holds.
If there exists an $r$ with $3\leq r\leq n$ such that $\sum\limits_{i=1}^r x_i = 2n+6r-16$, then take $k$ to be the smallest such $r$; in this case we may assume $k\geq 4$, for otherwise $k=3\leq 23328$ and, since the $x_i$ are nonincreasing, Claim~\ref{cm1} already gives $x_i\leq 6$ for $i>k$, so $(iii)$ holds. Otherwise, let $k$ be the last index such that $x_k$ is not $0$. In both cases, if $k<n$ we have $x_k > x_{k+1}$, either because of Claim~\ref{cm1} or because $x_k >0 = x_{k+1}$. Additionally, from Claim~\ref{cm1}, we have that $x_i \leq 6$ for $i>k$. We are going to prove that $k\leq 23328$, hence $k$ satisfies $(iii)$.
Define $y = (y_1, y_2, y_3, \dots, y_n)$ by $y_1=x_1+1$, $y_k=x_k-1$, and $y_i=x_i$ for $i\neq 1,k$; note that $y \in A_n$.
We have
\begin{align*}
S_n(y_1, y_2, \dots, y_n)&=\sum_{i=1}^{n-1}\sum\limits_{i<j}y_iy_j^2 \\
&= (x_1+1)\left(x_2^2+x_3^2+\cdots+x_{k-1}^2+(x_k-1)^2+x_{k+1}^2+\cdots+x_n^2\right) \\
&\hspace{12pt} +x_2\left(x_3^2+x_4^2+\cdots+x_{k-1}^2+(x_k-1)^2+x_{k+1}^2+\cdots+x_n^2\right) \\
&\hspace{12pt} +\cdots+x_{k-1}\left((x_k-1)^2+x_{k+1}^2+\cdots+x_n^2\right) \\
&\hspace{12pt} +(x_k-1)\left(x_{k+1}^2+\cdots+x_n^2\right)+\cdots+x_{n-1}x_n^2 \\
&= \sum\limits_{i=1}^{n-1}\sum\limits_{i<j}x_ix_j^2+(1-2x_k)(1+x_1+x_2+x_3+\cdots+x_{k-1}) \\
&\hspace{12pt} +\left(x_2^2+x_3^2+\cdots+x_n^2\right) - \left(x_{k+1}^2+x_{k+2}^2+\cdots+x_n^2\right).
\end{align*}
Thus, $S_n(y_1, y_2, \dots, y_n)-S_n(x_1, x_2, \dots, x_n)=(x_2^2+x_3^2+\cdots+x_k^2)-(2x_k-1)(1 + x_1 + x_2 + \cdots + x_{k-1})$. Since $S_n(y_1, y_2, \dots, y_n)\leq S_n(x_1,x_2,\dots,x_n)$, $x_2>\frac{n}{18}$ and $\sum\limits_{i=1}^{k}x_i\leq 6n$, we have
\begin{displaymath}
\frac{n^2}{18^2}< (x_2^2 + x_3^2 + x_4^2 + \cdots+x_k^2)\leq (2x_k-1)(1 + x_1 + x_2 + \cdots + x_{k-1})<12nx_k.
\end{displaymath}
Therefore, $x_k>\frac{n}{18^2\cdot12}$. Hence we have $6n\geq \sum\limits_{i=1}^kx_i\geq k\frac{n}{18^2\cdot12}$ and $k \leq 6\cdot12\cdot 18^2 = 23328$.
\end{proof}
\begin{lemma}
\label{lm6}
Let $m\geq 2$ be an integer and $x_1, x_2, \dots, x_m$ be reals such that $x_1 \geq x_2 \geq \cdots \geq x_m \geq 0$. Put $t := \sum\limits_{i=1}^m x_i$, then $S_m(x_1,x_2,x_3,\ldots,x_m) \leq (t/2)^3$.
\end{lemma}
\begin{proof}
We are going to proceed by induction on $m$. First, we show the relation holds for $m=2$.
Let $x_1, x_2$ be real numbers such that $x_1\geq x_2 \geq 0$ and $t=x_1+x_2$, which gives $x_2=t-x_1$. Hence, $S_2(x_1,x_2) = S_2(x_1,t-x_1) = x_1(t-x_1)^2$.
Let $f(x) = x(t-x)^2$. We have $f'(x) = t^2 - 4 t x + 3 x^2 = (t-x)(t-3x)$, which is nonpositive on $[t/2, t]$. Since $x_1 \geq t/2$, we have
\begin{align*}
S_2(x_1,x_2) = f(x_1) \leq \max_{t/2 \leq x \leq t}f(x) = f\left(t/2\right) = \frac{t^3}{8}.
\end{align*}
Therefore, the lemma holds for $m=2$.
Now suppose $m\geq 3$ is such that the lemma is true for $m-1$, and let $x_1, x_2, \dots, x_m$ be real numbers such that $x_1\geq x_2\geq\cdots \geq x_m\geq 0$ and $\sum_{i=1}^mx_i = t$.
By the induction hypothesis, we have $S_{m-1}(x_1, x_2, \dots, x_{m-1}) \leq \left(\frac{t-x_m}{2} \right)^3.$
Thus, we get
\begin{align*}
S_m(x_1,x_2,\dots,x_m) & =S_{m-1}(x_1,x_2,\dots,x_{m-1})+(x_1+x_2+x_3+\cdots+x_{m-1})x_m^2 \\&=S_{m-1}(x_1,x_2,\dots,x_{m-1})+(t-x_m)x_m^2 \leq \frac{(t-x_m)^3}{8}+(t-x_m)x_m^2.
\end{align*}
Let $g(x) = \frac{(t-x)^3}{8}+(t-x)x^2$, then $g''(x) = \frac{11 t - 27 x}{4}$.
We have that $x_m\leq \frac{t}{m} \leq \frac{t}{3}$, and $g''(x) \geq \frac{2t}{4} \geq 0$, for $x\leq t/3$. Thus $g$ is convex in $[0,t/3]$, therefore
\begin{align*}
S_m(x_1,x_2,\dots,x_m) \leq g(x_m) \leq \max_{0\leq x \leq t/3} g(x) = \max\left\{g(0),g\left(t/3\right)\right\} = \max\left\{\frac{t^3}{8},\frac{t^3}{9}\right\} = \frac{t^3}{8}.\tag*{\qedhere}
\end{align*}
\end{proof}
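A quick randomized sanity check of this inequality (a Python sketch; the tolerance is ours):
\begin{verbatim}
import random

def S(xs):
    # S_m(x_1, ..., x_m) = sum over i < j of x_i * x_j^2
    return sum(xs[i] * xs[j]**2
               for i in range(len(xs)) for j in range(i + 1, len(xs)))

random.seed(0)
for _ in range(10000):
    m = random.randint(2, 8)
    xs = sorted((random.random() for _ in range(m)), reverse=True)
    t = sum(xs)
    assert S(xs) <= (t / 2)**3 + 1e-12
print("no counterexample found")
\end{verbatim}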
Now we are able to prove Theorem~\ref{bound}.
\begin{proof}[Proof of Theorem~\ref{bound}]
Take $(x_1, x_2, \dots, x_n)$ maximizing $S_n$ over $A_n$.
By Lemma~\ref{cases} we have three possible cases.
If $x_1 = n$, then $S_n(x_1, x_2, \dots, x_n) \leq n^3 + O(n^2)$ by Lemma~\ref{n}.
If $x_2 \leq \frac{n}{18}$, then $S_n(x_1, x_2, \dots, x_n) \leq n^3 + O(n^2)$ by Lemma~\ref{lm5}.
If there exists a $k$ satisfying $(iii)$ in Lemma~\ref{cases}, then we have that $\sum\limits_{i=1}^k x_i \leq 2n+6k = 2n + O(1)$. Hence by Lemma~\ref{lm6}, we have $S_k(x_1,x_2,\dots,x_k) \leq \left(\dfrac{2n+O(1)}{2}\right)^3 = n^3 + O(n^2).$
Therefore, together with the fact that $x_i \leq 6$ for $i>k$,
\begin{align*}
S_n(x_1,x_2,\dots,x_n) &= \sum_{i=1}^{n-1}\sum_{j=i+1}^n x_ix_j^2 \leq \sum_{i=1}^{k-1}\sum_{j=i+1}^k x_ix_j^2 + \sum_{i=1}^{n-1}\sum_{j=k+1}^n x_ix_j^2 \\ &\leq S_k(x_1,x_2,\dots,x_k) + 36n\sum_{i=1}^{n}x_i \leq S_k(x_1,x_2,\dots,x_k) + O(n^2) \leq n^3 + O(n^2). \qedhere
\end{align*}
\end{proof}
\section{ Conjectures and Concluding Remarks.}
Let $P_k$ be a path of length $k$. We propose the following conjectures on the asymptotic values of $f(n,P_{2\ell})$ and $f(n,P_{2\ell+1})$.
\begin{conjecture}\label{gff}
For paths with even length, $f(n,P_{2\ell})=4\ell\left(\frac{n}{\ell}\right)^{\ell+1}+O(n^{\ell}).$
\end{conjecture}
\begin{conjecture}\label{ghh}
For paths with odd length,
$f(n,P_{2\ell+1})=8\ell(\ell+1)\left(\frac{n}{\ell}\right)^{\ell+1}+O(n^{\ell})$, for $\ell\geq 2.$
\end{conjecture}
For $\ell\geq 3$, in both cases the lower bound is attained by an $n$-vertex planar graph obtained by a balanced blow-up of a maximum independent set of vertices of a $2\ell$-vertex cycle, joining the vertices of each blown-up set by a path; see Figure \ref{fig2}. In the case $\ell =2$, the lower bound is attained by the $n$-vertex planar graph given in Figure \ref{fig1}.\\
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.185]
\draw[thick,red](10,0)--(8.7,5)--(5,8.7)--(0,10)--(-5,8.7)--(-8.7,5)--(-10,0);
\draw[thick](5,8.7)--(11.3,6.5)--(10,0)--(13.9,8)--(5,8.7)--(16.5,9.5)--(10,0)(8.7,5)--(13.9,8);
\draw[thick](-5,8.7)--(-11.3,6.5)--(-10,0)--(-13.9,8)--(-5,8.7)--(-16.5,9.5)--(-10,0)(-8.7,5)--(-13.9,8);
\draw[thick](5,8.7)--(0,13)--(-5,8.7)--(0,16)--(5,8.7)--(0,19)--(-5,8.7)(0,10)--(0,16);
\draw[fill=black](0,19)circle(10pt);
\draw (-15.2,8.75) node[rotate=150]{$\dots$};
\draw (15.2,8.75) node[rotate=30]{$\dots$};
\draw (0,17.4) node[rotate=90]{$\dots$};
\draw[fill=black](16.5,9.5)circle(10pt);
\draw[fill=black](-16.5,9.5)circle(10pt);
\draw[fill=black](10,0)circle(10pt);
\draw[fill=black](-10,0)circle(10pt);
\draw[fill=black](0,10)circle(10pt);
\draw[fill=black](0,13)circle(10pt);
\draw[fill=black](0,16)circle(10pt);
\draw[fill=black](8.7,5)circle(10pt);
\draw[fill=black](11.3,6.5)circle(10pt);
\draw[fill=black](13.9,8)circle(10pt);
\draw[fill=black](-8.7,5)circle(10pt);
\draw[fill=black](-11.3,6.5)circle(10pt);
\draw[fill=black](-13.9,8)circle(10pt);
\draw[fill=black](5,8.7)circle(10pt);
\draw[fill=black](-5,8.7)circle(10pt);
\draw[fill=black](-9.6,-2.6)circle(3pt);
\draw[fill=black](9.6,-2.6)circle(3pt);
\draw[fill=black](-8.6,-5)circle(3pt);
\draw[fill=black](8.6,-5)circle(3pt);
\draw[fill=black](-7.1,-7.1)circle(3pt);
\draw[fill=black](7.1,-7.1)circle(3pt);
\end{tikzpicture}
\caption{The graph obtained after blowing up an independent set in an even cycle and joining the vertices by a path. This planar graph attains the lower bound stated in Conjecture \ref{gff} and Conjecture \ref{ghh}.}
\label{fig2}
\end{figure}
\section*{Acknowledgements}
The research of the second, the fifth and the seventh authors is partially supported by the National Research, Development and Innovation Office -- NKFIH, grant K 132696.
The research of the third author was partially supported by Simons Foundation Collaboration Grant \#353292 and by the J. William Fulbright Foreign Scholarship Board.
The research of the fifth author is partially supported by Shota Rustaveli National Science Foundation of Georgia SRNSFG, grant number DI-18-118.
\bibliographystyle{abbrv}
\section{Introduction}
In this article, we present a technique, based on a degeneration idea that Couveignes in \cite{Couv99} used in a slightly different context, to construct covers of elliptic curves with a unique, totally ramified branch point. In particular, given an elliptic curve $E$ and an integer $g\ge 2$, we construct a genus-$g$ curve $C$ and a map $f:C\to E$ of degree $2g-1$, so that $f$ is ramified above exactly one point of $E$, and so that the local monodromy above that point is a $(2g-1)$-cycle.
The study of branched covers of curves goes back to Riemann, who determined necessary and sufficient conditions on the local monodromies for such covers to exist. However, interest in writing down explicit equations for such covers is more recent, and it was only after Bely\u\i\ in \cite{Belyi79} proved his celebrated theorem and Grothendieck laid out his Esquisse d'un Programme \cite{Groth97} for using such covers to understand the absolute Galois group of $\mathbb{Q}$ that interest in this subject took off. More recently, based on work of Beckmann \cite{Beck89}, Roberts in \cite{Rob04} has demonstrated that Bely\u\i\ maps $\mathbb{P}^1\to\mathbb{P}^1$ can be used in practice to construct number fields with limited ramification.
While the theory in Beckmann's and Roberts's work applies to more general covers of curves, computations of covers of other curves have so far been too difficult for practical use in constructing number fields of limited ramification.
On the other hand, inspired by the analogy with Bely\u\i\ maps, work has been done from a mostly topological perspective on branched covers of elliptic curves. This work began with Lochak in \cite{Lochak05} and has continued with work of M\"oller in \cite{Moller05} and of Herrlich and Schmith\"usen (for example, in \cite{HS09}) and their Karlsruhe school.
In Section \ref{origamis}, we present some background on branched covers of elliptic curves ramified at one point (also known as origamis). In Section \ref{family}, we present our result. Finally, in Section \ref{degen}, we present the degeneration technique used to construct the family of covers for the first time. While the technique presented does not guarantee a solution, we suspect that this section will be the most interesting part of this article.
\section{Origamis} \label{origamis}
By the Riemann-Hurwitz formula, an unramified cover of a genus-1 curve must again be a genus-1 curve, which means that such a map is simply a composition of an isogeny of elliptic curves and a translation.
However, if we allow one branched point on our elliptic curve, then there are covers by higher-genus curves. These will be our objects of study.
\begin{defn} Let $E$ be an elliptic curve. An origami is a pair $(C,f)$, where $C$ is a curve and $f:C\to E$ is a map, branched only above one point. \end{defn}
Origamis are so-called because they admit a pictorial interpretation vaguely reminiscent of the eponymous Japanese art form, analogous to that of dessins for Bely\u\i\ maps.
Over $\mathbb{C}$, any elliptic curve $E$ can be written as $\mathbb{C}/\Lambda$, for some lattice $\Lambda\subset\mathbb{C}$. We will find it most helpful to think of $E$ as a fundamental parallelogram for $\Lambda$. The choice of lattice $\Lambda$ or fundamental parallelogram determines the complex structure on $E$. Many of our arguments do not depend on the choice of complex structure; when this happens, we choose to work with the square lattice $\Lambda=\mathbb{Z}[i]$, and our fundamental parallelogram of choice will be the square $S$ with vertices 0, 1, $1+i$, and $i$. (The only reason we prefer this parallelogram is that it is easier to draw than are other parallelograms. It should not generally be assumed that we are interested in the special properties of the elliptic curve $\mathbb{C}/\mathbb{Z}[i]$ not enjoyed by other elliptic curves.) Our elliptic curve will then be the square, with opposite edges identified.
Now, consider a disjoint union of $n$ translates of $S$, and identify various edges to form an orientable surface $X$ subject to the following requirements: \begin{enumerate} \item $X$ is connected. \item Every left edge is identified with a unique right edge, and vice versa. \item Every top edge is identified with a unique bottom edge, and vice versa. \end{enumerate} If we remove all the vertices of the $n$ squares, the resulting figure carries the structure of a Riemann surface $\widetilde{X}$, with a complex structure whose charts are (slightly enlarged versions of) the original $n$ squares minus the vertices; thus $\widetilde{X}$ is a compact Riemann surface with finitely many punctures. There is a unique way of filling in the punctures so that the result is a compact Riemann surface; we call this Riemann surface $X$. Furthermore, $X$ admits a degree $n$ map to the elliptic curve $\mathbb{C}/\mathbb{Z}[i]$ by mapping a point in any translate of $S$ to the corresponding point in $S$. This map is branched only above the vertex of $S$. An example can be seen in Figure \ref{Lorigami}. In this diagram, we have explained the edge identification; in the future, if there are no markings on the edges, we take this to mean that opposite edges are identified. (This will be the case in all origami diagrams in this article.) The map is shown in Figure \ref{origmap}.
\begin{figure} \begin{center} \includegraphics[height=1in]{L-origami.png} \end{center} \caption[An origami diagram]{This diagram represents a genus-2 curve with a degree-3 map to the elliptic curve $y^2=x^3-x$. Here we identify opposite edges, meaning that edge $a$ is identified with edge $b$, edge $c$ with edge $d$, edge $e$ with edge $f$, and edge $g$ with edge $h$.} \label{Lorigami} \end{figure}
\begin{figure} \begin{center} \includegraphics[height=1in]{origmap.png} \end{center} \caption{Shown are all the preimages under $f$ in $C$ of the marked point in $E$.} \label{origmap} \end{figure}
Had we chosen to distinguish a different elliptic curve with a different fundamental parallelogram $P$, the corresponding origami would simply consist of a disjoint union of $n$ translates of $P$ with similar edge identifications.
The origami diagram, though apparently extremely simple, turns out to carry a wealth of combinatorial information in readily available form. For example, we can compute the local monodromy about the branch point. To do this, number the squares of $C$ from 1 to the degree $n$ of the map, in any way. We now define two permutations $g,h\in S_n$, which will be the monodromies around two loops generating $H_1(E)$. Let $g$ be the permutation obtained by moving one square to the right, and let $h$ be the permutation obtained by moving one square up. For example, if we label the square in Figure \ref{Lorigami} with a ``1'' in the top left corner, a ``2'' in the bottom left, and a ``3'' in the bottom right, then $g=(23)$ and $h=(12)$. The local monodromy above the branch point is the commutator $[g,h]=g^{-1}h^{-1}gh$, which in this case is the 3-cycle $(132)$. If we relabel the squares, we obtain different permutations $g'$ and $h'$ and a different commutator; however, there is some $\sigma\in S_n$ so that $g'=\sigma^{-1}g\sigma$ and $h'=\sigma^{-1}h\sigma$, so the cycle type of the local monodromy is well-defined.
It is also possible to determine the genus of $C$ from the origami diagram. To do this, we use Euler's formula $V-E+F=2-2g$. The number of faces $F$ is equal to $n$, and the number of edges is $2n$. To determine the number of vertices, we can either check directly which vertices in the diagram are glued to which other vertices, or we can note that the number of vertices is equal to the number of cycles in the cycle decomposition of the local monodromy. Hence, in this case, there is one vertex, so $V-E+F=-2$, and so the genus is 2. In this article, we will only be interested in origamis for which $V=1$.
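Both computations are easy to mechanize; the following Python sketch (using the left-to-right composition convention and our own labeling of the squares) reproduces the commutator $(132)$ and the genus for Figure \ref{Lorigami}:
\begin{verbatim}
def compose(p, q):
    # "first p, then q"; permutations as dictionaries
    return {x: q[p[x]] for x in p}

def inverse(p):
    return {v: k for k, v in p.items()}

g = {1: 1, 2: 3, 3: 2}   # move one square to the right
h = {1: 2, 2: 1, 3: 3}   # move one square up
c = compose(compose(compose(inverse(g), inverse(h)), g), h)
print(c)                 # {1: 3, 2: 1, 3: 2}, the 3-cycle (132)

n = 3                    # number of squares
V = 1                    # number of cycles of c; E = 2n, F = n
print((2 - (V - 2 * n + n)) // 2)  # genus 2
\end{verbatim}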
\section{A family of algebraic origamis} \label{family}
In this section, we construct a family of examples of explicit origamis, one for each genus $g$.
\begin{defn} We say that an origami is totally ramified if the preimage of the branch point is a single point. \end{defn}
The origamis we construct here will all be totally ramified.
\begin{thm} \label{totramorigami} For each $g\ge 1$ and $t\neq 0,1$, the genus-$g$ curve \[C_t:y^2=x(x+1)(x^{2g-1}+tj(x)^2),\] where \[j(x)=\sum_{i=0}^{g-1} \binom{2g-1}{2i}(x+1)^i,\] admits a degree $2g-1$ map to the elliptic curve \[E_t:y^2=x(x+1)(x+t),\] totally ramified above $(0,0)$ and unramified everywhere else. The map is given by $(x,y)\mapsto(f_1(x),f_2(x)y)$, where \[f_1(x)=\frac{x^{2g-1}}{j(x)^2}\] and \[f_2(x)=\frac{x^{g-1}\sum_{i=0}^{g-1}\binom{2g-1}{2i+1}(x+1)^i}{j(x)^3}.\] \end{thm}
\begin{proof} We first check that $(f_1(x),f_2(x)y)$ actually gives a map from $C_t$ to $E_t$. This amounts to checking that \[f_2(x)^2(x (x+1) (x^{2g-1}+tj(x)^2)) = f_1(x)(f_1(x)+1)(f_1(x)+t)\] is a formal identity. This does happen to be the case; hence $(f_1(x),f_2(x)y)$ does define a map from $C_t$ to $E_t$.
Now, we check that the ramification type is as claimed. To do this, we observe that if $f(x,y)=(f_1(x),f_2(x)y)$ is the map above, and $\omega=\frac{dx}{y}\in\Omega^1_{E_t}$ is an invariant differential on $E_t$, then \[f^\ast\omega=(2g-1)\frac{x^{g-1}\; dx}{y}.\] So, $f^\ast\omega$ vanishes to order $2g-2$ at $(0,0)$ and has no other zeros or poles. Hence, $f$ is totally ramified at $(0,0)$ and unramified everywhere else. \end{proof}
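Both verifications in the proof are mechanical; for instance, the following SymPy sketch (the variable names are ours) confirms, for small $g$, the identity defining the map and the relation $f_1'(x)/f_2(x)=(2g-1)x^{g-1}$ used for the pullback:
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t')

for g in (2, 3, 4):
    j = sum(sp.binomial(2*g - 1, 2*i) * (x + 1)**i for i in range(g))
    s = sum(sp.binomial(2*g - 1, 2*i + 1) * (x + 1)**i for i in range(g))
    f1 = x**(2*g - 1) / j**2
    f2 = x**(g - 1) * s / j**3
    curve = f2**2 * x * (x + 1) * (x**(2*g - 1) + t * j**2) \
            - f1 * (f1 + 1) * (f1 + t)
    pullback = sp.diff(f1, x) / f2 - (2*g - 1) * x**(g - 1)
    print(g, sp.cancel(curve) == 0, sp.cancel(pullback) == 0)
\end{verbatim}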
\begin{rem} It is worth noting that the map $(f_1(x),f_2(x)y)$ is independent of $t$ and hence defines a map $F:\mathbb{P}^2_{\mathbb{C}}\to\mathbb{P}^2_{\mathbb{C}}$. If we fix an elliptic curve $E_t$ in the target $\mathbb{P}^2$, then $F^{-1}(E_t)$ is a union of several irreducible components, one of which is $C_t$. If we take $t=0$ or $t=1$, then $E_t$ is a nodal cubic, and $C_t$ is a singular curve of geometric genus $0$. This will be relevant in the next section. \end{rem}
\section{Degeneration techniques} \label{degen}
The proof given in the previous section thoroughly fails to capture the motivation that went into the discovery of this result. In fact, the story of finding these examples is much more interesting than is the proof. Therefore, we now discuss how the reader could (and the author did) discover such an example. To do this, we carefully work with the lowest-degree example: that of a degree-3 origami from a genus-2 curve to an elliptic curve. Such an origami must necessarily be totally ramified.
In the remainder of this section, we perform some educated guesswork; it will not be clear whether our guesses will turn out to be successful until we present a proof in the style of that of Theorem \ref{totramorigami}.
We will construct a family of genus-2 curves mapping to a family of elliptic curves, parametrized (essentially) by their Legendre form. Hence, for any $j$-invariant other than 0 or 1728, we will actually construct six genus-2 curves mapping to an elliptic curve with this $j$-invariant. These six genus-2 curves come in three pairs of isomorphic curves; hence, we generically obtain three pairwise nonisomorphic covers in this way.
\begin{figure} \begin{center} \includegraphics[height=1in]{L-origami-degen.png} \end{center} \caption{Here, we shrink the dotted edges to a point. The resulting surface has geometric genus 0.} \label{Lorigamidegen} \end{figure} \begin{figure} \begin{center} \includegraphics[height=1in]{threepinches.png} \end{center} \caption{A three-dimensional version of Figure \ref{Lorigamidegen}.} \label{threepinches} \end{figure}
To do this, we start by constructing a cover $C'$ of the nodal cubic \[E':y^2=x^3+x^2,\] which we expect to arise as a degeneration of covers of elliptic curves which limit to $E'$. One possibility is that the degenerate origami diagram will look like Figure \ref{Lorigamidegen}, with the dotted edges collapsed to a point. The curve represented by this origami has geometric genus 0, since it is a double torus with three pinched loops, as in Figure \ref{threepinches}. Furthermore, since the origamis are totally ramified, the family of covers must degenerate to a curve with only one preimage of the branch point in $E'$. Finally, a map $C'\to E'$ can be described as a map from the normalization of $C'$ to the normalization of $E'$.
The next thing to do is to construct an explicit equation for $C'$, as well as its normalization map. While in general this is a notoriously difficult problem, it is easy in this case. By the picture, we can see that $C'$ has one singular point and geometric genus $0$; hence it has a Weierstra\ss\ equation of the form $y^2=(x-a)^4(x-b)$. We choose to take $a=0$ and $b=-1$ so that we obtain the curve \[C':y^2=x^5+x^4.\]
To compute the normalization of $E'$, we note that the map $E'\to\mathbb{P}^1$ given by $(x,y)\mapsto x+1$ has a square root $y/x$ in $\mathbb{C}(E')$. Letting $u=y/x$, we have $x=u^2-1$ and $y=u^3-u$, so $\mathbb{C}(E')=\mathbb{C}(u)$, and the normalization map is $\mathbb{P}^1\to E'$, given by $u\mapsto(u^2-1,u^3-u)$. A similar computation shows that the normalization of $C'$ is $\mathbb{P}^1\to C'$, given by $t\mapsto (t^2-1,t(t^2-1)^2)$. Note that, in the normalizations of both $C'$ and $E'$, the preimage of the nodal point is $\{\pm 1\}\subset\mathbb{P}^1$.
The map on normalizations must have the same degree as the map $C'\to E'$, and it can only be branched at the preimages of the node of $E'$ and the branch point of the map $C'\to E'$. In this case, the only possibilities are $\{\pm 1\}$, so by the Riemann-Hurwitz formula it must be branched at both points. Fortunately, there are very few maps $\mathbb{P}^1\to\mathbb{P}^1$ with only two branch points: they are simply conjugates of $z\mapsto z^n$, where $n$ is the degree of the map. In this case, the map on normalizations is \[z\mapsto\frac{z^3+3z}{3z^2+1}.\]
Now, in order to compute the map $f:C'\to E'$, note that we have the following commutative diagram: \[\xymatrix{\mathbb{P}^1\ar[r]\ar[d] & \mathbb{P}^1\ar[d] \\ C'\ar[r]^f & E'}\] Furthermore, the vertical maps have near-inverses; the inverse of the vertical arrow on the left is given by $(x,y)\mapsto y/x^2$. Hence, $f$ is the composition of the other three arrows; putting this together, we have \[f(x,y)=\left(\frac{x^3}{(3x+4)^2},\frac{xy(x+4)}{(3x+4)^3}\right).\]
We now proceed to prolong $f$ to a map from a family of genus-2 curves to the Legendre family of elliptic curves by means of deformations.
In order to figure out the map from a family of nonsingular genus-2 curves to a family of elliptic curves, we deform the defining equations for the nodal quintic and for the map. We let the defining equation of the genus-2 curve be \[C_t:y^2=x^5+(1+at)x^4+btx^3+ctx^2+dtx,\] where $a,b,c,d\in\mathbb{C}[\![t]\!]$. The defining equation of the elliptic curve will be \[E_t:y^2=x(x+1)(x+t).\] The map will be \[(x,y)\mapsto\left(\frac{x^3}{((3+et)x+(4+ft))^2},\frac{(x^2+(4+gt)x)y}{((3+et)x+(4+ft))^3}\right).\] A priori, $a,b,c,d,e,f,g$ are power series in $t$; for now, we are only interested in their constant terms. Expanding everything out and equating the $tx^i$ terms for various values of $i$ gives us a system of linear equations; we then find that \begin{align*} a&= 9 \\ b&= 33 \\ c&= 40 \\ d&=16 \\ e=f=g&= 0 \end{align*} is a solution. In fact, these values of $a,b,c,d,e,f,g$ are not merely the constant terms of power series; they are in fact the entire power series. Hence, if we let \[C_t:y^2=x^5+(1+9t)x^4+33tx^3+40tx^2+16tx\] and \[E_t:y^2=x(x+1)(x+t),\] then \[f(x,y)=\left(\frac{x^3}{(3x+4)^2},\frac{xy(x+4)}{(3x+4)^3}\right)\] is a map $f:C_t\to E_t$. Indeed, this map is only branched over $(0,0)$, with its preimage being $(0,0)$; we can check this directly, or we can verify that the pullback of the invariant differential $\omega=\frac{dx}{y}\in\Omega^1_{E_t}$ (which has no zeros or poles) is $3x\frac{dx}{y}\in\Omega^1_{C_t}$, which has a double zero at $(0,0)$ and no other zeros or poles.
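One can confirm the outcome of this computation directly; for example, the following SymPy sketch (our own variable names) substitutes $y^2$ from the equation of $C_t$ and checks that the image point satisfies the equation of $E_t$:
\begin{verbatim}
import sympy as sp

x, y, t = sp.symbols('x y t')
f1 = x**3 / (3*x + 4)**2
f2 = x * y * (x + 4) / (3*x + 4)**3

C = x**5 + (1 + 9*t)*x**4 + 33*t*x**3 + 40*t*x**2 + 16*t*x  # y^2 on C_t
onE = (f2**2).subs(y**2, C) - f1*(f1 + 1)*(f1 + t)
print(sp.cancel(onE))  # 0, so f maps C_t to E_t
\end{verbatim}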
The same method allows us to construct totally ramified origamis in every genus. For instance, in genus 3, the degenerate curve has the form $y^2=x^6(x+1)$, and the origami diagram is shown in Figure \ref{orig3}. In general, the origami diagrams we use for the constructions here are staircase-shaped.
\begin{center}\begin{figure} \includegraphics[height=1.5in]{orig3.png} \caption{A genus-3 origami. Here, opposite sides are glued together.} \label{orig3} \end{figure}\end{center}
When we perform a degeneration procedure as we did in the genus-2 case, we find that \[C_t:y^2=x^7+(1+25t)x^6+225tx^5+760tx^4+1200tx^3+896tx^2+256tx\] and \[E_t:y^2=x(x+1)(x+t).\] Then \[f(x,y)=\left(\frac{x^5}{(5x^2+20x+16)^2},\frac{x^3(x^2+12x+16)y}{(5x^2+20x+16)^3}\right)\] is a totally ramified origami $f:C_t\to E_t$.
It is worth noticing that, in these cases, we need only change the equation of the genus-$g$ curve as $t$ varies; in particular, the map does not change. Hence, we have a map $f:\mathbb{P}^2\to\mathbb{P}^2$ so that the inverse images of elliptic curves in a certain family are all genus-$g$ curves, so that the map is an origami. The proof above explains this phenomenon.
It would be interesting to see this method generalize to cases where the base need not be an elliptic curve. In particular, we would like to know to what extent it is possible to construct branched covers of a curve $C$ in $\mathbb{P}^2$ by constructing a suitable map $f:\mathbb{P}^2\to\mathbb{P}^2$, chosen so that its branch locus is consistent with the desired branching properties of the cover of $C$, and restricting to the map $f\mid_D:D\to C$, where $D$ is some irreducible component inside $f^{-1}(C)$ for which $f\mid_D:D\to C$ is flat. The author has used this method to construct several examples of branched and unbranched covers of higher-genus curves, but a detailed study of this method may be the topic of future work.
It is not so easy to use the techniques in this article to write down curves $C$ and maps $f$ for other ramification types. One challenge is that it is unclear how the degenerate pictures ought to look. Another challenge is that, even if it were clear, sometimes the normalizations of the covering degenerate curve may have positive genus, and we would then need to write down explicit equations for maps from a positive-genus curve to a genus-1 curve, and there is no known good procedure for doing so. The author has developed different techniques one can use to write down equations with other branching types, and some of these techniques are presented in \cite{RS12}.
\section*{Acknowledgements}
The author would like to thank Akshay Venkatesh for suggesting this problem and for many helpful discussions and comments.
\section{\label{I}INTRODUCTION}
Magnetic skyrmions (MSk's) are chiral spin textures with a vortex-like configuration, whose existence was predicted earlier \cite{Bog}; they are of potential interest in the field of information storage, processing devices, and neuromorphic computing \cite{rev1,rev2,rev3,rev4}. As is well known, such topologically protected structures result from the Dzyaloshinskii–Moriya interaction (DMI), which causes noncollinear ordering of magnetic moments in media without an inversion center \cite{Dz,Mor}. An example of a medium with stable MSk's, in which the inversion symmetry is broken by the interface, is a ferromagnet (FM)/heavy metal (HM) bilayer, where the HM provides the DMI in the system of FM magnetic moments \cite{DMI1,DMI2,DMI3,DMI4,DMI5}. This DMI removes the chiral degeneracy, thus mediating nonreciprocal spin-wave propagation \cite{DMISW1,DMISW2} and the formation of chiral domain walls \cite{Bog,DMI2,DW}. It is also of interest that the chiral symmetry can be broken by the magnetostatic interaction between the FM film and a polarizable substrate, either a paramagnet (PM) \cite{MS1,MS2,MS3} or a superconductor (SC) \cite{MS2,MS3,MS4,MS5}. One consequence of this coupling is spin-wave nonreciprocity \cite{MS3,MSSW1,MSSW2,MSSW3,MSSW4,MSSW5,MSSW6,MSSW7,MSSW8} in FM/PM \cite{MS3,MSSW3,MSSW4,MSSW5,MSSW6,MSSW7,MSSW8} and FM/SC \cite{MS3,MSSW1,MSSW2} systems. Apart from these studies, it was found that the magnetostatic interaction in similar systems contributes to the stabilization of chiral magnetic textures \cite{stab1,stab2,stab3,stab4}. It has also been shown that the SC has a significant effect on the MSk \cite{SC1,SC2,SC3,SC4,SC5,SC6,SC7}. These studies indicate that MSk's can be stabilized in an FM film on a polarizable substrate due to the magnetostatic interaction.
In this paper, we theoretically study FM/PM and FM/SC hybrid systems, where the PM is an FM above $T_c$ and the SC in FM/SC is treated in the London limit below $T_c$; here $T_c$ denotes the Curie temperature of the PM or the critical temperature of the normal metal-SC phase transition, respectively. We show that the effective DMI in the systems under study arises as an additional term in the energy of the FM film,
\begin{equation}
F_\text{DMI}^\text{eff}\propto\int_{-\infty}^{+\infty} D_\text{eff}(q)(\mathbf{n}\times\mathbf{q})\cdot[\mathbf{M(-q)}\times\mathbf{M(q)}]\,d\mathbf{q}
\label{eq:1},
\end{equation}
where $D_\text{eff}$ is the effective DMI constant, whose sign depends on the type of substrate ($D_\text{eff}<0$ and $D_\text{eff}>0$ for the PM and SC, respectively), $\mathbf{n}$ is the normal to the interface (Fig.~\ref{fig:1}), $\mathbf{M(q)}$ is the Fourier transform of the FM film magnetization $\mathbf{M}(\bm{\rho}=x,y)$, and $\mathbf{q}=(q_x,q_y,0)$ is the in-plane wave vector. Within our model, this effective DMI depends on the system temperature $T$ through the susceptibility of the PM, $\chi(T)=C/(T-T_c)$, or the London penetration depth of the SC, $\lambda(T)=\lambda_0/(1-T/T_c)^{1/2}$, which enter $D_\text{eff}$. Here $C$ and $\lambda_0$ are the PM Curie constant and the SC London penetration depth at zero temperature, respectively. The contribution (\ref{eq:1}) results from the magnetostatic interaction between the film and the substrate. We find that only in the case of a PM substrate does the effective DMI lead to stabilization of Néel-type chiral magnetic textures, such as a magnetic spiral (MSp) and an MSk. The strong temperature sensitivity of $D_\text{eff}$ near $T_c$ allows for tuning the effective DMI and thus controlling the MSk radius, which can be useful for applications.
The article is organized as follows. In Section~\ref{II}, we describe the model used and derive an analytical expression for the magnetic energy of the systems under consideration. Section~\ref{III} presents conditions for the stability of one-dimensional Néel-type magnetic textures, namely, (1) a domain wall and (2) an MSp. In Section~\ref{IV}, we consider how the MSk can be stabilized. Section~\ref{V} contains a brief summary of the obtained results. In Appendices~\ref{A} and \ref{B}, we calculate the magnetostatic energy in the systems under consideration and consider the case of an MSp with a Bloch component.
\begin{figure}[h!!]
\includegraphics{fig_1
\caption{\label{fig:1} Schematic for a magnetic spiral (skyrmion) of the Néel-type (solid arrows) and its images (dashed arrows) in FM/PM (a, b) and FM/SC (c, d). The right ($\psi=0$, $\sigma=1$) (a, c) and left ($\psi=\pi$, $\sigma=-1$) (b, d) rotations of the solid arrows lead to the left and right rotations of the dashed ones, respectively.}
\end{figure}
\section{\label{II}FREE ENERGY OF THE FM/PM AND FM/SC SYSTEMS}
As for the exact geometry of our system, we place the FM film at $0<z<h$, while a PM or SC substrate occupies $z<0$ (Fig.~\ref{fig:1}). It is assumed that $h\ll L_0$, where $L_0=(2A_\text{ex}/M_0^2)^{1/2}$ is the exchange length, with $A_\text{ex}$ and $M_0$ being the exchange stiffness and the saturation magnetization of the FM film. As a result, the dependence of $\mathbf{M}$ on $z$ can be neglected. The nonuniform magnetization $\mathbf{M}$ induces a magnetization $\mathbf{m}(\bm{\rho},z)$ or a supercurrent $\mathbf{j}_s(\bm{\rho},z)$ in the PM and SC substrates, respectively, which produce stray fields in the FM region ($z>0$). In this Section, we consider the free energy of an FM film on PM (Subsection A) and SC (Subsection B) substrates. The obtained results are combined in Subsection C, where the effective DMI term is identified.
\subsection{\label{IIA}FM/PM}
The total free energy of the FM/PM system can be written as the sum of three contributions, $F_0$, $F_-$, and $F_+$, which are the free energies of the regions $0<z<h$, $z<0$, and $z>h$, respectively, i.e.,
\begin{subequations}
\begin{equation}
F=F_0+F_-+F_+
\label{eq:2a},
\end{equation}
\begin{eqnarray}
F_0&=&\int_{-\infty}^{+\infty}\int_{0}^{h}\left[\frac{L_0^2}{2}\sum_{\alpha=1}^2\left(\frac{\partial\mathbf{M}}{\partial x_\alpha}\right)^2-\frac{1}{2}K_aM_z^2 \right.
\nonumber\\
& &\left.-\left(\mathbf{M}\cdot \mathbf{H}_\text{ext}\right)-\left(\mathbf{M}\cdot \mathbf{H}_0\right)-\frac{\mathbf{H}_0^2}{8\pi}\right]d\bm{\rho}\,dz
\label{eq:2b},
\end{eqnarray}
\begin{equation}
F_-=\int_{-\infty}^{+\infty}\int_{-\infty}^{0}\left[\frac{1}{2\chi}\mathbf{m}^2-\left(\mathbf{m}\cdot\mathbf{H}_-\right)-
\frac{\mathbf{H}_-^2}{8\pi}\right]d\bm{\rho}\,dz
\label{eq:2c},
\end{equation}
\begin{equation}
F_+=-\int_{-\infty}^{+\infty}\int_{h}^{+\infty}\frac{\mathbf{H}_+^2}{8\pi}\,d\bm{\rho}\,dz
\label{eq:2d}.
\end{equation}
\end{subequations}
\noindent
In Eq.~(\ref{eq:2b}), the first term is the nonuniform exchange energy, the second term is the magnetic anisotropy energy with $K_a>4\pi$ being the magnetic anisotropy constant, the third term is the Zeeman energy in the external magnetic field $\mathbf{H}_\text{ext}$, and the fourth term is the magnetostatic energy. The last terms in Eqs.~(\ref{eq:2b}), (\ref{eq:2c}), and (\ref{eq:2d}) represent the energies of the magnetic (magnetostatic) fields $\mathbf{H}_0\ (0<z<h)$, $\mathbf{H}_-\ (z<0)$, and $\mathbf{H}_+\ (z>h)$ jointly produced by the magnetizations $\mathbf{M}$ and $\mathbf{m}$. We assume that $\mathbf{m}=\chi\mathbf{H}_-$, which is valid under the condition that the spatial scale of the $\mathbf{M}$ inhomogeneity is much larger than the exchange length in the PM; otherwise, $\mathbf{m}$ is determined by a convolution of $\mathbf{H}_-$ with a nonlocal integral kernel \cite{MS3}. Taking into account the electromagnetic boundary conditions \cite{Landau} and the relation $\mathbf{m}=\chi\mathbf{H}_-$, the total free energy reads
\noindent
\begin{eqnarray}
F&=&\int_{-\infty}^{+\infty}\int_{0}^{h}\left[\frac{L_0^2}{2}\sum_{\alpha=1}^2\left(\frac{\partial\mathbf{M}}{\partial x_\alpha}\right)^2-\frac{1}{2}K_a M_z^2 \right.
\nonumber\\
& &\left.-\left(\mathbf{M}\cdot \mathbf{H}_\text{ext}\right)-\frac{1}{2}\left(\mathbf{M}\cdot \mathbf{H}_0\right)\right]d\bm{\rho}\,dz
\label{eq:3}.
\end{eqnarray}
It can be seen that $F_-$ and the energies of the magnetostatic fields partially cancel each other, which results in the factor $1/2$ in the last term of Eq.~(\ref{eq:3}).
\subsection{\label{IIB}FM/SC}
As for the FM/SC system, we write its free energy as a functional of the magnetic induction. Apart from this modification, the expressions for the free energy of the FM/SC system are similar to those for the FM/PM one [Eqs.~(\ref{eq:2b})--(\ref{eq:2d})]:
\begin{subequations}
\begin{eqnarray}
F_0&=&\int_{-\infty}^{+\infty}\int_{0}^{h}\left[\frac{L_0^2}{2}\sum_{\alpha=1}^2\left(\frac{\partial\mathbf{M}}{\partial x_\alpha}\right)^2-\frac{1}{2}K_aM_z^2 \right.
\nonumber\\
& &\left.-\left(\mathbf{M}\cdot \mathbf{H}_\text{ext}\right)-\left(\mathbf{M}\cdot \mathbf{B}_0\right)+\frac{\mathbf{B}_0^2}{8\pi}\right]d\bm{\rho}\,dz
\label{eq:4a},
\end{eqnarray}
\begin{equation}
F_-=\frac{1}{8\pi}\int_{-\infty}^{+\infty}\int_{-\infty}^{0}\left[\mathbf{B}_-^2+\lambda^2(\text{rot}\mathbf{B}_-)^2\right]d\bm{\rho}\,dz
\label{eq:4b},
\end{equation}
\begin{equation}
F_+=\int_{-\infty}^{+\infty}\int_{h}^{+\infty}\frac{\mathbf{B}_+^2}{8\pi}\,d\bm{\rho}\,dz
\label{eq:4c}.
\end{equation}
\end{subequations}
\noindent
The first and second terms in Eq.~(\ref{eq:4b}) are the energies of the magnetostatic field and of the superconducting current, respectively. Similarly, the magnetostatic fields $\mathbf{B}_0\ (0<z<h)$, $\mathbf{B}_-\ (z<0)$, and $\mathbf{B}_+\ (z>h)$ are generated jointly by the magnetization $\mathbf{M}$ and the supercurrent $\mathbf{j}_s$. We assume that $\mathbf{j}_s$ obeys the London equation, i.e., $\mathbf{j}_s=-c\mathbf{A}_-/(4\pi\lambda^2)$, where $\mathbf{B}_-=\text{rot}\mathbf{A}_-$. Using this equation along with the electromagnetic boundary conditions \cite{Landau}, we can reduce the free energy (\ref{eq:2a}) of the FM/SC system to
\begin{eqnarray}
F&=&\int_{-\infty}^{+\infty}\int_{0}^{h}\left[\frac{L_0^2}{2}\sum_{\alpha=1}^2\left(\frac{\partial\mathbf{M}}{\partial x_\alpha}\right)^2-\frac{1}{2}K_a M_z^2 \right.
\nonumber\\
& & \left.-\left(\mathbf{M}\cdot \mathbf{H}_\text{ext}\right)-\frac{1}{2}\left(\mathbf{M}\cdot \mathbf{B}_0\right)\right]d\bm{\rho}\,dz
\label{eq:5}.
\end{eqnarray}
Again, $F_-$ and the energies of the magnetostatic fields partially cancel each other, leaving the factor $1/2$ in the last term of Eq.~(\ref{eq:5}).
\subsection{\label{IIC}Magnetostatic energy and effective DMI}
The last terms in Eqs.~(\ref{eq:3}) and (\ref{eq:5}) represent the magnetostatic energy, $F_\text{MS}=F_\text{MS}^\text{intra}+F_\text{MS}^\text{inter}$, which consists of two contributions. One of them, $F_\text{MS}^\text{intra}$, describes the interaction between magnetic moments inside the FM film (intralayer contribution), while the other, $F_\text{MS}^\text{inter}$, describes the interaction between $\mathbf{M}$ and $\mathbf{m}$ or $\mathbf{j}_s$ (interlayer contribution). These terms can be written as follows (see Appendix~\ref{A} for their derivation):
\begin{subequations}
\begin{eqnarray}
F_\text{MS}^\text{intra(inter)}&=&\frac{S^2}{(2\pi)^2}\int_{-\infty}^{+\infty}D_{\alpha\beta}^\text{intra(inter)}(\mathbf{q}) \nonumber\\
& &\times M_\alpha(-\mathbf{q})M_\beta(\mathbf{q})\,d\mathbf{q}
\label{eq:6a},
\end{eqnarray}
\begin{eqnarray}
D_{\alpha\beta}^\text{intra}(\mathbf{q})&=&\frac{2\pi}{q}\left\{\left[qh-\left(1-e^{-qh}\right)\right]\frac{q_\alpha q_\beta}{q^2} \right.
\nonumber \\
& &+\left(1-e^{-qh}\right)\delta_{\alpha z}\delta_{\beta z} \biggl \}
\label{eq:6b},
\end{eqnarray}
\begin{eqnarray}
D_{\alpha\beta}^\text{inter}(\mathbf{q})&=&qhD_\text{eff}(q)\left(\frac{q_\alpha q_\beta}{q^2}-\frac{iq_\alpha}{q}\delta_{\beta z}+ \right.
\nonumber \\
& &\left.+\frac{iq_\beta}{q}\delta_{\alpha z}+\delta_{\alpha z}\delta_{\beta z}\right)
\label{eq:6c},
\end{eqnarray}
\end{subequations}
\noindent
where $S$ is the area of the system, $i$ is the imaginary unit, $\delta_{\alpha\beta}$ is the Kronecker symbol, and the effective DMI constant $D_\text{eff}$ has the form
\begin{equation}
D_\text{eff}(q)=-\frac{\pi\kappa(q)}{q^2h}\left(1-e^{-qh}\right)^2
\label{eq:7},
\end{equation}
where $\kappa(q)$ is the parameter which describes the substrate properties and depends on $T$,
\begin{equation}
\kappa(q)=\left\{
\begin{array}{rcl}
&\frac{2\pi\chi}{1+2\pi\chi}&\text{ for FM/PM,}\\
&-\frac{\sqrt{q^2\lambda^2+1}-q\lambda}{\sqrt{q^2\lambda^2+1}+q\lambda}&\text{ for FM/SC,}\\
\end{array}
\right.
\label{eq:8}
\end{equation}
which lies in the range $[0,1]$ for FM/PM and $[-1,0]$ for FM/SC. Note that the sign of $\kappa$ (and hence the sign of $F_\text{MS}^\text{inter}$) depends on the substrate type, and that $|\kappa|\rightarrow1$ as $T\rightarrow T_c$ in the FM/PM case ($\chi\rightarrow\infty$) and as $q\lambda\rightarrow0$ in the FM/SC case (in particular, at $T=0$ for $\lambda_0\rightarrow0$).
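For orientation, the sign structure of $D_\text{eff}$ in Eqs.~(\ref{eq:7}) and (\ref{eq:8}) is easily evaluated numerically; the following Python sketch uses illustrative parameter values only:
\begin{verbatim}
import numpy as np

h = 10e-7                      # film thickness, 10 nm in cm
q = np.logspace(3, 6, 4)       # wave numbers, 1/cm

kappa_PM = 2*np.pi*0.4 / (1 + 2*np.pi*0.4)   # chi = 0.4
lam = 50e-7                                   # lambda = 50 nm in cm
s = np.sqrt(q**2 * lam**2 + 1)
kappa_SC = -(s - q*lam) / (s + q*lam)

D_eff = lambda kap: -np.pi * kap * (1 - np.exp(-q*h))**2 / (q**2 * h)
print(np.sign(D_eff(kappa_PM)))   # all -1: D_eff < 0 for FM/PM
print(np.sign(D_eff(kappa_SC)))   # all +1: D_eff > 0 for FM/SC
\end{verbatim}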
As the form of $F_\text{MS}^\text{intra}$ is similar to that of $F_\text{MS}^\text{inter}$, one can introduce the tensor $D_{\alpha\beta}=D_{\alpha\beta}^\text{intra}+D_{\alpha\beta}^\text{inter}$, which describes the total magnetostatic energy $F_\text{MS}$. This tensor can be represented as the sum of two terms, i.e., $D_{\alpha\beta}=D_{\alpha\beta}^s+D_{\alpha\beta}^a$, where $D_{\alpha\beta}^s=D_{\beta\alpha}^s$ is a symmetric tensor and $D_{\alpha\beta}^a=-D_{\beta\alpha}^a$ is an antisymmetric tensor whose components are nonzero only in the presence of a substrate. The $D_{\alpha\beta}^a$ tensor can be represented as
$D_{\alpha\beta}^a=ihD_\text{eff}(q)\varepsilon_{\alpha\beta\gamma}{(\mathbf{n}\times\mathbf{q})}_\gamma$, where $\varepsilon_{\alpha\beta\gamma}$ is the Levi-Civita tensor. Substituting $D_{\alpha\beta}^a$ into Eq.~(\ref{eq:6a}), we obtain the expression for the effective DMI energy (\ref{eq:1}), where the coefficient of proportionality is $ihS^2/(2\pi)^2$. The interfacial DMI energy arising in the FM/HM system can also be reduced to the form given by Eq.~(\ref{eq:1}) after replacing $D_\text{eff}$ with a DMI constant independent of $q$.
If the spatial scale of $\mathbf{M}(\bm{\rho})$ is much larger than the film thickness $h$, one can assume that $qh\ll1$. So, expanding $D_{\alpha\beta}$ into a series up to terms linear in $qh$, one obtains that
\begin{eqnarray}
D_{\alpha\beta}&\approx&\pi h\left\{qh\left[1-\kappa(q)\right]\frac{q_\alpha q_\beta}{q^2}+2\left[1-\frac{1}{2}\kappa(q)qh\right]\delta_{\alpha z}\delta_{\beta z} \right.
\nonumber \\
& &\left.+\kappa(q)qh\left(\frac{iq_\alpha}{q}\delta_{\beta z}-\frac{iq_\beta}{q}\delta_{\alpha z}\right)\right\}
\label{eq:9}.
\end{eqnarray}
If $1-\kappa\ll1$ (FM/PM case, $T\sim T_c$), one can neglect the first term in Eq.~(\ref{eq:9}). One can also neglect $\kappa qh/2$ relative to unity in the second term of Eq.~(\ref{eq:9}), since $qh\ll1$ \footnote{Strictly speaking, it is not fully consistent to neglect this term, which is linear in $qh$, while keeping other terms of the same order; we will verify below that this neglect does not lead to significant errors.}. As a result, the total free energy (\ref{eq:3}) can be rewritten as
\begin{eqnarray}
F&=&h\int_{-\infty}^{+\infty}\left[\frac{L_0^2}{2}\sum_{\alpha=1}^{2}{\left(\frac{\partial\mathbf{M}}{\partial x_\alpha}\right)^2-\frac{1}{2}}K_a^\text{eff}M_z^2 \right. \nonumber \\
& &\left. -\left(\mathbf{M}\cdot\mathbf{H}_\text{ext}\right)+f_\text{DMI}^\text{eff}\right]\,d\bm{\rho}
\label{eq:10},
\end{eqnarray}
where $K_a^\text{eff}=K_a-4\pi$, and
\begin{eqnarray}
f_\text{DMI}^\text{eff}&=&-\pi\kappa h\left(M_z\frac{\partial M_x}{\partial x}-M_x\frac{\partial M_z}{\partial x}\right.\nonumber \\
& &\left. +M_z\frac{\partial M_y}{\partial y}-M_y\frac{\partial M_z}{\partial y}\right)
\label{eq:11}.
\end{eqnarray}
It can be seen that, in this approximation, taking into account the magnetostatic energy leads to renormalization of the anisotropy constant and the appearance of the term $f_\text{DMI}^\text{eff}$, which has the conventional form for the DMI contribution \cite{Bog}. This energy can be evaluated as $\pi\kappa hM_0^2\sim1\text{ erg/cm}^2$ at $\pi h\sim10\text{ nm}$, $M_0\sim10^3\text{ erg/(Gs}\cdot\text{cm}^3)$, which is comparable to the DMI energy in the FM/HM system \cite{DMISW2}.
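For the reader's convenience we spell out this order-of-magnitude estimate (elementary arithmetic with the values just quoted): in Gaussian units $\text{Gs}^2=\text{erg}/\text{cm}^3$, so that
\begin{equation*}
\pi\kappa h\,M_0^2\sim(\pi h)\,M_0^2\simeq10^{-6}\,\text{cm}\times10^{6}\,\frac{\text{erg}}{\text{cm}^3}=1\ \frac{\text{erg}}{\text{cm}^2}\qquad(\kappa\sim1).
\end{equation*}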
\section{\label{III}ONE-DIMENSIONAL CASE}
When the FM magnetization $\mathbf{M}$ depends on one coordinate ($x$) only, it can be represented as
\begin{eqnarray}
\mathbf{M}(x)&=&M_0\left[\text{sin}\Theta(x)\,\text{cos}\psi\,{\hat{\mathbf{e}}}_x+\text{sin}\Theta(x)\,\text{sin}\psi\,{\hat{\mathbf{e}}}_y \nonumber \right. \\
& &\left. +\text{cos}\Theta(x)\,{\hat{\mathbf{e}}}_z\right]
\label{eq:12},
\end{eqnarray}
where ${\hat{\mathbf{e}}}_i$ is the $i$-th unit vector of the Cartesian coordinate system, $\psi$ is a constant, and $\Theta(x)$ is a function of $x$. At $\psi=\pm\pi/2$ and at $\psi=0$ or $\pi$, the configuration of $\mathbf{M}$ is of the Bloch and Néel type, respectively. We study the stabilization problem for both localized (domain wall) and delocalized (MSp) magnetic textures. Although calculations of the MSp energy in an FM film on PM and SC substrates have already been reported \cite{MS1,MS5}, the stabilization of such a texture has not been discussed so far; moreover, the case of the PM substrate was considered only in the linear approximation in $\chi$.
\subsection{\label{IIIA}Domain wall}
If the boundary conditions are $\Theta(-\infty)=0$ and $\Theta(\infty)=\pi$, then Eq.~(\ref{eq:12}) describes the Néel- ($\psi=0$, $\pi$) or Bloch-type ($\psi=\pm\pi/2$) domain wall. We restrict ourselves to the case of a PM substrate only. After substituting Eq.~(\ref{eq:12}) into Eqs.~(\ref{eq:10})
and (\ref{eq:11}) and varying the resulting functional with respect to $\Theta$, one obtains at $H_\text{ext}=0$ that \cite{DMI2}
\begin{equation}
\frac{d^2\Theta}{dx^2}-\frac{K_a^\text{eff}}{L_0^2}\,\text{sin}\Theta\,\text{cos}\Theta=0
\label{eq:13},
\end{equation}
whose solution for these boundary conditions is $\Theta(x)=\pi-\text{arccos}{\left(\text{tanh}\sqrt{K_a^\text{eff}}x/L_0\right)}$. For the free energy of a film with a domain wall, we have that
\begin{equation}
\Delta F=F-F_0\propto2\eta\sqrt{K_a^\text{eff}}-\pi^2\kappa\,\text{cos}\psi
\label{eq:14},
\end{equation}
where $\eta=L_0/h$, and $F_0$ is the energy of a film uniformly magnetized along the normal, i.e., $F_0=-K_a^\text{eff}M_0^2hS/2$. The obtained magnetic state exists if $\kappa$ is small enough not to perturb the system too strongly ($\Delta F>0$) \cite{DMI2,Bog2,Bog3}. This condition can be rewritten as $\kappa<\kappa_c^\text{DW}$ at $\psi=0$, where $\kappa_c^\text{DW}=2\eta\sqrt{K_a^\text{eff}}/\pi^2$. For a certain set of parameters, there is an interval of $\kappa$ in which ${\Delta F>0}$. For example, $\kappa_c^\text{DW}\approx0.7$ ($\chi\approx0.4$) at $\eta\sim3$ and $K_a/(4\pi)\sim1.1$.
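These numbers are straightforward to reproduce; a short Python sketch (with the illustrative parameter values quoted above) reads
\begin{verbatim}
import numpy as np

eta, Ka_over_4pi = 3.0, 1.1                    # illustrative values from the text
Ka_eff = 4 * np.pi * (Ka_over_4pi - 1)         # K_a^eff = K_a - 4*pi
kappa_c = 2 * eta * np.sqrt(Ka_eff) / np.pi**2
chi_c = kappa_c / (2 * np.pi * (1 - kappa_c))  # invert kappa = 2*pi*chi/(1+2*pi*chi)
print(kappa_c, chi_c)           # ~0.68 and ~0.34, i.e. ~0.7 and ~0.4 as quoted
\end{verbatim}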
\subsection{\label{IIIB}MSp}
Under the condition that $\Theta(x)=kx$, Eq.~(\ref{eq:12}) describes the MSp (Fig.~\ref{fig:1}). Such a trial function $\Theta(x)$ allows for calculation of the exact magnetostatic energy (\ref{eq:6a}). Having calculated $\mathbf{M}(\mathbf{q})$, one obtains the free energy, $\Delta F(k,\sigma)=F(k,\sigma)-F_0$, at $H_\text{ext}=0$ and $\psi=0$, $\pi$, i.e.,
\begin{eqnarray}
\Delta F(k,\sigma)&=&\frac{1}{2}M_0^2hS\left[L_0^2k^2+\frac{1}{2}K_a^\text{eff} \right. \nonumber \\
& &\left. -\frac{\pi\kappa(k)}{kh}(1-e^{-kh})^2(1+\sigma)^2\right]
\label{eq:15},
\end{eqnarray}
if $k\neq0$ and $\Delta F(k,\sigma)=0$ if $k=0$. Here $\sigma=1$ ($-1$) corresponds to $\psi=0$ ($\pi$); see Appendix B for arbitrary $\psi$. Equation~(\ref{eq:15}) is consistent with the results of the calculations performed in Refs.~\cite{MS1,MS5} and shows that the PM (SC) substrate decreases (increases) the energy of the system. The last statement is valid in the case of an arbitrary distribution of $\mathbf{M}$ (see Appendix~\ref{A}). From Eq.~(\ref{eq:15}) we can also see that only the interlayer magnetostatic energy, which is proportional to $\kappa$, depends on the direction of MSp rotation. In the case of FM/PM(SC), the rotation corresponding to $\sigma=1$ ($-1$) is energetically favorable, which is a result of removing the chiral degeneracy of the system. All these features can be understood using the method of images \cite{Landau} (Fig.~\ref{fig:1}). In Fig.~\ref{fig:1} the solid and dashed arrows show, respectively, the magnetic moments and their images for the cases of PM ($\kappa=1$) and SC ($\kappa=-1$) substrates. The boundary conditions are the vanishing of the tangential [FM/PM, Fig.~\ref{fig:1}a,b] and normal [FM/SC, Fig.~\ref{fig:1}c,d] components of the magnetostatic field, which determine the mutual orientation of the magnetic moments and their images. The difference between the free energies in the cases of $\sigma=1$ (Fig.~\ref{fig:1}a,c) and $\sigma=-1$ (Fig.~\ref{fig:1}b,d) results from the different configurations of the arrows.
Assuming that the period of the MSp is much larger than the film thickness $h$, i.e., $kh\ll1$, we can obtain expressions for the equilibrium $k^\ast$ and corresponding minimum of the free energy at $\sigma=1$,
\begin{equation}
k^\ast h\approx\frac{2\pi\kappa}{\eta^2},\ \Delta F(k^\ast,1)=\frac{1}{4}M_0^2hS\left(K_a^\text{eff}-\frac{8\pi^2\kappa^2}{\eta^2}\right)
\label{eq:16},
\end{equation}
where it was taken into account that $\eta\gg1$. Equations~(\ref{eq:16}) are valid under the condition that $\kappa>1/2$, which can only be satisfied for the FM/PM system and which provides the stability of the state with respect to deviations of $\psi$ from zero (see Appendix B). From the condition that $\Delta F(k^\ast,1)=0$, one can determine the critical $\kappa$, i.e., $\kappa_c^\text{MSp}\approx\eta\sqrt{K_a^\text{eff}}/(2\sqrt2\pi)$. For $\kappa>\kappa_c^\text{MSp}$ the formation of the MSp is energetically favorable. Since $\kappa\le1$, it is necessary to require the fulfillment of the relation $\kappa_c^\text{MSp}<1$. As a result, $\kappa_c^\text{MSp}\approx0.6$ ($\chi\approx0.2$) at $\eta\sim5$ and $K_a/(4\pi)\sim1.1$.
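For completeness, let us sketch how \eqref{eq:16} follows from \eqref{eq:15} (a routine expansion included here for the reader's convenience). Using $(1-e^{-kh})^2\approx(kh)^2(1-kh)$ at $kh\ll1$ and setting $\sigma=1$, the bracket in \eqref{eq:15} becomes
\begin{equation*}
L_0^2k^2+\frac{K_a^\text{eff}}{2}-4\pi\kappa\,kh\,(1-kh)+\mathcal{O}\big((kh)^3\big),
\end{equation*}
and minimization over $k$ gives $k^\ast h=2\pi\kappa/(\eta^2+4\pi\kappa)\approx2\pi\kappa/\eta^2$ for $\eta^2\gg4\pi\kappa$; substituting $k^\ast$ back into \eqref{eq:15} yields the second formula in \eqref{eq:16}.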
As for the FM/SC, the case of $\sigma=-1$ corresponds to $k^\ast=0$ [see Eq.~(\ref{eq:15})]. Thus, only the PM substrate stabilizes the Néel-type MSp with $\sigma=1$.
\section{\label{IV}MS\lowercase{k} STABILIZATION}
Now we consider the magnetization distribution with cylindrical symmetry (MSk) in the FM/PM case. As for the SC substrate, it increases the system energy, and so the MSk cannot be energetically favorable (see Appendix~\ref{A}). The existence of a metastable MSk in the FM/SC system is beyond the scope of this paper and requires additional research.
We represent $\mathbf{M}$ as
\begin{equation}
\mathbf{M}(\rho)=M_0\left[\text{sin}\Theta(\rho)\,\hat{\mathbf{e}}_\rho+\text{cos}\Theta(\rho)\,\hat{\mathbf{e}}_z\right]
\label{eq:17},
\end{equation}
where $\hat{\mathbf{e}}_\rho=\bm{\rho}/\rho$. Equation~(\ref{eq:17}) corresponds to the configuration shown in Fig.~\ref{fig:1}a. Let the external magnetic field $\mathbf{H}_\text{ext}$ be directed against the $z$-axis, so that $\Theta(0)=0$ and $\Theta(\infty)=\pi$. As a trial function, we take the simplest linear ansatz~\cite{Bog},
\begin{equation}
\Theta(\rho)=\frac{\pi\rho}{R}\theta(R-\rho)+\pi\theta(\rho-R)
\label{eq:18},
\end{equation}
where $R$ is the MSk radius and $\theta(t)$ is the Heaviside step function. Taking into account Eqs.~(\ref{eq:17}) and (\ref{eq:18}), the approximate free energy~(\ref{eq:10}), $\Delta F(\xi)=F(\xi)-F_0$, takes the form
\begin{eqnarray}
\Delta F(\xi)&=&\pi M_0^2h^3\left\{6.15\,\eta^2+\frac{1}{16}\left[K_a^\text{eff} \right. \right. \nonumber \\
& &\left. \left.+4\left(1-\frac{4}{\pi^2}\right)\frac{H_\text{ext}}{M_0}\right]\xi^2-\frac{\pi^2\kappa}{2}\xi\right\}
\label{eq:19},
\end{eqnarray}
where $\xi=2R/h$, $F_0=-M_0^2hS(K_a^\text{eff}/2+H_\text{ext}/M_0)$. The minimum of $\Delta F(\xi)$ is provided by the competition between the anisotropy energy with $K_a^\text{eff}$ and the effective DMI energy. Note that for trial function~(\ref{eq:18}) the exchange energy does not depend on $\xi$. As a result of free energy minimization, we obtain the equilibrium parameter $\xi^\ast$ and the corresponding energy $\Delta F(\xi^\ast)$ in the form
\begin{subequations}
\begin{equation}
\xi^\ast=\frac{16\pi^2\kappa}{K_a^\text{eff}+4\left(1-4/\pi^2\right)H_\text{ext}/M_0}
\label{eq:20a},
\end{equation}
\begin{eqnarray}
\Delta F(\xi^\ast)&=&\pi M_0^2h^3\left[6.15\,\eta^2- \right. \nonumber \\
& & \left. \frac{16\pi^4\kappa^2}{K_a^\text{eff}+4\left(1-4/\pi^2\right)H_\text{ext}/M_0}\right]
\label{eq:20b}.
\end{eqnarray}
\end{subequations}
The obtained magnetic state is energetically favorable for $\kappa>\kappa_c^\text{MSk}$, where $\kappa_c^\text{MSk}\approx0.06\,\eta\left(K_a^\text{eff}+2.4\,H_\text{ext}/M_0\right)^{1/2}$. So, $\kappa_c^\text{MSk}\approx0.3$ ($\chi\approx0.07$) at $\eta\sim5$, $K_a/(4\pi)\sim1.1$ and $H_\text{ext}=0$.
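Again, these estimates are easy to reproduce numerically (with the illustrative parameter values of the text):
\begin{verbatim}
import numpy as np

eta, Ka_over_4pi, H_over_M0 = 5.0, 1.1, 0.0    # illustrative values from the text
Ka_eff = 4 * np.pi * (Ka_over_4pi - 1)
denom = Ka_eff + 4 * (1 - 4 / np.pi**2) * H_over_M0
kappa_c = 0.06 * eta * np.sqrt(denom)          # critical kappa for MSk formation
chi_c = kappa_c / (2 * np.pi * (1 - kappa_c))  # invert kappa(chi) of the FM/PM case
xi_star = 16 * np.pi**2 * kappa_c / denom      # Eq. (20a) at kappa = kappa_c
print(kappa_c, chi_c, xi_star)
# ~0.34, ~0.08 and ~42; the text rounds the first two to 0.3 and 0.07.
# xi* of order 40 means R is tens of film thicknesses, so qh << 1 indeed holds.
\end{verbatim}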
We also present numerical calculations based on the exact magnetostatic energy $F_\text{MS}$. Then $F_\text{MS}^\text{intra}$ and $F_\text{MS}^\text{inter}$ are described by Eqs.~(\ref{eq:A18}) and (\ref{eq:A19}), where
\begin{subequations}
\begin{equation}
M_z(\mathbf{q})=\Delta M_z(q)-\frac{(2\pi)^2}{S}M_0\,\delta(\textbf{q})
\label{eq:21a},
\end{equation}
\begin{equation}
\Delta M_z(q)=\frac{2\pi M_0}{S}\int_{0}^{\infty}\rho\left[\text{cos}\Theta(\rho)+1\right]J_0(q\rho)\,d\rho
\label{eq:21b},
\end{equation}
\begin{eqnarray}
\text{div}\textbf{M}(q)&=&\frac{2\pi M_0\sigma}{S}\int_{0}^{\infty}\left[\rho\frac{d\Theta(\rho)}{d\rho}\text{cos}\Theta(\rho) \right. \nonumber \\
& & \left. +\text{sin}\Theta(\rho)\right]J_0(q\rho)\,d\rho
\label{eq:21c}.
\end{eqnarray}
\end{subequations}
Here $\text{div}\mathbf{M}(\mathbf{q})=\text{div}\mathbf{M}(q)$ is the Fourier transform of $\text{div}\mathbf{M}(\bm{\rho})$; $\delta(\mathbf{q})$ and $J_n(t)$ are the Dirac delta function and the cylindrical Bessel function of order $n$. Figure~\ref{fig:2} shows the parameter $\xi^\ast$ versus $\kappa$ for various $K_a$ (Fig.~\ref{fig:2}a) and $H_\text{ext}$ (Fig.~\ref{fig:2}b). In Figs.~\ref{fig:2}a,b the solid lines correspond to the approximate Eq.~(\ref{eq:20a}), while the dots correspond to numerical calculations based on the exact Eqs.~(\ref{eq:A18})--(\ref{eq:A19}), (\ref{eq:21a})--(\ref{eq:21c}). It can be seen that Eq.~(\ref{eq:20a}) agrees well with the exact calculation; the small residual difference seems to be due to the neglect of $\kappa qh/2$ in the second term of Eq.~(\ref{eq:9}). Nevertheless, the obtained approximate results are quite suitable for our estimates.
\begin{figure}[h!!]
\includegraphics{fig_2}
\caption{\label{fig:2}Equilibrium parameter $\xi^\ast$ vs. $\kappa$ in FM/PM for different (a) $K_a$ and (b) $H_\text{ext}$. The solid lines show the calculations using the approximate Eq.~(\ref{eq:20a}), the dots show the numerical calculations based on the exact Eqs. (\ref{eq:A18})--(\ref{eq:A19}), (\ref{eq:21a})--(\ref{eq:21c}).}
\end{figure}
Thus, MSk can be energetically favorable in FM/PM, which is in contrast to that in the FM/SC.
\section{\label{V}SUMMARY}
In conclusion, we have shown that the effective DMI term having a purely magnetostatic origin contributes to the free energy of the FM/PM and FM/SC hybrid systems. The value of this term can be evaluated as $\pi\kappa hM_0^2\sim1\text{ erg/cm}^2$ at $\pi h\sim10$ nm, $M_0\sim10^3 \text{ erg}/(\text{Gs}\cdot\text{cm}^3)$, which is comparable to the conventional DMI energy in the FM/HM system. We have shown that the sign of the effective DMI constant is different for PM and SC substrates, i.e., $D_\text{eff}<0$ ($D_\text{eff}>0$) for FM/PM(SC). The value of $D_\text{eff}$ depends strongly on the system temperature near $T_c$, which allows for tuning the interaction between the film and the substrate. Note that $D_\text{eff}$ can be measured experimentally by Brillouin spectroscopy \cite{DMISW2}. We have also shown that the effective DMI leads to the stabilization of chiral magnetic textures (including the MSk) only in the case of a PM substrate. This feature is explained by the fact that the PM substrate decreases the free energy of the FM film, while the SC substrate only increases it. The difference between the FM/PM and FM/SC systems is related to the difference in the formation of image dipoles in the PM and SC substrates (Fig.~\ref{fig:1}). To stabilize the MSk at $H_\text{ext}=0$, the condition $\kappa>0.3$ ($\chi>0.07$) at $\eta\sim5$ and $K_a/(4\pi)\sim1.1$ should be fulfilled. If, for example, gadolinium ($C\approx0.4$ K, $T_c\approx294$ K \cite{Gd}) is taken as the PM in the FM/PM system, the above condition is satisfied in the temperature range $T-T_c<6$ K.
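The quoted temperature window follows from the Curie--Weiss law $\chi=C/(T-T_c)$; a one-line check with the Gd parameters cited above is
\begin{verbatim}
C, chi_min = 0.4, 0.07   # Curie constant of Gd (K) and the threshold susceptibility
print(C / chi_min)       # T - T_c < ~5.7 K, quoted as < 6 K in the text
\end{verbatim}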
\begin{acknowledgments}
We acknowledge very useful discussions with M.V.~Sapozhnikov, A.S.~Mel’nikov, and I.S.~Burmistrov. The work was supported by the State Contract No.~0030-2021-0021 and by the Russian Foundation for Basic Research (Grant No.~20-02-00356). One of us (M.A.~K.) thanks the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”.
\end{acknowledgments}
\section{Introduction}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{The Hodge--GUE conjecture}
Let $\overline{\mathcal{M}}_{g,k}$ denote the Deligne--Mumford moduli space of stable algebraic curves of genus $g$ with
$k$ distinct marked points. Denote by $\mathcal{L}_i$ the $i^{th}$ tautological line bundle over $\overline{\mathcal{M}}_{g,k}$,
and by $\mathbb{E}_{g,k}$ the rank $g$
Hodge bundle. Denote $\psi_i:=c_1(\mathcal{L}_i),\, i=1,\dots,k,$ and $\lambda_j:= c_j(\mathbb{E}_{g,k}),\, j=0,\dots,g.$
The Hodge integrals are integrals over $\overline{\mathcal{M}}_{g,k}$ of the form
$$
\int_{\overline{\mathcal{M}}_{g,k}} \psi_1^{i_1}\cdots \psi_k^{i_k} \cdot \lambda_1^{j_1} \cdots \lambda_g^{j_g}, \quad i_1,\dots,i_k,\, j_1,\dots,j_g \geq 0.
$$
These integrals will be defined to be \textit{zero} unless
$$
3g-3+k= \sum_{\ell=1}^k i_\ell + \sum_{\ell=1}^g \ell \cdot j_\ell.
$$
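For instance (a standard example recalled only to illustrate the dimension condition), for $g=k=1$ the condition reads $1=i_1+j_1$, so the only Hodge integrals allowed on $\overline{\mathcal{M}}_{1,1}$ are
$$
\int_{\overline{\mathcal{M}}_{1,1}} \psi_1 = \int_{\overline{\mathcal{M}}_{1,1}} \lambda_1 = \frac1{24}.
$$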
We will mainly consider in this paper the following {\it special cubic Hodge integrals}
\begin{equation}\label{chbcy}
\int_{\overline{\mathcal{M}}_{g,k}} \Lambda_g(-1)\,\Lambda_g(-1)\,\Lambda_g\left(\frac12\right)\,\psi_1^{i_1}\cdots \psi_k^{i_k}
\end{equation}
where $\Lambda_g(z):= \sum_{j=0}^g \lambda_j \, z^j$ denotes the Chern polynomial of $\mathbb{E}_{g,k}$. Interest in this particular case of cubic Hodge integrals was triggered by the celebrated R.\,Gopakumar--M.\,Mari\~no--C.\,Vafa conjecture \cite{GV, MV} regarding the Chern--Simons/string duality.
The special cubic Hodge free energy is the following generating function of Hodge integrals
\begin{equation} \label{cubic-hodge}
\mathcal H_{\tiny\rm cubic}({\bf t};\epsilon)=\sum_{g\geq 0} \epsilon^{2g-2} \sum_{k\geq 0} \frac 1{k!}
\sum_{i_1, \dots, i_k\geq 0} t_{i_1}\cdots t_{i_k}\int_{\overline{\mathcal M}_{g,k}}
\Lambda_g(-1)\,\Lambda_g(-1)\,\Lambda_g\left(\frac12\right)\, \psi_1^{i_1}\cdots \psi_k^{i_k}.
\end{equation}
Here and below, ${\bf t}=(t_0,t_1,\dots)$, and $\epsilon$ is a parameter.
Denote by $\mathcal{H}_g=\mathcal{H}_g({\bf t})$ the genus $g$ term of $\mathcal H_{\tiny\rm cubic}({\bf t};\epsilon)$, i.e.
\begin{equation}\label{genusexpand}
\mathcal{H}_{\tiny\rm cubic}({\bf t};\epsilon)=\sum_{g\geq0} \epsilon^{2g-2} \mathcal{H}_g({\bf t}).
\end{equation}
The exponential
\begin{equation}
e^{\mathcal H_{\tiny\rm cubic}({\bf t};\epsilon)}=:Z_{\tiny\rm cubic}({\bf t};\epsilon)
\end{equation}
is called the special cubic Hodge partition function.
On the other hand, let ${\mathcal H}(N)$ denote the space of $N\times N$ Hermitian matrices. Denote by
$$
dM= \prod_{i=1}^N dM_{ii} \prod_{i<j} d{\rm Re} M_{ij} \, d{\rm Im}M_{ij}
$$
the standard unitary invariant volume element on ${\mathcal H}(N)$. The GUE partition function of size $N$ with even couplings is defined by
\begin{equation}\label{part}
Z_N({\bf s};\epsilon)=\frac{(2\pi)^{-{N}} }{Vol(N)} \int_{{\mathcal H}(N)} e^{- {\frac1\epsilon} \, {\rm tr} \,V(M; \, {\bf s})} dM.
\end{equation}
Here, $V(M; \, {\bf s})$ is an \textbf{even} polynomial or, more generally, a power series in $M$
\begin{equation}\label{pot}
V(M;{\bf s})=\frac12 M^2 -\sum_{j\geq 1} s_{2j} \, M^{2j},
\end{equation}
${\bf s}:=(s_2,s_4,s_6,\dots),$
and by $Vol(N)$ we denote the volume of the quotient of the unitary group over the maximal torus $\left[ U(1)\right]^N$
$$
Vol(N)=Vol\left( U(N)/\left[ U(1)\right]^N\right)=\frac{\pi^{\frac{N(N-1)}2} }{G(N+1)}, \quad G(N+1):=\prod_{n=1}^{N-1} n!.
$$
The integral will be considered as a formal saddle point expansion with respect to the small parameter $\epsilon$. Introduce the
\emph{'t Hooft coupling parameter} $x$ by
$$
x:=N\, \epsilon.
$$
Expanding the free energy $\mathcal{F}_N({\bf s};\epsilon):=\log Z_N({\bf s};\epsilon)$ in powers of $\epsilon$ and
replacing the Barnes $G$-function $G(N+1)$ by its asymptotic expansion yields
\begin{equation}\label{genusF}
\mathcal{F}_{\tiny\rm even}(x, {\bf s}; \epsilon):=\mathcal{F}_N({\bf s};\epsilon) |_{N=\frac{x}{\epsilon}} -\frac1{12}\log \epsilon=\sum_{g\geq 0} \epsilon^{2g-2} \mathcal{F}_g (x, {\bf s}).
\end{equation}
The GUE free energy $\mathcal{F}_{\tiny\rm even}(x, {\bf s}; \epsilon)$ with even couplings can be represented \cite{thooft1, thooft2, BIZ, mehta} in the form
\begin{align}
&\mathcal{F}_{\tiny\rm even}(x, {\bf s};\epsilon )\nonumber\\
=& \frac{x^2}{2\epsilon^2} \left( \log x -\frac32\right) -\frac1{12} \log x +\zeta'(-1) +\sum_{g\geq 2} \epsilon^{2g-2} \frac{B_{2g}}{4g(g-1)x^{2g-2}} \nonumber\\
& + \sum_{g\geq 0} \epsilon^{2g-2} \sum_{k\geq 0} \sum_{i_1, \dots, i_k\geq 1} a_g(2i_1, \dots, 2i_k) \, s_{2i_1} \dots s_{2i_k} \, x^{2-2g -\left(k-{|i|}\right)}, \label{expan}
\end{align}
where
\begin{equation}
a_g(2i_1, \dots, 2i_k) = \sum_{\Gamma} \frac{1}{\#\, {\rm Sym} \, \Gamma}
\end{equation}
and the last summation is taken over all connected oriented ribbon graphs $\Gamma$
(with labelled half edges but unlabelled vertices) of genus $g$ with $k$ vertices of valencies $2i_1$, \dots, $2i_k$, $|i|:=i_1+\dots + i_k,$ and
$\#\,{\rm Sym}\, \Gamma$ is the order of the symmetry group of $\Gamma$. Here and below $B_k$ are the Bernoulli numbers.
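The combinatorics behind the coefficients $a_g$ can be made concrete with a few lines of code. The following Python sketch (our illustration, not part of the derivation) glues the half edges of a single $2k$-valent vertex in all possible ways and sorts the resulting ribbon graphs by genus via the Euler formula $1-k+\#\{\text{faces}\}=2-2g$:
\begin{verbatim}
def perfect_matchings(elems):
    """All ways to glue the labelled half edges in pairs."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i in range(len(rest)):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + m

def genus_counts(k):
    n = 2 * k
    counts = {}
    for matching in perfect_matchings(list(range(n))):
        alpha = {}
        for a, b in matching:
            alpha[a], alpha[b] = b, a
        # faces = cycles of sigma*alpha, sigma being the cyclic order i -> i+1
        seen, faces = set(), 0
        for x in range(n):
            if x not in seen:
                faces += 1
                while x not in seen:
                    seen.add(x)
                    x = (alpha[x] + 1) % n
        g = (1 + k - faces) // 2        # from 1 - k + faces = 2 - 2g
        counts[g] = counts.get(g, 0) + 1
    return counts

print(genus_counts(2))   # expected {0: 2, 1: 1}, i.e. a_0(4) = 2, a_1(4) = 1
\end{verbatim}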
The exponential
\begin{equation} \label{GUE-even-Z}
e^{\mathcal{F}_{\tiny\rm even}(x,{\bf s};\epsilon)}=:Z_{\tiny\rm even}(x,{\bf s};\epsilon)
\end{equation}
is called the GUE partition function with \textbf{even} couplings. It is convenient to change normalization of the even couplings by introducing
\begin{equation}\label{ssbar}
\bar{s}_{k}:=\left(\begin{array}{c} 2k \\ k \end{array} \right)s_{2k}.
\end{equation}
The following statement was formulated in \cite{DY2}.
\begin{cnj} \label{conjecture1}
The following formula holds true
{\begin{equation}\label{id1}
Z_{\tiny\rm even}(x, \bar{\bf s}; \epsilon)=e^{\frac{A(x,\bar{\bf s})}{\epsilon^2}+\zeta'(-1)} Z_{\tiny\rm cubic}\left({\bf t}\left( x+\frac{\epsilon}2, \bar{\bf s}\right); \sqrt{2} \, \epsilon\right) Z_{\tiny\rm cubic}\left({\bf t}\left( x-\frac{\epsilon}2, \bar{\bf s}\right); \sqrt{2}\,\epsilon\right)
\end{equation}}
where
\begin{equation}\label{axs}
{A(x, \bar{\bf s})}=
\frac12 \sum_{k_1,k_2\geq 1} \frac{k_1\,k_2}{k_1+k_2} \, \bar{s}_{k_1} \, \bar{s}_{k_2} - \sum_{k\geq 1} \frac{k}{1+k}\, \bar{s}_{k} +
x \, \sum_{k\geq 1} \, \bar{s}_{k} + \frac14 - x
\end{equation}
and
\begin{equation}\label{time-sub}
{t_i(x, \bar{\bf s})} := \sum_{k\geq 1} k^{i+1} \bar{s}_k - 1 + \delta_{i,1} + x \cdot \delta_{i,0},\qquad i\geq 0.
\end{equation}
\end{cnj}
We refer to the conjectural identity \eqref{id1} as a \textit{Hodge--GUE correspondence}.
The Conjecture has already been verified in \cite{DY2} for genus $g=0,1,2$.
In the present paper we prove it for any genus.
\begin{theorem} [Main Theorem] \label{thm1}
Conjecture \ref{conjecture1} holds true.
\end{theorem}
The proof of the Main Theorem will be given in Sections \ref{section2},\,\ref{virgue},\,\ref{section4} below.
Expanding the logarithms of both sides of \eqref{id1} near $\bar{\bf s}=0$, $x=1$ one obtains a series of identities for the special cubic Hodge intersection numbers, e.g. $\forall\,g\geq 2,$
\begin{eqnarray}
&& \hspace{-6mm}
2^g \sum_{ \mu \in \mathbb{Y}} \frac{(-1)^{\ell(\mu)}}{m(\mu)!} \int_{\overline{\mathcal M}_{g,\ell(\mu)}}
\Lambda_g(-1)\,\Lambda_g(-1)\,\Lambda_g\left(\frac12\right)
\, \prod_{i=1}^{\ell(\mu)} \psi_i^{\mu_i+1} \nonumber\\
&& \hspace{-6mm}
\quad = \frac1{2g(2g-1)(2g-2)} \, \sum_{g'=0}^g (2g'-1) \, \left(\begin{array}{c} 2g \\ 2g'\\ \end{array}\right) \frac{E_{2g-2g'} \, B_{2g'}} {2^{2g-2g'}}. \label{simpleid}
\end{eqnarray}
Here, $\mathbb{Y}$ denotes the set of partitions; for $\mu\in \mathbb{Y}$,
$\ell(\mu)$ denotes the length of $\mu$, $m_i(\mu)$ denotes the multiplicity of $i$ in $\mu$,
and $m(\mu)!:=\prod_{i=1}^\infty m_i(\mu)!$; the $E_k$ are the Euler numbers, defined via
$$
\frac1{\cosh z} = \sum_{k=0}^\infty \frac{E_k} {k!} z^k.
$$
Note that the left hand side of the above identity is actually a finite sum, due to the dimension condition.
To the best of our knowledge such identities, even the simplest one \eqref{simpleid}, have not appeared in the literature before.
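The right-hand side of \eqref{simpleid} is elementary to evaluate; e.g., the following sketch (the {\tt sympy} functions {\tt bernoulli} and {\tt euler} follow the same conventions for $B_{2g}$ and $E_k$ as above) prints the exact rational values:
\begin{verbatim}
from sympy import binomial, bernoulli, euler

def rhs(g):
    """Right-hand side of the identity for genus g >= 2."""
    s = sum((2*gp - 1) * binomial(2*g, 2*gp) * euler(2*g - 2*gp)
            * bernoulli(2*gp) / 2**(2*g - 2*gp) for gp in range(g + 1))
    return s / (2*g * (2*g - 1) * (2*g - 2))

for g in (2, 3, 4):
    print(g, rhs(g))     # g = 2 gives -53/1920
\end{verbatim}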
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Three integrable systems and their Virasoro symmetries}
Here we explain the connections between the main objects of Conjecture \ref{conjecture1} and integrable systems. These connections provided the motivation for the formulation of the Main Conjecture in \cite{DY2}; they may also help the reader to better understand the structure of the proof.
Our point is that the partition functions $Z_{\tiny\rm even}$ and $Z_{\tiny\rm cubic}$, as functions of the coupling parameters, are tau functions of certain integrable hierarchies. For the GUE partition function depending on even couplings only this was already observed in 1991 by E.\,Witten \cite{Witten}. The corresponding integrable hierarchy is made of the commuting flows of the Volterra lattice equation, also called the {\it discrete} Korteweg--de Vries (KdV) equation
$$
\dot w_n=w_n\left( w_{n+1}-w_{n-1} \right), \quad n\in\mathbb Z.
$$
We will write it in the interpolated version
\begin{equation}\label{volterra1}
u_t=\frac{ e^{u(x+\epsilon)}-e^{u(x-\epsilon)}}{\epsilon},
\end{equation}
$w_n=e^{u(n\,\epsilon, \epsilon \,t)}$.
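Indeed (a one-line check of this substitution), the chain rule gives
$$
\dot w_n=\epsilon\, u_t(n\epsilon,\epsilon t)\, w_n=\left(e^{u((n+1)\epsilon)}-e^{u((n-1)\epsilon)}\right) w_n=w_n\left(w_{n+1}-w_{n-1}\right),
$$
so \eqref{volterra1} indeed interpolates the lattice equation.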
The solution of interest is given by the formula
{\begin{equation}\label{volterra2}
e^{u(x,{\bf s};\epsilon)}=\frac{Z_{\tiny\rm even}(x+\epsilon, {\bf s}; \epsilon) \, Z_{\tiny\rm even}(x-\epsilon, {\bf s}; \epsilon)}{\left[Z_{\tiny\rm even}(x, {\bf s}; \epsilon)\right]^2}, \quad t=s_2.
\end{equation}}
Dependence on other even couplings is governed by the higher flows of the Volterra hierarchy.
For the special cubic Hodge partition function the corresponding integrable hierarchy\footnote{In \cite{Zhou2} J.~Zhou considered alternative generating functions of the cubic Hodge integrals and showed that they are tau functions of the 2D Toda integrable hierarchy. It would be interesting to establish a direct connection between these results and the constructions of \cite{DLYZ} used in the present paper.
It is also worthwhile to mention interesting results of A.~Buryak \cite{Bur1} and M.~Kazarian \cite{Kaz} about integrable hierarchies involved into the theory of {\it linear} Hodge integrals.} looks more complicated. The commuting flows of this hierarchy are represented by PDEs with infinite expansions w.r.t. an auxiliary parameter $\epsilon$. The first equation reads
\begin{eqnarray}\label{hodgehier}
\hspace{-8mm} && q_t=q\, q' +\frac{\epsilon^2}{12} \left( q'''-\frac32 q' \, q''\right) \nonumber
\\
&&\qquad -\frac{\epsilon^4}{2880}\left(6 \, q^{(5)} -9 \, q'\, q^{(4)} -45 \, q'' q''' +2 \, (q')^2 \, q''' +4 \, q'\, (q'')^2\right) \nonumber\\
&&\qquad +{\mathcal O}( \epsilon^6),\qquad '=\partial_{t_0}, ~ t=t_1.
\end{eqnarray}
The solution to \eqref{hodgehier} of interest is given by $$q({\bf t};\epsilon)=\epsilon^2 \partial_{t_0}^2 \log Z_{\tiny\rm cubic}({\bf t}; \epsilon). $$
All higher order terms of the $\epsilon^2$-expansion in this and in other equations of the {\it special cubic Hodge hierarchy} are
graded homogeneous differential polynomials in $q$. See in \cite{DLYZ} for the details about the construction
of the Hodge hierarchy.
One more integrable hierarchy appears in this story: this is the celebrated KdV hierarchy where the first equation reads
\begin{equation}\label{kdvhier}
v_t=v\, v' +\frac{\epsilon^2}{12} v''',\qquad '=\partial_{t_0}.
\end{equation}
Due to the remarkable discovery of E.\,Witten and M.\,Kontsevich, a particular tau function of the KdV hierarchy is given by the generating function of intersection numbers of psi-classes on the Deligne--Mumford moduli spaces
\begin{equation}\label{kdvtau}
Z_{\tiny\rm WK}({\bf t};\epsilon) := \exp \left(\sum_{g\geq 0} \epsilon^{2g-2}\sum_{k\geq 0} \frac1{k!} \, \sum_{i_1,\dots,i_k\geq 0} \int_{\overline{\mathcal{M}}_{g,k}} \psi_1^{i_1}\cdots \psi_k^{i_k} \, t_{i_1}\cdots t_{i_k} \right).
\end{equation}
The solution of \eqref{kdvhier} of interest is given in terms of this tau function by
$$
v=\epsilon^2 \partial_{t_0}^2 \log Z_{\tiny\rm WK}({\bf t};\epsilon).
$$
To establish relationships between the partition functions $Z_{\tiny\rm even}$, $Z_{\tiny\rm cubic}$ and $Z_{\tiny\rm WK},$ we will do it in a more general setting, working with arbitrary\footnote{We consider formal solutions to these PDEs admitting regular expansions in $\epsilon$
$$
v(x,{\bf t}; \epsilon)=\sum_{k\geq 0}\epsilon^k v_k(x; {\bf t}).
$$
}
tau functions $\tau_{\tiny\rm Volterra}$, $\tau_{\tiny\rm cubic}$ and $\tau_{\tiny\rm KdV}$ of \eqref{volterra1}, \eqref{hodgehier} and \eqref{kdvhier} respectively.
Recall, for the particular case of the Hodge hierarchy \eqref{hodgehier} the construction of \cite{DLYZ} gives a map
$$
\{ \mbox{tau functions of the KdV hierarchy}\} \to \{ \mbox{tau functions of the special cubic Hodge hierarchy}\}.
$$
Introduce a function
\begin{equation}\label{Gamma}
\Phi(z)=2^{-2z} \frac{\Gamma(1-z)}{\Gamma(1+z)}\sqrt{\frac{\Gamma(1+2z)}{\Gamma(1-2z)}}.
\end{equation}
It is a multivalued meromorphic function on the complex plane $z\in\mathbb C$ with branch points at nonzero half-integers. With a suitable choice of the branches one has
$$
\Phi(z)\to e^{\mp \frac{\pi i}4}, \qquad |z|\to \infty, ~ \pm {\rm Re}\, z>0.
$$
It satisfies the identity
$$
\Phi(-z) \, \Phi(z)=1,
$$
so it defines a canonical transformation
$$
f(z)\mapsto \Phi(z) f(z)
$$
on the Givental symplectic space (see Appendix \ref{appa} below for the details about the Givental's construction).
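(The identity $\Phi(-z)\,\Phi(z)=1$ is immediate from \eqref{Gamma}: multiplying the defining expressions at $z$ and $-z$, all factors cancel pairwise,
$$
\Phi(z)\,\Phi(-z)=2^{-2z}\,2^{2z}\,\frac{\Gamma(1-z)}{\Gamma(1+z)}\,\frac{\Gamma(1+z)}{\Gamma(1-z)}
\sqrt{\frac{\Gamma(1+2z)}{\Gamma(1-2z)}\cdot\frac{\Gamma(1-2z)}{\Gamma(1+2z)}}=1,
$$
provided the branches of the square roots are chosen consistently.)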
Denote by $\widehat\Phi$ the quantization\footnote{In the quantization procedure we will identify the function $\Phi(z)$ with its asymptotic expansion at $z=\infty$; see Appendix \ref{appa} for the details.} of this symplectomorphism acting on the corresponding Fock space.
\begin{prp} 1) For an arbitrary tau function $\tau_{\tiny\rm KdV}({\bf t}; \epsilon)$ of the KdV hierarchy, the function $\tau_{\tiny\rm cubic}({\bf t}; \epsilon)$ defined by
\begin{equation}\label{kdvtocub}
\tau_{\tiny\rm cubic}({\bf t}; \epsilon):= \widehat \Phi \, \tau_{\tiny\rm KdV}({\bf t}; \epsilon )
\end{equation}
is a tau function of the special cubic Hodge hierarchy.
\newline 2) For $\tau_{\tiny\rm KdV}({\bf t}; \epsilon )=Z_{\tiny\rm WK}({\bf t}; \epsilon )$ the corresponding tau function $\tau_{\tiny\rm cubic}({\bf t}; \epsilon)$ coincides with $Z_{\tiny\rm cubic}({\bf t}; \epsilon)$.
\end{prp}
This proposition is just a reformulation of a part of the results of \cite{DLYZ}.
Let us now describe another map
$$
\{ \mbox{tau functions of the special cubic Hodge hierarchy}\} \to \{\mbox{tau functions of the Volterra hierarchy}\}.
$$
First, observe that any solution of the special cubic Hodge hierarchy admitting regular expansion in $\epsilon$ can be obtained from another such solution by a shift
$$
t_i \mapsto t_i + t_i^0, \quad i\geq 0
$$
for some constants $t_i^0$ that can also depend on $\epsilon$. A similar statement is valid, of course, also for solutions to the KdV and the Volterra hierarchies.
We choose a base point $\tau_{\tiny\rm cubic}^{\tiny\rm vac}({\bf t}; \epsilon)$ in the space of tau-functions in such a way that
$$
Z_{\tiny\rm cubic}({\bf t}; \epsilon)= \tau_{\tiny\rm cubic}^{\tiny\rm vac}({\bf t}; \epsilon)|_{t_1 \mapsto t_1-1}
$$
(which is usually called the ``dilaton shift"). More generally, the tau function of an arbitrary solution can be represented in the form
\begin{equation}\label{tshift}
\tau_{\tiny\rm cubic}({\bf t}; \epsilon) = \tau_{\tiny\rm cubic}^{\tiny\rm vac}({\bf t}; \epsilon)|_{t_i \mapsto t_i +\bar t_i^0, \,i\geq 0}.
\end{equation}
\begin{theorem} \label{thm11}
1) For any tau function \eqref{tshift} of the special cubic Hodge hierarchy
the function $\tau_{\tiny\rm Volterra} (x, \bar{\bf s}; \epsilon)$ defined by
\begin{equation}\label{cubtovolt}
\tau_{\tiny\rm Volterra} (x, \bar{\bf s}; \epsilon) :=c\cdot e^{\frac{A^{\rm vac}(x,\bar{\bf s}-\bar {\bf s}^0)}{\epsilon^2}} \tau_{\tiny\rm cubic}\left( {\bf t}\left( x+\frac{\epsilon}2, \bar{\bf s} - \bar {\bf s}^0\right); \sqrt{2} \, \epsilon\right) \tau_{\tiny\rm cubic}\left( {\bf t}\left( x-\frac{\epsilon}2, \bar{\bf s} - \bar {\bf s}^0\right); \sqrt{2} \, \epsilon\right)
\end{equation}
with
\begin{equation}\label{ashift}
A^{\rm vac}(x,\bar {\bf s})=\frac12\sum_{k_1, \, k_2\geq 1} \frac{k_1 k_2}{k_1+k_2} \bar s_{k_1} \bar s_{k_2} + x\sum_{k\geq 1}\bar s_k
\end{equation}
and
\begin{equation}\label{tshift2}
t_i(x, \bar{\bf s})=\sum_{k\geq 1} k^{i+1} \bar s_k +x\cdot \delta_{i,0} {+t_i^0}
\end{equation}
for an arbitrary constant $c$ is a tau function of the Volterra hierarchy.
2) For $\tau_{\tiny\rm cubic} ({\bf t}; \epsilon)=Z_{\tiny\rm cubic}({\bf t}; \epsilon)$, we have $t_i^0=\delta_{i,1}$. With the choice $c=e^{\zeta'(-1)}$
and $\bar{s}_k^0=\delta_{k,1}$ the formula \eqref{cubtovolt} gives the GUE partition function with even couplings $Z_{\tiny\rm even}(x, \bar{\bf s}; \epsilon)$.
\end{theorem}
The Main Theorem follows from the above statements. Also an integrable hierarchy conjecture of \cite{DLYZ} regarding the relationship between the special cubic Hodge hierarchy \eqref{hodgehier} and the Volterra hierarchy readily follows from the first part of the above Theorem \ref{thm11}.
In a nutshell, the main tool for proving the above statements, including the Main Theorem, is the use of Virasoro constraints for the tau functions of the three hierarchies.
For the KdV hierarchy this is well known. Recall \cite{DVV} that the operators $L_m^{KdV}=L_m^{KdV}\left( \epsilon^{-1}{\bf t}, \epsilon \,\partial/\partial{\bf t}\right)$, $m\geq -1$ given by
\begin{align}
L_{-1}^{KdV}:=&\sum_{i\geq 1} t_i \, {\partial\over \partial t_{i-1}} +{{t_0^2}\over 2 \, \epsilon^2} , \label{virakdvintro1} \\
L_0^{KdV}:=& \sum_{i\geq 0} {2i+1\over 2}\, t_i\,{\partial\over\partial t_{i}}+{1\over 16} , \label{virakdvintro2}\\
L_m^{KdV}:=& {\epsilon^2\over 2} \sum_{i+j=m-1} {(2i+1)!!\,(2j+1)!!\over 2^{m+1}}
{\partial^2\over \partial t_i \partial t_j}+ \sum_{i\geq 0} {(2i+2m+1)!!\over 2^{m+1}
(2i-1)!!}\, t_i\,{\partial\over\partial t_{i+m}},
\quad m\geq 1\nonumber\\\label{virakdvintro3}
\end{align}
define infinitesimal symmetries of the KdV hierarchy by their linear action on the tau-function
$$
\tau_{\tiny\rm KdV} ({\bf t}; \epsilon)\mapsto \tau_{\tiny\rm KdV} ({\bf t}; \epsilon)+\delta\cdot L_m^{KdV}\left( \epsilon^{-1}{\bf t}, \epsilon \,\partial/\partial{\bf t}\right)\tau_{\tiny\rm KdV} ({\bf t}; \epsilon)+{\mathcal O}(\delta^2),\quad m\geq -1.
$$
The Witten--Kontsevich tau function is uniquely specified by the system of {\it Virasoro constraints}
$$
L_m^{KdV}\left( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}\right)Z_{\tiny\rm WK} ({\bf t}; \epsilon)=0, \quad m\geq -1
$$
where $\tilde t_i=t_i-\delta_{i,1}$ (the so-called {\it dilaton shift}).
For the special cubic Hodge hierarchy one can use the operators
$$
L_m^{\rm cubic}\left( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}\right) =\widehat\Phi\circ L_m^{KdV}\left( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}\right)\circ \widehat\Phi^{-1}, \quad m\geq -1
$$
for formulating an analogous system of Virasoro constraints for the special cubic Hodge potential; they were already derived
in \cite{Zhou1}.
Surprisingly, we could not find in the literature any analogue of the Virasoro constraints for the Volterra hierarchy. We obtain them in the following way.
First, let us modify the Virasoro operators $L_m^{\rm cubic}:=L_m^{\rm cubic}\left( \epsilon^{-1}\bf t, \epsilon \,\partial/\partial{\bf t}\right)$ by introducing
\begin{equation}
\label{hankel}
\widetilde L_m^{\rm cubic} = \frac1{2\pi i} \int\limits_{-\infty}^{0+} L(z) \, e^{m\, z}\, dz , \quad m\geq 0
\end{equation}
where
$$
L(z)=\sum_{k\geq -1}\frac{L_k^{\rm cubic}}{z^{k+2}}
$$
and we use in \eqref{hankel} a Hankel loop integral
$$
\frac1{\Gamma(x)}= \frac1{2\pi i} \int\limits_{-\infty}^{0+} e^t \, t^{-x}dt.
$$
The modified operators still satisfy the Virasoro commutation relations.
The following Key Lemma is crucial for completing the proofs.
\begin{lem} [Key Lemma] \label{keyintro}
There exist unique operators
$L_m^{\rm even}=L_m^{\rm even}\left( \epsilon^{-1}x, \epsilon^{-1}{\bf s}, \epsilon \,\partial/\partial{\bf s}\right)$, $m\geq 0$,
${\bf s}=(s_2, s_4, \dots)$ satisfying
\begin{eqnarray}
&&
\hspace{-8mm} e^{-\frac{A^{\rm vac}}{2\epsilon^2} }\circ \left. L_m^{\rm even} \left( \epsilon^{-1}x, \epsilon^{-1} {\bf s}, \epsilon \,\partial/\partial{\bf s}\right)\circ e^{\frac{A^{\rm vac}}{2\epsilon^2} }=4^m \, \widetilde L_m^{\rm cubic}\left( \epsilon^{-1}{\bf t}, \epsilon \,\partial/\partial{\bf t}\right) \right|_{\small\begin{array}{l}
{\bf t}= {\bf t}^{\rm vac}(x,{\bf s})\\ \epsilon\mapsto \sqrt{2}\, \epsilon\end{array}}
\label{deflmeven}\\
&& \hspace{-8mm}
A^{\rm vac}(x, {\bf s}) =\frac12\sum_{k_1,\,k_2\geq 1} \frac{k_1 k_2}{k_1 +k_2} \left(\begin{array}{c} 2k_1\\
k_1\end{array}\right) \left(\begin{array}{c} 2k_2\\
k_2\end{array}\right) s_{2k_1} s_{2k_2}+x\, \sum_{k\geq 1} \left(\begin{array}{c} 2k\\
k\end{array}\right) s_{2k},
\nonumber\\
&& \hspace{-8mm}
t_i^{\rm vac}(x,{\bf s})=\sum_{k\geq 1} \left(\begin{array}{c} 2k\\
k\end{array}\right) k^{i+1} s_{2k} +x\cdot \delta_{i,0}.
\nonumber
\end{eqnarray}
The uniquely defined linear operators $L_m^{\rm even}$ satisfy the Virasoro commutation relations
\begin{equation}\label{even-comm}
\left[L_m^{\rm even}\,,\,L_n^{\rm even}\right]=(m-n) \, L^{\rm even}_{m+n}, \qquad \forall\,m,\, n\geq 0
\end{equation}
and have the following explicit expressions:
\begin{align}\label{vire0}
& L_0^{\rm even} = \sum_{k\ge 1} k \, s_{2k} \, \frac{\partial}{\partial s_{2k}}+\frac{x^2}{4\epsilon^2} - \frac1{16} ,\\ \label{viren}
& L_n^{\rm even} = \epsilon^2\sum_{k=1}^{n-1} \frac{\partial^2}{\partial s_{2k} \, \partial s_{2n-2k}}+ x \, \frac{\partial}{\partial s_{2n}}+ \sum_{k\ge 1} k\, s_{2k} \, \frac{\partial}{\partial s_{2k+2n}} ,\quad n\geq 1.
\end{align}
\end{lem}
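The commutation relations \eqref{even-comm} can also be verified directly from the explicit formulae \eqref{vire0}, \eqref{viren}. The following Python sketch (our illustration; truncating to finitely many couplings is harmless because the operators act on polynomials) applies both sides of \eqref{even-comm} to a test polynomial:
\begin{verbatim}
import sympy as sp

eps, x = sp.symbols('epsilon x')
K = 12                                       # keep the couplings s_2, ..., s_{2K}
s = {k: sp.Symbol(f's{2*k}') for k in range(1, K + 1)}

def L(n, f):
    """Apply the operator L_n^even to a polynomial f in x and the s's."""
    if n == 0:
        return sum(k * s[k] * sp.diff(f, s[k]) for k in range(1, K + 1)) \
               + (x**2 / (4 * eps**2) - sp.Rational(1, 16)) * f
    res = eps**2 * sum(sp.diff(f, s[k], s[n - k]) for k in range(1, n))
    res += x * sp.diff(f, s[n])
    res += sum(k * s[k] * sp.diff(f, s[k + n]) for k in range(1, K - n + 1))
    return res

f = x * s[1]**2 * s[2] + s[1] * s[3]         # an arbitrary test polynomial
for m, n in [(0, 2), (1, 2), (2, 3)]:
    comm = L(m, L(n, f)) - L(n, L(m, f))
    assert sp.expand(comm - (m - n) * L(m + n, f)) == 0
print("[L_m, L_n] = (m - n) L_{m+n} holds on the test polynomial")
\end{verbatim}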
We now explain how the operators $L_m^{\rm even}$ are related to symmetries of the Volterra hierarchy.
\begin{theorem}\label{ViraVolterraintro}
Let $\tau_{\tiny\rm Volterra}(x,{\bf s}; \epsilon)$ be a tau function of the Volterra hierarchy. Introduce the
modified tau function $\tilde\tau_{\tiny\rm Volterra}(x,{\bf s}; \epsilon)$ defined by the equation
$$
\tau_{\tiny\rm Volterra}(x,{\bf s}; \epsilon)=\tilde\tau_{\tiny\rm Volterra}\left(x+\frac{\epsilon}2,{\bf s}; \epsilon\right) \tilde\tau_{\tiny\rm Volterra}\left(x-\frac{\epsilon}2,{\bf s}; \epsilon\right).
$$
Then we have
\begin{itemize}
\item[1)] The operators $L_n^{\rm even}$ give infinitesimal symmetries of the Volterra hierarchy by means of linear action on the modified tau function
\begin{equation}\label{addsymmvol}
\tilde\tau_{\tiny\rm Volterra}(x,{\bf s}; \epsilon)\mapsto \tilde\tau_{\tiny\rm Volterra}(x,{\bf s}; \epsilon)+\delta\cdot L_n^{\rm even}\tilde\tau_{\tiny\rm Volterra}(x,{\bf s}; \epsilon)+{\mathcal O}(\delta^2).
\end{equation}
\item[2)] Denote by $\widetilde Z(x,{\bf s}; \epsilon)$ the modified GUE partition function associated with $\tau_{\tiny\rm Volterra}=Z_{\rm even}$. After the dilaton shift $s_2 \mapsto s_2-\frac12$, it is annihilated by the operators $L_n^{\rm even}$.
\end{itemize}
\end{theorem}
\paragraph{Organization of the paper}
Sections \ref{section2},\,\ref{virgue},\,\ref{section4} are all towards the proof of the Main Theorem.
In Section \ref{section5} we present two applications of the Main Theorem, including introduction
of a reduced GUE potential and a proof of an integrable hierarchy conjecture proposed in \cite{DLYZ}.
\paragraph{Acknowledgements} The authors are grateful to Don Zagier and Jian Zhou for several very useful discussions.
This work is partially supported by NSFC No.\,11371214 and No.\,11471182.
\setcounter{equation}{0}
\setcounter{theorem}{0}
\section{Virasoro constraints for the cubic Hodge partition function} \label{section2}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Cubic Hodge partition function}
Denote by ${\rm ch}_{i}(\mathbb{E}_{g,k}),\,i\geq 0$ the components of the Chern character of $\mathbb{E}_{g,k}$.
\begin{lem}
The partition function $Z_{\tiny\rm cubic}$ has the following alternative expression
$$
Z_{\tiny\rm cubic}({\bf t}; \epsilon) = \exp \left(\sum_{g=0}^\infty \sum_{k=0}^\infty \frac 1{k!} \, \sum_{i_1,\dots,i_k\geq 0}
\int_{\overline{\mathcal{M}}_{g,k}} \psi_1^{i_1}\cdots \psi_k^{i_k} \cdot \Omega_{g,k}\, t_{i_1}\cdots t_{i_k} \right)
$$
with
$$
\Omega_{g,k}:= \exp \left( \sum_{j=1}^\infty (2j-2)! \left(2^{-2j+1}-2\right) \, {\rm ch}_{2j-1}(\mathbb{E}_{g,k}) \right).
$$
\end{lem}
\begin{prf}
Let $x_1,\dots,x_g$ be the Chern roots of $\mathbb{E}_{g,k}$, i.e.
$$\Lambda_g(z)=\prod_{i=1}^g (1+z\, x_i),\qquad {\rm ch}_m(\mathbb{E}_{g,n})=\frac{1}{m!}(x_1^m + \dots+ x_g^m).$$
Then we have
\begin{eqnarray}
\Omega_{g,k} &=& e^{\sum_{j\geq 1} (2j-2)! \, ((1/2)^{2j-1}-2) \, {\rm ch}_{2j-1} }
= e^{\sum_{m\geq 1} (-1)^{m-1} \, (m-1)! \, ((-1)^m+(-1)^m+(1/2)^m) \, {\rm ch}_m }\nonumber\\
&=& e^{\sum_{m\geq 1} \frac{(-1)^{m-1}}m \, ((-1)^m+(-1)^m+(1/2)^m) \, (x_1^m + \dots+ x_g^m)} = \Lambda_g(-1) \cdot \Lambda_g(-1) \cdot \Lambda_g\left(\frac12\right).\nonumber
\end{eqnarray}
Note that in the above derivations we have used Mumford's relations \cite{Mu}
$$
{\rm ch}_{2j} (\mathbb{E}_{g,k}) = 0, \qquad \forall\, j\geq 1.
$$
The lemma is proved.
\end{prf}
\begin{lem} [\cite{FP}]
\label{exp-H}
The special cubic Hodge partition function has the following expression
\begin{equation}
Z_{\tiny\rm cubic} ({\bf t};\epsilon) = e^{\sum_{j = 1}^\infty \frac{B_{2j}}{j\,(2j-1)} \left(-1+ 2^{-2j}\right) \, D_j } \, Z_{\tiny\rm WK}({\bf t}; \epsilon).
\end{equation}
Here, $D_j$ are operators defined by
\begin{equation}\label{dj}
D_j:= \frac{\partial}{\partial t_{2j}} - \sum_{i\geq 0} t_i \frac{\partial }{\partial t_{i+2j-1}} + \frac{\epsilon^2}2 \, \sum_{a=0}^{2j-2} (-1)^a \frac{\partial^2 }{\partial t_a \partial t_{2j-2-a}},\qquad j\geq 1,
\end{equation}
and $Z_{\tiny\rm WK}$ denotes the Witten--Kontsevich partition function
\cite{Witten,Kontsevich}
$$
Z_{\tiny\rm WK}({\bf t};\epsilon) := \exp \left(\sum_{g\geq 0} \epsilon^{2g-2} \sum_{k\geq 0} \frac1{k!} \, \sum_{i_1,\dots,i_k\geq 0} \int_{\overline{\mathcal{M}}_{g,k}} \psi_1^{i_1}\cdots \psi_k^{i_k} \, t_{i_1}\cdots t_{i_k} \right).
$$
\end{lem}
\begin{prf}
This is just a particular case of the C.\,Faber--R.\,Pandharipande \cite{FP} algorithm
for computing Hodge integrals (see Proposition 2 therein).
\end{prf}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Virasoro constraints. The first version.}
It is known that \cite{DVV} $Z_{\tiny\rm WK}$ satisfies the following system of linear partial differential equations
\begin{equation} \label{Vira-WK}
L_m^{KdV}( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t})Z_{\tiny\rm WK} ({\bf t}; \epsilon)=0, \quad m\geq -1
\end{equation}
where the operators $L_m^{KdV},\,m\geq -1$ have been defined in \eqref{virakdvintro1}--\eqref{virakdvintro3}, and $\tilde t_i=t_i-\delta_{i,1}$.
The operators $L_m^{KdV},\,m\geq -1$ satisfy the
Virasoro commutation relations
$$
\left[L_m^{KdV} \,,\, L_n^{KdV} \right]= (m-n) \, L_{m+n}^{KdV},\quad \forall\,m,n\geq -1.
$$
Equations \eqref{Vira-WK} are called the \textit{Virasoro constraints} for $Z_{\tiny\rm WK}$. Note that solution to \eqref{Vira-WK} is unique up to a constant factor \cite{DVV,LYZ}.
It follows from Lemma \ref{exp-H} and eq.\,\eqref{Vira-WK} that
\begin{lem} [\cite{Zhou1}] \label{vira-v1}
The cubic Hodge partition function satisfies
\begin{equation}\label{vira-Hodge}
L_m^{\rm cubic}( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}) Z_{\tiny\rm cubic} = 0,\quad \forall\, m\geq -1
\end{equation}
where $\tilde t_i=t_i-\delta_{i,1}$, and $L_m^{\rm cubic}( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t})$ are linear operators defined by
\begin{eqnarray}
&& L_m^{\rm cubic}( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t})= e^{G} \circ L_m^{KdV}( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}) \circ e^{- G} \nonumber\\
&& G: = \sum_{j = 1}^\infty \frac{B_{2j}}{j\,(2j-1)} \left(-1+ 2^{-2j}\right) \, D_j,\quad m\geq -1\nonumber
\end{eqnarray}
where the operators $D_j$ are defined in eq.\,\eqref{dj}.
\end{lem}
Denote $L_m^{\rm cubic} = L_m^{\rm cubic}( \epsilon^{-1} { \bf t}, \epsilon \,\partial/\partial{\bf t})$. Clearly, the operators $L_m^{\rm cubic}$ satisfy the Virasoro commutation relation
$$
[L_m^{\rm cubic},L_n^{\rm cubic}]= (m-n) \, L_{m+n}^{\rm cubic},\qquad \forall\,m,n\geq -1.
$$
We call equations \eqref{vira-Hodge} the first version of Virasoro constraints for $Z_{\tiny\rm cubic}$; we also refer to this version of Virasoro constraints as
\textit{Zhou's version}.
\begin{emp} By a straightforward calculation we obtain that \cite{Zhou1,DLYZ}
\begin{equation}
L_{-1}^{\rm cubic} = \sum_{k\geq 1} t_k \frac{\partial}{\partial t_{k-1}}+\frac{ t_0^2}{2\epsilon^2}-\frac1{16}.
\end{equation}
\end{emp}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{A Lie algebra lemma}
\begin{lem} \label{alg_lem}
For any basis $\{L_m \, | \, m\geq -1\}$ of an infinite dimensional Lie algebra satisfying
$$
[L_m,L_n] = (m-n) \, L_{m+n},\qquad \forall \, m,n\geq -1
$$
where $[\,,\,]$ denotes the Lie bracket of the Lie algebra, define
$$
\widetilde L_{m} := \sum_{k\geq -1} \frac{m^{k+1}}{(k+1)!} \, L_k,\qquad m\geq 0.
$$
Then
$$
[ \widetilde L_{m} \,,\, \widetilde L_{n} ] = (m-n) \, \widetilde L_{m+n}, \qquad \forall \, m,n\geq 0.
$$
\end{lem}
\begin{prf}
\begin{eqnarray}
[ \widetilde L_{m} \,,\, \widetilde L_{n} ]
&=& \left [ \sum_{k_1=-1}^\infty \frac{m^{k_1+1}}{(k_1+1)!} \, L_{k_1} \,,\, \sum_{k_2=-1}^\infty \frac{n^{k_2+1}}{(k_2+1)!} \, L_{k_2} \right] \nonumber\\
&=& \sum_{k_1,k_2=-1}^\infty \frac{m^{k_1+1} \, n^{k_2+1}}{(k_1+1)! (k_2+1)!} (k_1-k_2)\, L_{k_1+k_2} \nonumber\\
&=& \sum_{k=-1}^\infty \left( \sum_{k_1+(k_2+1)=k+1\atop k_1\geq 0,k_2\geq -1} \frac{m^{k_1+1} \, n^{k_2+1}}{k_1! \, (k_2+1)!} - \sum_{(k_1+1)+k_2=k+1 \atop k_1\geq-1,k_2\geq 0 } \frac{ m^{k_1+1} \, n^{k_2+1}}{(k_1+1)! \, k_2!} \right)\, L_k \nonumber\\
&=& \sum_{k=-1}^\infty \frac1{(k+1)!} \, \left( m \, (m+n)^{k+1} - n \, (m+n)^{k+1} \right)\, L_k = (m-n) \, \widetilde L_{m+n}. \nonumber
\end{eqnarray}
The lemma is proved.
\end{prf}
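A quick way to remember this lemma (an equivalent check in the vector-field model $L_k=-z^{k+1}\partial_z$ of the same commutation relations, added here as an aside) is to note that
$$
\widetilde L_m=-\sum_{k\geq-1}\frac{(mz)^{k+1}}{(k+1)!}\,\partial_z=-e^{mz}\,\partial_z,
$$
and a direct computation gives $\left[-e^{mz}\partial_z\,,\,-e^{nz}\partial_z\right]=(m-n)\left(-e^{(m+n)z}\partial_z\right)$.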
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Virasoro constraints. The second version.}
\begin{dfn} \label{wlmdef}
Define a set of linear operators $\widetilde L^{\rm cubic}_{m}$ by
$$
\widetilde L^{\rm cubic}_{m} := \sum_{k=-1}^\infty \frac{m^{k+1}}{(k+1)!} \, L_k^{\rm cubic},\quad m\geq 0.
$$
\end{dfn}
Clearly, this definition agrees with \eqref{hankel}.
Using Lemma \ref{vira-v1} and Lemma \ref{alg_lem} we obtain
\begin{theorem}\label{vira-2-lem}
The cubic Hodge partition function satisfies
\begin{equation}\label{vira-Hodge2}
\widetilde L_m^{\rm cubic} ( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}) Z_{\tiny\rm cubic}= 0,\quad \forall\, m\geq 0
\end{equation}
where $\tilde t_i = t_i-\delta_{i,1}$.
Moreover,
$$
\left[ \widetilde L_{m}^{\rm cubic} \,,\, \widetilde L_{n}^{\rm cubic} \right] = (m-n) \, \widetilde L_{m+n}^{\rm cubic}, \quad \forall \, m,n\geq 0.
$$
\end{theorem}
We call \eqref{vira-Hodge2} the second version of Virasoro constraints for $Z_{\tiny\rm cubic}$.
\begin{theorem} \label{lwl0-wl1-wl2}
The explicit expressions for $\widetilde L_0^{\rm cubic},$ $\widetilde L_1^{\rm cubic}$ and $\widetilde L_2^{\rm cubic}$ are
\begin{eqnarray}
&& \widetilde L_0^{\rm cubic} = \sum_{i\ge 1} t_i \frac{\partial }{\partial t_{i-1}}+\frac{t_0^2}{2\epsilon^2} -\frac1{16} , \label{wl0}\\
&& \widetilde L_1^{\rm cubic} = \frac12 \, \sum_{i\geq 0} \sum_{j=0}^i \binom{i}{j} \left(2\, t_{j+1}+ t_j \right)\frac{\partial }{\partial t_i} + \frac{t_0^2}{2\epsilon^2},\label{wl1}\\
&& \widetilde L_2^{\rm cubic} = \frac{\epsilon^2}8 \sum_{i,j\geq 0} \frac{\partial^2 }{\partial t_i \partial t_j} + \sum_{i\geq 0} \sum_{j=0}^i \binom{i}{j}2^{i-j}
({t}_{j+1}+{t}_{j}) \frac{\partial}{\partial t_i} \nonumber\\
&&\qquad - \frac18\sum_{i\geq 1} \sum_{j=0}^{i-1} \sum_{r=0}^{i-1-j} (-1)^r
\binom{i}{i-1-j-r} 2^{i-j-r} t_j \frac{\partial }{\partial t_i}+ \frac{ t_0^2}{2\epsilon^2} + \frac1{16}. \label{wl2}
\end{eqnarray}
\end{theorem}
\noindent{\textit{Proof}.} Formula \eqref{wl0} readily follows from Definition \ref{wlmdef}; namely,
we have $\widetilde L_0^{\rm cubic}=L_{-1}^{\rm cubic}.$
For $m>0$, the direct calculation of $\widetilde L_m^{\rm cubic}$ becomes more complicated,
however, we can use the Givental quantization to simplify the computations.
The rest of Section 2 is to prove \eqref{wl1} and \eqref{wl2}.
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Proof of \eqref{wl1} in Thm.\,\ref{lwl0-wl1-wl2}}
By Lemma \ref{g2} we have
$$
e^G=\widehat\Phi.
$$
So the operators $L_k^{\rm cubic}$ (see in Lemma \ref{vira-v1}) have the expressions
$$
L_k^{\rm cubic} ( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}) =\widehat\Phi\, L_k^{KdV}\left( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}\right) \, \widehat{\Phi}^{-1}, \quad k\geq -1.
$$
Now using Lemma \ref{g1} we obtain that
\begin{equation} \label{tobesim}
L_k^{\rm cubic} ( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}) =
\left. \left[\widehat{\Phi} \, \left(\widehat{l_k} + \frac{\delta_{k,0}}{16} \right) \, \widehat{\Phi}^{-1} \right]\right|_{q_i\mapsto \tilde t_i, \, \partial_{q_i} \mapsto \partial_{t_i},\,i\geq 0}, \quad k\geq -1
\end{equation}
where $l_k= (-1)^{k+1} z^{3/2} \partial_z^{k+1} z^{-1/2}, \,k\geq -1$.
Simplifying \eqref{tobesim} gives
$$ L_k^{\rm cubic} ( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}) = \left. \left[ \Phi(z) \, l_k \, \Phi(z)^{-1} \right]^\wedge \, \right|_{q_i\mapsto \tilde t_i, \, \partial_{q_i} \mapsto \partial_{t_i},\,i\geq 0} + \frac{\delta_{k,0}}{16}
- \frac{ \delta_{k,-1} }{16} ,\quad k\geq-1.$$
Here we used the fact that the only possible non-zero cocycle term of the above quantization formula
appears when $k=-1.$
We arrive at
\begin{lem} \label{wlmcubic}
The operators $\widetilde L_m^{\rm cubic},\,m\geq 0$ have the following expressions
\begin{equation} \label{wlm}
\widetilde L_m^{\rm cubic}( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}) = \left. \left( \sum_{k=-1}^\infty \frac{m^{k+1}}{(k+1)!} \, \Phi(z) \, l_k \, \Phi(z)^{-1} \right)^\wedge\,\right|_{q_i\mapsto \tilde t_i, \, \partial_{q_i} \mapsto \partial_{t_i},\,i\geq 0} + \frac{m-1}{16}.
\end{equation}
\end{lem}
For $m=0$, eq.\,\eqref{wlm} gives
$$
\widetilde L_0^{\rm cubic} ( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t})=\left. \hat{z}\,\right|_{q_i\mapsto \tilde t_i, \, \partial_{q_i} \mapsto \partial_{t_i},\,i\geq 0} -\frac1{16}= \sum_{i\ge 1} \tilde t_i \frac{\partial }{\partial t_{i-1}}+\frac{\tilde t_0^2}{2\epsilon^2} -\frac1{16}
$$
which agrees with the previously derived eq.\,\eqref{wl0}.
\noindent{\textit{Proof}} of \eqref{wl1}. \quad
Recall that
$$
l_k =
(-1)^{k+1} \, z^{3/2} \, \partial_z^{k+1} \, z^{-1/2},\qquad k\geq -1.
$$
Then we have
$$
\sum_{k=-1}^\infty \Phi \, \frac{l_k}{(k+1)!}\, \Phi^{-1} = \Phi \,z^{3/2} \, e^{-\partial_z} \, z^{-1/2}\, \Phi^{-1} = z^{3/2} \, (z-1)^{-1/2} \, \frac{\Phi(z)}{\Phi(z-1)} \, e^{-\partial_z}.
$$
Noting that
\begin{equation}\label{ratio}
\frac{\Phi(z)}{\Phi(z-1)}= \frac{z-\frac12}{\sqrt{z(z-1)}}
\end{equation}
we arrive at
$$ \sum_{k=-1}^\infty \Phi(z) \, \frac{l_k}{(k+1)!}\, \Phi(z)^{-1} = z\,\frac{z-1/2} {z-1} e^{-\partial_z}. $$
It remains to compute the following residue
$$
-\frac12 \, {\rm Res}_{z=\infty } \,f(-z)\, \frac{z-1/2} {z-1} \, f(z-1) \, \frac{dz}{z}.
$$
Write $f(z)=q(z)+p(z)$, where $q(z)= \sum_{i\geq 0} q_i \, z^{-i}$ and $p(z)= \sum_{i\geq 0} p_i \, (-z)^{i+1}.$ Then the above residue decomposes into the following four parts
\begin{eqnarray}
&& {\rm I}= -\frac12 \, {\rm Res}_{z=\infty } \,p(-z)\, \frac{z-1/2} {z-1} \, p(z-1) \, \frac{dz}{z}, \nonumber\\
&& {\rm II}= -\frac12 \, {\rm Res}_{z=\infty } \,p(-z)\, \frac{z-1/2} {z-1} \, q(z-1) \, \frac{dz}{z}, \nonumber\\
&& {\rm III}=-\frac12 \, {\rm Res}_{z=\infty } \,q(-z)\, \frac{z-1/2} {z-1} \, p(z-1) \, \frac{dz}{z}, \nonumber\\
&& {\rm IV}=-\frac12 \, {\rm Res}_{z=\infty } \,q(-z)\, \frac{z-1/2} {z-1} \, q(z-1) \, \frac{dz}{z}. \nonumber
\end{eqnarray}
Part I obviously vanishes. Part IV gives $\frac{q_0^2}{2\epsilon^2}$. Part II coincides with Part III.
So we are left to compute Part III. Using
$$
\frac{z-1/2} {z-1} = 1 + \frac12 \sum_{k\geq 1} z^{-k},\qquad z\rightarrow \infty
$$
we have
\begin{eqnarray}
{\rm III}
&=& \frac 12 \, \sum_{i\geq0} p_i \left(\sum_{j=0}^{i+1} q_j \binom{i+1}{j} + \frac12 \sum_{j=0}^i q_j \,\sum_{k=1}^{i+1-j} (-1)^k \binom{i+1}{j+k} \right)\nonumber\\
&=& \frac14 \, \sum_{i\geq 0} p_i \sum_{j=0}^i \binom{i}{j} \left(2\, q_{j+1}+ q_j \right). \nonumber
\end{eqnarray}
Note that in the last equality we have used the following elementary identity
\begin{equation}
\sum_{k=1}^{i+1-j} (-1)^k \binom{i+1}{j+k} = - \binom{i}{j}.
\end{equation}
As a result, we obtain \eqref{wl1}. \hfill$\Box$
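The binomial identity used above is classical; for the skeptical reader, a two-line numerical check (our addition) is
\begin{verbatim}
from math import comb

for i in range(12):
    for j in range(i + 1):
        lhs = sum((-1)**k * comb(i + 1, j + k) for k in range(1, i + 2 - j))
        assert lhs == -comb(i, j)
print("identity verified for all 0 <= j <= i < 12")
\end{verbatim}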
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Proof of \eqref{wl2} in Thm.\,\ref{lwl0-wl1-wl2}}
\begin{prfn}{\eqref{wl2}} We have
$$
\sum_{k=-1}^\infty 2^{k+1}\,\Phi(z) \, \frac{l_k}{(k+1)!}\, \Phi(z)^{-1} = z^{3/2} \, (z-2)^{-1/2} \, \frac{\Phi(z)}{\Phi(z-2)} \, e^{-2\partial_z}.
$$
Noting that
$$
\frac{\Phi(z)}{\Phi(z-2)} =\frac{\left(z-\frac12\right)\left(z-\frac32\right)}{(z-1)\sqrt{z(z-2)}}
$$
we obtain
$$
\sum_{k=-1}^\infty 2^{k+1}\, \Phi(z) \, \frac{l_k}{(k+1)!}\, \Phi(z)^{-1} = \frac{ z\, (z-1/2) \, (z-3/2)}{(z-1) \, (z-2)} \, e^{-2\partial_z}.
$$
Computing the following residue
$$
-\frac12 \, {\rm Res}_{z=\infty } \,f(-z) \, \frac{ z \, (z-1/2) \, (z-3/2)}{(z-1) \, (z-2)} \, f(z-2) \,\frac{dz}{z^2}
$$
as was done above in the proof of \eqref{wl1},
we obtain \eqref{wl2}. \end{prfn}
Theorem \ref{lwl0-wl1-wl2} is proved. \hfill $\square$
\begin{remark} Following the above procedure it is easy to derive a generating function for the operators $\widetilde{L}_m^{\rm cubic}\left( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}\right)$. Namely, the operator
$$
\sum_{m\geq 0} \frac{\lambda^m}{m!} \widetilde{L}_m^{\rm cubic}\left( \epsilon^{-1}\tilde{\bf t}, \epsilon \,\partial/\partial{\bf t}\right)
$$
depending on the auxiliary parameter $\lambda$ is obtained by quantization of the following quadratic Hamiltonian
$$
H(\lambda)=-\frac12 \, {\rm Res}_{z=\infty} \frac{dz}{z} f(-z) \, {}_1F_1\left(\frac12-z; 1-z; \lambda\, e^{-\partial_z}\right) f(z).
$$
Here
$$
{}_1F_1(a;b;z)=\sum_{n=0}^\infty \frac{(a)_n}{(b)_n}\frac{z^n}{n!}
$$
is the confluent hypergeometric function.
\end{remark}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\section{Virasoro constraints for the GUE partition function with even couplings}\label{virgue}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{The GUE partition function. Toda hierarchy and Virasoro constraints.}
Recall that the GUE partition function of size $N$ is defined by
\begin{equation}\label{part-GUE}
Z_N({\bf s})=\frac{(2\pi)^{-{N}} }{Vol(N)} \int_{{\mathcal H}(N)} e^{- N\, {\rm tr} \,V(M; \, {\bf s})} dM
\end{equation}
where
$V(M;{\bf s})=\frac12 M^2 -\sum_{j\geq 1} s_{j} \, M^{j},$
and ${\bf s}= (s_1,s_2,s_3,\dots)$.\footnote{In Section 1, we used ${\bf s}$ to denote $(s_2,s_4,\dots)$; in this subsection we restore the odd coupling constants $s_1,s_3,\dots.$ However, there is no ambiguity, as we use the symbol ${\bf s}$ just to emphasize that it is a vector.}
Introduce
$$
x:=N\, \epsilon.
$$
Expanding the free energy $\mathcal{F}_N({\bf s}):=\log Z_N({\bf s})$ in powers of $\epsilon$ yields the GUE free energy
\begin{equation}\label{genusF-even-odd}
\mathcal{F}_{\tiny\rm GUE}(x, {\bf s}; \epsilon):=\mathcal{F}_N({\bf s}) |_{N=\frac{x}{\epsilon}} -\frac1{12}\log \epsilon=\sum_{g\geq 0} \epsilon^{2g-2} \mathcal{F}_g (x, {\bf s}).
\end{equation}
The GUE free energy $\mathcal{F}_{\tiny\rm GUE}(x, {\bf s}; \epsilon)$ has the form \cite{thooft1, thooft2, BIZ, mehta}
\begin{eqnarray}
&& \hspace{-6mm}
\mathcal{F}_{\tiny\rm GUE}(x, {\bf s};\epsilon ) = \frac{x^2}{2\epsilon^2} \left( \log x -\frac32\right) -\frac1{12} \log x +\zeta'(-1) +\sum_{g\geq 2} \epsilon^{2g-2} \frac{B_{2g}}{2g(2g-2)x^{2g-2}} \nonumber\\
&& \hspace{-6mm}
\qquad\qquad\qquad\quad + \sum_{g\geq 0} \epsilon^{2g-2} \sum_{k\geq 0} \sum_{i_1, \dots, i_k\geq 1}
a_g(i_1, \dots, i_k) \, s_{i_1} \dots s_{i_k} \, x^{2-2g -\left(k-\frac{|i|}2\right)}, \label{ribb1} \\
&& \hspace{-6mm} a_g(i_1, \dots, i_k) = \sum_\Gamma \frac{1}{\#\, {\rm Sym} \, \Gamma} \label{ribb2}
\end{eqnarray}
where the last summation is taken over all connected oriented ribbon graphs
of genus $g$ with $k$ vertices of valencies $i_1$, $\dots$, $i_k$.
The exponential
\begin{equation}
e^{\mathcal{F}_{\tiny\rm GUE}(x, {\bf s};\epsilon )} =: Z_{\tiny\rm GUE}(x,{\bf s}; \epsilon)
\end{equation}
is called the GUE partition function.
From \eqref{ribb1}, we see that the GUE free energy $\mathcal{F}_{\tiny\rm GUE}(x, {\bf s}; \epsilon)$ lives in
the following Bosonic Fock space
$$\mathcal{B}=\frac1{\epsilon^2}\mathbb{C}[\epsilon][[x-1,s_1,s_2,\dots]].$$
Denote $\Lambda=e^{\epsilon \partial_x}$. Define two functions $u,v$ by
\begin{equation} \label{uvgue}
u=u(x,{\bf s}; \epsilon): = \, (\Lambda-1) \, (1-\Lambda^{-1}) \, \mathcal{F}_{\tiny\rm GUE},\quad v=v(x, {\bf s}; \epsilon):= \epsilon \, \frac{\partial}{\partial s_1} \, (\Lambda-1) \, \mathcal{F}_{\tiny\rm GUE}
\end{equation}
and define
$$
L= \Lambda+ v+ e^u \, \Lambda^{-1}.
$$
\begin{lem} \label{guetodalem}
The functions $v,u$ satisfy the following equations of the Toda Lattice hierarchy
\begin{equation}\label{todapol}
\epsilon\, \frac{\partial L}{\partial s_j} =\left[A_j, L\right], \quad A_j:=\left( L^j \right)_+,\quad \forall\, j\geq 1.
\end{equation}
Moreover, $Z_{\tiny\rm GUE}$ is the tau-function (cf. Def.\,1.2.4 in \cite{DY1}) of the solution ($u,v$) to the Toda hierarchy.
\end{lem}
Proof of this lemma uses the orthogonal polynomial technique \cite{mehta}, see e.g. \cite{gmmmo} (see also \cite{DY1}, esp. Cor.\,A.2.2, Def.\,1.2.4 therein in particular regarding the normalization of the tau-function).
\begin{lem} \label{GUE-toda-virasoro}
The GUE partition function $Z_{\tiny\rm GUE}$ satisfies the following linear PDEs
\begin{equation} \label{vira-gue0}
L_m^{\rm Toda} Z_{\tiny\rm GUE} = 0, \quad \forall\, m\geq -1.
\end{equation}
Here, $L_m^{\rm Toda}$ are linear operators explicitly given by
\begin{align}
L_m^{\rm Toda} :=& \epsilon^2 \sum_{k=1}^{m-1} {\partial^2\over \partial s_k \, \partial s_{m-k}} + 2 \, x \, \frac{\partial }{\partial s_m}+ \sum_{k\geq 1} k\, s_k \, {\partial\over \partial s_{k+m}}
- \frac{\partial}{\partial s_{m+2}},\quad m\geq 1,\label{vira-gue1}\\
L_0^{\rm Toda} :=& \sum_{k\geq 1} k\, s_k \, {\partial\over \partial s_{k}} + \frac{x^2}{\epsilon^2} - \frac{\partial}{\partial s_{2}},\label{vira-gue2}\\
L_{-1}^{\rm Toda} :=& \sum_{k\geq 2} k \, s_k \, {\partial\over \partial s_{k-1}}- \frac{\partial}{\partial s_{1}} + \frac{x \, s_1}{\epsilon^2}. \label{vira-gue3}
\end{align}
Moreover, $L_m^{\rm Toda}$ satisfy the Virasoro commutation relations
$$
\left[L_m^{\rm Toda} \,,\, L_n^{\rm Toda}\right] \, = \, (m-n) \, L_{m+n}^{\rm Toda},\quad \forall\, m,n\geq -1.
$$
\end{lem}
This lemma is well-known; see e.g. \cite{mehta,MMMM,gmmmo}.
Eqs.\,\eqref{vira-gue0}--\eqref{vira-gue3} are called the Virasoro constraints for the GUE partition function.
For convenience of the reader we outline the proof of this Lemma.
\medskip
\begin{prfn}{Lemma \ref{GUE-toda-virasoro}}
Recall that the GUE partition function $Z_{\tiny\rm GUE}$
can be obtained from the vacuum tau-function \cite{DZ-norm} of the $\mathbb{P}^1$ Frobenius manifold
by shifting times; see \cite{Du2} for the details.
So the above Virasoro constraints \eqref{vira-gue0}--\eqref{vira-gue3} are obtained
from e.g. eq.\,(5.4) of \cite{DZ-Toda} by taking $t^{2,k-1}=k! \, \left (s_k- \frac12\delta_{k,2}\right), ~ k\geq 1$ and $t^{1,0}=x,$ $t^{1,1}=1, \, t^{1,k}=0,~k\geq 2.$
\end{prfn}
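As an illustration of the commutation relations, take $m=0$, $n=-1$: a direct computation with \eqref{vira-gue2}, \eqref{vira-gue3} gives
$$
\left[L_0^{\rm Toda}\,,\,L_{-1}^{\rm Toda}\right]=\sum_{k\geq 2}k\,s_k\,{\partial\over\partial s_{k-1}}+{\partial\over\partial s_1}+\frac{x\,s_1}{\epsilon^2}-2\,{\partial\over\partial s_1}=L_{-1}^{\rm Toda},
$$
the term $-2\,\partial/\partial s_1$ coming from the commutator of $-\partial/\partial s_2$ with the $k=2$ summand of $L_{-1}^{\rm Toda}$, in agreement with $(m-n)\,L_{m+n}^{\rm Toda}$.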
\begin{lem} \label{simple-odd}
The GUE free energy $\mathcal{F}_{\tiny\rm GUE}$ satisfies the following property: if $k_1+\dots+k_m$ is an odd number, then
\begin{equation}
\left.\frac{\partial^m \mathcal{F}_{\tiny\rm GUE}}{\partial s_{k_1} \dots \partial s_{k_m}}\right|_{s_1=s_3=s_5=\dots=0}\equiv0.
\end{equation}
\end{lem}
\begin{prf}
This follows from the formulae \eqref{ribb1}, \eqref{ribb2} and the observation that the total valency of any ribbon graph is an even number.
\end{prf}
\begin{lem}\label{todaidentities}
The following formulae hold true for the GUE free energy $\mathcal{F}_{\tiny\rm GUE}$:
\begin{align}
& \epsilon^2\frac{\partial^2\mathcal{F}_{\tiny\rm GUE}}{\partial s_1 \partial s_1}= e^{u}, \label{tdd1}\\
& \epsilon^2\frac{\partial^2\mathcal{F}_{\tiny\rm GUE}}{\partial s_1 \partial s_3}= e^{u} \left(v(x)^2+v(x-\epsilon)^2+v(x) v(x-\epsilon)\right) + e^u \left(1+ \Lambda+\Lambda^{-1}\right)(e^u), \label{tdd3}\\
& \epsilon^2\frac{\partial^2\mathcal{F}_{\tiny\rm GUE}}{\partial s_2 \partial s_2}= e^u \left(v(x-\epsilon)+ v(x)\right)^2+ e^u\left(\Lambda+\Lambda^{-1}\right)(e^u), \label{tdd4}\\
& \epsilon (\Lambda-1) \left(\frac{\partial \mathcal{F}_{\tiny\rm GUE}}{\partial s_2}\right)= v^2 + (\Lambda+1) \left(e^u\right). \label{td-2}
\end{align}
\end{lem}
\begin{prf}
One can use the recursion relations \cite{DZ-Toda} or the matrix resolvent method \cite{DY1} to obtain these identities.
\end{prf}
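Note, for example, that \eqref{tdd1} is consistent with the Toda equations: from \eqref{uvgue} we have $v=\epsilon\,\partial_{s_1}(\Lambda-1)\,\mathcal{F}_{\tiny\rm GUE}$, so using the $s_1$-flow $\epsilon\,\partial v/\partial s_1=(\Lambda-1)\left(e^u\right)$ of the Toda lattice hierarchy we find
$$
(\Lambda-1)\left(\epsilon^2\,\frac{\partial^2 \mathcal{F}_{\tiny\rm GUE}}{\partial s_1^2}\right)=\epsilon\,\frac{\partial v}{\partial s_1}=(\Lambda-1)\left(e^u\right),
$$
i.e. the two sides of \eqref{tdd1} agree up to a term annihilated by $\Lambda-1$.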
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Reduction to even couplings. Discrete KdV hierarchy and Virasoro constraints.}
The GUE free energy/partition function with even couplings can be obtained from
the GUE free energy/partition function by putting $s_1=s_3=s_5=\dots=0$, namely,
\begin{eqnarray}
&& \mathcal{F}_{\tiny\rm even}(x,{\bf s}; \epsilon) = \mathcal{F}_{\tiny\rm GUE}(x,s_1=0, s_2, s_3=0,s_4, \dots; \epsilon),\nonumber\\
&& Z_{\tiny\rm even}(x,{\bf s}; \epsilon) = Z_{\tiny\rm GUE}(x,s_1=0, s_2, s_3=0,s_4, \dots; \epsilon) \nonumber
\end{eqnarray}
where ${\bf s}=(s_2,s_4,s_6,\dots)$.
It follows from Lemma \ref{simple-odd} and eq.\,\eqref{uvgue} that
$$
v\equiv 0,
$$
so we have
\begin{eqnarray}
&& L=\Lambda+e^u \, \Lambda^{-1}, \\
&& u=u(x,{\bf s};\epsilon)= (\Lambda-1) \, (1-\Lambda^{-1}) \, \mathcal{F}_{\tiny\rm even}(x,{\bf s};\epsilon).
\end{eqnarray}
Note that
\begin{equation}
\mathcal{F}_{\tiny\rm even}(x,{\bf s};\epsilon)\in \mathcal{B}^{\rm even}=\frac1{\epsilon^2}\mathbb{C}[\epsilon][[x-1,s_2,s_4,\dots]], \quad u(x,{\bf s};\epsilon) \in \epsilon^2 \mathcal{B}^{\rm even}.
\end{equation}
From Lemma \ref{guetodalem} we obtain the following
\begin{lem}
The function $u$ satisfies the discrete KdV hierarchy (aka the Volterra hierarchy):
\begin{equation}\label{dkdv}
\epsilon \frac{\partial L}{\partial s_{2k}}=\left[(L^{2k})_+\,, \,L\right],\quad k\ge 1
\end{equation}
as well as the initial condition
\begin{equation} \label{inilogx}
e^{u(x,{\bf 0})}= x.
\end{equation}
\end{lem}
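As a consistency check of the initial condition \eqref{inilogx}, note that at ${\bf s}={\bf 0}$ only the first line of \eqref{ribb1} survives, and $(\Lambda-1)(1-\Lambda^{-1})=\epsilon^2\,\partial_x^2+\frac{\epsilon^4}{12}\,\partial_x^4+\dots$. Applying this operator to $\frac{x^2}{2\epsilon^2}\left(\log x-\frac32\right)$ gives
$$
\log x-\frac{\epsilon^2}{12\,x^2}+O(\epsilon^4),
$$
while the term $-\frac1{12}\log x$ contributes $+\frac{\epsilon^2}{12\,x^2}+O(\epsilon^4)$; the higher order corrections cancel in the same way against the Bernoulli terms of \eqref{ribb1}, leaving $u(x,{\bf 0})=\log x$, i.e. $e^{u(x,{\bf 0})}=x$.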
It should be noted that the solution to \eqref{dkdv} with the initial condition \eqref{inilogx} exists and is unique in $\epsilon^2 \mathcal{B}^{\rm even}.$
Moreover, one can easily adapt the definition of tau function from \cite{DY1} to
the Volterra hierarchy, in such a way that $Z_{\tiny\rm even}$ is a particular tau function.
The tau function $\tau$ of any solution to the Volterra hierarchy is uniquely determined up to a linear function
of ${\bf s}$ and $x$. This linear function can further be fixed by the so-called string equation (see below) up to a linear function in $x$. We omit
the details because these are just specializations of the results of \cite{DY1} to the even couplings.
\begin{emp}
The $k=1$ flow of the discrete KdV hierarchy \eqref{dkdv} reads as follows
$$
\frac{\partial u}{\partial s_2} = \frac1{\epsilon} (\Lambda-\Lambda^{-1})(e^u).
$$
\end{emp}
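In the dispersionless limit, since $\frac1{\epsilon}(\Lambda-\Lambda^{-1})=2\,\partial_x+O(\epsilon^2)$, this flow reduces to
$$
\frac{\partial u}{\partial s_2}=2\,e^{u}\,u_x+O(\epsilon^2),
$$
which is consistent with the genus zero flows $\frac{\partial u}{\partial \bar{s}_k}=k\,e^{ku}\,u_x$ appearing in Section \ref{section4} (recall that $\bar{s}_1=\binom{2}{1}s_2=2\,s_2$).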
\begin{theorem} \label{thm-newtau-vira}
Let $Z_{\tiny\rm even}(x,{\bf s};\epsilon)$ denote the GUE partition function with \textbf{even} couplings (see \eqref{GUE-even-Z}).
Define $\widetilde Z(x,{\bf s}; \epsilon)$ by
$$
\log Z_{\tiny\rm even}(x,{\bf s};\epsilon) = \left(\Lambda^{1/2}+\Lambda^{-1/2}\right) \log \widetilde Z(x,{\bf s};\epsilon).
$$
Then $\widetilde Z(x,{\bf s};\epsilon)$ satisfies the following system of equations (which we call the Virasoro constraints):
\begin{equation}\label{vir-ZZ-dkdv}
L_n^{\rm even} ( \epsilon^{-1}x, \epsilon^{-1} \tilde {\bf s}, \epsilon \,\partial/\partial{\bf s}) \widetilde Z(x,{\bf s};\epsilon) = 0 ,\quad n\geq 0
\end{equation}
where $\tilde s_{2k} = s_{2k} - \frac12 \delta_{k,1}$, and $L_n^{\rm even}$ are linear differential operators defined in \eqref{vire0}, \eqref{viren}.
Moreover, $L_m^{\rm even}$ satisfy (half of) the Virasoro commutation relations \eqref{even-comm}.
\end{theorem}
\begin{prf}
Equations \eqref{even-comm} can be verified straightforwardly.
It then suffices to prove \eqref{vir-ZZ-dkdv} for $n=0,1,2$, because the rest of \eqref{vir-ZZ-dkdv} can be proved via
\eqref{even-comm}. Denote $\widetilde \mathcal{F}= \log \widetilde Z$.
Start with $n=0$. Taking $m=0$ in \eqref{vira-gue0} of Lemma \ref{GUE-toda-virasoro} we have
$$
\sum_{k\geq 1} k\, s_k \, {\partial Z_{\tiny\rm GUE} \over \partial s_{k}}
+ \frac{x^2}{\epsilon^2} Z_{\tiny\rm GUE} - \frac{\partial Z_{\tiny\rm GUE}}{\partial s_{2}} = 0.
$$
Taking $s_1=s_3=s_5=\dots=0$ in this identity and dividing it by $Z_{\tiny\rm even}$ we obtain
\begin{equation} \label{v02}
\sum_{k\geq 1} 2k\, s_{2k} \, {\partial \mathcal{F}_{\tiny\rm even} \over \partial s_{2k}} + \frac{x^2}{\epsilon^2} - \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_{2}} = 0.
\end{equation}
Applying $\left(\Lambda^{1/2}+\Lambda^{-1/2}\right)^{-1}$ on both sides of \eqref{v02} and dividing by $2$ we find
$$
\sum_{k\geq 1} k\, s_{2k} \, {\partial \widetilde \mathcal{F} \over \partial s_{2k}} + \frac{x^2}{4\epsilon^2} - \frac1{16} - \frac12\, \frac{\partial \widetilde{\mathcal{F}} }{\partial s_{2}} = 0.
$$
This proves \eqref{vir-ZZ-dkdv} with $n=0$.
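Here the constant $-\frac1{16}$ comes from the action of the inverse operator on $x^2$: since
$$
\left(\Lambda^{1/2}+\Lambda^{-1/2}\right)\left(\frac{x^2}{2}-\frac{\epsilon^2}{8}\right)=x^2,
$$
we have $\left(\Lambda^{1/2}+\Lambda^{-1/2}\right)^{-1}\left(\frac{x^2}{\epsilon^2}\right)=\frac{x^2}{2\,\epsilon^2}-\frac18$, which after division by $2$ yields $\frac{x^2}{4\,\epsilon^2}-\frac1{16}$.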
For $n=1$, taking $m=2$ in \eqref{vira-gue0} we have
$$
2\,x\,\frac{\partial Z_{\tiny\rm GUE}}{\partial s_2}+\sum_{k\geq 1} k\, s_k \, {\partial Z_{\tiny\rm GUE} \over \partial s_{k+2}}
+ \epsilon^2 \, {\partial^2 Z_{\tiny\rm GUE} \over \partial s_1^2} - \frac{\partial Z_{\tiny\rm GUE}}{\partial s_{4}} =0.
$$
Taking $s_1=s_3=s_5=\dots=0$ in this identity and dividing it by $Z_{\tiny\rm even}$ we obtain
$$
2\,x\,\frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_2}
+\sum_{k\geq 1} 2k\, s_{2k} \, {\partial \mathcal{F}_{\tiny\rm even} \over \partial s_{2k+2}} +
\left.\epsilon^2 \, \left({\partial^2 \mathcal{F}_{\tiny\rm GUE} \over \partial s_1^2}+ \left({\partial \mathcal{F}_{\tiny\rm GUE} \over \partial s_1}\right)^2 \right)\right|_{s_1=s_3=\dots=0}
- \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_{4}} =0.
$$
Using Lemma \ref{simple-odd} and the identity \eqref{tdd1} we have
\begin{equation}\label{vir-ZZ-dkdv2}
2\,x\,\frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_2}
+\sum_{k\geq 1} 2k\, s_{2k} \, {\partial \mathcal{F}_{\tiny\rm even} \over \partial s_{2k+2}} + e^u - \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_{4}} =0.
\end{equation}
Taking $s_1=s_3=s_5=\dots=0$ in \eqref{td-2} we have
\begin{equation} \label{simple1}
\epsilon\, \frac{\Lambda-1}{\Lambda+1} \left(\frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_2}\right)= e^u.
\end{equation}
Applying
$\left(\Lambda^{1/2}+\Lambda^{-1/2}\right)^{-1}$ on both sides of \eqref{vir-ZZ-dkdv2}, using \eqref{simple1}, and dividing by $2$ we find
$$
x \, \frac{\partial \widetilde\mathcal{F}}{\partial s_{2}}+ \sum_{k\ge 1} k\, s_{2k} \, \frac{\partial \widetilde\mathcal{F}}{\partial s_{2k+2}} - \frac12 \frac{\partial \widetilde\mathcal{F}}{\partial s_{4}}=0.
$$
This proves \eqref{vir-ZZ-dkdv} with $n=1$.
Finally, for $n=2,$ taking $m=4$ in \eqref{vira-gue0} we have
$$
2\,\epsilon^2 {\partial^2 Z_{\tiny\rm GUE} \over \partial s_1 \, \partial s_3} + \epsilon^2\, {\partial^2 Z_{\tiny\rm GUE} \over \partial s_2 \partial s_2}
+2\,x\, \frac{\partial Z_{\tiny\rm GUE}}{\partial s_4}+\sum_{k\geq 1} k\, s_k \, {\partial Z_{\tiny\rm GUE} \over \partial s_{k+4}} - \frac{\partial Z_{\tiny\rm GUE}}{\partial s_6} = 0.
$$
Taking $s_1=s_3=\dots=0$ in this identity, using Lem.\,\ref{simple-odd}, and dividing it by $Z_{\tiny\rm even} $ we obtain
\begin{align*}
&2 \, \epsilon^2 \, \left. {\partial^2 \mathcal{F}_{\tiny\rm GUE} \over \partial s_1 \, \partial s_3}\right|_{s_1=s_3=\dots=0} +
\epsilon^2 \, \left({\partial^2 \mathcal{F}_{\tiny\rm even} \over \partial s_2 \partial s_2} + \left({\partial \mathcal{F}_{\tiny\rm even} \over \partial s_2}\right)^2\right)\\
&+ 2\,x\, \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_4}+\sum_{k\geq 1} 2k\, s_{2k} \, {\partial \mathcal{F}_{\tiny\rm even} \over \partial s_{2k+4}}
- \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_6} = 0.
\end{align*}
Using Lemma \ref{todaidentities} and noticing that $v\equiv0$ we have
\begin{align}
&2 \, e^u \, (1+ \Lambda+\Lambda^{-1})(e^u) +
\epsilon^2 \, \left({\partial^2 \mathcal{F}_{\tiny\rm even} \over \partial s_2 \partial s_2} + \left({\partial \mathcal{F}_{\tiny\rm even} \over \partial s_2}\right)^2\right)\nonumber\\
&+ 2\,x\, \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_4}+\sum_{k\geq 1} 2k\, s_{2k} \, {\partial \mathcal{F}_{\tiny\rm even} \over \partial s_{2k+4}}
- \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_6} = 0 \label{midstep}
\end{align}
as well as
\begin{eqnarray}
&& \epsilon^2\frac{\partial^2\mathcal{F}_{\tiny\rm even}}{\partial s_2 \partial s_2}= e^u\,\left(\Lambda+\Lambda^{-1}\right)(e^u), \label{simple2}\\
&& \epsilon\, \frac{\Lambda-1}{\Lambda+1} \left(\frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_4}\right) = e^u \, (1+ \Lambda+\Lambda^{-1}) (e^u). \label{simple3}
\end{eqnarray}
Hence by using eqs.\,\eqref{simple1}, \eqref{simple2}, \eqref{simple3} we can rewrite \eqref{midstep} as
\begin{eqnarray}
&& \epsilon^2\, \left(\frac{\Lambda-1}{\Lambda+1} \left(\frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_2}\right)\right)^2+ \epsilon\, \frac{\Lambda-1}{\Lambda+1} \left(\frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_4}\right) \nonumber\\
&& \quad+ \epsilon^2 \, \left(2{\partial^2 \mathcal{F}_{\tiny\rm even} \over \partial s_2 \partial s_2} + \left({\partial \mathcal{F}_{\tiny\rm even} \over \partial s_2}\right)^2\right)
+ 2\,x\, \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_4}+\sum_{k\geq 1} 2k\, s_{2k} \, {\partial \mathcal{F}_{\tiny\rm even} \over \partial s_{2k+4}}
- \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_6} = 0. \nonumber
\end{eqnarray}
Using the definition $\mathcal{F}_{\tiny\rm even}= (\Lambda^{1/2}+\Lambda^{-1/2}) \,\widetilde \mathcal{F}$ we obtain
\begin{eqnarray}
&& 2 \,\epsilon^2 \left( \frac{\partial^2 \mathcal{F}_{\tiny\rm even}}{\partial s_{2} \, \partial s_{2}} + \left(\Lambda^{1/2}\left(\frac{\partial \widetilde \mathcal{F}}{\partial s_2}\right)\right)^2 +\left(\Lambda^{-1/2}\left(\frac{\partial \widetilde \mathcal{F}}{\partial s_2}\right)\right)^2\right) \nonumber\\
&& \qquad \qquad + \epsilon \, (\Lambda^{1/2}-\Lambda^{-1/2})\left(\frac{\partial \widetilde \mathcal{F}}{\partial s_{4}}\right)+ 2 \, x \, \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_{4}}+ \sum_{k\ge 1} 2k\, s_{2k} \, \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_{2k+4}} - \frac{\partial \mathcal{F}_{\tiny\rm even}}{\partial s_{6}} =0. \nonumber
\end{eqnarray}
Now applying $\left(\Lambda^{1/2}+\Lambda^{-1/2}\right)^{-1}$ on both sides of the above identity and dividing by $2$ we obtain
$$\epsilon^2 \left( \frac{\partial^2 \widetilde \mathcal{F}}{\partial s_{2} \, \partial s_{2}} + \left(\frac{\partial \widetilde \mathcal{F}}{\partial s_2}\right)^2\right) +x \, \frac{\partial \widetilde \mathcal{F}}{\partial s_{4}}+ \sum_{k\ge 1} k\, s_{2k} \, \frac{\partial \widetilde \mathcal{F}}{\partial s_{2k+4}} - \frac12 \frac{\partial \widetilde \mathcal{F}}{\partial s_{6}} =0.$$
This proves \eqref{vir-ZZ-dkdv} with $n=2$. The theorem is proved.
\end{prf}
\begin{remark}
Finding Virasoro constraints for $Z_{\tiny\rm even}(x,{\bf s})$ in a compact form was an open question \cite{MMMM,gmmmo}.
Of course, $Z_{\tiny\rm even}$ \textit{itself} does satisfy certain Virasoro type constraints,
but these constraints may contain non-linear terms. E.g. for $L_1^{\rm even}$, the non-linear constraint reads
$$
\left(2\,x\,\frac{\partial }{\partial s_2}
+\sum_{k\geq 1} 2k\, s_{2k} \, {\partial \over \partial s_{2k+2}} + e^u - \frac{\partial }{\partial s_{4}}\right) Z_{\tiny\rm even}=0,\quad u= \, (\Lambda-1) \, (1-\Lambda^{-1}) \, \log Z_{\tiny\rm even}.
$$
The {\bf key} of our study is the introduction of $\widetilde Z$, which linearizes the nonlinear constraints, so that
all the constraints can be written down in a closed form.
The definition of $\widetilde Z$ is remarkably simple, yet it completely resolves the open question.
\end{remark}
\begin{prfn}{Theorem \ref{ViraVolterraintro}}
Part 2) of the theorem has been proved in Theorem \ref{thm-newtau-vira}.
Part 1) is proved by the fact that any modified tau function $\tilde \tau_{\tiny\rm Volterra}(x,{\bf s}; \epsilon)$
can be obtained from $\widetilde Z(x,{\bf s}; \epsilon)$ by
shifting the ${\bf s}$-variables.
\end{prfn}
\begin{remark}
Note that the modified tau function $\tilde\tau_{\tiny\rm Volterra}$ and the solution $u$ to the discrete KdV hierarchy are related by
$$
u=\left(\Lambda^{1/2}-\Lambda^{-1/2}\right)\left(\Lambda-\Lambda^{-1}\right)\log \tilde\tau_{\tiny\rm Volterra}.
$$
The Virasoro symmetries
$$\frac{\partial \tilde\tau_{\tiny\rm Volterra}}{\partial r_n} :=L_n^{\rm even} \tilde\tau_{\tiny\rm Volterra}, \quad n\geq 0$$
also act on $u$. For example,
\begin{eqnarray}
&& \frac{\partial u}{\partial r_0}=\sum_{k\geq 1} k \, s_{2k} \frac{\partial u}{\partial s_{2k}}+1, \nonumber\\
&& \frac{\partial u}{\partial r_1}= \frac12 \left(3\Lambda+3\Lambda^{-1}+2\right) e^{u(x)}+ x \frac{\partial u}{\partial s_2}+\sum_{k\ge 1} k s_{2k} \frac{\partial u}{\partial s_{2k+2}}. \nonumber
\end{eqnarray}
\end{remark}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\section{Proof of the Main Theorem}\label{section4}
\noindent \textit{Proof} of Thm. \ref{thm1}. \quad
Let $\H_{\tiny\rm cubic}({\bf t};\epsilon)$ denote the cubic Hodge free energy \eqref{cubic-hodge}.
Define
\begin{equation}\label{FstarH}
\mathcal{F}^*=\mathcal{F}^*(x, {\bf s};\epsilon) = \left(\Lambda^{\frac12}+\Lambda^{-\frac12}\right) \, \H_{\tiny\rm cubic} \left({\bf t}(x,{\bf s});\sqrt{2} \, \epsilon\right) + \epsilon^{-2} \, A +\zeta'(-1)
\end{equation}
where ${\bf s}=(s_2,s_4,s_6,\dots)$, $A$ is the series in $(x,{\bf s})$ defined in \eqref{axs},
$\bar{s}_{k}=\binom{2k}{k} s_{2k}$ and
$$
t_i(x, {\bf s}) = \sum_{k\geq 1} k^{i+1} \bar{s}_k - 1 + \delta_{i,1} + x \cdot \delta_{i,0}.
$$
Denote $\tilde{t}_i=t_i-\delta_{i,1}$ and $\tilde{\bar s}_k= \bar{s}_k-\delta_{k,1}$. Then we have
\begin{equation}\label{tsshort}
\tilde t_i(x, {\bf s}) = \sum_{k\geq 1} k^{i+1} \tilde{\bar{s}}_k + x \cdot \delta_{i,0}.
\end{equation}
We are to show $\mathcal{F}^*=\mathcal{F}_{\tiny\rm even}$. This is precisely the statement of the Conjecture.
Recall that in Thm.\,\ref{thm-newtau-vira} we have proved that the generating series
$\widetilde \mathcal{F}$ defined by
\begin{equation}\label{definingwf}
\widetilde \mathcal{F}(x,{\bf s};\epsilon)=\left(\Lambda^{1/2}+\Lambda^{-1/2}\right)^{-1} \mathcal{F}_{\tiny\rm even}(x,{\bf s};\epsilon)
\end{equation}
satisfies the following system of equations
\begin{equation} \label{levenwfagain}
L_n^{\rm even}(\epsilon^{-1} x, \epsilon^{-1} \tilde{\textbf{s}}, \epsilon \partial /\partial \textbf{s}) \, e^{ \widetilde \mathcal{F}(x,{\bf s};\epsilon)} = 0 ,\quad n\geq 0
\end{equation}
where the linear operators $L_n^{\rm even}$ are given by \eqref{vire0}, \eqref{viren}.
Now we define a series $\widetilde\mathcal{F}^*$ by
\begin{equation}
\widetilde \mathcal{F}^*(x,{\bf s};\epsilon) : =\left(\Lambda^{1/2}+\Lambda^{-1/2}\right)^{-1} \mathcal{F}^*(x,{\bf s};\epsilon) \mathop{=}^{\eqref{FstarH}}
\H_{\tiny\rm cubic}\left({\bf t}(x,{\bf s}); \sqrt{2}\epsilon\right) + \epsilon^{-2} \, \frac{A}2 + \frac{\zeta'(-1)}2. \label{def2wfs}
\end{equation}
In order to show that $\mathcal{F}^*=\mathcal{F}_{\tiny\rm even}$, it suffices to prove that $\widetilde \mathcal{F}^*= \widetilde \mathcal{F}$, as the operator
$$\Lambda^{1/2}+\Lambda^{-1/2}=2 \left(1+\frac{\epsilon^2}{8} \partial_x^2+\frac{\epsilon^4}{384}\partial_x^4+\dots\right)$$
is obviously invertible.
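Indeed, $\Lambda^{1/2}+\Lambda^{-1/2}=2\cosh\left(\frac{\epsilon}{2}\,\partial_x\right)$, and the inverse operator admits the expansion
$$
\left(\Lambda^{1/2}+\Lambda^{-1/2}\right)^{-1}=\frac12\left(1-\frac{\epsilon^2}{8}\,\partial_x^2+\frac{5\,\epsilon^4}{384}\,\partial_x^4-\dots\right),
$$
in which only finitely many terms contribute at each order in $\epsilon$, so it is well defined on $\mathcal{B}^{\rm even}$.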
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Proof of the Key Lemma}
The Key Lemma \ref{keyintro} builds a bridge connecting the Hodge and GUE sides, which is crucial in the proof of the Main Theorem.
\noindent \textit{Proof}\, of the Key Lemma. \quad By \eqref{deflmeven} the operators $L^{\rm even}_m$, $m\geq 0$, if they exist, must be unique.
Note that the Key Lemma also gives the explicit form \eqref{vire0}--\eqref{viren} for $L^{\rm even}_m$, so we only need to verify that \eqref{vire0}--\eqref{viren} do satisfy
\eqref{deflmeven}. The operators $L_m^{\rm even}$ given by the formulae \eqref{vire0}--\eqref{viren} satisfy the commutation relations \eqref{even-comm}, as was already proved in Theorem \ref{thm-newtau-vira}, so it suffices to verify \eqref{deflmeven} for $m=0,1,2$.
Let us do it one by one for $m=0,1,2$ in the following three lemmas.
\begin{lem}\label{wfsl0lem}
The identity \eqref{deflmeven} is true for $m=0$.
\end{lem}
\begin{prf}
Noting that
\begin{equation}\label{apsk}
\frac{\partial A}{\partial \bar{s}_k} = \sum_{k_1\geq 1} \frac{k \, k_1}{k+k_1} \tilde{\bar{s}}_{k_1} +x
\end{equation}
where $A$ is the series in $(x,{\bf s})$ defined in \eqref{axs}, we have
\begin{equation}
e^{-\frac{A}{2\,\epsilon^2}} \circ L_0^{\rm even}(\epsilon^{-1} x, \epsilon^{-1} \tilde{\textbf{s}}, \epsilon \partial /\partial \textbf{s}) \circ e^{\,\frac{A}{2\,\epsilon^2}}
= \sum_{k\ge 1} k \, \tilde{\bar{s}}_{k} \, \frac{\partial }{\partial \bar{s}_{k}}+\frac{x^2}{4\epsilon^2} - \frac1{16}
+ \frac{1}{2 \, \epsilon^2} \sum_{k\ge 1} k \, \tilde{\bar{s}}_{k} \left(\sum_{k_1\geq 1} \frac{k \, k_1}{k+k_1} \tilde{\bar{s}}_{k_1} +x\right) . \nonumber
\end{equation}
Using \eqref{tsshort} and noticing
\begin{align}
\frac{\partial}{\partial x} = & \frac{\partial}{\partial t_0},\label{par1}\\
\frac{\partial}{\partial \bar{s}_k} = & \sum_{i\geq 0} k^{i+1} \frac{\partial}{\partial t_i}, \label{par2}
\end{align}
we obtain that
$$
\sum_{k\ge 1} k \, \tilde{\bar{s}}_k \frac{\partial}{\partial \bar{s}_{k}}=
\sum_{i\ge 1} \tilde{t}_i \frac{\partial}{\partial t_{i-1}}.
$$
Hence
\begin{equation}
e^{-\frac{A}{2\,\epsilon^2}} \circ L_0^{\rm even}(\epsilon^{-1} x, \epsilon^{-1} \tilde{\textbf{s}}, \epsilon \partial /\partial \textbf{s}) \circ e^{\,\frac{A}{2\,\epsilon^2}}
=\sum_{i\ge 1} \tilde t_i \frac{\partial }{\partial t_{i-1}}+\frac{t_0^2}{4\epsilon^2} -\frac1{16}. \nonumber
\end{equation}
The lemma is proved.
\end{prf}
\begin{lem} \label{wfsl1lem}
The identity \eqref{deflmeven} is true for $m=1$.
\end{lem}
\begin{prf}
We have
\begin{align}
&e^{-\frac{A}{2\,\epsilon^2}} \circ L_1^{\rm even}(\epsilon^{-1} x, \epsilon^{-1} \tilde{\textbf{s}}, \epsilon \partial /\partial \textbf{s}) \circ e^{\frac{A}{2\,\epsilon^2}} \nonumber\\
=& 2 x \,\frac{\partial }{\partial \bar{s}_{1}}
+ \sum_{k\ge 1} \frac{2k(2k+1)}{k+1} \tilde{\bar{s}}_{k} \, \frac{\partial}{\partial \bar{s}_{k+1}} + \frac{1}{\epsilon^2} x \,\frac{\partial A }{\partial \bar{s}_{1}}
+ \frac{1}{\epsilon^2}\sum_{k\ge 1} \frac{k(2k+1)}{k+1} \tilde{\bar{s}}_{k} \, \frac{\partial A}{\partial \bar{s}_{k+1}} \nonumber\\
=& 2 x \,\frac{\partial }{\partial \bar{s}_{1}}
+ \sum_{k\ge 1} \frac{2k(2k+1)}{k+1} \tilde{\bar{s}}_{k} \, \frac{\partial}{\partial \bar{s}_{k+1}} + \frac{x}{\epsilon^2} \left(\sum_{k\geq 1} \frac{k}{1+k} \tilde{\bar{s}}_{k} +x \right) \nonumber\\
&~
+ \frac{1}{\epsilon^2}\sum_{k\ge 1} \frac{k(2k+1)}{k+1} \tilde{\bar{s}}_{k} \left(\sum_{k_1\geq 1} \frac{(k+1) \, k_1}{k+1+k_1} \tilde{\bar{s}}_{k_1} +x\right) \nonumber \\
=& 2\,\sum_{i\geq 0} \sum_{j=0}^i \binom{i}{j} \left(2 \tilde{t}_{j+1}+ \tilde{t}_j \right)\frac{\partial }{\partial t_i} +\frac{t_0^2}{\epsilon^2}
\end{align}
where we used again \eqref{apsk}, \eqref{tsshort} and \eqref{par1}, \eqref{par2}. The lemma is proved.
\end{prf}
\begin{lem} \label{wfsl2lem}
The identity \eqref{deflmeven} is true for $m=2$.
\end{lem}
\begin{prf}
By a straightforward calculation we have
\begin{align}
&e^{-\frac{A}{2\,\epsilon^2}} \circ L_2^{\rm even}(\epsilon^{-1} x, \epsilon^{-1} \tilde{\textbf{s}}, \epsilon \partial /\partial \textbf{s})\circ e^{\frac{A}{2\,\epsilon^2}} \nonumber \\
=& 4 \, \epsilon^2 \, \frac{\partial^2}{\partial \bar{s}_{1} \partial \bar{s}_{1}} + 6 \, x \,\frac{\partial }{\partial \bar{s}_{2}}+ 4 \sum_{k\ge 1}
\frac{ k (2k+3)(2k+1) }{ (k+2)(k+1)} \tilde{\bar{s}}_{k} \, \frac{\partial }{\partial \bar{s}_{k+2}} \nonumber\\
&+ 2 \, \frac{\partial^2 A}{\partial \bar{s}_{1}^2} +4 \frac{\partial A}{\partial \bar{s}_1} \frac{\partial }{\partial \bar{s}_1} + \frac1{\epsilon^2} \left(\frac{\partial A}{\partial \bar{s}_1}\right)^2
+\frac1{2\epsilon^2} \left( 6 \, x \,\frac{\partial A }{\partial \bar{s}_{2}}+ 4 \sum_{k\ge 1}
\frac{ k (2k+3)(2k+1) }{ (k+2)(k+1)} \tilde{\bar{s}}_{k} \, \frac{\partial A}{\partial \bar{s}_{k+2}} \right). \nonumber
\end{align}
It follows from \eqref{par1}, \eqref{par2} and \eqref{tsshort} that
\begin{eqnarray}
&& 4 \sum_{k\ge 1}
\frac{ k (2k+3)(2k+1) }{ (k+2)(k+1)} \tilde{\bar{s}}_{k} \, \frac{\partial }{\partial \bar{s}_{k+2}} \nonumber\\
&=& 4\sum_{k\ge 1}
\frac{ k \left(4(k+1)^2-1\right) }{ (k+1)} \tilde{\bar{s}}_{k} \sum_{i\geq 0} (k+2)^{i} \frac{\partial}{\partial t_i} \nonumber\\
&=& 16\sum_{k\ge 1} \sum_{i\geq 0}
k(k+1) \tilde{\bar{s}}_{k} \sum_{j=0}^i \binom{i}{j} 2^{i-j} k^j \frac{\partial}{\partial t_i} - 4\sum_{k\ge 1} \frac{ k }{ k+1} \tilde{\bar{s}}_{k} \sum_{i\geq 0} \sum_{j=0}^i \binom{i}{j}(k+1)^{j} \frac{\partial}{\partial t_i} \nonumber\\
&=& 16 \sum_{i\geq 0}\sum_{j=0}^i
\left( \tilde{t}_{j} + \tilde{t}_{j+1} -x \, \delta_{j,0} \right) \binom{i}{j} 2^{i-j} \frac{\partial}{\partial t_i} \nonumber\\
&& - 4\sum_{k\ge 1}\sum_{i\geq 0} \frac{ k }{ k+1} \tilde{\bar{s}}_{k} \frac{\partial}{\partial t_i} - 4\sum_{i\geq 0} \sum_{j=1}^i \binom{i}{j} \sum_{\ell=0}^{j-1} \binom{j-1}{\ell} \left( \tilde{t}_{\ell}-x \, \delta_{\ell,0} \right) \frac{\partial}{\partial t_i}. \label{p20}
\end{eqnarray}
Also noticing eqs.\,\eqref{par1}, \eqref{par2}, \eqref{apsk}, \eqref{tsshort},
\begin{eqnarray}
\frac{\partial^2 A}{\partial \bar{s}_1^2} =\frac12 \label{p22}
\end{eqnarray}
and
\begin{equation}\label{p21}
\frac{\partial^2}{\partial \bar{s}_{1} \partial \bar{s}_{1}} = \sum_{i,j\geq 0} \frac{\partial^2}{\partial t_i \partial t_j},
\end{equation}
we find
\begin{align}
& e^{-\frac{A}{2\,\epsilon^2}} \circ L_2^{\rm even}(\epsilon^{-1} x, \epsilon^{-1} \tilde{\textbf{s}}, \epsilon \partial /\partial \textbf{s}) \circ e^{\frac{A}{2\,\epsilon^2}} \nonumber\\
=& 4 \, \epsilon^2 \,\sum_{i,j\geq 0} \frac{\partial^2}{\partial t_i \partial t_j} + 6 \, x \,\frac{\partial }{\partial \bar{s}_{2}}
+16 \sum_{i\geq 0}\sum_{j=0}^i
\left( \tilde{t}_{j} + \tilde{t}_{j+1} -x \, \delta_{j,0} \right)\binom{i}{j} 2^{i-j} \frac{\partial}{\partial t_i} \nonumber\\
& - 4\sum_{k\ge 1}\sum_{i\geq 0} \frac{ k }{ k+1} \tilde{\bar{s}}_{k} \frac{\partial}{\partial t_i} - 4\sum_{i\geq 0} \sum_{j=1}^i \binom{i}{j} \sum_{\ell=0}^{j-1} \binom{j-1}{\ell}\left( \tilde{t}_{\ell}-x \, \delta_{\ell,0} \right) \frac{\partial}{\partial t_i} \nonumber\\
&+1 + 4 \left(\sum_{k\geq 1} \frac{k}{1+k} \tilde{\bar{s}}_{k} +x\right) \frac{\partial }{\partial \bar{s}_1} + \frac1{\epsilon^2} \,\left(\sum_{k\geq 1} \frac{k}{1+k} \tilde{\bar{s}}_{k} +x\right)^2 + \frac{3 \,x}{\epsilon^2} \left( \sum_{k\geq 1} \frac{2 \, k}{2+k} \tilde{\bar{s}}_{k} +x \right) \nonumber\\
& + \frac{2}{\epsilon^2} \sum_{k\ge 1} \frac{ k (2k+3)(2k+1) }{ (k+2)(k+1)} \tilde{\bar{s}}_{k} \, \left(\sum_{k_1\geq 1} \frac{(k+2) \, k_1}{k+2+k_1} \tilde{\bar{s}}_{k_1} +x\right) \nonumber\\
=& 16 \left[\frac{\epsilon^2}4 \sum_{i,j\geq 0} \frac{\partial^2 }{\partial t_i \partial t_j} + \sum_{i\geq 0} \sum_{j=0}^i \binom{i}{j} 2^{i-j}
(\tilde{t}_{j+1}+\tilde{t}_{j}) \frac{\partial}{\partial t_i} \right. \nonumber\\
&\qquad \left. - \frac18\sum_{i\geq 1} \sum_{j=0}^{i-1} \sum_{r=0}^{i-1-j} (-1)^r
\binom{i}{i-1-j-r}2^{i-j-r} \tilde t_j \frac{\partial }{\partial t_i}+ \frac{ t_0^2}{4\epsilon^2} + \frac1{16}\right]. \nonumber
\end{align}
In the last equality we have used the following elementary identity
$$
\sum_{j=\ell}^i
\binom{i}{j}\binom{j-1}{\ell-1}= \sum_{r=\ell}^i (-1)^{r-\ell} \binom{i}{r}2^{i-r}, \quad \forall\, \ell,i \in \mathbb{Z}, \, i\geq \ell\geq 0.$$
The lemma is proved.
\end{prf}
The Key Lemma is proved. \hfill $\square$
Theorem \ref{vira-2-lem} along with the Key Lemma immediately implies the following theorem.
\begin{theorem} \label{virasorowfstarthm}
The function $\widetilde \mathcal{F}^*(x,{\bf s};\epsilon)$ satisfies the following Virasoro constraints
\begin{equation} \label{virasoroevenfstar}
L_n^{\rm even} \left( \epsilon^{-1}x, \epsilon^{-1} \tilde {\bf s}, \epsilon \,\partial/\partial{\bf s}\right) e^{ \widetilde \mathcal{F}^*(x,{\bf s};\epsilon)} = 0 ,\qquad n\geq 0
\end{equation}
where $\tilde s_{2k}=s_{2k} - \frac12 \delta_{k,1}.$
\end{theorem}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{End of the proof of the Main Theorem}
\paragraph{Genus expansion.}
By definition the special cubic Hodge free energy $\H_{\tiny\rm cubic}({\bf t};\epsilon)$ and the GUE free energy $\mathcal{F}_{\tiny\rm even}(x,{\bf s};\epsilon)$
with even couplings have the following genus expansions:
\begin{eqnarray}
&& \hspace{-6mm} \H_{\tiny\rm cubic}({\bf t};\epsilon) = \sum_{g=0}^\infty \epsilon^{2g-2}\, \H_g({\bf t}), \label{genushhh}\\
&& \hspace{-6mm} \mathcal{F}_{\tiny\rm even}(x,{\bf s};\epsilon) = \sum_{g=0}^\infty \epsilon^{2g-2}\, \mathcal{F}_g(x,{\bf s}). \label{genusfff}
\end{eqnarray}
Recall that the genus $0,1,2$ parts of Conjecture \ref{conjecture1} were proved in \cite{DY2}, i.e.
\begin{eqnarray}
&& \mathcal{F}_0(x, {\bf s}) = \H_0({\bf t}(x,{\bf s})) + A, \label{g0p}\\
&& \mathcal{F}_1(x, {\bf s}) = 2 \, \H_1({\bf t}(x,{\bf s})) + \frac18 \, \frac{\partial^2 \H_0({\bf t}(x,{\bf s}))}{\partial x^2}+ \zeta'(-1),\label{g1p}\\
&& \mathcal{F}_2(x, {\bf s}) = 4 \, \H_2({\bf t}(x,{\bf s})) + \frac14 \, \frac{\partial^2 \H_1({\bf t}(x,{\bf s}))}{\partial x^2}+ \frac1{384} \, \frac{\partial^4 \H_0({\bf t}(x,{\bf s}))}{\partial x^4}.\label{g2p}
\end{eqnarray}
From \eqref{definingwf} and \eqref{def2wfs}, we see that $\widetilde \mathcal{F}$ and $\widetilde \mathcal{F}^*$ also have genus expansions:
\begin{eqnarray}
&& \hspace{-6mm} \widetilde \mathcal{F}(x,{\bf s};\epsilon) =: \sum_{g=0}^\infty \epsilon^{2g-2}\, \widetilde \mathcal{F}_g(x,{\bf s}), \label{genuswf}\\
&& \hspace{-6mm} \widetilde \mathcal{F}^*(x,{\bf s};\epsilon) =: \sum_{g=0}^\infty \epsilon^{2g-2}\, \widetilde \mathcal{F}^*_g(x,{\bf s}). \label{genuswfstar}
\end{eqnarray}
Noting that
$$
\widetilde \mathcal{F}_0(x,{\bf s}) = \frac{\mathcal{F}_0(x,{\bf s})}2, \quad \widetilde \mathcal{F}^*_0(x,{\bf s})= \frac{\H_0\left({\bf t}(x,{\bf s})\right)}2 + \frac A2
$$
and using \eqref{g0p} we
obtain the following identity
\begin{equation}\label{a1111}
\widetilde \mathcal{F}_0(x,{\bf s})=\widetilde \mathcal{F}^*_0(x,{\bf s}).
\end{equation}
Similarly, we have
\begin{equation}\label{a2222}
\widetilde \mathcal{F}_1(x,{\bf s})=\widetilde \mathcal{F}^*_1(x,{\bf s}),\quad \widetilde \mathcal{F}_2(x,{\bf s})=\widetilde \mathcal{F}^*_2(x,{\bf s}).
\end{equation}
Let us proceed to higher genera.
Recall two crucial lemmas which were proven in \cite{DLYZ} and \cite{DY2}, respectively.
\begin{lem}[\cite{DLYZ}] \label{polyHodge}
There exist functions $H_g(z,z_1, z_2, \dots, z_{3g-2})$, $g\geq 1$ of independent variables $z$, $z_1$, $z_2$, $\dots$ such that
\begin{equation}\label{quasiH}
\mathcal{H}_g({\bf t})=H_g\left( v({\bf t} ), \frac{\partial v({\bf t} )}{\partial t_0}, \dots, \frac{\partial^{3g-2} v({\bf t})}{\partial t_0^{3g-2}}\right), \quad g\geq 1
\end{equation}
and that
$$
\sum_{j=1}^{3g-2} j \, z_j \frac{\partial H_g}{\partial z_j} = (2g-2) \, H_g.
$$
Here $v({\bf t}):=\frac{\partial^2 \H_0({\bf t})}{\partial t_0^2}$ is the unique series solution to
\begin{equation}\label{v-dispkdv}
v= t_0+ \sum_{i\geq 1} t_i \frac{v^i}{i!},\quad v({\bf t})=t_0+\dots.
\end{equation}
\end{lem}
\begin{lem} [\cite{DY2}] \label{Fgthm}
There exist functions $F_g(z,z_1, \dots, z_{3g-2})$, $g\geq 1$ of independent variables
$z$, $z_1$, $z_2$, $\dots$ such that
\begin{equation}\label{quasiF}
\mathcal{F}_g(x, {\bf s})=F_g \left( u(x, {\bf s}), \frac{\partial u(x, {\bf s})}{\partial x}, \dots, \frac{\partial^{3g-2}u(x, {\bf s})}{\partial x^{3g-2}}\right), \quad g\geq 1
\end{equation}
and that
$$
\sum_{j=1}^{3g-2} j \, z_j \frac{\partial F_g}{\partial z_j} = (2g-2) \, F_g.
$$
Here
$
u(x,{\bf s}):=\frac{\partial^2 \mathcal{F}_0(x,{\bf s})}{\partial x^2}=\log w(x,{\bf s}),
$ and $w(x,{\bf s})$ is the unique series solution to
\begin{equation}\label{weven}
w= x+ \sum_{k\geq 1} \, k\, \bar s_k \, w^k,\qquad w(x,{\bf s})=x+\dots.
\end{equation}
\end{lem}
\begin{lem} \label{wfwfstar}
For any $g\geq 2$, there exist functions $\widetilde F_g(z,z_1,\dots,z_{3g-2})$ and $\widetilde F^*_g(z,z_1,\dots,z_{3g-2})$ of independent
variables $z,z_1,\dots,z_{3g-2}$ such that
\begin{eqnarray}
&& \widetilde \mathcal{F}_g(x,{\bf s}) = \widetilde F_g\left( u(x,{\bf s}), \frac{\partial u(x, {\bf s})}{\partial x}, \dots, \frac{\partial^{3g-2}u(x, {\bf s})}{\partial x^{3g-2}}\right), \label{quasiwf}\\
&& \widetilde \mathcal{F}^*_g(x,{\bf s}) = \widetilde F^*_g \left( u(x,{\bf s}), \frac{\partial u(x, {\bf s})}{\partial x}, \dots, \frac{\partial^{3g-2}u(x, {\bf s})}{\partial x^{3g-2}}\right) \label{quasiwfstar}
\end{eqnarray}
with $u(x,{\bf s})$ defined as in Lem.\,\ref{Fgthm}.
\end{lem}
\begin{prf}
Observe that, as in \cite{DY2}, under the substitution
$$t_i(x, {\bf s}) = \sum_{k\geq 1} k^{i+1} \bar{s}_k - 1 + \delta_{i,1} + x \cdot \delta_{i,0}$$
we have $v({\bf t}(x,{\bf s}))= u(x,{\bf s})$, where $v({\bf t})$ is defined as in Lem.\,\ref{polyHodge}.
The lemma is then proved by Lemmata \ref{polyHodge},\,\ref{Fgthm} and the defining eqs.\,\eqref{definingwf},\,\eqref{def2wfs}.
\end{prf}
\medskip
\paragraph{\textit{End of the proof of the Main Theorem.}}
\quad We have proved that $\widetilde \mathcal{F}$ and $\widetilde \mathcal{F}^*$ satisfy the same set of
PDEs \eqref{virasoroevenfstar} (or say \eqref{levenwfagain}), and have the same structures \eqref{genuswf}--\eqref{genuswfstar},
\eqref{quasiwf}--\eqref{quasiwfstar}.
We are left to prove a uniqueness statement.
Similarly as in \cite{DVV,LYZ}, one can deduce that eqs.\,\eqref{virasoroevenfstar}, resp. eqs.\,\eqref{levenwfagain},
determine $\widetilde \mathcal{F}^*$, resp. $\widetilde \mathcal{F}$, in the ring $\mathcal{B}^{\rm even}$ up to
an arbitrary additive function in $x$.
However, we already know that both $\widetilde \mathcal{F}$ and $\widetilde \mathcal{F}^*$ have genus expansions \eqref{genuswf}--\eqref{genuswfstar}.
So we must have
$$
\widetilde \mathcal{F} - \widetilde \mathcal{F}^*= K(x;\epsilon) = \sum_{g\geq 0} \epsilon^{2g-2} K_g(x)
$$
where $K_g(x)$ are functions in $x$ only.
We are going to show that $\forall\,g\geq 0$, $K_g(x)$ vanishes.
Note that the genus $0,1,2$ parts were proven (cf. \eqref{a1111},\,\eqref{a2222}).
So we are left to consider $g\geq 3$.
\begin{theorem} \label{unithm}
For any $g\geq 3$, $K_g(x)\equiv 0$.
\end{theorem}
\begin{prf}
Since $g\geq 3$, we know from Lem.\,\ref{wfwfstar} that there exists a function $Q_g(z,z_1,\dots,z_{3g-2})$ of independent variables $z,z_1,\dots,z_{3g-2}$ such that
$$
\widetilde \mathcal{F}_g(x,{\bf s}) - \widetilde \mathcal{F}^*_g(x,{\bf s}) = Q_g\left(u(x,{\bf s}), \frac{\partial u(x, {\bf s})}{\partial x}, \dots, \frac{\partial^{3g-2}u(x, {\bf s})}{\partial x^{3g-2}}\right)=K_g(x).
$$
It follows that
\begin{equation}\label{final!}
\frac{\partial Q_g\left( u(x,{\bf s}), \frac{\partial u(x, {\bf s})}{\partial x}, \dots, \frac{\partial^{3g-2}u(x, {\bf s})}{\partial x^{3g-2}}\right)}{\partial \bar{s}_k} \equiv 0,\quad \forall\,k\geq 1.
\end{equation}
Recall that the function $u(x,{\bf s})$ satisfies the following PDEs \cite{DY2}:
$$
\frac{\partial u}{\partial {\bar s}_k} = k \, e^{ku} \, u_x,\quad k\geq 1.
$$
Substituting these flows into the l.h.s. of \eqref{final!} and dividing the resulting expression by $e^{ku}$, we obtain a polynomial in $k$ of degree $3g-1$.
Since this polynomial must vanish for every integer $k\geq 1$, all its coefficients must vanish. Looking at these coefficients from the highest degree ($=3g-1$)
down to the lowest degree ($=1$), and using the triangular nature of the resulting system (which can be deduced easily), we obtain
$$
\frac{\partial Q_g}{ \partial z_j} =0, \quad j=1,\dots,3g-2; \quad \frac{\partial Q_g}{ \partial z} =0.
$$
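To see the triangular structure concretely, note that
$$
\partial_x^j\left(k\,e^{ku}\,u_x\right)=e^{ku}\left(k^{j+1}\,u_x^{\,j+1}+\hbox{terms of lower degree in } k\right),
$$
so the coefficient of the top power $k^{3g-1}$ equals $\frac{\partial Q_g}{\partial z_{3g-2}}\,u_x^{\,3g-1}$; its vanishing forces $\frac{\partial Q_g}{\partial z_{3g-2}}=0$, and one then proceeds downwards through the powers of $k$.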
So $Q_g$ must be a constant. However, noting that
$$
\sum_{j=1}^{3g-2} j \, z_j \frac{\partial Q_g}{\partial z_j} = (2g-2) \, Q_g,
$$
we find that $Q_g$ must vanish. As a result, $K_g(x)\equiv 0$. The theorem is proved.
\end{prf}
We arrive at $\widetilde \mathcal{F}=\widetilde \mathcal{F}^*$, which implies $\mathcal{F}_{\tiny\rm even}=\mathcal{F}^*$. The Main Theorem is proved. \hfill $\Box$
\begin{prfn}{Theorem \ref{thm11}}
Part 1) of the theorem follows from the Main Theorem by shifting the time variables.
Part 2) of the theorem is a reformulation of the Main Theorem.
\end{prfn}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\section{Two applications of the Main Theorem} \label{section5}
Several applications of the Main Conjecture proved in the present paper have already been given in \cite{DY2}. In this section we present two further applications.
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Application I. On the reduced GUE free energy}
In this subsection, we introduce a reduced GUE free energy, which will provide a new understanding of the
relationship between the intersection numbers of psi-classes on the moduli spaces $\overline{\mathcal{M}}_{g,k}$ (``topological gravity") and matrix integrals (``matrix gravity").
Recall that the following formula has been obtained in \cite{DY2} as a consequence of the Hodge--GUE conjecture:
$\forall\,g\geq 2,$
\begin{equation} \label{now-identity!}
F_g(v, v_1,\dots,v_{3g-2})= \frac{v_{2g-2}}{2^{2g}(2g)!} + \frac{D_0^{2g-2} \left[H_1(v;v_1)\right]}{2^{2g-3} (2g-2)!}+\sum_{m=2}^g \frac{2^{3m-2g}}{(2g-2m)!} D_0^{2(g-m)} \left[H_m(v,v_1,\dots,v_{3m-2})\right]
\end{equation}
where $D_0:=v_1 \, \partial_v+ \sum_{k\geq 1} v_{k+1} \, \partial_{v_k}.$
Noticing that, for any $m\geq 2,$ $H_m$ is a rational function of $v_1,v_2,\dots$, which does not contain $v$ explicitly \cite{DLYZ}, and that
$$
H_1= \frac1{24} \log v_1 - \frac1{16} v
$$
we find that for any $g\geq 2$, $F_g$ is also a rational function in $v_1,v_2,\dots$, which does not contain $v$ explicitly, i.e.
$$\frac{\partial F_g}{\partial v} =0,\quad g\geq 2.$$
Introduce a gradation for these rational functions by assigning
$$
\widetilde{\deg} \, v_k=1, \quad \forall\, k\geq 1.
$$
Then both $H_g$ and $F_g$ with $g\geq 2$ decompose into homogeneous parts w.r.t. $\widetilde{\deg}$
$$
H_g= \sum_{d=1-g}^{2g-2} H_g^{[d]}, \quad F_g = \sum_{d=1-g}^{2g-2} F_g^{[d]},\quad g\geq 2.
$$
\begin{dfn} Define the reduced GUE free energy $\mathcal{F}^{\rm red}$ (given by the lowest degree parts) by
\begin{eqnarray}
&& \mathcal{F}^{\rm red}(x,{\bf s};\epsilon) := \sum_{g\geq 0} \epsilon^{2g-2} \mathcal{F}^{\rm red}_g(x,{\bf s}) \nonumber\\
&& \mathcal{F}^{\rm red}_0(x,{\bf s}) := \mathcal{F}_0(x,{\bf s})\nonumber\\
&& \mathcal{F}^{\rm red}_1(x,{\bf s}) := \mathcal{F}_1(x,{\bf s})\nonumber\\
&& \mathcal{F}^{\rm red}_g(x,{\bf s}) := F_g^{[1-g]}\left(\frac{\partial u(x, {\bf s})}{\partial x}, \dots, \frac{\partial^{3g-2}u(x, {\bf s})}{\partial x^{3g-2}}\right),\quad g\geq 2\nonumber
\end{eqnarray}
where $u(x,{\bf s})$ is defined as in Lem.\,\ref{Fgthm}.
\end{dfn}
Now let $$\mathcal{F}^{\rm WK}({\bf t}; \epsilon)=\sum_{g\geq 0} \epsilon^{2g-2} \mathcal{F}_g^{\rm WK}({\bf t})$$ denote the free energy of the Witten--Kontsevich correlators, where
$$\mathcal{F}^{\rm WK}_g({\bf t}):=\sum_{k\geq 0} \frac 1{k!}
\sum_{i_1, \dots, i_k\geq 0} t_{i_1}\cdots t_{i_k}\int_{\overline{\mathcal M}_{g,k}}
\psi_1^{i_1}\cdots \psi_k^{i_k}.
$$
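For instance, the classical values $\int_{\overline{\mathcal M}_{0,3}}1=1$ and $\int_{\overline{\mathcal M}_{1,1}}\psi_1=\frac1{24}$ give the first terms
$$
\mathcal{F}^{\rm WK}_0({\bf t})=\frac{t_0^3}{6}+\cdots,\qquad \mathcal{F}^{\rm WK}_1({\bf t})=\frac{t_1}{24}+\cdots.
$$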
We know that for $g\geq 2,$
$$
\mathcal{F}_g^{\rm WK}({\bf t})= H^{[1-g]}_g \left(\frac{\partial v({\bf t} )}{\partial t_0}, \dots, \frac{\partial^{3g-2} v({\bf t})}{\partial t_0^{3g-2}}\right)
$$
where $v({\bf t})$ is defined in Lem.\,\ref{polyHodge}.
By comparing the lowest degree part of the identity \eqref{now-identity!}
and noticing that the operator $D_0$ does not change the degree defined by $\widetilde{\deg}$, we arrive at
\begin{cor} \label{wittengue}
The following formula holds true
\begin{align}
&\mathcal{F}^{\rm red}(x, {\bf s};\epsilon) + \epsilon^{-2}
\Big(-\frac12 \sum_{k_1,k_2\geq 1} \frac{k_1\,k_2}{k_1+k_2} \, \bar{s}_{k_1} \, \bar{s}_{k_2} + \sum_{k\geq 1} \frac{k}{1+k}\, \bar{s}_{k} -
x \, \sum_{k\geq 1} \, \bar{s}_{k} - \frac14 + x \Big) \nonumber\\
=& \sum_{g\geq 0}
\epsilon^{2g-2} \, 2^g \, \mathcal{F}^{\rm WK}_g\left({\bf t}(x, {\bf s})\right) +\zeta'(-1) \label{kdv}
\end{align}
where $\bar{s}_{k}:=\binom{2k}{k} s_{2k}$ and
$$
t_i(x, {\bf s}) := \sum_{k\geq 1} k^{i+1} \bar{s}_k - 1 + \delta_{i,1} + x \cdot \delta_{i,0},\quad i\geq 0.
$$
\end{cor}
\setcounter{equation}{0}
\setcounter{theorem}{0}
\subsection{Application II. Proof of the integrable hierarchy conjecture}
The Main Theorem confirms the validity of the following Integrable Hierarchy Conjecture originally proposed in \cite{DLYZ}.
\begin{theorem}\label{integrablehierarchy}
Let $\H_{\tiny\rm cubic}({\bf t}; b; \epsilon)$ denote the following generating series of special cubic Hodge integrals
$$
\mathcal H_{\tiny\rm cubic}({\bf t}; b;\epsilon)=\sum_{g\geq 0} \epsilon^{2g-2} \sum_{k\geq 0} \frac 1{k!}
\sum_{i_1, \dots, i_k\geq 0} t_{i_1}\cdots t_{i_k}\int_{\overline{\mathcal M}_{g,k}}
\Lambda_g(-2b)\,\Lambda_g(-2b)\,\Lambda_g(b)\, \psi_1^{i_1}\cdots \psi_k^{i_k}
$$
where $b$ is a non-zero parameter.
Define
\begin{equation}\label{hodgedkdv}
U({\bf t}; b ;\epsilon) = \left( \Lambda^{1/2}-\Lambda^{-1/2}\right)\left(\Lambda-\Lambda^{-1}\right) \H_{\tiny\rm cubic}\left({\bf t};b;\frac{\epsilon}{\sqrt{b}}\right),\quad \Lambda:= e^{\epsilon\,\partial_{t_0}}.
\end{equation}
Then $U$ satisfies the following discrete KdV equation (which is equivalent to \eqref{volterra1} by a rescaling):
\begin{equation}\label{Udkdv}
\frac{\partial U}{\partial t} = \frac1{2\epsilon} \left(e^{U(t_0+\epsilon )}-e^{U(t_0-\epsilon)}\right)
\end{equation}
where $$\partial_t:= \sum_{i\geq 0} 2^i b^i \, \partial_{t_i}.$$
\end{theorem}
\begin{prf}
Let us first prove \eqref{Udkdv} for $b=\frac12$.
According to the Main Theorem, under the following substitution of time variables
$$
t_i(x, {\bf s}) := \sum_{k\geq 1} k^{i+1} \bar{s}_k - 1 + \delta_{i,1} + x \cdot \delta_{i,0},\quad i\geq 0,
$$
the following identity holds true
\begin{equation}\label{id-again}
\mathcal{F}_{\tiny\rm even}(x, {\bf s};\epsilon) = \left(\Lambda^{\frac12}+\Lambda^{-\frac12}\right) \, \H_{\tiny\rm cubic} \left({\bf t}(x,{\bf s});\frac12;\sqrt{2} \, \epsilon\right) + \epsilon^{-2} \, A +\zeta'(-1)
\end{equation}
where $\mathcal{F}_{\tiny\rm even}$ is the GUE free energy with even couplings, $A$ is defined in \eqref{axs}.
Define $$u(x,{\bf s};\epsilon)= (\Lambda-1) \, (1-\Lambda^{-1}) \mathcal{F}_{\tiny\rm even}(x,{\bf s};\epsilon).$$
Noting that
$$
\partial_{t_0}=\partial_x \quad \Rightarrow \quad \Lambda= e^{\epsilon\,\partial_x}
$$
and applying $(\Lambda-1) \, (1-\Lambda^{-1})$ on both sides of \eqref{id-again}, noting the operator identity
$(\Lambda-1)(1-\Lambda^{-1})\left(\Lambda^{1/2}+\Lambda^{-1/2}\right)=\left(\Lambda^{1/2}-\Lambda^{-1/2}\right)\left(\Lambda-\Lambda^{-1}\right)$
and that the second difference operator annihilates the term $\epsilon^{-2}A+\zeta'(-1)$, we obtain
$$
u(x,{\bf s};\epsilon) = U\left({\bf t}(x,{\bf s}); 1/2;\epsilon\right).
$$
Since $u$ satisfies
$$
\frac{\partial u}{\partial \bar{s}_1} = \frac1{2\epsilon} (\Lambda-\Lambda^{-1})\, e^u
$$
and since
$$
\frac{\partial }{\partial \bar{s}_1} = \sum_{i\geq 0} \frac{\partial}{\partial t_i}
$$
and noting that at $b=\frac12$ the definition $\partial_t=\sum_{i\geq 0}2^i b^i\,\partial_{t_i}$ reduces to $\sum_{i\geq 0}\partial_{t_i}$, we have
$$
\frac{\partial U}{\partial t} = \frac1{2\epsilon} \left(e^{U(t_0+\epsilon )}-e^{U(t_0-\epsilon)}\right).
$$
Let us now consider a general $b$, $b\neq 0$. Denote by
$
v({\bf t})
$
the unique series solution to
$$
\sum_{i\geq 0} t_i \frac{v^i}{i!} = v.
$$
It was shown in \cite{DLYZ} that there exist functions $H_g(z,z_1,\dots,z_{3g-2};b)$ such that
$$
\mathcal{H}_g({\bf t};b)=H_g\left( v({\bf t} ), \frac{\partial v({\bf t} )}{\partial t_0}, \dots, \frac{\partial^{3g-2} v({\bf t})}{\partial t_0^{3g-2}};b\right), \quad g\geq 1.
$$
Expand
$$
U({\bf t};b;\epsilon)=\sum_{g\geq 0}\epsilon^{2g} U^{[2g]}({\bf t}; b).
$$
Eq.\,\eqref{Udkdv} is now proved by noticing
$$
U^{[0]}({\bf t};b) = 2 \, b\, v({\bf t})
$$
as well as observing the following identity \cite{DY3,DLYZ,GJV}
$$
-b\frac{\partial H_g(v,v_1,\dots,v_{3g-2};b)}{\partial b}+\sum_{k= 0}^{3g-2} v_k \frac{\partial H_g(v,v_1,\dots,v_{3g-2};b)}{\partial v_k} = \frac1{24}\delta_{g,1} -(g-1) H_g,\quad \forall\,g\geq 1.
$$
The theorem is proved.
\end{prf}
Theorem \ref{integrablehierarchy} can be equivalently described as follows: the Hodge hierarchy \cite{DLYZ} associated with
$$ \Lambda_g(-2b)\,\Lambda_g(-2b)\,\Lambda_g(b),\quad b\neq 0$$
is {\it normal Miura equivalent} to the discrete KdV hierarchy (up to a simple rescaling).
\begin{remark}
At first glance it does not look easy to use eq.\,\eqref{Udkdv} for computing Hodge integrals. However,
the single discrete KdV equation \eqref{Udkdv} actually contains the full information about the special cubic Hodge integrals
via the quasi-triviality approach \cite{DZ-norm, DLZ, LZ}! This is because the construction of the so-called Hodge hierarchy \cite{DLYZ}
associated to $\Lambda_g(-2b)\,\Lambda_g(-2b)\,\Lambda_g(b)$
shows that $V:=\epsilon^2 \, \partial_{t_0}^2 \H({\bf t};b;\epsilon)$ gives the quasi-triviality transformation of this hierarchy.
The algorithm of computing $\H_g$ from the quasi-triviality approach is given in \cite{LZ, DY3}.
\end{remark}
It would be interesting to study the so-called double ramification counterpart of the special cubic
Hodge hierarchy in the framework of the conjectural DR/DZ correspondence
formulated by A.\,Buryak \cite{Bur2}. We plan to do it in a subsequent publication.
\newcommand{\sect}[1]{\section{#1}}
\newcommand{\subsect}[1]{\subsection{#1}}
\newcommand{\subsubsect}[1]{\subsubsection{#1}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\newtheorem{proposition}{Proposition}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\k{\omega}
\def\>#1{{\bf #1}}
\def\<{\!<\!}
\def\1{\'{\i}}
\def\R{{\rm I\kern-.2em R}}
\def\C{{\rm I\kern-.5em C}}
\parskip=1ex
\oddsidemargin= 0.5cm
\evensidemargin= 0.5cm
\parindent=1.5em
\textheight=23.0cm
\textwidth=15cm
\topmargin=-1.0cm
\begin{document}
\thispagestyle{empty}
\hfill \today
\ \vspace{2cm}
\begin{center}
{\LARGE{\bf{Casimir invariants for the complete}}}
{\LARGE{\bf{family of quasi-simple orthogonal }}}
{\LARGE{\bf{algebras}}}
\end{center}
\bigskip\bigskip
\begin{center}
Francisco J. Herranz$^{\dagger}$ and Mariano Santander$^{\ddagger}$
\end{center}
\begin{center}
\em
{$^{\dagger}$ Departamento de F\1sica, Universidad de Burgos
\\ E-09006, Burgos, Spain}

{$^{\ddagger}$ Departamento de F\1sica Te\'orica, Universidad de
Valladolid \\ E-47011, Valladolid, Spain}
\end{center}
\rm
\bigskip\bigskip
\begin{abstract}
A complete choice of generators of the center of the enveloping algebras
of real quasi-simple Lie algebras of orthogonal type, for arbitrary
dimension, is obtained in a unified setting. The results
simultaneously include the well known polynomial invariants of the
pseudo-orthogonal algebras
$so(p,q)$, as well as the Casimirs for many non-simple algebras such as
the inhomogeneous $iso(p,q)$, the Newton--Hooke and Galilei type, etc.,
which are obtained by contraction(s) starting from the simple algebras
$so(p,q)$. The dimension of the center of the enveloping algebra of a
quasi-simple orthogonal algebra turns out to be the same as for the
simple $so(p,q)$ algebras from which they come by
contraction. The structure of the higher order invariants is given in a
convenient ``pyramidal" manner, in terms of certain sets of
``Pauli--Lubanski" elements in the enveloping algebras. As an example
showing this approach at work, the scheme is applied to recovering the
Casimirs for the (3+1) kinematical algebras. Some prospects on the
relevance of these results for the study of expansions are also given.
\end{abstract}
\newpage
\sect{Introduction}
The role of Casimir (or polynomial) invariants of Lie algebras is rather
important in physics as well as in mathematics. They generate the center
of the universal enveloping algebra $U(g)$ of $g$. Physically, in any
theory with a symmetry algebra, they appear as related to conserved
quantities, as they commute with all generators. The work of Racah
\cite{Racah} solved the problem of obtaining the Casimir invariants
associated to a simple Lie algebra and, in particular, Gel'fand
explicitly found a particular basis of the center of the enveloping
algebra of $so(N+1)$ \cite{Gelfand}; the case for the pseudo-orthogonal
Lie algebras $so(p,q)$ (in fact, for all simple classical algebras) has
been also dealt with in explicit form by Perelomov and Popov
\cite{PerelPop}. General results concerning both simple and non-simple
Lie algebras have been established; for instance, a formula giving the
number of primitive independent Casimir operators of any Lie algebra can
be found in
\cite{Beltrametti}, and a complete description of the theory of
polynomial and/or rational invariants appears in \cite{Abellanas}.
These results allow one to deduce all invariants related to any particular
Lie algebra on a case-by-case basis, but do not directly give a general
perspective of the structure of the invariants associated to a complete
family of ``neighbour" algebras, as for instance provided
by a given Lie algebra and (some of) its contractions: the general problem
of relating the universal enveloping algebra of a given Lie algebra and one
of its possible contractions is not yet solved, though complete results
are available for special cases (see for instance, \cite{AbdelMalek}
where Casimirs for a large family of contractions of $sl(3, \C)$ are
given, or \cite{Montigny} where the problem of behaviour of bilinear invariants
under contraction is studied within the graded contraction approach).
This problem has definite interest in physics, where
contractions are related to some kind of ``approximation", and
understanding how invariants behave under contraction and under the
``inverse" expansion or deformation process is an illuminating aspect of
the theory.
The aim of this paper is to obtain, within such a
``simultaneous" approach, all the Casimir invariants for every Lie algebra
in a rather large family, the so-called quasi-simple or
Cayley--Klein (CK) algebras of orthogonal type \cite{Ros,Dubna96}. This
family includes the real simple pseudo-orthogonal Lie algebras $so(p,q)$
of the Cartan series $B_l$ and $D_l$ as well as many
non-simple Lie algebras which can be obtained by
contracting the former ones. The complete family of
quasi-simple orthogonal algebras can be obtained
starting from the compact algebra
$so(N+1)$ in two different ways. One possibility is to use a ``formal
transformation" which introduces numbers outside the real field
(either complex, double or dual (Study) numbers)
\cite{Grom}, and another makes use of the theory of graded
contractions \cite{MonPat,MooPat}, without leaving the real field.
Adopting this last point of view, it is shown that a particular
solution of the ${\bf Z}_2^{\otimes N}$ graded contractions of
$so(N+1)$ leads to the CK algebras as an $N$-parametric family of
real Lie algebras denoted $so_{\omega_1,\dots,\omega_N}(N+1)$
\cite{Grad}. The Lie algebra structure of the above family together
with a listing of its most interesting members are
briefly described in section 2.
When all $\omega_a$ are different from zero,
${so}_{\omega_1,\dots,\omega_N}(N+1)$ is a simple algebra (isomorphic to
$so(p,q)$ with $p+q=N+1$), whose rank is
$l=[\frac{N+1}{2}]$ (square brackets denoting here, as usual, the integer
part). The dimension of the center of its universal enveloping algebra
equals the rank $l$ of the algebra, and it is generated by a set of
homogeneous polynomials (Casimir operators) of orders $2, 4,\dots,
2[\frac{N}{2}]$, and an additional Casimir of order $l$ when $N+1$ is even.
We present in section 3 the explicit structure of the Casimir invariants
corresponding to any algebra in the family $so_{\omega_1,\dots,\omega_N}(N+1)$.
These invariants are deduced starting from
the original approach of Gel'fand, with the necessary modifications
introduced in order to get expressions which simultaneously cover all
algebras in the family
$so_{\omega_1,\dots,\omega_N}(N+1)$, whether the constants $\omega_a$ are
different from zero or not. This means that the behaviour of these
Casimirs upon any contraction $\omega_a\to 0$ is built into the
formalism, and they do not require the rescaling which would otherwise be needed
when the contraction is performed in the In\"on\"u--Wigner sense. Every
Casimir we obtain is non-trivial for any contracted
algebra (whether or not the constants $\omega_a$ are different from
zero); furthermore, these constitute a complete set of Casimirs
for CK algebras.
The main tool is provided by some elements in
the enveloping algebra, labeled by an even number $2s$ of indices,
$W_{a_1a_2\dots a_sb_1b_2\dots b_s}$, which are homogeneous of
order $s$ in the generators. In the case of the (3+1) Poincar\'e
algebra, the components of the Pauli--Lubanski vector (whose square
is the fourth-order Casimir) are in fact $W$-symbols with four
indices. In this way, the Casimir invariants are presented in a
pyramidal intrinsic form since each
$W_{a_1a_2\dots a_sa_{s+1}b_1b_2\dots b_sb_{s+1}}$ can be written
in terms of $W$'s with two less indices
$W_{a_1a_2\dots a_sb_1b_2\dots b_s}$, and ultimately, in terms of
$W$-symbols with two indices, which are simply the
generators themselves.
The problem of giving explicit expressions in terms of
generators for Casimirs in CK algebras has been also approached by
Gromov \cite{Gromov} by applying the above mentioned formal
transformation to the Casimir invariants of $so(N+1)$ obtained by
Gel'fand: those expressions should be equivalent to the ones we
obtain (as giving a possibly different basis for the center of the
enveloping algebra), but the explicit introduction of the
$W$'s makes the choice in this paper much simpler and more easily tractable.
The general expressions for Casimirs written directly in terms of
generators, as in \cite{Gromov}, are overall more cumbersome to apply to
specific Lie algebras than the ones involving
$W$'s, especially when $N$ increases.
The results are illustrated in section 4 by writing the general
expressions of the invariants associated to
$so_{\omega_1,\dots,\omega_N}(N+1)$ for $N=2,3,4,5$; in particular, for
$N=4$ we focus on the (3+1) kinematical algebras \cite{BLL} thus
obtaining a global view of the limit transitions among their
corresponding Casimir operators. The way of getting the Casimirs
in the Minkowski space starting from those in the DeSitter
space by letting the universe ``radius" $R\to \infty$
is well known; this familiar example appears in our scheme as a
rather particular case, yet it may facilitate grasping the scope of the
results we obtain, which includes a much larger family of algebras than
the well-known kinematical ones.
We make in the Conclusions some brief comments on the role of these
results for the study of expansions.
\sect{The family of quasi-simple orthogonal algebras}
Consider the real Lie algebra $so(N+1)$ whose $\frac 12 N(N+1)$
generators $\Omega_{ab}$ $(a,b=0,1,\dots, N, \ a<b)$ satisfy the following
non-vanishing Lie brackets:
\begin{equation}
[\Omega_{ab}, \Omega_{ac}] = \Omega_{bc} \qquad
[\Omega_{ab}, \Omega_{bc}] = -\Omega_{ac} \qquad
[\Omega_{ac}, \Omega_{bc}] = \Omega_{ab} \qquad a<b<c .
\label{aa}
\end{equation}
Through a ${\bf {Z}}_2^{\otimes N}$ graded contraction process a
family of contracted real Lie algebras can be deduced from
$so(N+1)$. The general solution has been given in
\cite{SolGenGrad}; it includes from the simple Lie algebras
$so(p,q)$ to the abelian algebra of the same dimension. For reasons
which will become clear shortly, we restrict here to a particular
subfamily \cite{Grad}, whose members have been called quasi-simple
algebras \cite{Ros} because they are very ``near" to the simple
ones. They depend on
$N$ real coefficients $\omega_1,\dots,\omega_N$ and the generic member of this
family will be denoted $so_{\omega_1,\dots,\omega_N}(N+1)$. Their non-zero
commutators are given by
\begin{equation}
[\Omega_{ab}, \Omega_{ac}] = \omega_{ab}\Omega_{bc} \quad\
[\Omega_{ab}, \Omega_{bc}] = -\Omega_{ac} \quad\
[\Omega_{ac}, \Omega_{bc}] = \omega_{bc}\Omega_{ab} \quad\ a<b<c
\label{ab}
\end{equation}
without sum over repeated indices. Note that all Lie brackets
involving four different indices $a$, $b$, $c$, $d$ as $[\Omega_{ab},
\Omega_{cd}]$ are equal to zero.
The two-index coefficients $\omega_{ab}$ are written in terms of the
$N$ basic $\omega_{a}$ by means of:
\begin{equation}
\omega_{ab}=\omega_{a+1}\omega_{a+2}\cdots\omega_b \qquad a,b=0,1,\dots,N \qquad a<b,
\label{ac}
\end{equation}
therefore
\begin{eqnarray}
&&\omega_{a-1\, a}=\omega_a \qquad a=1,\dots,N \label{ad}\\
&&\omega_{ac}=\omega_{ab}\omega_{bc} \qquad a<b<c.
\label{ae}
\end{eqnarray}
Each coefficient $\omega_a$ can be reduced to $+1$, $-1$ or zero by a simple
rescaling of the initial generators; then the family
$so_{\omega_1,\dots,\omega_N}(N+1)$ embraces
$3^N$ Lie algebras called CK algebras of orthogonal type or quasi-simple
orthogonal algebras. Some of them can be isomorphic; a useful isomorphism
is:
\begin{equation}
so_{\omega_1,\omega_2,\dots,\omega_{N-1},\omega_N}(N+1)\simeq
so_{\omega_N,\omega_{N-1},\dots,\omega_2,\omega_1}(N+1).
\label{af}
\end{equation}
The algebra $so_{\omega_1,\dots,\omega_N}(N+1)$ has a (vector) representation by
$(N+1)\times (N+1)$ real matrices, given by
\begin{equation}
\Omega_{ab}=-\omega_{ab}e_{ab}+e_{ba}
\label{ccd}
\end{equation}
where $e_{ab}$ is the matrix with a single non-zero entry, 1,
in the row $a$ and column $b$. In this realization, any element $X \in
so_{\omega_1,\dots,\omega_N}(N+1)$ satisfies the equation:
\begin{equation}
{}^t X \, I_\k + I_\k\, X=0,
\label{cc}
\end{equation}
where $I_\k$ is the diagonal matrix
\begin{equation}
I_\k=\,\mbox{diag}\, (+,\omega_{01},\omega_{02},\dots,\omega_{0N})=
\,\mbox{diag}\, (+,\omega_{1},\omega_1\omega_{2},\dots,\omega_1\cdots\omega_{N})
\label{ag}
\end{equation}
and ${}^t X$ means the transpose matrix.
We state this property by saying that $X$ is an
$I_\k$-antisymmetric matrix; when all $\omega_a=1$, this reduces to the
standard antisymmetry for the generators of $so(N+1)$.
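As an illustration, for $N=2$ the realization (\ref{ccd}) gives the three generators of $so_{\omega_1,\omega_2}(3)$:
$$
\Omega_{01}=\left(\begin{array}{ccc} 0&-\omega_1&0\\ 1&0&0\\ 0&0&0\end{array}\right)\qquad
\Omega_{02}=\left(\begin{array}{ccc} 0&0&-\omega_1\omega_2\\ 0&0&0\\ 1&0&0\end{array}\right)\qquad
\Omega_{12}=\left(\begin{array}{ccc} 0&0&0\\ 0&0&-\omega_2\\ 0&1&0\end{array}\right)
$$
which satisfy $[\Omega_{01},\Omega_{02}]=\omega_1\Omega_{12}$, $[\Omega_{01},\Omega_{12}]=-\Omega_{02}$ and $[\Omega_{02},\Omega_{12}]=\omega_2\Omega_{01}$, in agreement with (\ref{ab}). The choices $(\omega_1,\omega_2)=(+,+)$, $(0,+)$ and $(-,+)$ reproduce $so(3)$, the Euclidean algebra $iso(2)$ and $so(2,1)$, respectively.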
The CK algebras $so_{\omega_1,\dots,\omega_N}(N+1)$ are the Lie algebras of
the motion groups of $N$-dimensional symmetrical homogeneous spaces
${\cal X}_0$:
\begin{equation}
{\cal X}_0\equiv SO_{\omega_1,\dots,\omega_N}(N+1)/SO_{\omega_2,\dots,\omega_N}(N),\quad
\label{ah}
\end{equation}
where the subgroup $H_0\equiv SO_{\omega_2,\dots,\omega_N}(N)$ is generated by the
Lie subalgebra $h_0=\langle \Omega_{ab},\
a,b=1,\dots,N\rangle$. Each space ${\cal X}_0$ has
constant curvature equal to $\omega_1$ and its
principal metric can be reduced to the form $\,\mbox{diag}\,
(+,\omega_{2},\omega_2\omega_{3},\dots,\omega_2\cdots\omega_{N})$ at each point.
In the sequel we identify the most interesting Lie algebras
appearing within $so_{\omega_1,\dots,\omega_N}(N+1)$ according to the
cancellation of some $\omega_a$ \cite{Tesis,extension}. In particular, the
kinematical algebras \cite{BLL} associated to different models of
spacetime are CK algebras. In the list below, when we explicitly
say that some coefficient is equal to zero it will be
understood that the remaining ones are not. It is remarkable that each case
$\omega_a=0$ can be regarded as an
In\"on\"u--Wigner contraction limit, where some parameter
$\varepsilon_a\to 0$ \cite{Grad,IW}.
\noindent
{\bf (1)} $\omega_a\ne 0$ $\forall a$. They are the pseudo-orthogonal
algebras $so(p,q)$ with $p+q=N+1$ of the Cartan series $B_l$ or
$D_l$. The quadratic form invariant under the fundamental vector
representation is given by the matrix
$I_\k$ (\ref{ag}).
\noindent
{\bf (2)} $\omega_1=0$. They are inhomogeneous pseudo-orthogonal
algebras $iso(p,q)$ with $p+q=N$ which have a semidirect sum
structure:
$$
so_{0,\omega_2,\dots,\omega_N}(N+1)
\equiv t_N\odot so_{\omega_2,\dots,\omega_N}(N)\equiv iso(p,q) .
$$
The ``signature" of the metric invariant under $so_{\omega_2,\dots,\omega_N}(N)$ is
$(+,\omega_{12},\omega_{13},\dots,\omega_{1N})$.
The most interesting cases are the Euclidean algebra $iso(N)$ which
is recovered once for
$(\omega_1,\omega_2,\dots,\omega_N)={(0,+,\dots,+)}$,
and the Poincar\'e algebra $iso(N-1,1)$ which appears several
times, for instance, for
$(\omega_1,\omega_2,\dots,\omega_N)=\{(0,-,+,\dots,+)$,
$(0,+,\dots,+,-)$, $(0,+,\dots,+,-,-,+,\dots,+)$,
$(0,-,-,+,\dots,+)$, $(0,+,\dots,+,-,-)\}$.
From the isomorphism (\ref{af}) is clear that the CK algebras with
$\omega_N=0$ are also of this kind and similarly for the next types.
\noindent
{\bf (3)} $\omega_1=\omega_2=0$. They are twice
inhomogeneous pseudo-orthogonal algebras $iiso(p,q)$ with $p+q=N-1$:
$$
so_{0,0,\omega_3,\dots,\omega_N}(N+1)\equiv t_N\odot \left( t_{N-1}\odot
so_{\omega_3,\dots,\omega_N}(N-1)\right)\equiv iiso(p,q) .
$$
The signature of $so_{\omega_3,\dots,\omega_N}(N-1)$ is
$(+,\omega_{23},\omega_{24},\dots,\omega_{2N})$. Hence the Galilean algebra
$iiso(N-1)$ is associated to $(0,0,+\dots,+)$.
\noindent
{\bf (4)} $\omega_1=\omega_N=0$.
They are $ii'so(p,q)$ algebras with $p+q=N-1$:
$$
so_{0,\omega_2,\dots,\omega_{N-1},0}(N+1)\equiv t_N\odot({t'}_{\!N-1}\odot
so_{\omega_2,\dots,\omega_{N-1}}(N-1))\equiv ii'so(p,q),
$$
where $so(p,q)$ acts on $t_N$ through the vector representation
while it acts on ${t'}_{ N-1}$ through the contragredient of the
vector representation. Thus, $ii'so(N-1)$ is the Carroll algebra
with coefficients $(0,+,\dots,+,0)$ \cite{BLL}.
\noindent
{\bf (5)} $\omega_a=0$, $a\neq 1,N $.
They are the $t_r(so(p,q)\oplus so(p',q'))$ algebras \cite{WB}. In
particular, for $\omega_2=0$ we have $t_{2N-2} (so(p,q)\oplus
so(p',q'))$ with $p+q=N-1$ and $p'+q'=2$, which include the
expanding and oscillating Newton--Hooke algebras for $q=0$
\cite{BLL}.
\noindent
{\bf (6)}
When {\em} all coefficients $\omega_a=0$ we find the flag space algebra
$so_{0,\dots,0}(N+1)\equiv i\dots iso(1)$ \cite{Ros}.
\sect{The Casimir invariants}
We first recall the necessary definitions and tools
\cite{Beltrametti,Abellanas}. Then we summarize the approach and the
results of Gel'fand for $so(N+1)$ \cite{Gelfand}. Afterwards we
compute the Casimir invariants for the CK algebras
$so_{\omega_1,\dots,\omega_N}(N+1)$; when there is no risk of confusion we
shall denote this Lie algebra simply as $g$.
\subsect{Definition of polynomial invariants}
Let $Ug$ the {\it enveloping algebra} of $g$ generated by all
polynomials in $\Omega_{01},\dots,\Omega_{N-1N}$ and $S$ the {\it
symmetric algebra} of $g$ isomorphic to
${\R}[\alpha_{ab};\,a,b=0,1,\dots,N;\,a<b]$, this is, the ring of
polynomials in $\frac 12 {N(N+1)}$ commutative variables $\alpha_{ab}$.
A generic polynomial is denoted as
$p=p(\alpha_{01},\dots,\alpha_{N-1N})$.
The {\it adjoint action}, $\mbox{ad}\, \Omega_{ab}: g\longrightarrow g$
\begin{equation}
\mbox{ad}\, \Omega_{ab}(\Omega_{cd})=
[\Omega_{ab},\Omega_{cd}] ,
\label{ba}
\end{equation}
is extended to an ``adjoint action" of $g$ on
$Ug$ and also to another action on $S$:
\begin{equation}
\mbox{ad}\, \Omega_{ab}\ :\ u\in Ug \longrightarrow
[\Omega_{ab},u]\equiv \Omega_{ab} u- u
\Omega_{ab}\in Ug,
\label{bb}
\end{equation}
\begin{equation}
\mbox{ad}\, \Omega_{ab}\ :\ p=p(\alpha_{01},\dots,\alpha_{N-1N})\in S
\longrightarrow
{{{\cal O}}}_{ab}(p)\equiv { \sum_{
c,d,m,n=0}^N} C_{ab,cd}^{mn}\alpha_{mn}
\frac{\partial p}{\partial \alpha_{cd}} \in S,
\label{bc}
\end{equation}
where $C_{ab,cd}^{mn}$ are the structure constants of $g$. We are
using the pairs $ab$ as indices for the basis elements of $g$, so
the conditions $a<b$, $c<d$ and $m<n$ will be assumed without
saying. The structure constants themselves, read from
(\ref{ab}), are:
\begin{equation}
C_{ab,ac}^{mn}=\delta_{mb}\delta_{nc}\omega_{ab}, \quad
C_{ab,bc}^{mn} = - \delta_{ma}\delta_{nc}, \quad
C_{ac,bc}^{mn} =\delta_{ma}\delta_{nb}\omega_{bc}, \quad a<b<c .
\label{bd}
\end{equation}
The invariants in $Ug$ and $S$ under the adjoint action of $g$ are
the following subsets:
\begin{eqnarray}
U^I g&\equiv&\{u\in U g\ |\ [\Omega_{ab},u]=0,\ \forall
\Omega_{ab}\in g\}\subset
U g,\label{be}\\
S^I&\equiv&\{p\in S\ |\ {{{\cal O}}}_{ab}(p)=0,\ \forall
\Omega_{ab}\in g\}\subset S,
\label{bf}
\end{eqnarray}
and the elements of $U^I g$ are called
{\em polynomial} or {\em Casimir invariants} of $g$.
According to the general results described in
\cite{Beltrametti,Abellanas} the two main steps to obtain the Casimir
invariants are:
\noindent
$\bullet$ To compute the subset $S^I$ of $S$.
\smallskip
\noindent
$\bullet$ To apply to each element of $S^I$ the canonical
mapping $\phi: S\to U g$ defined for any monomial by symmetrization:
\begin{equation}
\phi(\alpha_{a_1b_1}\dots\alpha_{a_rb_r})=\frac 1{r!}\sum_{\pi\in
\Pi_r}\Omega_{\pi(a_1b_1)}\cdots \Omega_{\pi(a_rb_r)},
\label{bg}
\end{equation}
where $\Pi_r$ is the group of permutations on $r$ items, and extended to
$S$ by linearity.
An upper bound for the number of
independent invariants for $g$ is provided by:
\noindent
{\it Proposition 1} \cite{Abellanas}. The maximal number of algebraically
independent Casimir invariants $\tau$ associated to any algebra
$g$ is
\begin{equation}
\tau \le {\mbox{dim}}\, (g) -r(g),
\label{bh}
\end{equation}
where ${\mbox{dim}}\, (g)$ is the dimension of $g$, and $r(g)$ is
the rank of the antisymmetric matrix $M_g$ whose elements are
\begin{equation}
(M_g)_{ab,cd}={\sum_{m,n=0}^N}
C_{ab,cd}^{mn}\alpha_{mn} \qquad a<b, \ c<d, \ m<n.
\label{bi}
\end{equation}
For the simple Lie algebras $so(p,q), \ p+q=N+1$, the number $\tau$ is
equal to the rank of the algebra, $\tau=[\frac{N+1}{2}]$. In this case
the upper bound saturates the inequality, and this can be checked by
computing the rank $r(g)$ of the matrix (\ref{bi}). However, it is not
necessary that $g$ be simple in order to have the equality $\tau =
{\mbox{dim}}\,(g) -r(g)$. For instance, the
Poincar\'e algebra $iso(3,1)$ has two algebraically (quadratic and
fourth-order) independent Casimirs, and in this case the equality also
holds; the same happens for all the inhomogenous pseudo-orthogonal
algebras
${iso}(p,q)$ which appear in the CK family when $\omega_1=0$ or $\omega_N=0$ and
the remaining ones are different from zero.
We will show that equality holds in (\ref{bh}) for all CK algebras, so
although the CK set contains non-simple algebras, all members in each
family have the same number of algebraically independent Casimirs.
This is the main reason to restrict in the paper from the general set
of graded contractions of $so(N+1)$ to the subfamily of CK algebras. The
property of having exactly $[\frac{N+1}{2}]$ algebraically independent
Casimirs justifies the {\em quasi-simple} name allocated to them. This
property is no longer true for other contractions of
$so(N+1)$ beyond the CK family, as the extreme case of the abelian
algebra, which has as many primitive Casimirs as generators, clearly
shows.
\subsect{The Gel'fand method}
Instead of solving the set of differential equations (\ref{bf}) in
order to get the elements of $S^I$, Gel'fand considered the
following antisymmetric matrix associated to $so(N+1)$:
\begin{equation}
T=\left(\begin{array}{cccccc}
0 & \alpha_{10} & \alpha_{20} &\dots & \alpha_{ N-1\,0} & {\alpha_{N0}} \cr
\alpha_{01} & 0 & \alpha_{21} &\dots & \alpha_{ N-1\,1} & { \alpha_{N1}} \cr
\alpha_{02} & \alpha_{12} & 0 &\dots & \alpha_{N-1\,2} & { \alpha_{N2}} \\
\vdots & \vdots & \vdots &\ddots& \vdots & \vdots \cr
\alpha_{0\,N-1}&\alpha_{1\,N-1}&\alpha_{2\,N-1}&\dots& 0 & \alpha_{N\,N-1}\cr
\alpha_{0 N}& \alpha_{1 N}& \alpha_{2 N} &\dots & \alpha_{N-1\,N} & 0
\end{array}\right)
\label{bj}
\end{equation}
where $\alpha_{ab}=-\alpha_{ba}$. He obtained the Casimir invariants of
$so(N+1)$ from the coefficients of the characteristic polynomial of the
matrix $T$:
\begin{equation}
\det\big(T-\lambda \,{\bf I} \big)=0 ,
\label{bk}
\end{equation}
where ${\bf I}$ is the $(N+1)\times (N+1)$ identity matrix.
Due to the structure of the matrix $T$, these coefficients are
sums of all minors of the same order associated to the main
diagonal of $T$; the last coefficient is of course the determinant of
$T$ \cite{Gelfand}. It turns out that this determinant is equal to
zero when $N$ is even $N=2l$ , and it is a perfect square of an
homogeneous expression of order $l$ in the variables $\alpha_{ab}$ when
$N$ is odd $N=2l-1$. The Casimir themselves are obtained through
the replacement $\alpha_{ab}\to
\Omega_{ab}$ and further symmetrization on the variables $\alpha_{ab}$
in these coefficients.
When $N=2l$ is even, Gel'fand obtained $l$
such invariants, ${\cal C}_1, \dots, {\cal C}_s, \dots, {\cal C}_l$ for
$so(N+1)$ which are homogeneous polynomials of order $2s$ in the generators:
\begin{equation}
{\cal C}_s=\sum_{i_1,i_2,\dots,i_{2s-1},i_{2s} =0}^{N}
\Omega_{i_1i_2}\Omega_{i_2i_3}\dots
\Omega_{i_{2s-1}i_{2s}} \Omega_{i_{2s}i_1} \qquad s=1,2,\dots,l.
\label{bl}
\end{equation}
When $N$ is odd, $N=2l-1$, besides (\ref{bl}) there is another
invariant coming from the determinant of $T$, which is a
perfect square of an homogeneous expression in the generators of order
$l$ denoted simply as ${\cal C}$:
\begin{equation}
{\cal C} =
\sum_{i_0,i_1,\dots,i_{N}=0}^{N}\varepsilon_{i_0i_1\dots i_{N}}
\Omega_{i_0i_1}\Omega_{i_2i_3}\dots \Omega_{i_{N-1}i_N},
\label{bm}
\end{equation}
where $\varepsilon_{i_0i_1\dots i_{N}}$ is the completely
antisymmetric unit tensor. In both expressions
the relation $\Omega_{ab}=-\Omega_{ba}$ is understood.
\subsect{The case of CK algebras}
We now proceed to implement this scheme for the CK algebras.
As a first step we write the analogous of $T$
(\ref{bj}) {\em only} for the pseudo-orthogonal $so(p,q)$ algebras
{\em with all $\omega_a\ne 0$} (but {\em without} reducing them to $\pm 1$),
and compute the minors separately. Afterwards, we arrange the details, by
introducing some factors depending on the coefficients $\omega_a$ in such way
that all contractions $\omega_a\to 0$ are always well defined and do not
originate a trivial result for any of the Casimirs found.
Consider first the CK algebras with all $\omega_a \ne 0$.
Recall that the generators of $so_{\omega_1,\dots,\omega_N}(N+1)$ have
been taken as $\Omega_{ab}$ only for $a<b$. If all $\omega_a \ne 0$ we
can extend this set and introduce the (linearly dependent)
new generators by defining $\Omega_{ba}$ with $a<b$, and their
corresponding $\alpha_{ba}$ as follows:
\begin{equation}
\mbox{if}\quad a<b \qquad \Omega_{ba}:= -\frac 1{\omega_{ab}}\Omega_{ab}\qquad
\alpha_{ba}:= -\frac 1{\omega_{ab}}\alpha_{ab}
\label{ca}
\end{equation}
so that the commutation relations (\ref{ab}) can be written in the
standard form
\begin{equation}
[\Omega_{ab},\Omega_{lm}]=\delta_{am}\Omega_{lb}-\delta_{bl}\Omega_{am}+\delta_{bm}\omega_{lm}
\Omega_{al} +\delta_{al}\omega_{ab} \Omega_{bm}.
\label{etiqueta}
\end{equation}
which is the familiar form of the commutation relations for an $so(p,q)$
algebra, with the non-degenerate metric tensor (\ref{ag}). In this
special case
$\omega_a\ne 0$ we associate to $so_{\omega_1,\dots,\omega_N}(N+1)$ the
matrix (\ref{bj}) denoted $T_{\omega_1,\dots,\omega_N}$ which now reads
\begin{equation}
T_{\omega_1,\dots,\omega_N}=\left(\begin{array}{cccccc}
0& -\frac {\alpha_{01}}{\omega_{01}} &-\frac {\alpha_{02}}{\omega_{02}}
&\dots&-\frac {\alpha_{0N-1}}{\omega_{0N-1}} &
-\frac {\alpha_{0N}}{\omega_{0N}} \cr
\alpha_{01}&0&-\frac {\alpha_{12}}{\omega_{12}}&\dots &-\frac
{\alpha_{1N-1}}{\omega_{1N-1}} & -\frac {\alpha_{1N}}{\omega_{1N}} \cr
\alpha_{02}&\alpha_{12}&0&\dots&-\frac {\alpha_{2N-1}}{\omega_{2N-1}}&
-\frac {\alpha_{2N}}{\omega_{2N}}
\\ \vdots &\vdots&\vdots&\ddots&\vdots&\vdots \cr
\alpha_{0\,N-1}&\alpha_{1\,N-1}&\alpha_{2\,N-1}&\dots&0&
-\frac {\alpha_{N-1\,N}}{\omega_{N-1\,N}} \\
\alpha_{0 N}&\alpha_{1 N}&\alpha_{2 N}&\dots &\alpha_{N-1\,N}&0
\end{array}\right) .
\label{cb}
\end{equation}
This matrix satisfies the property
\begin{equation}
T_{\omega_1,\dots,\omega_N} \, I_\k + I_\k\, {}^t T_{\omega_1,\dots,\omega_N}=0,
\label{ccn}
\end{equation}
where $I_\k$ is the diagonal matrix (\ref{ag}). Recalling (\ref{cc}), we
say $T_{\omega_1,\dots,\omega_N}$ is an $I_\k$-antisymmetric matrix.
When {\em all} the constants $\omega_a$ are different from zero, it is clear
that the Casimir invariants for
$so_{\omega_1,\dots,\omega_N}(N+1)$ are the
coefficients of the characteristic polynomial coming from
equation (\ref{bk}) where $T$ is replaced
by $T_{\omega_1,\dots,\omega_N}$. In order to get them we have to
calculate the determinant of a generic diagonal submatrix, which will have
the same structure as (\ref{cb}) but with a non-consecutive subset of
indices, say
$T_{\omega_{i_1i_2},\dots,\omega_{i_{K-1}i_K}}$. Due to property
(\ref{ccn}) it is easy to show that for an odd $K$ the
determinant is always zero. Therefore, only determinants for even
$K=2s$ might be different from zero. For future convenience, we will denote
any arrangement of $2s$ indices taken from $012\dots N$ in
increasing order as $a_1\<a_2\<\dots\<a_s\<b_1\<b_2\<\dots\<b_s$.
We now define some symbols of $2s$ indices ${{\cal W}}_{a_1a_2\dots
a_sb_1b_2\dots b_s}$, for $s=1,2,\dots, l$ as:
\begin{equation}
{{\cal W}}^2_{a_1a_2\dots a_sb_1b_2\dots b_s}
:=\omega_{a_1b_s}\omega_{a_2b_{s-1}}\dots\omega_{a_sb_1}
\det\left[
T_{\omega_{a_1a_2},\dots,\omega_{a_sb_1},\dots,\omega_{b_{s-1}b_s}}
\right].
\label{cd}
\end{equation}
This definition is justified since the r.h.s.\ of (\ref{cd}) is a
perfect square. The set of coefficients $\omega_{ab}$ multiplying the
determinant assures that the final expressions
we are going to obtain are non-trivial even after the limits $\omega_a\to 0$.
Inserting these $\omega$ factors turns out to be equivalent to the
usual rescaling made in the contraction of Casimir invariants by means
of an In\"on\"u--Wigner contraction.
The $2s$-index ${{\cal W}}$-symbol is given in terms of the
$(2s-2)$-index ${{\cal W}}$-symbol through
\begin{eqnarray}
&& {{\cal W}}_{a_1a_2\dots a_sb_1b_2\dots b_s} =
{\sum_{\mu=1}^{s}} (-1)^{\mu+1} \alpha_{a_\mu b_s}
{{\cal W}}_{a_1a_2\dots\widehat{a_\mu}\dots
a_sb_1b_2\dots\widehat{b_s}} \cr
&& \qquad\qquad\qquad\qquad
+ \ {\sum_{\nu=1}^{s-1}} (-1)^{s+\nu+1} \omega_{a_sb_\nu} \alpha_{b_\nu b_s}
{{\cal W}}_{a_1a_2\dots
a_sb_1b_2\dots\widehat{b_\nu}\dots\widehat{b_s}}
\label{da}
\end{eqnarray}
where the ${{\cal W}}$-symbols in the r.h.s.\ of the equation
have $2s-2$ indices, those obtained by {\it removing} the two
indices marked with a caret $a_\mu,b_s$ or $b_\nu,b_s$ from the set of
$2s$ indices ${a_1a_2\dots a_sb_1b_2\dots b_s}$.
The ${{\cal W}}$-symbols give rise to the elements of $S^I$ (\ref{bf})
and the canonical mapping $\phi$ (\ref{bg}) transforms them
into invariants of the enveloping CK algebra (\ref{be}). The
symmetrization implied by action of $\phi$ on ${{\cal W}}$ reduces to a simple
substitution $\alpha_{ab}\to \Omega_{ab}$ since all generators appearing
in the products of the ${{\cal W}}$-symbols commute. Once the substitution of
the variables $\alpha_{ab}$ by the generators
$\Omega_{ab}$ has been performed, we will denote $W := \phi({{\cal W}})$. Now
$W_{ab},\ W_{a_1a_2b_1b_2},\
\dots,$ are elements in the universal enveloping algebra of the CK
Lie algebra. Their structure can be most clearly presented in a recursive way.
For $a<b$, let:
\begin{equation}
W_{ab} := \Omega_{ab},
\end{equation}
then $W$-symbols with four indices $a_1 < a_2 < b_1 < b_2$ are
given in terms of those with two by:
\begin{equation}
W_{a_1a_2b_1b_2} =
\Omega_{a_1 b_2} W_{a_2b_1}
- \Omega_{a_2 b_2} W_{a_1b_1}
+ \omega_{a_2b_1} \Omega_{b_1 b_2} W_{a_1a_2}
\label{dan}
\end{equation}
and further $W$-symbols with six, eight, \dots, $2s$ indices,
$W_{a_1a_2\dots a_sb_1b_2\dots b_s}$ (with $a_1 < a_2 < \dots <
a_s < b_1 < b_2 < \dots <b_s$), are given in terms of those with
two less indices by means of the relations:
\begin{eqnarray}
&& W_{a_1a_2\dots a_sb_1b_2\dots b_s} =
{\sum_{\mu=1}^{s}} (-1)^{\mu+1} \Omega_{a_\mu b_s}
W_{a_1a_2\dots\widehat{a_\mu}\dots
a_sb_1b_2\dots\widehat{b_s}} \cr
&& \qquad\qquad\qquad\qquad
+ \ {\sum_{\nu=1}^{s-1}} (-1)^{s+\nu+1} \omega_{a_sb_\nu} \Omega_{b_\nu
b_s}
W_{a_1a_2\dots
a_sb_1b_2\dots\widehat{b_\nu}\dots\widehat{b_s}}
\label{daa}
\end{eqnarray}
until we end up with $W$-symbols with $2l$ indices.
Through the use of these $W$'s we can produce
expressions for the Casimir invariants of the CK
algebras ${so}_{\omega_1,\dots,\omega_N}(N+1)$. The key to this adaptation
is to profit from the presence of the constants $\omega_a$.
When contraction is dealt with through an
In\"on\"u--Wigner type contraction, a suitable rescaling of the
non-contracted Casimir by some power of the contraction parameter is
required to give a non-trivial well defined Casimir for the contracted
algebra after the contraction limit. This is made unnecessary in our
approach, which has this rescaling automatically built-in.
We now give the expressions, in terms of these $W$'s, for
the $[\frac{N+1}{2}]$ Casimir operators in the general CK Lie
algebra ${so}_{\omega_1,\dots,\omega_N}(N+1)$:
\noindent
{\it Theorem 2.} The $l=[\frac{N+1}{2}]$ independent polynomial
Casimir operators of the CK Lie algebra
${so}_{\omega_1,\dots,\omega_N}(N+1)$ can be written as:
\noindent
$\bullet$ $[\frac{N}{2}]$ invariants ${\cal C}_s$, $s=1, \dots, [\frac{N}{2}]$
of order $2s$. We give the first, the second, and then the
general expression:
\begin{equation}
{\cal C}_1 = {\sum_{a_1,b_1=0 \atop
a_1<b_1}^{N}}
\omega_{0a_1}
\omega_{b_1 N}
W_{a_1b_1}^2
\label{dbi}
\end{equation}
\begin{equation}
{\cal C}_2 = {\sum_{a_1,a_2,b_1,b_2=0 \atop
a_1<a_2<b_1<b_2}^{N}}
\omega_{0a_1}\omega_{1a_2}
\omega_{b_1 (N-1)}\omega_{b_2 N}
W_{a_1a_2b_1b_2}^2
\label{dbii}
\end{equation}
\begin{equation}
{\cal C}_s = \!\!\!\!\!\!\!\!\!\!\!\!\!
{\sum_{a_1,a_2,\dots ,a_s,b_1,b_2,\dots, b_s =0 \atop
a_1<a_2<\dots <a_s<b_1<b_2<\dots<b_s }^{N}}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\omega_{0a_1}\omega_{1a_2} \!\dots\! \omega_{(s-1) a_s}
\omega_{b_1 (N\!-s+\!1)}\omega_{b_2 (N\!-s+\!2)} \!\dots\! \omega_{b_s N}
W_{a_1a_2\dots a_sb_1b_2\dots b_s }^2
\label{db}
\end{equation}
\noindent
$\bullet$ When $N+1$ is even, there is an extra Casimir ${\cal C}$ of
order $l=\frac{N+1}{2}$:
\begin{equation}
{\cal C} = W_{012 \dots N}.
\end{equation}
In these expressions any $\omega_{aa}$ should be understood as
$\omega_{aa}:=1$. It is easy to see that even in the most contracted
CK algebra, the flag space algebra, ${so}_{0,\dots,0}(N+1)$,
these Casimirs are not trivial. In fact, the term in
(\ref{db}) with the $W$-symbol whose first group of $s$ indices
are consecutive and start from $0$, and whose last group of $s$
indices are also consecutive and end with $N$,
$W_{012\dots (s-1) \, (N-s+1)
\dots (N-2) (N-1) N}^2$, is the only term whose $\omega$ factor is equal
to 1, and therefore the only one which survives in the Casimir
${\cal C}_s$ for the
${so}_{0,\dots,0}(N+1)$ algebra.
Therefore, theorem 2 provides a set of $[\frac{N+1}{2}]$
non-trivial independent Casimirs for any Lie algebra in the CK family.
The question now is whether there exists any other Casimir which cannot be
obtained by this contraction process. To answer this we should
analyze the upper bound $\tau$ (\ref{bh}).
\noindent
{\it Proposition 3.} The rank of the matrix defined by
(\ref{bi}) for $so_{\omega_1,\dots,\omega_N}(N+1)$ is
$N^2/2$ for even $N$ and $(N^2-1)/2$ for odd $N$, no matter of the
specific values of the coefficients $\omega_a$.
\noindent
{\em Proof}. The above result is a consequence of the existence of the
structure constants $C_{ab,bc}^{mn}=-\delta_{ma}\delta_{nc}$ (\ref{bd})
which are $\omega$-independent, and are non-zero for all algebras in the CK
family. Let us start with the case of an even
$N=2l$ and the $\frac 12 N(N+1)\times \frac 12 N(N+1)$ matrix $M_g$
(\ref{bi}). We consider the minor obtained by eliminating the rows and
columns associated to the following $l=N/2$ variables $\alpha_{ab}$:
\begin{equation}
\alpha_{0N} \quad
\alpha_{1\,N-1} \quad
\alpha_{2\,N-2}\quad \dots\quad\alpha_{l-2\,l+2} \quad\alpha_{l-1\,l+1},
\label{pro}
\end{equation}
this is, for $\alpha_{0N}$ we discard the row and column with elements
$(M_g)_{0N,kl}$ and $(M_g)_{kl,0N}$ ($\forall\, kl$), and so on.
The dimension of this submatrix is $N^2/2$. It can be checked
that in each row and in each column there is always a {\em
single} $\alpha$ of the sequence (\ref{pro}) without any factor $\omega$.
Then by permuting rows and columns we can arrange the minor in order to
get all those $\alpha$'s in the main diagonal, so its determinant is
(up to a sign), a monomial:
\begin{equation}
\alpha_{0N}^{2(N-1)}
\alpha_{1\,N-1}^{2(N-3)}
\alpha_{2\,N-2}^{2(N-5)} \cdots\alpha_{l-2\,l+2}^{2\cdot 3}
\alpha_{l-1\,l+1}^{2\cdot 1}.
\end{equation}
Since this term is $\omega$-independent, this minor is non-zero for any CK
algebra. A similar procedure is applied for an odd $N=2l-1$. Now
we take out the rows and columns linked to the $l=(N+1)/2$ variables:
\begin{equation}
\alpha_{0N} \quad
\alpha_{1\,N-1} \quad
\alpha_{2\,N-2}\quad \dots\quad\alpha_{l-2\,l+1} \quad\alpha_{l-1\,l},
\label{prodos}
\end{equation}
obtaining in this way a minor of dimension $(N^2-1)/2$. By ordering
rows and columns, we get all variables appearing in the sequence
(\ref{prodos}) placed in the main diagonal; the determinant is (again up to
a sign) the monomial:
\begin{equation}
\alpha_{0N}^{2(N-1)}
\alpha_{1\,N-1}^{2(N-3)}
\alpha_{2\,N-2}^{2(N-5)} \cdots\alpha_{l-3\,l+2}^{2\cdot 4}
\alpha_{l-2\,l+1}^{2\cdot 2},
\end{equation}
and the result follows.
Hence, as ${\mbox{dim}}\, (g)=\frac 12 N(N+1)$, the upper bound for
the number of algebraically independent Casimirs in any CK algebra
turns out to be $[\frac{N+1}{2}]$ which coincides with the upper bound
for the simple algebras where all $\omega_a\ne 0$. Since
we have just obtained that number of Casimirs, we conclude that
equality in the formula (\ref{bh}) holds for {\em all} CK algebras
and theorem 2 gives {\em all} the Casimirs for the CK family.
We remark that the flag algebra $so_{0,\dots,0}(N+1)$ is on
the borderline for this behaviour: if contractions are carried out
beyond this algebra, the rank of the matrix in proposition 3 might
not be given by the same values. This can be easily seen: for the
abelian algebra in $\frac 12 (N+1)N$ dimensions, which can of course
be reached by contracting $so(N+1)$, all generators are central
elements.
If we define now the {\it rank of a Lie algebra} in the CK
family as the number of algebraically independent Casimir
invariants associated to it, then theorem 2 shows that all
the CK algebras $SO_{\omega_1,\dots,\omega_N}(N+1)$ have the same rank: $N/2$
if $N$ is even and $(N+1)/2$ if $N$ is odd.
We recall that the first Casimir (\ref{dbi}) for $s=1$ is the
quadratic invariant related to the Killing--Cartan
form in the case of a simple algebra. If a ``Killing--Cartan"
form is defined for all CK algebras as usual:
\begin{equation}
\beta_{ab,cd}\equiv \beta(\Omega_{ab},\Omega_{cd})=
\mbox{Trace}\,(\mbox{ad}\, \Omega_{ab}\cdot\mbox{ad}\, \Omega_{cd})
={ \sum_{m,n,p,q=0}^N} C_{ ab,mn }^{ pq }
C_{ cd,pq }^{ mn }
\label{cj}
\end{equation}
we find, by using the structure constants (\ref{bd}), that in the
basis $\Omega_{ab}$ this ``Killing-Cartan" form of the CK algebra
${so}_{\omega_1,\dots,\omega_N}(N+1)$ is diagonal, and its non-zero
components are
\begin{equation}
\beta_{ab,ab}=-2(N-1)\,\omega_{ab} \qquad a,b=0,\dots,N\qquad a<b.
\label{ck}
\end{equation}
When all the $\omega_a\ne 0$ the Killing--Cartan form is regular (the
algebra is simple or semisimple in the exceptional case $D_2$), so we can
write:
\begin{equation}
{\cal C}_1=
\sum_{a,b=0}^N {\omega_{0a}}{\omega_{bN}}\Omega^2_{ab}=\sum_{a,b=0 }^N
\frac{\omega_{0N}}{\omega_{ab}}\Omega^2_{ab} = - 2(N-1) \omega_{0N}
\sum_{a,b; c,d=0}^N {\beta^{ab,cd}}\Omega_{ab}\Omega_{cd} .
\label{ci}
\end{equation}
This is the known relation giving the quadratic Casimir as the dual
of the Killing--Cartan form, which holds for the case of non-zero
$\omega_a$; otherwise the Killing--Cartan form is degenerate, and the last
term in (\ref{ci}) is indeterminate. However, the structure of this
equation shows that in the limit of some
$\omega_a\to 0$, while the Killing--Cartan form (\ref{ck}) degenerates, the
Casimir
${\cal C}_1$ remains well defined, because the impossibility of
inverting the matrix $\beta_{ab,cd}$ conspires with the factor $\omega_{0N}$ to
produce a well defined limit for ${\cal C}_1$. Similarly, higher order
Casimir invariants are dual to the polarized form of a symmetric
multilinear form.
The algebraic structure behind the $W$'s which
allows simple expressions for the higher order Casimirs should be
worth studying. For instance, let us consider the commutation
relations among a generator
$\Omega_{ab}$ and a symbol
$W_{a_1a_2\dots a_{s}b_1b_2\dots b_{s}}$. There are two possibilities:
\noindent
(1) If both indices $a$ and $b$, or none,
appear in the sequence
$\{a_1a_2\dots a_{s}b_1b_2\dots b_{s}\}$, then the Lie bracket
$[\Omega_{ab},W_{a_1a_2\dots a_{s}b_1b_2\dots b_{s}}]$ is zero.
\noindent
(2) If only {\em one} index $a$ or $b$ belongs to
$\{a_1a_2\dots a_{s}b_1b_2\dots b_{s}\}$, then we have:
\begin{equation}
[\Omega_{ab},W_{a_1a_2\dots a_{s}b_1b_2\dots b_{s}}]=(-1)^{p+1}\,
\sqrt{
\omega_{ab}\,\frac{\omega_{a_1b_{s}}\omega_{a_2b_{s-1}}\dots\omega_{a_sb_{1}} }{
\omega_{a'_1b'_{s}}\omega_{a'_2 b'_{s-1}}
\dots\omega_{a'_sb'_{1}}}} \,W_{a'_1a'_2\dots
a'_{s}b'_1b'_2\dots b'_{s}}
\label{cm}
\end{equation}
where the new set of indices $\{a'_1a'_2\dots a'_{s}b'_1b'_2\dots b'_{s}\}$ is
obtained by first writing the sequence
$\{a,b; a_1a_2\dots a_{s}b_1b_2\dots b_{s}\}$ in increasing order, and
then dropping the common
index. In the factor $(-1)^{p+1}$,
$p$ means the minimum number of traspositions needed to bring the sequence
$\{ab;a_1a_2\dots a_{s}b_1b_2\dots b_{s}\}$ into
increasing order; for instance, in the
sequence $\{13;0234\}\equiv\{0124\}$ $p=3$, while in $\{15;1234\}\equiv\{2345\}$
$p=4$. Notice that all $\omega_{a'b'}$ in the denominator cancel, and leave under
the square root a perfect square product of $\omega$'s.
Repeated use of this procedure would give the commutation relations
between two $W$-symbols. We do not write here the general
expressions, but some examples are given in the next section.
\sect{Examples}
\subsect{Results for N=2,3,4,5}
In the sequel we elaborate upon the results of the above section
by writing explicitly the Casimir invariants of
$so_{\omega_1,\dots,\omega_N}(N+1)$ up to $N=5$. These expressions should
be compared with the ones obtained in any approach giving the
invariants directly in terms of the generators $\Omega_{ab}$, without
using the $W$-symbols, which are cumbersome as soon as $N$
grows.
\medskip
\noindent
${\bf{ N=2}}$. There is only one invariant:
\begin{equation}
{\cal C}_1=\omega_{2}\Omega_{01}^2+\Omega_{02}^2+\omega_{1}\Omega_{12}^2.
\label{daii}
\end{equation}
\medskip
\noindent
${\bf{ N=3}}$. There are two invariants and the first relevant
$W$-symbol (\ref{dan}) appears:
\begin{equation}
\begin{array}{ccc}
{\cal C}_1=
\omega_2\omega_3\Omega_{01}^2 & +\omega_{3}\Omega_{02}^2 & +\Omega_{03}^2 \cr
& +\omega_1\omega_3 \Omega_{12}^2 & +\omega_1\Omega_{13}^2 \\
& & +\omega_1\omega_2\Omega_{23}^2
\label{dbiii}
\end{array}
\end{equation}
\begin{eqnarray}
{\cal C}\equiv W_{0123}\!\!&=&\!\!\omega_{12}\Omega_{23}W_{01} -\Omega_{13} W_{02} +
\Omega_{03}W_{12}\cr
&=&\!\!\omega_{2}\Omega_{23}\Omega_{01} -\Omega_{13} \Omega_{02} +
\Omega_{03}\Omega_{12}.\label{dc}
\end{eqnarray}
\medskip
\noindent
${\bf{ N=4}}$. From a physical point of view this is a rather interesting
case since the CK family $so_{\omega_1,\dots,\omega_4}(5)$ contains the (3+1)
kinematical algebras. There are two invariants:
\begin{equation}
\begin{array}{cccc}
{\cal C}_1=\omega_2\omega_3\omega_4 \Omega_{01}^2 & +\omega_3\omega_4 \Omega_{02}^2 &
+\omega_4\Omega_{03}^2 & +\Omega_{04}^2 \cr
& +\omega_1\omega_3\omega_4\Omega_{12}^2 & +\omega_1\omega_4\Omega_{13}^2 & +\omega_1\Omega_{14}^2\\
& & +\omega_1\omega_2\omega_4 \Omega_{23}^2 & +\omega_1\omega_2 \Omega_{24}^2 \cr
& & & + \omega_1\omega_2\omega_3 \Omega_{34}^2
\label{dd}
\end{array}
\end{equation}
\begin{equation}
{\cal C}_2=\omega_{24}W_{0123}^2+\omega_{23}W_{0124}^2+W_{0134}^2+\omega_{12}W^2_{0234}
+\omega_{02}W^2_{1234}
\label{de}
\end{equation}
where from (\ref{dan}) we have
\begin{equation}
\begin{array}{l}
W_{0123}= \omega_{12}\Omega_{23}\Omega_{01} -\Omega_{13} \Omega_{02} +
\Omega_{03}\Omega_{12} \cr
W_{0124}= \omega_{12}\Omega_{24}\Omega_{01} -\Omega_{14} \Omega_{02} +
\Omega_{04}\Omega_{12} \cr
W_{0134}= \omega_{13}\Omega_{34}\Omega_{01} -\Omega_{14} \Omega_{03} +
\Omega_{04}\Omega_{13} \cr
W_{0234}= \omega_{23}\Omega_{34}\Omega_{02} -\Omega_{24} \Omega_{03} +
\Omega_{04}\Omega_{23} \cr
W_{1234}= \omega_{23}\Omega_{34}\Omega_{12} -\Omega_{24} \Omega_{13} +
\Omega_{14}\Omega_{23} .
\end{array}
\label{df}
\end{equation}
As an application of (\ref{cm}) we write the
non-zero commutation relations among the ten generators of
$so_{\omega_1,\omega_2,\omega_3,\omega_4 }(5)$ and the five $W$-symbols (\ref{df}):
$$
\begin{array}{lll}
[\Omega_{04},W_{0123}]=-\omega_{02}W_{1234}&\
[\Omega_{03},W_{0124}]=\omega_{02}W_{1234}&\
[\Omega_{02},W_{0134}]=-\omega_{02}W_{1234}\cr
[\Omega_{14},W_{0123}]=\omega_{12}W_{0234}&\
[\Omega_{13},W_{0124}]=-\omega_{12}W_{0234}&\
[\Omega_{12},W_{0134}]=\omega_{12}W_{0234}\cr
[\Omega_{24},W_{0123}]=-W_{0134}&\
[\Omega_{23},W_{0124}]= W_{0134}&\
[\Omega_{23},W_{0134}]=- \omega_{23}W_{0124}\cr
[\Omega_{34},W_{0123}]= W_{0124}&\
[\Omega_{34},W_{0124}]=-\omega_{34}W_{0123}&\
[\Omega_{24},W_{0134}]=\omega_{24}W_{0123}
\end{array}
\nonumber
$$
\begin{equation}
\begin{array}{ll}
[\Omega_{01},W_{0234}]=\omega_{01}W_{1234}&\quad
[\Omega_{01},W_{1234}]= - W_{0234} \cr
[\Omega_{12},W_{0234}]=-W_{0134}& \quad
[\Omega_{02},W_{1234}]= W_{0134} \cr
[\Omega_{13},W_{0234}]=\omega_{23}W_{0124}&\quad
[\Omega_{03},W_{1234}]= - \omega_{23} W_{0124} \cr
[\Omega_{14},W_{0234}]=-\omega_{24}W_{0123}&\quad
[\Omega_{04},W_{1234}]= \omega_{24} W_{0123}
\end{array}
\label{dh}
\end{equation}
As these expressions show, the $W$-symbols
can be thought of as a kind of ``Pauli--Lubanski" components for
Lie algebras in the CK family. To see this, notice that when the
constants $(\omega_1,\omega_2,\omega_3,\omega_4)\equiv
(0,-1,+1,+1)$, the CK Lie algebra $so_{0,-1,+1,+1}(5)$ is isomorphic to
the Poincar\'e algebra $iso(3,1)$. In this case there are {\em five}
$W$-symbols; due to the vanishing of the constant $\omega_1$, $W_{1234}$ is missing
in the expression of the fourth-order Casimir (\ref{de}), and
the remaining {\em four} are the components
of the standard Pauli--Lubanski operator (see (\ref{dp}) below).
From (\ref{dh}) is
straightforward to find the Lie brackets of the $W$-symbols among themselves:
\begin{equation}
\begin{array}{l}
[W_{0123},W_{0124}] =\omega_{12}\Omega_{01}W_{0134} +\omega_{12}\Omega_{02}W_{0234}
+\omega_{02}\Omega_{12} W_{1234} \cr
[W_{0123},W_{0134}] =-\omega_{13}\Omega_{01}W_{0124} +\omega_{12}\Omega_{03}W_{0234}
+\omega_{02}\Omega_{13} W_{1234} \cr
[W_{0123},W_{0234}] =-\omega_{23}\Omega_{02}W_{0124} - \Omega_{03}W_{0134}
+\omega_{02}\Omega_{23} W_{1234}\cr
[W_{0123},W_{1234}] =-\omega_{23}\Omega_{12}W_{0124} - \Omega_{13}W_{0134}
-\omega_{12}\Omega_{23} W_{0234}\cr
[W_{0124},W_{0134}] =\omega_{14}\Omega_{01}W_{0123} + \omega_{02}\Omega_{14}W_{1234}
+\omega_{12}\Omega_{04} W_{0234}\cr
[W_{0124},W_{0234}] =\omega_{24}\Omega_{02}W_{0123} - \Omega_{04}W_{0134}
+\omega_{02}\Omega_{24} W_{1234}\cr
[W_{0124},W_{1234}] =\omega_{24}\Omega_{12}W_{0123} - \Omega_{14}W_{0134}
-\omega_{12}\Omega_{24} W_{0234}\cr
[W_{0134},W_{0234}] =\omega_{24}\Omega_{03}W_{0123} + \omega_{23}\Omega_{04}W_{0124}
+\omega_{03}\Omega_{34} W_{1234}\cr
[W_{0134},W_{1234}] =\omega_{24}\Omega_{13}W_{0123} +\omega_{23} \Omega_{14}W_{0124}
-\omega_{13}\Omega_{34} W_{0234}\cr
[W_{0234},W_{1234}] =\omega_{24}\Omega_{23}W_{0123} + \omega_{23}\Omega_{24}W_{0124}
+\omega_{23}\Omega_{34} W_{0134}
\end{array}
\label{di}
\end{equation}
\medskip
\noindent
${\bf{ N=5}}$. The three
invariants are given by:
\begin{equation}
\begin{array}{ccccc}
{\cal C}_1=\omega_{15} \Omega_{01}^2 & +\omega_{25}
\Omega_{02}^2 & +\omega_{35}\Omega_{03}^2 & +\omega_{45}\Omega_{04}^2 & +\Omega_{05}^2\cr
& +
\omega_{01}\omega_{25} \Omega_{12}^2 & +\omega_{01}\omega_{35} \Omega_{13}^2 & +
\omega_{01}\omega_{45} \Omega_{14}^2 & +\omega_{01}\Omega_{15}^2 \cr
& & +\omega_{02}\omega_{35} \Omega_{23}^2 & +
\omega_{02}\omega_{45} \Omega_{24}^2 & +\omega_{02}\Omega_{25}^2\\
& & & +
\omega_{03}\omega_{45}
\Omega_{34}^2 & +\omega_{03}\Omega_{35}^2 \cr
& & & & +\omega_{04}\Omega_{45}^2
\label{dj}
\end{array}
\end{equation}
\begin{equation}
\begin{array}{l}
{\cal C}_2=\omega_{35}\omega_{24}W^2_{0123}+\omega_{25}W^2_{0124}
+\omega_{24}W^2_{0125}+\omega_{35}W^2_{0134}+\omega_{34}W^2_{0135}\cr
\quad +
W^2_{0145}+\omega_{12}\omega_{35}W^2_{0234}+\omega_{12}\omega_{34}W^2_{0235}+
\omega_{12} W^2_{0245}+\omega_{13}W^2_{0345}\cr
\quad +
\omega_{02}\omega_{35}W^2_{1234}+\omega_{02}\omega_{34}W^2_{1235}+\omega_{02}W^2_{1245}+
\omega_{03}W^2_{1345}+\omega_{02}\omega_{13}W^2_{2345}
\label{dk}
\end{array}
\end{equation}
\begin{equation}
{\cal C}\equiv
W_{012345}=\omega_{24}\Omega_{45}W_{0123}-\omega_{23}\Omega_{35}W_{0124}
+\Omega_{25}W_{0134}-\Omega_{15}W_{0234}
+\Omega_{05}W_{1234}.
\label{dl}
\end{equation}
In the above expressions it can be noticed how all the contractions
$\omega_a\to 0$ are always well defined and lead to non-trivial results.
For the most contracted algebra
$so_{0,0,0,0,0}(6)$, for instance, we get:
\begin{equation}
{\cal C}_1=\Omega_{05}^2, \qquad
{\cal C}_2=W^2_{0145}, \qquad
{\cal C}=W_{012345}.
\end{equation}
And for $N$ arbitrary, the Casimirs for this most contracted CK
algebra are:
\begin{equation}
{\cal C}_1=\Omega_{0N}^2 \equiv W_{0N}^2, \qquad
{\cal C}_2=W^2_{01(N-1)N}, \qquad
{\cal C}_3=W^2_{012(N-2)(N-1)N}, \ \dots
\label{Wflag}
\end{equation}
In fact, even if the appearance of the factors $\omega_{ab}$
seems rather haphazard when written in specific cases,
like in (\ref{dk}), they are indeed easily reconstructed from
scratch, without reference to the general expressions (\ref{db}).
If here $[X]$ denotes the dimension of $X$, dimensional homogeneity of
commutation relations requires that
$[\Omega_{ab}]=[\omega_{ab}]^{1/2}$, so $[W_{ab}^2]=[\omega_{ab}]$,
$[W_{a_1a_2b_1b_2}^2]=[\omega_{a_1b_2}][\omega_{a_2b_1}]$, etc. By simply
recalling the $W$ term (\ref{Wflag}) entering into any Casimir {\em
without\/} any $\omega_{ab}$ factor, then the coefficient of any other $W^2$ is
unambiguously derived by simply requiring dimensional homogeneity for
all $W^2$ terms in the Casimirs and recalling that all terms enter with the
same global sign there. Signs coming from signatures which appear in the
standard known cases (in the Minkowski square of the Pauli-Lubanski vector, for
instance), are hidden inside the $\omega_{ab}$ themselves.
\subsect{Casimirs for 3+1 kinematical algebras}
After this algebraic description of the structure of the invariants of the
CK algebras we focus on the most important kinematical algebras
included in the $so_{\omega_1,\omega_2,\omega_3,\omega_4}(5)$ family.
Let $H$, $P_i$, $K_i$ and $J_i$ $(i=1,2,3)$ the usual generators of time
translation, space translations, boosts and spatial rotations,
respectively. Under the following identification with the ``abstract"
generators $\Omega_{ab}$:
\begin{eqnarray}
&&H=\Omega_{01} \qquad P_i=\Omega_{0\, i+1} \qquad
K_i=\Omega_{1\, i+1} \qquad i=1,2,3 \cr
&&J_1=\Omega_{34} \qquad J_2=-\Omega_{24} \qquad J_3=\Omega_{23}
\label{dm}
\end{eqnarray}
we can interpretate the six CK algebras $so_{\omega_1,\omega_2,+,+}(5)$ with
$\omega_2\le 0$ as the Lie algebras of the groups of motions of different
(3+1) spacetime models \cite{BLL}. Note that we have fixed the two
coefficients $\omega_3$ and $\omega_4$ to
$+1$ (this is a consequence of the space isotropy). The two remaining ones
have a definite physical interpretation:
$\omega_1$ is the constant curvature of the spacetime which
appears here as the homogeneous space given in (\ref{ah}), which is the
quotient ${\cal
X}_0=SO_{\omega_1,\omega_2,+,+}(5)/SO_{\omega_2,+,+}(4)$, where
$SO_{\omega_2,+,+}(4)$ is the subgroup generated by the subalgebra
$h_0=\langle K_i,J_i\rangle$: this is the subalgebra of isotopy of a point
in spacetime. Similarly,
$\omega_2$ is the curvature of the space of time-like lines in spacetime,
${\cal
X}_{01}=SO_{\omega_1,\omega_2,+,+}(5)/(SO_{\omega_1}(2)\otimes SO_{+,+}(3))$, where
now $SO_{\omega_1}(2)\otimes SO_{+,+}(3)$ is the isotopy subgroup of a
time-like line, generated by
$h_{01}=\langle H,J_i\rangle$. This curvature is linked to the
fundamental constant
$c$ of relativistic theories as
$\omega_2=\frac{-1}{c^2}$. To make the comparison easier for these
cases, we shall write the two constants involved in the
``kinematical" subfamily of CK algebras as:
$\kappa\equiv\omega_1$, $\frac{-1}{c^2}\equiv\omega_2$.
In this notation the commutation rules (\ref{ab}) of
$so_{\kappa,-1/c^2,+,+}(5)$ read now:
\begin{eqnarray}
&&\!\!\!\!\!\!\back [J_i,J_j]=\varepsilon_{ijk}J_k\qquad
[J_i,P_j]=\varepsilon_{ijk}P_k\qquad
[J_i,K_j]=\varepsilon_{ijk}K_k\cr
&&\!\!\!\!\!\!\back [P_i,P_j]=-\frac {\kappa}{c^2}\varepsilon_{ijk}J_k
\quad [K_i,K_j]=-\frac {1}{c^2}\varepsilon_{ijk}J_k
\quad [P_i,K_j]=-\frac {1}{c^2}\delta_{ij}H\\
&&\!\!\!\!\!\!\back [H,P_i]=\kappa K_i\qquad [H,K_i]=-P_i\qquad
[H,J_i]=0\qquad i,j,k=1,2,3.
\nonumber
\end{eqnarray}
The limit
$\kappa\to 0$ (space-time contraction) gives rise to the {\em flat}
universes (Min\-kows\-ki and Galilei) coming from the curved ones
(DeSitter and Newton-Hooke); in terms of the ``universe radius"
$R:=\frac{1}{\sqrt\kappa}$ or
$R:=\frac{1}{\sqrt{-\kappa}}$, this is usually made as $R\to\infty$.
The limit $c\to \infty$ (speed-space
contraction) leads to ``absolute-time" spacetimes (Newton-Hooke and
Galilei) coming from ``relative-time" ones (DeSitter and
Minkowski).
In this context, the Casimir invariants (\ref{dd}) and (\ref{de}) adopt
the form
\begin{equation}
{\cal
C}_1=P_1^2+P_2^2+P_3^2-\frac{1}{c^2}H^2+\kappa(K_1^2+K_2^2+K_3^2)
-\frac{\kappa}{c^2}(J_1^2+J_2^2+J_3^2)
\label{dn}
\end{equation}
\begin{equation}
{\cal C}_2= W_{0123}^2+ W_{0124}^2+W_{0134}^2-\frac{1}{c^2}W^2_{0234}
-\frac{\kappa}{c^2}W^2_{1234}
\label{do}
\end{equation}
where
\begin{eqnarray}
&&W_{0123}=-\frac{1}{c^2}HJ_3 -P_1 K_2+ P_2K_1 \cr
&& W_{0124}=\frac{1}{c^2}HJ_2 -P_1 K_3+ P_3K_1 \cr
&& W_{0134}=-\frac{1}{c^2}HJ_1 -P_2 K_3+ P_3K_2\label{dp}\\
&& W_{0234}= P_1J_1 +P_2 J_2+ P_3J_3 \cr
&& W_{1234}= K_1J_1 +K_2 J_2+ K_3J_3.
\nonumber
\end{eqnarray}
We display in table I these six kinematical algebras together their
invariants according to the values of $(\kappa,\frac{-1}{c^2},+,+)$.
The limit transitions among them can be clearly appreciated; note that some
$W$-symbols (\ref{dp}) are ``internally contracted" in the case of $c=\infty$.
\bigskip
{
\noindent
{\bf {Table I}}. The Casimir invariants of $so_{\kappa,-1/c^2,+,+}(5)$.
\smallskip \footnotesize
\noindent
\begin{tabular}{|l|l|l|}
\hline
Oscillating Newton--Hooke&Galilei&Expanding Newton--Hooke\\
$(+,0,+,+)$\hfill$\kappa=1, c=\infty$&
$(0,0,+,+)$\hfill$\kappa=0, c=\infty$&
$(-,0,+,+)$\hfill$\kappa=-1, c=\infty$\\
$t_6(so(3)\oplus so(2))$&$iiso(3)$&$t_6(so(3)\oplus so(1,1))$\\
\hline
${\cal C}_1=P_1^2+P_2^2+P_3^2$&${\cal C}_1=P_1^2+P_2^2+P_3^2$&
${\cal C}_1=P_1^2+P_2^2+P_3^2$\\
$\qquad + K_1^2+K_2^2+K_3^2 $& &$\qquad - K_1^2-K_2^2-K_3^2 $\\
${\cal C}_2= W_{0123}^2+ W_{0124}^2+W_{0134}^2$&
${\cal C}_2= W_{0123}^2+ W_{0124}^2+W_{0134}^2$&
${\cal C}_2= W_{0123}^2+W_{0124}^2+W_{0134}^2$\\
\hline
\hline
Anti-DeSitter\quad $so(3,2)$&Poincar\'e\quad $iso(3,1)$&DeSitter\quad
$so(4,1)$\\
$(+,-,+,+)$\hfill$\kappa=1, c=1$&
$(0,-,+,+)$\hfill$\kappa=0, c=1$&
$(-,-,+,+)$\hfill$\kappa=-1, c=1$\\
\hline
${\cal C}_1=P_1^2+P_2^2+P_3^2-H^2$&${\cal C}_1=P_1^2+P_2^2+P_3^2-H^2$&
${\cal C}_1=P_1^2+P_2^2+P_3^2-H^2$\\
$\qquad + K_1^2+K_2^2+K_3^2 $& &$\qquad - K_1^2-K_2^2-K_3^2 $\\
$\qquad - J_1^2-J_2^2-J_3^2 $& &$\qquad +J_1^2+J_2^2+J_3^2 $\\
${\cal C}_2= W_{0123}^2+ W_{0124}^2+W_{0134}^2$&
${\cal C}_2= W_{0123}^2+ W_{0124}^2+W_{0134}^2$&
${\cal C}_2= W_{0123}^2+W_{0124}^2+W_{0134}^2$\\
$\qquad -W^2_{0234} -W^2_{1234}$&
$\qquad -W^2_{0234} $&
$\qquad -W^2_{0234} + W^2_{1234}$\\
\hline
\end{tabular} }
\bigskip
\sect{Concluding remarks}
The Casimir invariants play a prominent role in any problem where a
Lie algebra and its enveloping algebra appear. One example of
current active interest is the theory of quantum groups; explicit
deformations of the
$W$-symbols can be found in the deformed commutation relations of the
quantum CK algebras $U_z so_{0,\omega_2,\dots,\omega_N}(N+1)$ with $\omega_1=0$, and
indeed the study of Casimirs in the classical undeformed algebras we have
presented here underlies the expressions for the deformed Casimirs
in \cite{afin}.
A more classical application concerns the expansions processes which can be
seen as the opposite situation of a contraction limit \cite{Gilmore}.
While the contractions makes to vanish some structure constants of a Lie
algebra which therefore gets more abelian, the expansions start from a Lie
algebra, with some Lie brackets typically equal to zero, and ends up with
another less abelian algebra, which is usually realized as a Lie
subalgebra of the universal enveloping algebra of the initial Lie
algebra. The more known transitions of this kind are the rank-one
expansions which allows to obtain the simple
$so(p,q)$ algebras starting from inhomogeneous $iso(p,q)$; the name
rank-one refers to the rank of the homogeneous spaces behind these
expansions (the constant curvature flat and curved space with a metric of
signature
$(p,q)$) \cite{Gilmore,Rosen}. In our framework this fact is equivalent to
``create" a non-zero coefficient $\omega_1$ out of the case $\omega_1=0$.
Technically, these expansions only involve the quadratic Casimir
${\cal C}_1$ to get the correct Lie subalgebra in the enveloping algebra
to be expanded (within an irreducible representation). Extension of
these results for high-order expansions would be of
great interest. The rank-two expansions would go from
$t_r(so(p,q)\oplus so(p',q'))$ (Newton--Hooke algebras) to
$so(p,q)$ algebras, this is, they would introduce curvature into the space
of lines, just in the same way as the rank-one expansions go from
flat to curved spaces of points. Here the two first Casimir
invariants
${\cal C}_1$ and
${\cal C}_2$ should participate, and it is reasonable to guess that the
explicit introduction of the constants $\omega_a$ may help in the choice of
the correct expansion procedure which, as far as we know, is yet unknown
for higher rank spaces.
\bigskip
\bigskip
\noindent
{\Large{{\bf Acknowledgments}}}
\bigskip
This work has been
partially supported by DGICYT (Project PB94--1115) from the
Ministerio de Educaci\'on y Ciencia de Espa\~na.
\bigskip
|
1,116,691,499,122 | arxiv | \section{Introduction}
\label{sec:Intro}
It is well known that partial wave unitarity is a powerful tool to estimate the perturbative validity of effective field theories (EFTs). It has been used in the past to provide useful insights both in strong and electroweak interactions~\cite{Lee:1977yc} as well as in quantum gravity~\cite{Atkins:2010re}. Perhaps the best known example is the bound on the Higgs mass derived from an analysis of $WW\to WW$ scattering within the Standard Model (SM)~\cite{Lee:1977yc,Lee:1977eg}. On the other end, unitairty has also been applied to a number of approaches beyond the Standard Model (BSM). For instance in composite Higgs models~\cite{DeCurtis:2016scv}, in searches of scalar di-boson resonances~\cite{DiLuzio:2017qgm,Luzio2017xxx}, searches for dark matter effective interactions~\cite{Endo:2014mja} and on generic dimension-6 operators~\cite{Corbett:2017qgl}.
One possible BSM alternative, widely discusses in literature and routinely pursued in high-energy experiments, is a
composite-fermions scenario which offers a possible solution to the hierarchy pattern of fermion masses~\cite{Pati:1975md,Pati:1983dh,Harari:1982xy,Greenberg:1974qb,Dirac:1963aa}.
In this context~\cite{Terazawa:1976xx,Eichten:1979ah,Eichten:1983hw,Cabibbo:1983bk,Baur:1989kv,Baur:1987ga}, SM quarks ``$q$'' and leptons ``$\ell$'' are assumed to be bound states of some as yet not observed fundamental constituents generically referred
as {\it preons}. If quarks and leptons have an internal substructure, they are expected to be accompained by heavy excited states $\ell^*, q^*$ of masses $M$
that should manifest themselves at an unknown energy scale, the compositeness scale
$\Lambda$.
As customary in an EFT approach, the effects of the high-energy physics scale, here $\Lambda$, are captured in higher dimensional operators that describe processes within a lower energy domain, where the fundamental building blocks of the theory cannot show up. Hence, the heavy excited states may interact with the SM ordinary fermions via dimension-5 gauge interactions of the SU(2)$_L \otimes$ U(1)$_Y$ SM gauge group of the magnetic-moment type (so that the electromagnetic current conservation is not spoiled by e.g.~$\ell^* \ell \gamma$ processes~\cite{Cabibbo:1983bk}). In addition, the exchange of preons and/or
binding quanta of the unknown interactions between ordinary fermions ($f$) and/or the excited states ($f^*$) results in effective
contact interactions (CI) that couple the SM fermions and heavy excited states~\cite{Baur:1989kv,Baur:1987ga,Peskin:1985symp,Tanabashi:2018oca}.
In the latter case, the dominant effect is expected to be given by the dimension-6 four-fermion interactions scaling with the inverse square of the compositeness scale $\Lambda$:
\begin{subequations}
\label{contact}
\begin{align}
\label{Lcontact}
\mathcal{L}^{6}&=\frac{g_\ast^2}{\Lambda^2}\frac{1}{2}j^\mu j_\mu \, , \\
\label{Jcontact}
j_\mu&=\eta_L\bar{f_L}\gamma_\mu f_L+\eta\prime_L\bar{f^\ast_L}\gamma_\mu f^\ast_L+\eta\prime\prime_L\bar{f^\ast_L}\gamma_\mu f_L + h.c. \nonumber\\&\phantom{=} +(L\rightarrow R) \, ,
\end{align}
\end{subequations}
where $g_*^2 = 4\pi$ and the $\eta$'s factors are usually set equal to unity. In this work the right-handed currents will be neglected for simplicity (this is also the setting adopted by the experimental collaborations).
As far as gauge interactions (GI) are concerned,
let us consider the first lepton family and assume that the excited neutrino and the excited electron are grouped into left handed singlets and a right-handed SU(2) doublet:
$
e^*_L, \nu_L^*\, ,
L^*_R=\left(
\nu^*_R \, , e^*_R
\right)^T
$,
so that a magnetic type coupling between the left-handed SM doublet and the right-handed excited doublet via the SU(2)$_L \otimes$ U(1)$_Y$ gauge fields can be written down~\cite{Cabibbo:1983bk,Takasugi:1995bb}:
\begin{equation}
\label{mirror}
{\cal L}^5 =\frac{1}{2 \Lambda} \, \bar{L}_R^* \sigma^{\mu\nu}\left( gf \frac{\bm{\tau}}{2}\cdot \bm{W}_{\mu\nu} +g'f' Y B_{\mu\nu}\right) L_L +h.c. \, .
\end{equation}
Here, $L^T =({\nu_\ell}_L, \ell_L)$ is the ordinary lepton doublet, $g$ and $g'$ are the SU(2)$_L$ and U(1)$_Y$ gauge couplings and $\bm{W}_{\mu\nu}$, $B_{\mu\nu}$ are the field strength tensor of the corresponding gauge fields respectively; $\bm{\tau}$ are the Pauli matrices and $Y$ is the hypercharge, $f$ and $f'$ are dimensionless couplings and are expected (and assumed) to be of order unity.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.3175]{FIG1_EXP_PAPER}
\caption{\label{fig:process}Feynman diagrams depicting the mechanisms responsible for the process $q \bar{q}' \to N^* \ell$, where $\ell$ stands for both $\ell^\pm$. The dark grey blob (diagram on the left) describes the production of an on-shell heavy Majorana neutrino $N$ in proton-proton collisions at
LHC. The production is possible both with gauge interactions (first diagram on the right-hand side) and with four-fermion contact
interactions (second diagram on the right-hand side).}
\end{figure}
Excited states interacting with the SM sector through the model Lagrangians (\ref{Lcontact})-(\ref{Jcontact}) and (\ref{mirror}) have been extensively searched for at high-energy collider facilities. The current strongest bounds are due to the recent LHC experiments. Charged leptons ($e^*
,\mu^*$) have been searched for in the channel $pp\to \ell \ell^* \to \ell \ell \gamma$~\cite{atlas-limit,atlas-limit-new,cms-limit-7TeV,cms-limit-8TeV,cms-limit-13TeV,CMS-PAS-EXO-18-004,Sirunyan:2018zzr}, i.e. produced via CI and then decay via GI, and in the channel $pp \to \ell \ell^* \to \ell \ell q\bar{q}^\prime$ \cite{CMS-PAS-EXO-18-013} where both production and decay proceed through CI.
Neutral excited leptons have been also discussed in the literature and the corresponding phenomenology at LHC has been discussed in detail in the case of a heavy composite Majorana neutrino $N^*$~\cite{Leonardi:2015qna}.
A dedicated experimental analysis has been carried out
by the CMS collaboration \cite{Sirunyan:2017xnz} on LHC data collected for $\sqrt{s}$ = 13 TeV and looking for the process \begin{equation}pp\rightarrow \ell N^*_\ell \, \rightarrow \ell\ell q\bar{q}^\prime\end{equation}
with dilepton (dielectrons or dimuons) plus diquark final states. The existence of $N^*_\ell$ is excluded for masses up to $4.60$ $(4.70)$ TeV at $95\%$ confidence level, assuming $M = \Lambda$. Moreover, the composite Majorana neutrinos of this model can be responsible for baryogenesis via leptogenesis~\cite{Zhuridov:2016xls,Biondini:2017fut}.
The phenomenology of other excited states has also been discussed in a series of recent papers~\cite{Biondini:2012ny,Majhi:2012bx,Leonardi:2014epa,Biondini:2014dfa,Leonardi:2015qna,Panella:2017spx,Presilla:2018ryu,Caliskan:2018vsk}.
We emphasize that in all phenomenological studies referenced above as well as all experimental analyses that have searched for excited states at colliders, it is customary to impose the constraint $M \le \Lambda$ on the parameter space of the model. To the best of our knowledge unitarity has never been taken into account and/or discussed in connection with the effective interactions of the so called excited states.
The main goal of this work is to report instead that the unitarity bounds, as extracted from Eq.~(\ref{Lcontact})-(\ref{Jcontact}) and (\ref{mirror}), are quite compelling and should be included in future studies of such effective composite models because they contraint rather strongly the parameter space.
While we present an explicit calculation of the unitarity bound for heavy composite neutrino searches, we expect that similar bounds (i.e. equally compelling) would apply for excited electrons ($e^*$), muons ($\mu^*$) and quarks ($q^*$). Indeed, the effective operators that describe the latter excited states have the very same structure of those referred to the composite neutrinos.
\section{Unitarity in single-excited-fermion production}
\label{unitarity}
For the derivation of the unitarity bound, we adopt a standard method that makes use of the optical theorem and the expansion of the scattering amplitude in partial waves. In order to specify the CI and GI Lagrangians for a definite situation, we consider the production of the excited Majorana neutrino at the LHC. However, we shall highlight when the results apply to other composite fermion states in the following.
The central object that we shall derive from the operators in the effective Lagrangians (\ref{CI_Lag_LH}) and (\ref{GI_Lag_LH}) is the interacting part of the $S$ matrix, indicated with $T$ in this letter. It enters the partial wave decomposition of the scattering amplitude as follows
\begin{equation}
\mathcal{M}_{i \to f} (\theta ) = 8 \pi \sum_j (2 j +1 ) T^j_{i \to f} d^j_{\lambda_f \, \lambda_i} (\theta) \, ,
\label{PW_dec}
\end{equation}
where $j$ is the eigenvalue of the total angular momentum $J$ of the incoming (outgoing) pair, $d^j_{\lambda_f \, \lambda_i} (\theta)$ is the Wigener d-function and $\lambda_i$ ($\lambda_f$) is the total helicity of the initial (final) state pair. Without loss of generality, we consider azimuthally symmetric processes and fix $\phi=0$ accordingly. From the optical theorem and the decomposition in Eq.~(\ref{PW_dec}), one can find the perturabtive unitarity condition of an inelastic process for each $j$ to be
\begin{equation}
\sum_{f \neq i} \beta_i \beta_f |T^j_{i \to f}|^2 \leq 1 \, ,
\label{Bound_Inelastic}
\end{equation}
where $\beta_i$ ($\beta_f$) is the factor obtained from the two-body phase space and reads for two generic particles with masses $m_1$ and $m_2$
\begin{equation}
\beta = \frac{\sqrt{\left[ \hat{s}-(m_1-m_2)^2\right] \left[ \hat{s}-(m_1+m_2)^2\right]}}{\hat{s}} \, .
\label{Beta_fi}
\end{equation}
It corresponds to the particle velocity when $m_1=m_2$. It is important to notice that the unitarity bound is imposed on the subprocess involving the proton valence quarks as initial state, namely $q \bar{q}' \to \ell N^*_\ell$ as shown in Figure~\ref{fig:process}. Then, for the process of interest, the relevant interaction(s) are as follows:
\begin{equation}
\mathcal{L}_{\text{CI}}= \frac{g_*^2 \, \eta}{\Lambda^2} \bar{q}' \gamma^\mu P_L q \, \bar{N} \gamma_\mu P_L \ell + h.c. \, ,
\label{CI_Lag_LH}
\end{equation}
\begin{equation}
\mathcal{L}_{\text{GI}}= \frac{g \, f}{\sqrt{2} \Lambda} \bar{N} \sigma^{\mu \nu} (\partial_\mu W_\nu^+) P_L \ell + h.c. \, .
\label{GI_Lag_LH}
\end{equation}
Accordingly, in Eq.~(\ref{Beta_fi}), $\hat{s}$ denotes the center-of-mass energy in each collision and it is obtained from the nominal collider energy and the parton momentum fractions as $\hat{s}= x_1 x_2 \, s$. As far as the kinematic is concerned, $\beta_i=1$ can be used since the valence quark masses are negligible with respect to the center-of-mass energy. Instead, one finds
$\beta_f = 1-M^{2}/\hat{s}$ for the final state, where the composite neutrino mass has to be kept.
The core of the method relies on the derivation of the amplitude for the process of interest induced by the contact and gauge-mediated effective Lagrangians (\ref{CI_Lag_LH}) and (\ref{GI_Lag_LH}). Then, one matches the so-obtained result for $\mathcal{M}_{i \to f}$ with the r.h.s of Eq.~(\ref{PW_dec}) and extracts the corresponding $T^j_{i \to f}$ for each definite eigenvalue of the total angular momentum ($j$). The latter are inserted into Eq.~(\ref{Bound_Inelastic}) in order to derive the unitarity condition that the model parameters ($\Lambda, M, g_*,g$) and the center-of-mass energy have to obey. To this end, the amplitude $\mathcal{M}_{i \to f}$ is decomposed in terms of definite helicity states and, therefore, we have to express the initial and final state particles spinors accordingly \cite{Jacob:1959at}. The helicity of each particle in the initial or final state is $\lambda=\pm 1/2$, being all the involved particles fermions (also the composite neutrinos are spin-1/2 fermion) . We shall simply use $\pm$ to label the initial and final state helicity combinations, $(+,+)$, $(+,-)$, $(-,+)$ and $(-,-)$. Since in the center-of-mass frame the incoming and outgoing particles travel in opposite directions, the helicities in the Wigner d-functions are defined as $\lambda_i=\lambda_q - \lambda_{\bar{q}'}$ and $\lambda_f=\lambda_{N^*} - \lambda_{\ell}$. One can adopt two different bases for expressing the spinors and the gamma matrices, the Dirac and chiaral bases (see e.g. appendix in ref.~\citep{Endo:2014mja}). We used both the options to derive $T^j_{i \to f}$ and checked that our findings are indeed invariant upon the choice of the basis.
We give the result for the CI Lagrangian in Eq.~(\ref{CI_Lag_LH}) first. The non-vanishing helicity amplitudes read
\begin{eqnarray}
&& T^{j=1}_{(-,+) \to (-,+)} = -\frac{\hat{s} \, g_*^2}{12 \pi \Lambda^2} \left( 1-\frac{M^{2}}{\hat{s}} \right)^{\frac{1}{2}} \, ,
\label{CI_Tampl_1}
\\
&& T^{j=1}_{(-,+) \to (+,+)} = \frac{\sqrt{\hat{s}} \, M \, g_*^2}{12 \sqrt{2}\pi \Lambda^2} \left( 1-\frac{M^{2}}{\hat{s}} \right)^{\frac{1}{2}} \, .
\label{CI_Tampl_2}
\end{eqnarray}
Only the amplitude with $j=1$ is non-zero, due to the initial helicity state. The same occurs for the vector and axial-vector operators studied in \citep{Endo:2014mja} for dark matter pair production at colliders. We notice that a finite composite neutrino mass allows for the helicity flip in the final state, which gives rise to the term in Eq.~(\ref{CI_Tampl_2}). We obtain the same result if we work with right-handed particles in the CI operator; however, the helicities in Eqs.~(\ref{CI_Tampl_1}) and (\ref{CI_Tampl_2}) flip as $+ \leftrightarrow -$. Using Eq.~(\ref{PW_dec}) and summing over the non-vanishing final helicity states, we obtain
\begin{eqnarray}
\frac{g_*^4 \, \hat{s} \, (2\hat{s}+M^{2})}{288 \pi^2 \Lambda^4} \left( 1-\frac{M^{2}}{\hat{s}} \right)^{2} \leq 1 \, .
\label{UNI_CI}
\end{eqnarray}
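As a consistency check, the two helicity amplitudes above can be combined symbolically and compared with Eq.~(\ref{UNI_CI}). A minimal sketch (Python with \texttt{sympy}; our own illustration, not part of the derivation):
\begin{verbatim}
from sympy import symbols, pi, sqrt, simplify

shat, M, gs, Lam = symbols('shat M g_star Lambda', positive=True)
beta_f = 1 - M**2/shat                 # beta_i = 1 for massless quarks
T1 = -shat*gs**2/(12*pi*Lam**2) * sqrt(beta_f)
T2 = sqrt(shat)*M*gs**2/(12*sqrt(2)*pi*Lam**2) * sqrt(beta_f)
lhs = beta_f * (T1**2 + T2**2)         # sum over final helicity states
bound = gs**4*shat*(2*shat + M**2)/(288*pi**2*Lam**4) * beta_f**2
print(simplify(lhs - bound))           # prints 0
\end{verbatim}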
As far as the GI process is concerned, we proceed in the same way. A dimension-5 operator is involved and, in this case, the $W$ boson mediates the scattering between the initial and final states. We keep the $W$ boson mass in our expressions, even though it is much smaller than the typical $\hat{s}$ values of the $pp$ collisions. The SM electroweak current enters besides the one from the composite model, and the helicity amplitudes are found to be
\begin{eqnarray}
&& T^{j=1}_{(-,+) \to (-,+)} = -\frac{i g^2}{24 \pi \Lambda} \frac{\hat{s}^{3/2}}{\hat{s}-m_W^2}\left( 1-\frac{M^{2}}{\hat{s}} \right)^{\frac{1}{2}} \, ,
\\
&& T^{j=1}_{(-,+) \to (+,+)} = \frac{i g^2}{24 \sqrt{2}\pi \Lambda} \frac{\hat{s} \, M}{\hat{s}-m_W^2} \left( 1-\frac{M^{2}}{\hat{s}} \right)^{\frac{1}{2}} \, ,
\end{eqnarray}
and the corresponding result for the unitarity bound is
\begin{eqnarray}
\frac{g^4 }{1152 \, \pi^2 \Lambda^2} \frac{\hat{s}^2 \, (2\hat{s}+M^{2})}{(\hat{s}-m_W^2)^2} \left( 1-\frac{M^{2}}{\hat{s}} \right)^{2} \leq 1 \, .
\label{UNI_GI}
\end{eqnarray}
A comment is in order. The unitarity bound in Eq.~(\ref{UNI_CI}) is valid for the more generic production process $q \bar{q}' \to f^* f$, i.e.\ excited charged or neutral leptons and excited quarks accompanied by a SM fermion. This traces back to the particle-blind choice adopted in the CI framework, where the $\eta$'s are set to unity in all cases. More care has to be taken regarding a wider applicability of the GI result in Eq.~(\ref{UNI_GI}): here, different factors can enter according to the gauge couplings and gauge bosons that describe the processes involving excited charged leptons and quarks instead of composite neutrinos.
\section{Implementing the bound}
\label{scalarUNC}
The production of heavy composite Majorana neutrinos has been searched for by the CMS Collaboration in the final state with two leptons and at least one large-radius jet, with data from $pp$ collisions at $\sqrt{s} = 13$ TeV and an integrated luminosity of $2.3$ fb$^{-1}$~\cite{Sirunyan:2017xnz}. Good agreement between the data and the SM expectations was observed in the search, but the full Run 2 dataset of the LHC still needs to be analysed. Therefore, the unitarity condition on the accessible parameter space ($M,\Lambda$) urgently needs to be assessed. As usual in BSM searches, the absence of a signal excess over the SM background is translated into an experimental bound on the parameter space $( M,\Lambda)$.
Moreover, the sensitivity of this search was investigated for two future collider scenarios: the High-Luminosity LHC (HL-LHC), with a centre-of-mass energy of 14 TeV and an integrated luminosity of 3 ab$^{-1}$, and the High-Energy LHC (HE-LHC), with a centre-of-mass energy of 27 TeV and an integrated luminosity of 15 ab$^{-1}$~\cite{CMS-PAS-FTR-18-006}. The projection studies, included in the recent Yellow Report CERN publication~\cite{CidVidal:2018eel,Collaborations:2651134}, have shown the potential of such facilities in reaching much higher neutrino masses.
In this section, the perturbative unitarity bounds are applied to these searches in the dilepton plus large-radius jet channel with the CMS detector for the three different collider scenarios.
As already clear from the rather different coupling values entering the Lagrangians (\ref{CI_Lag_LH}) and (\ref{GI_Lag_LH}), namely $g/ \sqrt{2} \approx 0.4$ versus $g_*^2=4\pi$, the production of a heavy composite neutrino and other excited states is dominated by the contact interaction mechanism~\cite{Panella:2017spx}. In particular, it was shown that cross sections for contact-mediated production are usually more than two orders of magnitude larger than the gauge-mediated ones for all values of $\Lambda$ and $M$ relevant in the analyses. It is therefore a reasonable approximation to consider only the bound in Eq.~(\ref{UNI_CI}) when constraining unitarity violation in the signal samples.
In order to estimate the effect of the unitarity condition on LHC searches, we need to implement the bounds in the case of hadron collisions, where the squared centre-of-mass energy of the colliding-parton system, $\hat{s}=x_1 x_2 s$, does not have a definite value; here $x_1$ and $x_2$ are the parton momentum fractions and $\sqrt{s}$ is the nominal energy of the colliding protons. To this aim, we have estimated $\hat{s}$ for each event generated in the Monte Carlo (MC) samples, and we have plugged the result into Eq.~(\ref{UNI_CI}) in order to obtain level curves on the parameter space for which the unitarity bound is satisfied to some extent.
Indeed, the constraint in Eq.~(\ref{UNI_CI}) should not be interpreted too strictly. A violation of the bound would signal the breakdown of the EFT expansion and call for higher-order operators in $\sqrt{\hat{s}}/ \Lambda$ to help restore the unitarity of the process. Therefore, we implement a theoretical uncertainty by allowing up to 50\% of the events to violate the bound, which corresponds to assigning a relative correction to the cross section $\delta \sigma / \sigma_{\hbox{\tiny LO}} \leq 0.5$ from higher-order terms.
The MC samples for the signal are generated at Leading Order (LO) with CalcHEP (v3.6)~\cite{Belyaev:2012qa} for $\sqrt{s}=13, 14$ and $27$ TeV proton-proton collisions, using the NNPDF3.0 LO parton distribution functions in the four-flavor scheme~\cite{Ball:2014uwa}, and span the $(\Lambda,M)$ region covered by the experimental searches~\cite{CMS-PAS-FTR-18-006}.
The information on the parton momenta is then retrieved from the Les Houches Event (LHE) files of each signal process through MadAnalysis~\cite{Conte:2012fm}.
We have explicitly checked that, in the mass range explored, the $\hat{s}$-distributions in our MC simulations are peaked at values around $M^{2}$, almost irrespective of the nominal collider energy $\sqrt{s}=13,14,27$ TeV. This is expected on general grounds, since in the generated signal events the available energy, $\sqrt{\hat{s}}$, is mostly used to produce a heavy excited state (of mass $M$). Of course, the larger the collider energy, the more prominent the distribution tails at high $\hat{s}$ values. These expectations are corroborated by analytical expressions for the $\hat{s}$-distributions that can be retrieved from ref.~\cite{Leonardi:2015qna}, involving only the product of the parton luminosity functions and the cross section of the hard production process.
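A minimal sketch of this per-event evaluation is given below (Python with NumPy; the toy $\hat{s}$ sample merely stands in for the values retrieved from the LHE files and is not our MC data):
\begin{verbatim}
import numpy as np

def unitarity_lhs(shat, M, Lam, g2=4*np.pi):
    # left-hand side of the CI unitarity bound, with g_*^2 = 4*pi
    return (g2**2 * shat * (2.0*shat + M**2)
            / (288.0 * np.pi**2 * Lam**4) * (1.0 - M**2/shat)**2)

def fraction_satisfying(shat_values, M, Lam):
    # fraction f of events obeying the perturbative unitarity bound
    return float(np.mean(unitarity_lhs(shat_values, M, Lam) <= 1.0))

# toy stand-in for the per-event shat = x1*x2*s (GeV^2), peaked near M^2
rng = np.random.default_rng(1)
M, Lam = 3000.0, 6000.0                     # GeV
shat_values = (M * (1.0 + rng.exponential(0.3, 10_000)))**2
print(fraction_satisfying(shat_values, M, Lam))
\end{verbatim}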
\begin{figure}[t!]
\includegraphics[width=0.48\textwidth]{new_Run2_100-95-50_v5.pdf}
\caption{
\label{figex2}
(Color online) The unitarity bound in the ($M,\Lambda$) plane compared with the Run 2 exclusion at 95\% CL from \protect\cite{Sirunyan:2017xnz}, dashed line (blue), for the $eeq\bar{q}'$ final state signature. The solid (violet) lines with decreasing thickness represent the unitarity bound for 100\%, 95\% and 50\% of the events satisfying Eq.~(\ref{UNI_CI}), respectively. The dot-dashed (gray) line stands for the $M= \Lambda$ condition. Here and in the following figures both $\Lambda$ and $M$ start at 100 GeV, and the dotted (black) curve corresponds to the theoretical unitarity bound (Eq.~(\ref{UNI_CI}) with $\hat{s}=s$).
}
\end{figure}
The results are presented in Figs.~\ref{figex2},~\ref{figex3} and~\ref{figex4} for the Run 2, HL-LHC and HE-LHC scenario respectively.
The regions below the solid (violet) lines in Figs.~\ref{figex2}-\ref{figex4} delimit the parameter space for which respectively 100\%, 95\% and 50\% of the events satisfy the unitarity bound, i.e. they define the regions where the model should not be trusted because unitarity is violated for such $(M, \Lambda)$ values. It is important to underline that the impact of the unitarity bound is strongly dependent on this fraction of events ($f$) that satisfy the condition in Eq.~(\ref{UNI_CI}). This conforms with the results in~\cite{Endo:2014mja}, at least in the $(\Lambda, M)$ region considered here.
Moreover, we observe that there is a value of the compositeness scale $\Lambda$, depending on the parton collision energy $\sqrt{\hat{s}}$ and the excited fermion mass $M$, above which the unitarity bound saturates. Indeed, one can estimate an upper bound for such a value from Eq.~(\ref{UNI_CI}) by setting the collision energy $\sqrt{\hat{s}}=\sqrt{s}$; it is represented by the dotted (black) line in Figs.~\ref{figex2}-\ref{figex5} for the corresponding nominal energies $\sqrt{s}=13, \, 14, \, 27$ TeV.
An approximate (maximal) value of $\Lambda \approx\sqrt{s/3}$, which saturates the unitarity bound, is obtained when $s \gg M^2 $.
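For illustration, the saturating value of $\Lambda$ can be evaluated directly from Eq.~(\ref{UNI_CI}) at $\hat{s}=s$; a sketch with the same conventions as above:
\begin{verbatim}
import numpy as np

def lambda_saturation(sqrt_s, M, g2=4*np.pi):
    # Lambda saturating the CI unitarity bound at shat = s
    # (the dotted curves in the figures)
    s = sqrt_s**2
    return (g2**2 * s * (2.0*s + M**2)
            * (1.0 - M**2/s)**2 / (288.0*np.pi**2))**0.25

print(lambda_saturation(13e3, 100.0))   # ~13 TeV/sqrt(3) ~ 7.5e3 GeV
\end{verbatim}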
\begin{figure}[t!]
\includegraphics[width=0.48\textwidth]{HL_100-95-50_v5.pdf}
\caption{\label{figex3} (Color online)
The unitarity bound in the plane $(M, \Lambda) $ for the three event fractions as in Fig.~\ref{figex2} compared with the exclusion from the High Luminosity projections study in \cite{CidVidal:2018eel} for LHC at $\sqrt{s}=14$ TeV at 3 ab$^{-1}$ of integrated luminosity.}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=0.48\textwidth]{HE_100-95-50_v5.pdf}
\caption{\label{figex4} (Color online)
The unitarity bound in the plane $(M, \Lambda) $ for the three event fractions as in Fig.~\ref{figex2} compared with the exclusion curve from the HE-LHC projection studies in \protect\cite{CidVidal:2018eel} for $\sqrt{s}=27$ TeV at 15 ab$^{-1}$ of integrated luminosity.}
\end{figure}
\begin{figure}[htb!]
\includegraphics[width=0.48\textwidth]{new_ExcitedLeptons_100-95-50_v5.pdf}
\caption{\label{figex5} (Color online)
The unitarity bound in the plane $(M, \Lambda) $ for the three event fractions as in Fig.~\ref{figex2} compared with the exclusion from the Run 2 for charged leptons searches with two different final states~\cite{Sirunyan:2018zzr,CMS-PAS-EXO-18-013}.}
\end{figure}
\section{Discussion and Results}
\begin{table*}[htb!]
\centering
\begin{tabular}{||c|c|c|c||}
\hline\hline & & &\\
& LHC Run 2 ($N^*$) & LHC Run 2 ($e^*$) & LHC Run 2 ($e^*$) \\
& 2.3 fb$^{-1}, \sqrt{s}=13$ TeV
& 35.9 fb$^{-1}, \sqrt{s}=13$ TeV
& 77.4 fb$^{-1}, \sqrt{s}=13$ TeV\\
& & &\\\hline& & &\\
$M =\Lambda$ & $M \le 4.6 $ TeV~\protect\cite{Sirunyan:2017xnz} & $M \le 4.0 $ TeV ~\protect\cite{ Sirunyan:2018zzr}& $M \le 5.5 $ TeV~\protect\cite{CMS-PAS-EXO-18-013}\\ & & &\\
Unitarity 100\% & $M \le 3.6$ TeV \,\,\,($\Lambda = 6.4 $ TeV) & $M \le 3.3$ TeV\,\,\, ($\Lambda = 6.5 $ TeV) &$M \le 4.9$ TeV \,\,\, ($\Lambda = 6.4 $ TeV) \\ & & &\\
Unitarity 50\% & $-$ & $M \le 4.9$ TeV\,\,\, ($\Lambda = 2.8$ TeV) &$M \le 7.0$ TeV \,\,\, ($\Lambda = 2.9 $ TeV) \\ & & &\\
\hline\hline
\end{tabular}
\caption{In the first line we quote the bounds reported in the CMS analysis of like-sign dileptons and diquarks for excited neutrinos \protect\cite{Sirunyan:2017xnz} and the bounds from CMS for two analyses for excited charged leptons~\cite{Sirunyan:2018zzr,CMS-PAS-EXO-18-013}. In the second (third) line, we quote instead the strongest mass bound obtained from Figs.~\ref{figex2} and \ref{figex5} when the perturbative unitarity bound with $f= 100\%$ (50\%) crosses the 95\% C.L. exclusion curve from the experimental studies. }
\label{tab:I}
\end{table*}
Let us elaborate on our findings and explain their impact on the experimental analyses carried out at the LHC. First of all, the experimental outcomes are summarized by exclusion regions in the $(M,\Lambda)$ plane, which are in turn set by the 95\% C.L. observed (Run 2)~\cite{Sirunyan:2017xnz} and expected (HL/HE-LHC)~\cite{CidVidal:2018eel,Collaborations:2651134} limits, namely the dashed blue lines in Figures~\ref{figex2}, \ref{figex3} and \ref{figex4} respectively. Above these lines the model is still viable, whereas below it is excluded. The experimental collaborations routinely quote the largest excluded excited-state mass by intersecting the 95\% C.L. exclusion curves with the $M \leq \Lambda$ constraint (dot-dashed gray line and gray shaded region in Figures~\ref{figex2}, \ref{figex3} and \ref{figex4}). This is the widely adopted condition imposed on the model validity and it originates from requiring the heavy excited states to be at most as heavy as the new physics scale $\Lambda$. Although it is a reasonable constraint, it does not take into account the typical energy scale that enters the production process, i.e.~$\sqrt{\hat{s}}$.
Comparing the unitarity bounds, represented by the solid violet lines in Figures~\ref{figex2}, \ref{figex3} and \ref{figex4}, with the usual prescription $M \leq \Lambda$, we can frame the interplay between the two. For small values of the heavy neutrino mass ($M$), the unitarity bound is more restrictive than $M \leq \Lambda$ and it shrinks the available parameter space quite considerably (at least for $f=100\%$ and $95\%$). On the other hand, for higher values of the mass the two become comparable. Their relative importance depends on both the collider nominal energy and the event fraction $f$.
If one applies the unitarity bound to the experimental results by following the same prescription as outlined before for $M \leq \Lambda$, then the maximal neutrino mass values are those collected in Table~\ref{tab:I}, second row. For example, for LHC Run 2, we find $M \le 3.6$ TeV for $\Lambda= 6.4 $ TeV instead of $M \le 4.6$ TeV ($\Lambda=4.6$ TeV), when the unitarity bound is required to be satisfied by 100\% of the events. As anticipated, the new constraint set by the unitarity bound offers an alternative theoretical input for ongoing and future experimental analyses of this effective composite model. We provide the unitarity bound down to $M=100$ GeV. Smaller values clash with the original model setting, which assumes some new physics above the electroweak scale triggering the fermion excitations~\cite{Baur:1989kv,Cabibbo:1983bk}.
While in this study we have concentrated on the impact of unitarity bounds on heavy composite neutrino production at the LHC, HL-LHC and HE-LHC, we expect that similar bounds will affect the searches for charged excited leptons. Leaving more detailed studies for future work, we apply the unitarity bound in Eq.~(\ref{UNI_CI}) to the recent experimental results reported in~\cite{Sirunyan:2018zzr,CMS-PAS-EXO-18-013}, where charged excited leptons are produced via CI in the process $pp \to \ell^* \ell$, with $\ell^*=e^*,\mu^*$. Since the effective Lagrangian for the production mechanism is the very same as for excited Majorana neutrinos (if one insists on $\eta=1$ for the different states), we can use the unitarity bound as extracted for $\sqrt{s}=13$ TeV and apply it to the process involving charged excited leptons. The comparison with the experimental exclusion limit at 95\% C.L. is shown in Figure~\ref{figex5}.
As anticipated, the comparison with the observed and expected limits produces different mass reaches. For the case of LHC Run 2 searches recently performed in CMS, we quote the corresponding mass values in Table~\ref{tab:I} for the searches of excited charged leptons. As for the analyses on the excited charged states, we can exploit the data as provided in the region $M>\Lambda$ and inspect the interplay with the unitarity bound for $f=100\%,95\%,50\%$. On the other hand, we cannot provide as many mass values for the excited neutrino searches, due to the lack of experimental data in the same region $M>\Lambda$. When $f=100\%$ is considered, the largest excited lepton mass reads 3.3 TeV (4.9 TeV) instead of 4.0 TeV (5.5 TeV) for the $ee\gamma$ ($eeq\bar{q}'$) final state.
\emph{In conclusion}, we studied the perturbative unitarity bound extracted from the effective gauge and contact Lagrangians for a composite-fermion model. On general grounds, an effective theory is valid up to energy/momentum scales smaller than the large energy scale that sets the operator expansion. Since collider experiments involve increasingly energetic particle collisions, the applicability of effective operators can be questioned. In order to address this issue and to be on the safe side, one can impose the unitarity condition on both the EFT parameters $(M, \Lambda, g_*, g)$ and the energy involved in a given process. To the best of our knowledge, such a constraint had not been derived for the model Lagrangians in Eqs.~(\ref{CI_Lag_LH}) and (\ref{GI_Lag_LH}), and we have obtained the corresponding unitarity bounds, namely Eqs.~(\ref{UNI_CI}) and (\ref{UNI_GI}). Thus, the applicability of the effective operators describing the production of composite neutrinos (and other excited states) has to be restricted accordingly. We have considered an estimate of the theoretical error on the unitarity bound, which is derived at leading order in the EFT expansion, by allowing up to 50\% of the events to evade the constraint. This gives rise to the violet lines in Figures~\ref{figex2}-\ref{figex5}.
On the basis of these results, further investigations may be devoted to a better understanding of the theoretical error. Indeed, the strong dependence of the unitarity bound on the fraction of events $f$, especially in the low-mass region, perhaps calls for an estimate of possible higher-order terms in the effective theory expansion (dimension-7 operators for contact interactions). In doing so, one could pin down a particular choice of $f$ in a more rigorous way.
Nonetheless, it is the authors' opinion that the findings discussed here will have a significant impact on ongoing and future experimental searches for excited states coupling to the SM fermions with the interactions given in (\ref{CI_Lag_LH}) and (\ref{GI_Lag_LH}). At the very least, the unitarity bounds play the role of a collider-driven theoretical tool for interpreting the experimental results of the considered composite models, one that is more rigorous than the simple relation $M \leq \Lambda$.
\section*{Acknowledgments}
The authors thank P.~Azzi and F.~Romeo for useful comments and discussion on the manuscript. We are especially thankful to Dr. Oleg Zenin and Dr. Andrey Kamenshchikov (ATLAS Collaboration) for pointing out an inconsistency between our previous numerical results and the theoretical formula of the unitarity bound, and for crosschecking some of our revised numerical results.
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{intro}
In image analysis, ridges and valleys are curves along which the image
is brighter or darker, respectively, than the local background
\cite{eberly1994ridge}. Collectively, ridges and valleys are referred
to as creases. Reliable detection and localization of creases in noisy
images is an important and well-studied problem in medical image
analysis, one very common application being the detection of blood
vessels \cite{frangi1998multiscale}.
Often, ridges and valleys occur at multiple scales, i.e., their
cross-sectional radius varies throughout the image. For example, the
stem of a vessel tree is thicker than its branches. Gaussian scale
spaces are a classic strategy for extracting creases at different
scales \cite{lindeberg1998edge}. However, the fact that Gaussian
filters do not offer any specific mechanisms for preserving creases
gave rise to image filters such as coherence enhancing diffusion
\cite{weickert1998anisotropic}, crease enhancement diffusion (CED)
\cite{sole2001crease}, and vesselness enhancement diffusion (VED)
\cite{canero2003vesselness}. They are based on second order
anisotropic diffusion equations with a diffusion tensor that, in the
presence of crease lines, smoothes only along, but not across them. In
addition, the VED filter includes a multi-scale analysis that
automatically adapts it to the local scale of creases.
In this work, we argue that using fourth-order instead of second-order
diffusion to enhance creases allows for a more accurate localization
of their centerlines. We propose a novel fourth-order filter that
introduces a fourth-order diffusion tensor to specifically enhance
ridges, valleys, or both, in a scale-adaptive manner. Increased
accuracy of the final segmentation is demonstrated on simulated and
real-world medical images.
\section{Related Work}
\label{sec:related-work}
Diffusion-based image filters treat image intensities as an initial
heat distribution $u_{t=0}$, and solve the heat equation
$\partial_t u=\mathrm{div} (g\,\nabla_{\mathbf{x}} u)$ for larger
values of an artificial time parameter $t$, corresponding to
increasingly smoothed versions of the image. If the diffusivity
function $g$ is constant, the diffusion is linear and uniformly
smoothes image $u$. If $g=1$, the solution at time
$t$ can be obtained as the convolution
$u*G_{\sigma}$ with a Gaussian kernel
$G_{\sigma}$ with standard deviation $\sigma=\sqrt{2t}$
\cite{weickert1998anisotropic}.
Since linear diffusion fails to preserve important image structures,
Perona and Malik \cite{perona1990scale} introduced the idea of using
\emph{nonlinear} diffusion equations. By making the scalar diffusivity
$g$ a function of the spatial gradient magnitude
$\|\nabla_{\mathbf{x}} u\|$, they reduce the amount of smoothing near
image edges, and thus preserve edges. One such diffusivity function is
\begin{equation}
\label{eq:perona-malik}
g\left(\left\|\nabla_{\mathbf{x}} u\right\|^{2}\right)=\frac{1}{1+\frac{\left\|\nabla_{\mathbf{x}} u\right\|^{2}}{\lambda^{2}}} \text{ ,}
\end{equation}
where $\lambda$ is called the contrast parameter, and determines the
minimum strength of edges that should be preserved
\cite{perona1990scale}.
\begin{figure}
\centering
\begin{tabular}{@{}c@{ }c@{ }c@{}}
\includegraphics[scale=0.081]{Images/pepper-noisy} &\includegraphics[scale=0.081]{Images/pepper-second-order-filter}&\includegraphics[scale=0.081]{Images/pepper-fourth-order-filter}\tabularnewline
\shortstack{Input \\ \quad \\ \quad} & \shortstack{Second-order \\ diffusion} & \shortstack{Fourth-order \\ diffusion}\tabularnewline
\end{tabular}\caption{On smoothly shaded surfaces, second-order Perona-Malik diffusion creates a staircasing artifact that is avoided by fourth-order diffusion. \label{fig:staircase-artifact}}
\end{figure}
Perona-Malik diffusion turns smoothly shaded surfaces into piecewise
constant profiles, an effect that is often referred to as a
staircasing artifact, and that can be seen in the central ``Pepper''
image in Figure~\ref{fig:staircase-artifact}. To avoid this effect,
higher-order diffusion replaces the two first-order spatial
derivatives in the heat equation with second-order derivatives. More
recently, higher-order PDEs were also generalized to implicit surfaces
\cite{greer2006fourth} and image colorization \cite{peter2016turning}.
In a one-dimensional setting, discrete variants of higher order data
regularization can be traced back to a 1922 article by Whittaker
\cite{Whi22}. A first approach for higher order regularization in
image processing involving the absolute value of all second order
partial derivatives has been proposed by Scherzer \cite{Sch98}. The
resulting method has the drawback of not being rotationally invariant.
An extension of classical regularization by choosing $(\Delta u)^2$ as
argument of the penalizer in the smoothness term has been proposed by
You and Kaveh \cite{YK00}. Their method introduces speckle artifacts
around edges that require some post-processing. Both problems can be
solved by using the squared Frobenius norm of the Hessian matrix
$\| H(u) \|^2_F$ as argument of the penalizer. This has been proposed
by Lysaker et al.\ \cite{lysaker2003noise}.
Two very similar higher order methods based on evolution equations without underlying variational formulation have been introduced by Tumblin and Turk \cite{TT99} and Wei \cite{Wei99}. They use fourth-order evolution equations of the form
\begin{equation}
\partial_t u \; = \; - \mbox{div} \Big( g(m) \nabla \Delta u \Big) \text{,}
\end{equation}
where $m$ is the gradient norm \cite{TT99} or the Frobenius norm of the Hessian \cite{Wei99}.
Didas et al.\ \cite{didas2009properties} have generalized higher order
regularization methods and the corresponding partial differential
equations to arbitrary derivative orders. They have
shown that, when combined with specific diffusivity functions,
\emph{fourth-order} equations can enhance image curvature, analogously to
how, by careful use of forward and backward diffusion, second-order
equations can enhance, rather than just preserve, image edges
\cite{weickert1998anisotropic}.
\begin{figure}
\centering\includegraphics[scale=0.26]{Images/1d-piecewise-constant-solution-for-gaussian}\protect\caption{
Second-order diffusion filter and fourth-order diffusion filter
applied on a 1-D input signal. Both filters use the Perona-Malik
diffusivity function.\label{fig:second-vs-fourth-order-diffusion}}
\end{figure}
Figure~\ref{fig:second-vs-fourth-order-diffusion} shows a simple
one-dimensional example with the Perona-Malik diffusivity function
(\ref{eq:perona-malik}). The edge enhancement of nonlinear
second-order diffusion, which leads to a piecewise constant result,
and the curvature enhancement of nonlinear fourth-order diffusion,
which leads to a piecewise linear result, are clearly
visible. Obviously, localizing the maximum will be much easier and
more reliable in case of the sharp peak created by fourth-order
diffusion than in the extended plateau that results from second-order
diffusion.
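To make this behaviour concrete, the following minimal 1-D sketch (Python with NumPy; parameter values are illustrative assumptions, not those of our experiments) evolves a smooth bump under nonlinear fourth-order diffusion with the diffusivity from Equation~(\ref{eq:perona-malik}) applied to $u_{xx}^2$:
\begin{verbatim}
import numpy as np

def pm_diffusivity(s2, lam):
    # Perona-Malik diffusivity evaluated on a squared argument s2
    return 1.0 / (1.0 + s2 / lam**2)

def fourth_order_1d(u, lam, tau=0.05, steps=2000):
    # explicit scheme for du/dt = -d_xx( g(u_xx^2) * u_xx ); stencils
    # are applied on interior points only ("natural" boundaries)
    u = u.astype(float).copy()
    for _ in range(steps):
        uxx = np.zeros_like(u)
        uxx[1:-1] = u[:-2] - 2.0*u[1:-1] + u[2:]
        flux = pm_diffusivity(uxx**2, lam) * uxx
        lap = np.zeros_like(u)
        lap[1:-1] = flux[:-2] - 2.0*flux[1:-1] + flux[2:]
        u -= tau * lap
    return u

# usage: with lam below the peak curvature, the maximum of the bump
# is sharpened (backward diffusion) instead of being flattened
x = np.linspace(-3.0, 3.0, 256)
u0 = np.exp(-x**2)
u = fourth_order_1d(u0, lam=0.5 * np.abs(np.diff(u0, 2)).max())
\end{verbatim}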
It is this curvature-enhancing property of fourth-order diffusion that
we exploit in our novel filter. We combine it with the idea of
\emph{anisotropic} diffusion, which was introduced to image processing
by Weickert \cite{weickert1998anisotropic} to address another
limitation of the Perona-Malik model. Namely, a consequence of
preserving edges by locally reducing the amount of smoothing is that
the neighborhoods of edges remain noisy. Anisotropic diffusion
$\partial_t u=\mathrm{div} (D\,\nabla_{\mathbf{x}} u)$ replaces the
scalar diffusivity $g$ by a second-order diffusion tensor $D$, which
makes it possible to reduce smoothing orthogonal to, but not along
image features, and therefore to denoise edges more effectively than
isotropic nonlinear diffusion, while still avoiding to destroy them.
We generalize this approach to a novel anisotropic fourth-order
diffusion equation that smoothes along the crease, while creating a
sharp peak in the orthogonal direction, to clearly indicate its
center. We are aware of only one previous formulation of anisotropic
fourth order diffusion, proposed by Hajiaboli
\cite{hajiaboli2011anisotropic}. However, it has been designed to
preserve edges, rather than enhance creases. Consequently, it is not
well-suited for our purposes, as we will demonstrate in the
results. Moreover, it differs from our approach in that it does not
make use of a fourth-order diffusion tensor, and includes no mechanism
for scale selection.
Despite the long history of research in this area, improved filtering
and detection of ridges continues to be an active topic in medical
image analysis. Our work on improving localization through
fourth-order diffusion complements recent advances. For example, the
SCIRD ridge detector by Annunziata et al.\ \cite{annunziata2015scale},
or the vesselness measure by Jerman et al.\
\cite{jerman2016enhancement} that gives better responses for vessels
of varying contrasts, could replace the vessel segmentation by Frangi
et al.\ \cite{frangi1998multiscale} that we use as a prefiltering
step. Several recent works
\cite{franken2009crossing,hannink2014crossing,scharr2012short,stuke2004analysing}
have addressed diffusion in crossings and bifurcations, and could be
combined with our work to improve the performance of our filter in
such cases.
\section{Method}
\subsection{Anisotropic Fourth-order Diffusion}
Building on work of Lysaker et al.\ \cite{lysaker2003noise}, Didas et
al.\ \cite{didas2009properties} formulate nonlinear fourth-order
diffusion as
\begin{equation}
\label{eq:nonlinear-fourth-order}
\begin{aligned}
\partial_t u = &-\partial_{xx}(g(\|H(u)\|_F^2)u_{xx})
-\partial_{yx}(g(\|H(u)\|_F^2)u_{xy})\\
&-\partial_{xy}(g(\|H(u)\|_F^2)u_{yx})
-\partial_{yy}(g(\|H(u)\|_F^2)u_{yy})\text{,}
\end{aligned}
\end{equation}
where $\|H(u)\|_F^2$ is the Frobenius norm of the Hessian matrix of image $u$, and
$u_{xy}=\partial_{xy} u$.
We propose the following novel \emph{anisotropic fourth-order} diffusion
model, which combines the ideas of higher-order diffusion with that of
making diffusivity a function of both spatial location and direction:
\begin{equation}
\label{eq:ts-fourth-order-diffusion}
\begin{aligned}
\partial_{t}u = &-\partial_{xx}\left[\mathcal{D}(H_{\rho}(u_{\sigma})):H(u)\right]_{xx}\\
&-\partial_{yx}\left[\mathcal{D}(H_{\rho}(u_{\sigma})):H(u)\right]_{xy}\\
&-\partial_{xy}\left[\mathcal{D}(H_{\rho}(u_{\sigma})):H(u)\right]_{yx}\\
&-\partial_{yy}\left[\mathcal{D}(H_{\rho}(u_{\sigma})):H(u)\right]_{yy}\text{.}
\end{aligned}
\end{equation}
Equation~(\ref{eq:ts-fourth-order-diffusion}) introduces a general
linear map $\mathcal{D}$ from the Hessian matrix $H$ to a transformed
matrix. Linear maps from matrices to matrices are naturally written as
fourth-order tensors, and we use the ``double dot product''
$\mathcal{D}:H$ as a shorthand for applying the map $\mathcal{D}$ to
the Hessian matrix $H$. This results in a transformed matrix $T$, and
we use square brackets $[T]_{ij}$ to denote its $(i,j)$th
component. Formally,
\begin{equation}
\label{eq:double-contraction}
\begin{aligned}
\left[T\right]_{ij} & = \left[
\mathcal{D}(H_{\rho}(u_{\sigma})):H(u)\right]_{ij}\\ & = \sum_{k=1}^2\sum_{l=1}^2\left[\mathcal{D}(H_{\rho}(u_{\sigma}))\right]_{ijkl}\left[H(u)\right]_{kl}.
\end{aligned}
\end{equation}
In this notation, we can define second-order eigentensors $E$ of
$\mathcal{D}$ corresponding to eigenvalue $\mu$ by the equation
$\mathcal{D}:E=\mu E$. An alternative notation, which will be used for
the numerical implementation in Section~\ref{sec:l2-stability}, writes
the Hessian and transformed matrices as vectors. This turns
$\mathcal{D}$ into a matrix whose eigenvectors are nothing but the
vectorized eigentensors as defined above. Similar to others
\cite{Basser:2007,Kindlmann:2007}, we find the fourth-order tensor and
``double dot'' notation more appealing for reasoning at a higher
level, because it allows us to preserve the natural structure of the
involved matrices.
Using our square bracket notation, an equivalent way of writing one of
the terms from Equation~(\ref{eq:nonlinear-fourth-order}),
$\partial_{ji}(g(\|H(u)\|_F^2)u_{ij})$, is
$\partial_{ji}\left[g(\|H(u)\|_F^2)H(u)\right]_{ij}$. Thus, the
difference between the model from
Equation~(\ref{eq:nonlinear-fourth-order}) and our new one in
Equation~(\ref{eq:ts-fourth-order-diffusion}) is to replace the
isotropic scaling of Hessian matrices using a scalar diffusivity
$g$, with a general linear transformation $\mathcal{D}$, which acts on
the second-order Hessian in analogy to how the established
second-order diffusion tensor acts on gradients in second-order
anisotropic diffusion. Due to this analogy, we call $\mathcal{D}$ a
fourth-order diffusion tensor.
In our filter, $\mathcal{D}$ is a function of the local normalized
Hessians, which are defined as
\begin{equation}
\label{eq:local_normalized_hessian}
H_{\rho}(u_{\sigma})=G_{\rho}\ast\left(\frac{1}{\sqrt{1+\left\Vert \nabla u_{\sigma}\right\Vert }}H\left(u_{\sigma}\right)\right)\text{, }
\end{equation}
where regularized derivatives are obtained by convolution with a
Gaussian kernel, $u_{\sigma}:=u\ast G_{\sigma}$. Its width $\sigma$
should reflect the scale of the crease, as will be discussed in
Section~\ref{sec:scale-selection}. Since scale selection might
introduce spatial discontinuities in the chosen $\sigma$, the
normalized Hessians are made differentiable by integrating them over a
neighborhood, for which we use a Gaussian width $\rho=0.5$ in our
experiments. As shown in \cite{haralick:1983}, and used for
vesselness enhancement diffusion in \cite{canero2003vesselness}, the
inverse gradient magnitude factor is used to make the eigenvalues of
$H_{\rho}(u_{\sigma})$ match the surface curvature values.
We emphasize that, unlike in a previous generalization of structure
tensors to higher order \cite{schultz2009higher}, the reason for going
to higher tensor order in
Equation~(\ref{eq:ts-fourth-order-diffusion}) is not to preserve
information at crossings; this is a separate issue that was recently
addressed by others \cite{hannink2014crossing}, and that we plan to
tackle in our own future work. In our present work, our goal is to
smooth along ridges and valleys, while sharpening them in the
orthogonal direction. This sharpening requires the curvature-enhancing
properties of fourth-order diffusion, and a fourth-order diffusion
tensor is a natural consequence of making fourth-order diffusion
anisotropic.
\subsection{Fourth-order Diffusion Tensor $\mathcal{D}$}
\label{sec:fourth-order-diff-tensor-D}
We now need to construct our fourth-order diffusion tensor
$\mathcal{D}$ so that it will smooth along creases, while enhancing
them in the perpendicular direction. Similar to Weickert's diffusion
tensors \cite{weickert1998anisotropic}, we will construct
$\mathcal{D}$ in terms of its eigentensors $E_i$ and corresponding
eigenvalues $\mu_i$, as defined above.
Didas et al.\ \cite{didas2009properties} have shown that fourth-order
diffusion with the Perona-Malik diffusivity \cite{perona1990scale}
allows for adaptive smoothing or sharpening of image curvature,
depending on a contrast parameter $\lambda$. In particular, in the 1-D
case, only forward diffusion (i.e., smoothing) happens in regions with
$|\partial_{xx}u|<\lambda$, while only backward diffusion (i.e.,
curvature enhancement) occurs where
$|\partial_{xx}u|>\sqrt{3}\lambda$. We wish to exploit this to enhance
creases whose curvature is strong enough to begin with, while
smoothing out less significant image features.
This is achieved by deriving the eigenvalues $\mu_{i}$ of
$\mathcal{D}$ from the eigenvalues $\nu_{1},\nu_{2}$ of the normalized
Hessian $H_{\rho}(u_{\sigma})$ using the Perona-Malik diffusivity
\cite{perona1990scale}, i.e.,
\begin{equation}
\mu_{i}=\frac{1}{1+\nu_{i}^{2}/\lambda^{2}} \text{, for }i\in\{1,2\}\label{eq:eigenvalues-in-2d}\text{.}
\end{equation}
If the user wishes to specifically enhance either ridges or valleys,
the sign of $\nu_{i}$ could be taken into account. For instance, a
ridge-like behaviour in the $i$th direction is characterized by
$\nu_{i}<0$. Therefore, we can decide to smooth out valleys by setting
$\mu_{i}=1$ wherever $\nu_{i}\geq 0$, and enhance ridges wherever
$\nu_{i}<0$ by defining $\mu_{i}$ as before. Enhancing only valleys
can be done in full analogy. In our experiments on synthetic data, we
found that, in terms of the $\ell_{2}$ difference between the ground
truth and the filtered image, better results were obtained when
enhancing both ridges and valleys. This is the setting used in all our
experiments.
The ridge and valley directions can be found from the eigenvectors
$e_{1},e_{2}$ of the normalized Hessian matrix $H_{\rho}(u_{\sigma})$, and are reflected
in the eigentensors $E_{i}$ of $\mathcal{D}$ by setting
\begin{equation}
\begin{array}{cccc}
E_{1}= & e_{1}\otimes e_{1}\qquad\quad & E_{3}= & \frac{1}{\sqrt{2}}(e_{1}\otimes e_{2}+e_{2}\otimes e_{1})\\
E_{2}= & e_{2}\otimes e_{2}\qquad\quad & E_{4}= & \frac{1}{\sqrt{2}}(e_{1}\otimes e_{2}-e_{2}\otimes e_{1})
\end{array}\text{ .}\label{eq:eigentensors-in-2d}
\end{equation}
The $E_i$ are orthonormal with respect to the tensor dot product
$A:B:=\mathrm{tr}(B^\mathrm{T}A)$. By definition, $E_4$ is
antisymmetric. Since Hessians of smooth functions are symmetric, the
value of $\mu_{4}$ does not play a role, and is simply set to zero. We
define $\mu_{3}$ as the average of $\mu_{1}$ and $\mu_{2}$.
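To illustrate this construction, the following minimal per-pixel sketch assembles $\mathcal{D}$ in the vectorized $4\times4$ matrix notation of Section~\ref{sec:l2-stability} (Python with NumPy; the function name is ours):
\begin{verbatim}
import numpy as np

def diffusion_tensor_2d(H_rho, lam):
    # 4x4 matrix form of the fourth-order diffusion tensor at one
    # pixel; H_rho is the 2x2 regularized, normalized Hessian
    nu, e = np.linalg.eigh(H_rho)        # eigenvalues nu_i, vectors e_i
    mu = 1.0 / (1.0 + (nu / lam)**2)     # Perona-Malik mapping of nu_i
    e1, e2 = e[:, 0], e[:, 1]
    E1 = np.outer(e1, e1)                # orthonormal eigentensors
    E2 = np.outer(e2, e2)
    E3 = (np.outer(e1, e2) + np.outer(e2, e1)) / np.sqrt(2.0)
    E4 = (np.outer(e1, e2) - np.outer(e2, e1)) / np.sqrt(2.0)
    E = np.stack([Ek.ravel() for Ek in (E1, E2, E3, E4)], axis=1)
    M = np.diag([mu[0], mu[1], 0.5*(mu[0] + mu[1]), 0.0])  # mu_4 = 0
    return E @ M @ E.T                   # D_i = E M E^T
\end{verbatim}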
\subsection{Scale Selection}
\label{sec:scale-selection}
In the previous sections, crease orientation was estimated using the
eigenvectors of the regularized and normalized Hessian in
Equation~(\ref{eq:local_normalized_hessian}). As in previous
approaches such as vesselness enhancement diffusion (VED)
\cite{canero2003vesselness}, this involves a regularization parameter
$\sigma$ that should be adapted to the local radius of the
crease. Setting this parameter is referred to as scale selection.
The vesselness measure $\mathcal{V}_{\sigma}$ by Frangi et
al. \cite{frangi1998multiscale} is maximal at the scale $\sigma$ that
matches the corresponding vessel size, and has been widely used for
detecting the local radius of vessel like
structures. $\mathcal{V}_{\sigma}$ is obtained from sorted and
scale-normalized eigenvalues $|\tilde\nu_{1}|\leq|\tilde\nu_{2}|$,
computed as $\tilde\nu_i:=\sigma^2 \nu_i$ from eigenvalues $\nu_i$ of
the Hessian $H(u_{\sigma})$ at a given scale $\sigma$. The factor
$\sigma^2$ compensates for the loss of contrast at larger scales
\cite{lindeberg1998edge}.
A vesselness measure $\mathcal{V}_{\sigma}$ should be low in background regions, where the overall curvature, and thus $\mathcal{S}=\sqrt{\tilde\nu_{1}^{2}+\tilde\nu_{2}^{2}}$, is low. Moreover, it should detect tubular structures, where
$|\tilde\nu_{1}|\ll|\tilde\nu_{2}|$, as opposed to blobs, in which
$\mathcal{R_{B}=}\frac{\tilde\nu_{1}}{\tilde\nu_{2}}$ would be
large. For ridges ($\tilde\nu_2<0$), Frangi et al.\ achieve this by
combining $\mathcal{S}$ and $\mathcal{R_{B}}$ according to
\begin{equation}
\mathcal{V}_{\sigma}u=\begin{cases}
0 & \text{\quad if }\quad\tilde\nu_{2}>0\\
\left(\text{e}^{-\frac{\mathcal{R_{B}^{\mathrm{2}}}}{2\beta^{2}}}\right)\left(1-\text{e}^{-\frac{\mathcal{S}^{2}}{2c^{2}}}\right) & \text{\quad otherwise}
\end{cases}\text{ ,}\label{eq:vesselness-measure}
\end{equation}
where the $\beta$ and $c$ parameters tune $\mathcal{V}_{\sigma}u$ to
be more specific with respect to suppression of blob shapes or
background structures, respectively. We use $\beta=0.5$ and
$c=\frac{1}{2}(\text{max}(\mathcal{S}))$, as recommended in \cite{frangi1998multiscale}.
The scale for each pixel is selected as the $\sigma$ for which the
maximum
$\mathcal{V}u=\max_{\sigma=\sigma_{\text{min}},...,\sigma_{\text{max}}}\mbox{\ensuremath{\mathcal{V}_{\sigma}}\ensuremath{u}}$
is attained, where $\{\sigma_{\text{min}},...,\sigma_{\text{max}}\}$
are the range of expected scales in the image. For pixels that are
part of the background, $\mathcal{V}u$ is low, and it can be
thresholded by parameter $\theta\in\left[0,1\right]$ for vessel
segmentation. This segmentation indicates the extent of vessels, and
is used for our scale-image postprocessing, as described below.
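A compact sketch of this multi-scale selection for bright ridges ($\tilde\nu_{2}<0$) is given below (Python with SciPy; the $(y,x)$ axis convention is an assumption of this sketch):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def frangi_scale_selection(u, sigmas, beta=0.5):
    # per-pixel maximal vesselness and the scale attaining it
    best_v = np.zeros(u.shape)
    best_sigma = np.full(u.shape, sigmas[0], dtype=float)
    for sigma in sigmas:
        # scale-normalized Hessian entries (factor sigma^2)
        Hyy = sigma**2 * gaussian_filter(u, sigma, order=(2, 0))
        Hxx = sigma**2 * gaussian_filter(u, sigma, order=(0, 2))
        Hxy = sigma**2 * gaussian_filter(u, sigma, order=(1, 1))
        half = np.sqrt(0.25*(Hxx - Hyy)**2 + Hxy**2)
        mean = 0.5*(Hxx + Hyy)
        lo, hi = mean - half, mean + half     # Hessian eigenvalues
        swap = np.abs(lo) > np.abs(hi)        # sort by magnitude
        nu1 = np.where(swap, hi, lo)          # |nu1| <= |nu2|
        nu2 = np.where(swap, lo, hi)
        S2 = nu1**2 + nu2**2
        Rb2 = (nu1 / (nu2 + 1e-12))**2
        c = 0.5 * np.sqrt(S2.max()) + 1e-12   # c from max(S)
        v = np.exp(-Rb2/(2.0*beta**2)) * (1.0 - np.exp(-S2/(2.0*c**2)))
        v[nu2 > 0] = 0.0                      # ridges require nu2 < 0
        update = v > best_v
        best_sigma[update] = sigma
        best_v[update] = v[update]
    # thresholding best_v by theta in [0,1] yields the segmentation
    return best_v, best_sigma
\end{verbatim}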
It has been observed previously \cite{descoteaux2008geometric} that
vesselness often fails to correctly estimate the scale of the vessel
along its boundary. This can happen in two cases: When the ridge has a
step-like shape, the curvature near the corner points will be much
larger at the finest scale than at all other scales, leading to an
underestimation of the real scale near the boundary.
On the other hand, the cross-sectional intensity profile of vessels
may have inflection points near its edges, where $\tilde\nu_2$ changes
its sign. In this case, some points near the boundary will have zero
vesselness at the finest scale, but the sign of $\tilde\nu_2$ will flip,
and therefore vesselness becomes non-zero, at coarser scales, leading
to an overestimation of scale. Figure \ref{fig:scale-image} shows both scale under- and overestimation at vessel boundaries.
\begin{figure}
\centering%
\begin{tabular}{c@{ }c@{ }c@{}c}
Input & Scale Image & \shortstack{Post Processed} & \tabularnewline
\includegraphics[scale=0.084]{Images/branch-input} & \includegraphics[scale=0.084]{Images/branch-scale-pre} & \includegraphics[scale=0.084]{Images/branch-scale-post} & \includegraphics[scale=0.147]{Images/scales}\tabularnewline
\end{tabular}\caption{The color of each pixel in the scale image illustrates the scale of the underlying vessel. Artifacts near the boundaries are removed in a postprocessing step.
\label{fig:scale-image}}
\end{figure}
While such effects are less problematic for the VED filter, which uses
the same vesselness measure for scale selection, it can lead to
serious artifacts in our filter, where misestimating the scales at
boundaries can cause the curvature-enhancing diffusion to enhance the
boundary of large-scale ridges more than their center.
We avoid such boundary effects by introducing a novel
postprocessing of the computed scales. For each pixel on a vessel,
the vessel cross-section containing that pixel is extracted by
following the eigenvector direction that corresponds to the strongest
eigenvalue of the Hessian matrix computed at the scale suggested by
the vesselness measures at each point. Then, all pixels are assigned
the scale closest to the average of all pixels that lie on the same
cross-section. This removes the problem of scale over- or
underestimation on the boundaries. Figure \ref{fig:scale-image} shows
the scale image before and after postprocessing.
\subsection{Stability}
\label{sec:l2-stability}
In order to solve Equation~(\ref{eq:ts-fourth-order-diffusion}), we
discretize it with standard finite differences, and use an explicit
numerical scheme. In matrix-vector notation, this can be written as
\begin{equation}
u^{k+1}=u^{k}-\tau Pu^{k}=(I-\tau P)u^{k}\text{ ,} \label{eq:explicit-scheme}
\end{equation}
where $u^{k}\in\mathbb{R}^{m}$ is the vectorized image at iteration
$k$, and the exact form of matrix $P\in\mathbb{R}^{m\times m}$ will be
discussed later. We call a numerical scheme $\ell_{2}$ stable if
\begin{equation}
\left\Vert u^{k+1}\right\Vert _{2}\leq\left\Vert u^{k}\right\Vert _{2}\text{ ,}\label{eq:stability-condition}
\end{equation}
i.e., the $\ell_{2}$ norm of the image is guaranteed not to increase
from iteration $k$ to $k+1$. It follows from
Equation~(\ref{eq:explicit-scheme}) that
\begin{equation}
\left\Vert u^{k+1}\right\Vert _{2}\leq\left\Vert I-\tau P\right\Vert _{2}\cdot\left\Vert u^{k}\right\Vert _{2}\text{ ,}\label{eq:stability-condition-for-our-scheme}
\end{equation}
where $\left\Vert P\right\Vert _{2}$ denotes the $\ell_2$ (spectral) norm of $P$,
i.e., $\left\Vert P\right\Vert _{2}:=\sqrt{\rho(P^{T}P)}$, where
$\rho(P^T P)$ is the spectral radius, i.e., the largest modulus among
the eigenvalues of the symmetric matrix $P^T P$.
Consequently, the condition in
Equation~(\ref{eq:stability-condition-for-our-scheme}) is satisfied if
\begin{equation}
\label{eq:stability-in-terms-of-i-tau-p}
\left\Vert I-\tau P\right\Vert _{2}\leq 1\text{ .}
\end{equation}
Since $P$ is positive semi-definite, the eigenvalues of $I-\tau P$ are
within the interval
$\left[1-\tau\left\Vert P\right\Vert _{2},1\right]$. Thus,
Equation~(\ref{eq:stability-in-terms-of-i-tau-p}) is satisfied if
$1-\tau\left\Vert P\right\Vert _{2}\geq -1$. This results in the
following constraint on the permissible time step size $\tau$:
\begin{equation}
\label{eq:timestep-bound}
\tau \leq \frac{2}{\left\Vert P\right\Vert_{2}}\text{ .}
\end{equation}
\begin{figure}
\centering%
\begin{tabular}{cc}
\includegraphics[scale=0.082]{Images/filteredBranch/u-0} & \includegraphics[scale=0.082]{Images/filteredBranch/u-4}\tabularnewline
$k=0$, $t=0$ & $k=4$, $t=20$\tabularnewline
\includegraphics[scale=0.082]{Images/filteredBranch/u-8} & \includegraphics[scale=0.082]{Images/filteredBranch/u-12}\tabularnewline
$k=8$, $t=40$ & $k=12$, $t=60$\tabularnewline
\end{tabular}
\caption{MAFOD filter applied on the noisy synthesized branch image at different FED cycles $k$. Value $t$ represents the overall diffusion time at each cycle $k$.\label{fig:MAFOD-over-different-cycles}}
\end{figure}
This clarifies that the restriction on the time step size only depends
on $\left\Vert P\right\Vert_{2}$. To compute it, we will now write
down the system matrix $P$ for our discretization of fourth-order
anisotropic diffusion filtering.
Let $L_{xx}$, $L_{xy}$, $L_{yx}$, $L_{yy}$ be matrices approximating
the corresponding derivatives. For ``natural'' boundary condition, it
is important only to approximate the derivatives at pixels $i$ where
the whole stencil fits in the image domain, i.e., where enough data is
available. Let us combine these four matrices pixelwise into one big
matrix $L$ such that
\begin{equation}
\label{eq:deriv-matrix}
Lu\approx\left(\begin{array}{c}
\vdots\\
\left[L_{xx}u\right]_{i}\\
\left[L_{xy}u\right]_{i}\\
\left[L_{yx}u\right]_{i}\\
\left[L_{yy}u\right]_{i}\\
\vdots
\end{array}\right)\text{,}
\end{equation}
i.e., the approximations of the four derivatives will be next to each
other for every pixel $i$.
The $4\times4$ matrix form of the fourth-order diffusion tensor in
pixel $i$, acting on
\begin{equation*}
\left(\left[L_{xx}u\right]_{i}\text{ }\left[L_{xy}u\right]_{i} \text{ }\left[L_{yx}u\right]_{i}\text{
}\left[L_{yy}u\right]_{i}\right)^{T},
\end{equation*}
can be written as $D_i=E M E^{T}$, where $E$ is an orthogonal matrix
containing the vectorized $E_{1}$, $E_{2}$, $E_{3}$, $E_{4}$ from
Equation~(\ref{eq:eigentensors-in-2d}) as its columns and $M$ is a
diagonal matrix with the eigenvalues $\mu_{1}$, $\mu_{2}$, $\mu_{3}$,
$\mu_{4}$ on its diagonal. Due to the choice of the Perona-Malik
diffusivity in our model, $\left\Vert D_i \right\Vert_{2} \leq 1$.
If we arrange all per-pixel matrices $D_i$ in one big matrix $D$ with
a $4\times4$ block-diagonal structure,
\begin{equation}
D=\left(\begin{array}{ccccc}
D_{1} & & \cdots & & 0\\
& D_{2}\\
\vdots & & D_{3} & & \vdots\\
& & & \ddots\\
0 & & \cdots & & D_{m}
\end{array}\right) \text{,}
\end{equation}
it is clear that $\left\Vert D\right\Vert_{2} \leq 1$, and the whole
scheme reads as
\begin{equation}
u^{k+1}=u^{k}-\tau L^{T}DLu^{k}\text{.} \label{eq:explicit-scheme3}
\end{equation}
Substituting into Equation~(\ref{eq:timestep-bound}) yields
\begin{equation}
\tau \leq \frac{2}{\left\Vert L^{T}DL\right\Vert_{2}}\text{,}
\end{equation}
meaning that, in order to find a stable step size $\tau$, we
have to bound
\begin{equation}
\left\Vert L^{T}DL\right\Vert_{2}\leq
\left\Vert L\right\Vert_{2}^{2} \leq \sum_{i,j\in\{x,y\}}\left\Vert L_{i,j}\right\Vert _{2}^{2}\text{,}
\end{equation}
whose value will depend on the exact second-order finite difference
stencils. We will use the same discretization as Hajiaboli
\cite{hajiaboli2011anisotropic}, i.e.,
\begin{align*}
u_{xx}&\approx \frac{\left(u_{i-1,j}-2u_{i,j}+u_{i+1,j}\right)}{\left(\Delta x\right)^2}\\
u_{yy}&\approx \frac{\left(u_{i,j-1}-2u_{i,j}+u_{i,j+1}\right)}{\left(\Delta y\right)^2}\\
u_{xy}&\approx \frac{\left(u_{i-1,j-1}+u_{i+1,j+1}-u_{i-1,j+1}-u_{i+1,j-1}\right)}{4\Delta x\Delta y}\\
u_{yx}&= u_{xy}\text{ ,}
\end{align*}
where $\Delta x$ and $\Delta y$ are the pixel edge lengths in $x$ and
$y$ directions, respectively. It is easy to verify using Gershgorin's
theorem that this results in
\begin{equation}
\tau \leq \frac{2}{16/\left(\Delta x\right)^4+16/\left(\Delta y\right)^4+2/\left(\Delta x\,\Delta y\right)^2}\text{,}
\end{equation}
i.e., for $\Delta x=\Delta y=1$, $\tau\leq 1/17$.
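Putting the pieces together, one explicit iteration $u^{k+1}=u^{k}-\tau L^{T}DLu^{k}$ can be sketched as follows (Python with NumPy; the array \texttt{D} collects the per-pixel $4\times4$ tensors, e.g.\ from the \texttt{diffusion\_tensor\_2d} sketch in Section~\ref{sec:fourth-order-diff-tensor-D}):
\begin{verbatim}
import numpy as np

def d_xx(v):
    out = np.zeros_like(v)
    out[:, 1:-1] = v[:, :-2] - 2.0*v[:, 1:-1] + v[:, 2:]
    return out

def d_yy(v):
    out = np.zeros_like(v)
    out[1:-1, :] = v[:-2, :] - 2.0*v[1:-1, :] + v[2:, :]
    return out

def d_xy(v):
    out = np.zeros_like(v)
    out[1:-1, 1:-1] = (v[:-2, :-2] + v[2:, 2:]
                       - v[:-2, 2:] - v[2:, :-2]) / 4.0
    return out

def mafod_step(u, D, tau=1.0/17.0):
    # one explicit step of the anisotropic fourth-order scheme;
    # D has shape (ny, nx, 4, 4)
    h = np.stack([d_xx(u), d_xy(u), d_xy(u), d_yy(u)], axis=-1)  # L u
    t = np.einsum('yxab,yxb->yxa', D, h)                         # D : H
    # L^T: the central stencils are self-adjoint up to boundary rows
    p = (d_xx(t[..., 0]) + d_xy(t[..., 1])
         + d_xy(t[..., 2]) + d_yy(t[..., 3]))
    return u - tau * p
\end{verbatim}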
\subsection{Implementation Using Fast Explicit Diffusion}
\label{sec:fed-scheme}
Since the time step size $\tau$ derived in the previous section is
rather small, solving the discretized version of
Equation~(\ref{eq:ts-fourth-order-diffusion}) numerically using a
simple explicit Euler scheme requires significant computational
effort.
The recently proposed Fast Explicit Diffusion (FED) provides a
considerable speedup by varying time steps in cycles, in a way that up
to half the time steps within a cycle can violate the stability
criterion, but the cycle as a whole still remains stable
\cite{weickert2016cyclic}. Consequently, a much smaller number of
iterations is required to reach the desired stopping time.
The FED scheme is defined as follows:
\begin{equation}
\begin{array}{cc}
u^{k+1,0}=u^{k}\text{,}\qquad\qquad\qquad\qquad\\
u^{k+1,i+1}=(I-\tau_{i}P(u_{\sigma}^{k}))u^{k+1,i} & \text{ } i=0,\ldots,n-1\text{,}\\
u^{k+1}=u^{k+1,n}\qquad\qquad\qquad\quad\text{ }
\end{array}\label{eq:fed}
\end{equation}
where index $k$ is the cycle iterator, $i$ is the inner cycle
iterator, and $n$ is the number of sub-steps in each cycle. In order
to ensure stability, $P(u_{\sigma}^{k})$ must be constant during each
cycle. For computing $\tau_i$, first the number of sub-steps in each
cycle must be computed using
\begin{equation}
n=\left\lceil -0.5+0.5\sqrt{1+\frac{12T}{M\tau_{\text{max}}}}\right\rceil \text{,}
\end{equation}
where $T$ is the diffusion stopping time, $M$ is the number of FED cycles
and $\tau_{\text{max}}$ is the step size limit that ensures stability. In our experiments we set $\tau_{\text{max}}=0.05$, in accordance with the limit computed in Section \ref{sec:l2-stability}.
As it is shown in \cite{weickert2016cyclic}, $n$ determines $\tau_i$:
\begin{equation}
\tau_{i}=\frac{3T}{2M\left(n^{2}+n\right)\cos^{2}\left(\pi\cdot\frac{2i+1}{4n+2}\right)}\text{, }\left(i=0,\ldots,n-1\right)\text{.}
\end{equation}
In order to decrease the balancing error within each cycle, the order of the $\tau_i$ should be rearranged. In our experiments we have used the \emph{$\kappa$-cycles} method for reordering the $\tau_i$ \cite{weickert2016cyclic}.
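For reference, the step-size computation can be sketched as follows (Python with NumPy; the reordering into $\kappa$-cycles is omitted here):
\begin{verbatim}
import numpy as np

def fed_taus(T, M, tau_max):
    # inner FED step sizes; their sum equals T/M, the time per cycle
    n = int(np.ceil(-0.5 + 0.5*np.sqrt(1.0 + 12.0*T/(M*tau_max))))
    i = np.arange(n)
    return 3.0*T / (2.0*M*(n**2 + n)
                    * np.cos(np.pi*(2*i + 1)/(4*n + 2))**2)

taus = fed_taus(T=60.0, M=12, tau_max=0.05)  # T, M as in the figure
\end{verbatim}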
The fast explicit diffusion framework can be combined with our discretization in a straightforward manner, and has led to a speedup of around two orders of magnitude in some of our experiments. For both the $\tau_i$ computation and the reordering we have used the source code provided by Weickert et al.\ \cite{weickert2016cyclic}. Figure~\ref{fig:MAFOD-over-different-cycles} shows our filter applied to a synthesized image using the FED scheme with different cycle iterators $k$, corresponding to different stopping times.
\begin{figure*}
\centering%
\begin{tabular}{ccc}
Input & CED & VED \tabularnewline
\includegraphics[scale=0.11]{Images/circles-Noisy} & \includegraphics[scale=0.11]{Images/circles-CED} & \includegraphics[scale=0.11]{Images/circles-VED}\tabularnewline
& $\mathcal{E}=1.096\text{, }\mathbf{p=100\%}$ & $\mathcal{E}=1.116\text{, }\mathbf{p=100\%}$\\[1ex]
IFOD & Single-scale Gaussian & Multi-scale Gaussian \tabularnewline
\includegraphics[scale=0.11]{Images/circles-ISO} & \includegraphics[scale=0.11]{Images/circles-Gaussian-SingleScale} & \includegraphics[scale=0.11]{Images/circles-Gaussian}\tabularnewline
$\mathcal{E}=1.579\text{, }p=96\%$ & $\mathcal{E}=1.263\text{, }p=98\%$ & $\mathcal{E}=0.566\text{, }\mathbf{p=100\%}$ \\[1ex]
Bilateral & Hajiaboli & MAFOD \tabularnewline
\includegraphics[scale=0.11]{Images/circles-BIL} & \includegraphics[scale=0.11]{Images/circles-HAJ} & \includegraphics[scale=0.11]{Images/circles-MAFOD} \tabularnewline
$\mathcal{E}=0.624\text{, }p=99\%$ & $\mathcal{E}=1.024\text{, }p=92\%$ & $\mathbf{\mathcal{E}=0.332\text{, }p=100\%}$ \tabularnewline
\end{tabular}\caption{Red curves show the ground truth ridge location, while blue curves
show the location reconstructed from the filtered noisy image. Our
MAFOD filter restores ridge locations from the noisy image with ridges of different
scales better than other filters.
\label{fig:concentric-circles}}
\end{figure*}
\begin{figure*}
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={left,top},capbesidewidth=4cm}}]{figure}[\FBwidth]
{\caption{Each plot shows the intensities on a left to center line of the 2D images in Figure~\ref{fig:concentric-circles}. The lines are taken from the middle of each image. The red dashed lines show the true position of the ridge points, and the blue lines show the position of local maxima over the intensity line scan of each filtered image. }\label{fig:concentric-circles-line-scan}}
{\begin{tabular}{ccc}
Input & CED & VED\tabularnewline
\includegraphics[scale=0.52]{Images/concentric-circles-line-scan/noisy} & \includegraphics[scale=0.52]{Images/concentric-circles-line-scan/CED} & \includegraphics[scale=0.52]{Images/concentric-circles-line-scan/VED}\tabularnewline
IFOD & Single-scale Gaussian & Multi-scale Gaussian\tabularnewline
\includegraphics[scale=0.52]{Images/concentric-circles-line-scan/IFOD} & \includegraphics[scale=0.52]{Images/concentric-circles-line-scan/single-scale-gaus} & \includegraphics[scale=0.52]{Images/concentric-circles-line-scan/multi-scale-gaus}\tabularnewline
Bilateral & Hajiaboli & MAFOD\tabularnewline
\includegraphics[scale=0.52]{Images/concentric-circles-line-scan/bilateral} & \includegraphics[scale=0.52]{Images/concentric-circles-line-scan/hajiaboli} & \includegraphics[scale=0.52]{Images/concentric-circles-line-scan/MAFOD}\tabularnewline
\end{tabular}}
\end{figure*}
\subsection{Ridge and Valley Extraction}
\label{sec:ridge-detection-scale-selection}
After enhancing ridges and valleys with our filter, we extract a
polygonal representation of them using a 2D counterpart of an
established 3D algorithm \cite{schultz2010crease}. Our algorithm is
based on the idea of marching squares \cite{schroeder2005overview} and
involves the zero contour of the scalar field $d=\text{det}(g|Hg)$,
where $(g|Hg)$ indicates a matrix whose first column is the local
gradient vector $g$ and whose second column $Hg$ is the result of applying the Hessian matrix to the gradient vector; $\text{det}(\centerdot)$ denotes the matrix determinant. The zero level set
of $d$ is a superset of the creases \cite{peikert2008height}.
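A minimal sketch of this step (Python; \texttt{scikit-image}'s marching-squares contour finder stands in for our implementation, and pruning the zero level set down to actual creases is omitted):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def crease_candidates(u, sigma=1.0):
    # zero contours of d = det(g | Hg), a superset of the crease
    # lines; (y, x) axis convention assumed
    gy = gaussian_filter(u, sigma, order=(1, 0))
    gx = gaussian_filter(u, sigma, order=(0, 1))
    Hyy = gaussian_filter(u, sigma, order=(2, 0))
    Hxx = gaussian_filter(u, sigma, order=(0, 2))
    Hxy = gaussian_filter(u, sigma, order=(1, 1))
    Hg_x = Hxx*gx + Hxy*gy               # second column Hg
    Hg_y = Hxy*gx + Hyy*gy
    d = gx*Hg_y - gy*Hg_x                # det(g | Hg)
    return measure.find_contours(d, 0.0) # marching squares at level 0
\end{verbatim}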
Our overall approach of ridge extraction involves two different
notions of scale: The first one refers to the selection of scales at
which derivatives are taken, as discussed in
Section~\ref{sec:scale-selection}; the second one to the stopping time
$t$ of our filter. To clarify their respective roles, we compare our
approach to the seminal work by Lindeberg \cite{lindeberg1998edge} on
ridge extraction in Gaussian scale space.
In Lindeberg's approach, ridge curves in 2D images sweep out surfaces
in three-dimensional scale space, and curves on these surfaces are
found along which a measure of ridge strength is locally maximal with
respect to diffusion time $t$. An example of such a measure is
\begin{equation}
R(u_{\sigma})=t^{4\gamma}\left(u_{xx}+u_{yy}\right)^{2}\left(\left(u_{xx}-u_{yy}\right)^{2}+4u_{xy}^{2}\right)\text{.}
\label{eq:ridge-strength}
\end{equation}
In this approach, stopping time $t$ and the scale $\sigma$ of
derivatives are related by $t=\sigma^2/2$, and can thus be considered
as one single parameter, whose value is determined automatically. The
exponent $\gamma$ in the normalization factor that is used to
compensate for the loss of contrast at later diffusion times is
treated as a tunable parameter. In our experiments, we set it to
$\gamma=\frac{3}{4}$, as proposed in \cite{lindeberg1998edge}.
Decoupling the $t$ and $\sigma$ parameters is a price that we pay in
our method in order to preserve and enhance creases, for which
Gaussian scale space does not have any mechanism. Our current
implementation selects the derivative scale $\sigma$ automatically, as
discussed in Section~\ref{sec:scale-selection}, but does not have an
objective criterion for setting the stopping time $t$, unless ground
truth is available. In practice, we found it relatively simple to tune
this parameter based on viewing the corresponding images, especially
given that, after image noise has been removed, results are relatively
stable (cf.\ Figure~\ref{fig:MAFOD-over-different-cycles}). Future
work might investigate automated selection of this parameter.
Another difference between our approach and Lindeberg's is that his
crease extraction algorithm operates on the full scale space, while
ours, similar to previous work by Barakat et al. \cite{Barakat:2011},
works on a single, pre-filtered image. Both approaches have relative
benefits and drawbacks: Scale space crease extraction is challenging
to implement, and requires much more time and memory, especially when
dealing with the four-dimensional scale space resulting from
three-di\-men\-sion\-al input images \cite{Kindlmann:2009Vis}. On the other
hand, it might, in rare cases, indicate spatially intersecting creases
at different scales, which our current approach is not able to
reproduce.
\section{Experimental Results}
\label{sec:experimental-results}
We compare our multi-scale anisotropic fourth-order diffusion (MAFOD)
to crease enhancement diffusion (CED) \cite{sole2001crease},
vesselness enhancement diffusion (VED) \cite{canero2003vesselness},
isotropic fourth-order diffusion (IFOD) \cite{lysaker2003noise}, the
anisotropic fourth-order diffusion by Hajiaboli
\cite{hajiaboli2011anisotropic}, a bilateral filter, and a multi-scale
Gaussian filter. Since it was already shown in \cite{sole2001crease} that the
coherence enhancing diffusion filter \cite{weickert1998anisotropic}
tends to deform non-linear structures more strongly than the CED
filter, it is not included in the comparison.
The multi-scale Gaussian filter is defined to approximate Lindeberg's
scale selection, as described in
Section~\ref{sec:ridge-detection-scale-selection}. From a range of
stopping times between $t=1$ and $t=30$, it first selects an optimal
scale for each pixel, by finding the $t$ that maximizes $R(u_\sigma)$
from Equation~(\ref{eq:ridge-strength}). Then, the intensity of each
pixel in the output image is obtained by convolving the input image
with a Gaussian at the locally optimal scale $\sigma=\sqrt{2t}$; the
result is then normalized to $\left[0,1\right]$. The normalization is
necessary to compensate for the shrinkage of the intensity range after
Gaussian blurring.
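A minimal \texttt{scipy}-based sketch of this baseline is given below; the range of stopping times and the value of $\gamma$ follow the text, while the function name and the global min--max normalization are our own simplifications:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_gaussian(u, t_values=np.arange(1.0, 31.0), gamma=0.75):
    """Per-pixel scale selection maximizing the ridge strength R."""
    best_R = np.full(u.shape, -np.inf)
    out = np.zeros_like(u)
    for t in t_values:
        sigma = np.sqrt(2.0 * t)                       # t = sigma^2 / 2
        u_s = gaussian_filter(u, sigma)
        uyy = gaussian_filter(u, sigma, order=(2, 0))  # axis 0 = y
        uxx = gaussian_filter(u, sigma, order=(0, 2))
        uxy = gaussian_filter(u, sigma, order=(1, 1))
        R = t**(4 * gamma) * (uxx + uyy)**2 \
            * ((uxx - uyy)**2 + 4 * uxy**2)
        sel = R > best_R                   # locally optimal scale wins
        best_R[sel] = R[sel]
        out[sel] = u_s[sel]
    return (out - out.min()) / (out.max() - out.min())  # to [0, 1]
\end{verbatim}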
The crease extraction algorithm from
Section~\ref{sec:ridge-detection-scale-selection} results in a set of
polygonal chains. For each crease line segment in the ground truth, a
corresponding segment in the reconstruction is selected by picking the
one with minimum Hausdorff distance \cite{huttenlocher1993comparing}
in a neighborhood around the ground truth line segment. This
neighborhood is set to six pixels for the experiments on synthetic
data, and to ten pixels for real data. The average
Euclidean distance $\mathcal{E}$ between the ground truth and the
corresponding reconstruction is then used to quantify the accuracy of
vessel locations in the filtered image. In addition to $\mathcal{E}$,
we show the percentage $p$ of ground truth for which a
corresponding ridge was detected from the filtered images while
computing $\mathcal{E}$.
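The following sketch shows how $\mathcal{E}$ and $p$ can be computed with \texttt{scipy}; the neighborhood search of the original evaluation is simplified to a radius test, and all names are ours:
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two polylines (n x 2)."""
    return max(directed_hausdorff(a, b)[0],
               directed_hausdorff(b, a)[0])

def evaluate(gt_segments, rec_segments, radius=6.0):
    """Mean Euclidean error E and detection percentage p."""
    errors = []
    for gt in gt_segments:
        near = [r for r in rec_segments if hausdorff(gt, r) <= radius]
        if not near:
            continue                       # ground truth segment missed
        best = min(near, key=lambda r: hausdorff(gt, r))
        d = np.linalg.norm(gt[:, None, :] - best[None, :, :], axis=-1)
        errors.append(d.min(axis=1).mean())
    return np.mean(errors), 100.0 * len(errors) / len(gt_segments)
\end{verbatim}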
In the experiments on synthesized images, image evolution of all
filters, except for multi-scale Gaussian and bilateral filters, was
stopped when the $\ell_{2}$ difference between the filtered image and
the noise-free ground truth was minimized. $\ell_2$ difference was
chosen over $\mathcal{E}$ as a stopping criterion due to its much
lower computational cost.
\subsection{Confirming Theoretical Properties}
\label{sec:simple-synthetic-image}
Our filter has been designed to improve localization accuracy while
accounting for creases at multiple scales and being rotationally
invariant. Results on a simple simulated image with three concentric
ridges of different radii, which is contaminated with zero-mean
Gaussian noise with a signal-to-noise ratio $\mathrm{SNR}=6.81$,
verify that these design goals are met.
Both Figure~\ref{fig:concentric-circles} and Figure~\ref{fig:concentric-circles-line-scan} show that our MAFOD filter restores ridge
locations most accurately as assessed both by visual inspection and
Euclidean distance $\mathcal{E}$. MAFOD outperforms CED, IFOD,
Hajiaboli and bilateral filtering since it accounts for different
scales. On the other hand, the curvature enhancement of our filter,
which is not part of the multi-scale VED or Gaussian filters, clearly makes
it easier for the ridge extraction algorithm to localize the
centerline, especially in the largest ridge. IFOD does perform
curvature enhancement but, due to its isotropic nature, it is not
effectively guided to act specifically across the ridge. As is most
obvious for the largest circle, the multi-scale Gaussian filter leads to ridge displacement.
The result of the anisotropic fourth-order filter by Hajiaboli clearly
illustrates the fact that it was designed to preserve edges, not to
enhance creases.
\begin{figure*}
\centering%
\begin{tabular}{ccc}
Ground Truth & CED & VED\tabularnewline
\includegraphics[scale=0.092]{Images/vessel-GT} & \includegraphics[scale=0.092]{Images/vessel-CED} & \includegraphics[scale=0.092]{Images/vessel-VED}\tabularnewline
& $\mathcal{E}=0.343\text{, }p=98\%$ & $\mathcal{E}=0.345\text{, }\mathbf{p=100\%}$\tabularnewline
Multi-scale Gaussian & Bilateral & MAFOD\tabularnewline
\includegraphics[scale=0.092]{Images/vessel-Gaussian} & \includegraphics[scale=0.092]{Images/vessel-BIL} & \includegraphics[scale=0.092]{Images/vessel-MAFOD}\tabularnewline
$\mathcal{E}=0.506\text{, }p=97\%$ & $\mathcal{E}=0.341\text{, }\mathbf{p=100\%}$ & $\mathbf{\mathcal{E}=0.302\text{, }p=100\%}$\tabularnewline
\end{tabular}\caption{In
a simulated occluded vessel, restored (blue) curves again best match
the (red) ground truth locations in case of our MAFOD filter. In
addition, MAFOD better preserves the occlusions than VED.
\label{fig:vessel-occlusion}}
\end{figure*}
For the MAFOD filter, scales $\sigma$ and
vesselness threshold $\theta$ are the same as for VED,
$\sigma=\{0.5,1.0,...,8.5,9.0\}$, $\theta=0.2$. Other parameters
are $\lambda=0.005$ for MAFOD and IFOD, and $\sigma=1.0$
for IFOD; for Hajiaboli, $\lambda=0.01$; for CED, $\sigma=2.0$ and it is
set to enhance both ridges and valleys; for single-scale Gaussian smoothing,
$\sigma=1.25$. For the MAFOD filter, FED stopping time is set
to $500$, and the number of cycles is set to $10000$.
For other fourth-order equations, $\tau=0.03$, and for the second-order
diffusion equations such as the VED and CED filters, $\tau=0.2$; for the
bilateral filter, $\sigma_\text{spatial}=3.0$ and $\sigma_\text{range}=1.0$.
\subsection{Simulated Vessel Occlusion}
\label{sec:simulated-vessel}
Figure~\ref{fig:vessel-occlusion} shows a second image, simulating an
occluded vessel, and corrupted with Gaussian noise with
$\mathrm{SNR}=6.40$. Our MAFOD filter leads to the most accurate
localization in terms of Euclidean error $\mathcal{E}$. In particular,
we observed that VED widens the occlusions. They are better preserved
by our filter, which we set to enhance both ridges and valleys.
Again, an amount of smoothing that minimized $\ell_2$ error was used for
all filters except for multi-scale Gaussian and bilateral filters.
The parameters for VED and MAFOD
are $\sigma=\{0.5,1.0,1.5,2.0,2.5,3.0\}$, and $\theta=0.35$;
$\lambda=0.017$, stopping time is $20$ and
the number of cycles is set to $1000$ for MAFOD;
for CED, $\sigma=1.0$; for the bilateral filter,
$\sigma_\text{spatial}=1.5$ and $\sigma_\text{range}=1.0$.
For the numerical solver, we set $\tau=0.05$
for the fourth-order equation and for second-order equations, $\tau=0.2$.
\subsection{Real Vessel Tree}
\label{sec:real-vessel-tree}
To demonstrate our filter on a real-world example, we applied it to
several ROIs from an infrared fundus image, on which one of our co-authors
(MWMW), who is an ophthalmologist, manually marked the exact vessel
locations to provide a ground truth for comparison, without being
shown the filtered images. Results in Figure~\ref{fig:real-data} show
that our MAFOD filter outperforms VED, multi-scale Gaussian and
bilateral filters in restoring vessel locations. In ROI~3, the two
thickest vessels run close to each other at one point. In the image
filtered with VED, the two vessels are erroneously merged in that
area, even though they remain separate in the corresponding extracted
valley curves. Our MAFOD filter correctly avoids connecting the
vessels.
Even though vessels generally appear dark (i.e., as valleys) in these
images, the larger ones exhibit a thin ridge at their center, due to a
reflex in the infrared image. This leads to an incorrect double
response in single-scale filters as shown for the bilateral
filter. CED and IFOD filters suffer from similar problems (results not
shown).
For each filter separately, we carefully tuned the parameters for
optimal results. In particular, $\theta$, $\lambda$, and the stopping time
require more careful tuning than the other parameters.
For the MAFOD filter, we set
$\sigma=\{0.2,0.3,0.5,1.0,...,6.5,7.0\}$, $\lambda=0.005$,
$\theta=0.13$, and used a FED scheme with stopping time $12$ and cycle
number $2$. For the VED filter, an
explicit Euler scheme is used with $600$ iterations and $\tau=0.2$,
and the same parameters for scale selection as for MAFOD; for the
bilateral filter, $\sigma_\text{range}=0.3$ and
$\sigma_\text{spatial}=3.0$; for the multi-scale Gaussian filter
an additional Gaussian smoothing with kernel size $\sigma=2$ is
applied to the filtered image to blur out discontinuities from scale
selection and thus achieve an even better result. The computational
effort of all filters is reported in Table~\ref{tab:fundus-time}.
\begin{figure*}[t]
\centering%
\begin{tabular}{ccccc}
Input & VED filter & Bilateral Filter & Ms Gaussian & MAFOD filter\tabularnewline
\includegraphics[scale=0.58]{Images/Real-Data/ROI1/ROI-1} & \includegraphics[scale=0.08]{Images/Real-Data/ROI1/Smaller/VED-2} & \includegraphics[scale=0.08]{Images/Real-Data/ROI1/Smaller/bilateral-2} & \includegraphics[scale=0.08]{Images/Real-Data/ROI1/Smaller/Gauss-2} & \includegraphics[scale=0.08]{Images/Real-Data/ROI1/Smaller/Maf-2}\tabularnewline
ROI 1 & $\mathcal{E}=1.45$ & $\mathcal{E}=1.36$ & $\mathcal{E}=1.89$ & $\mathbf{\mathcal{E}=1.17}$\tabularnewline
\includegraphics[scale=0.58]{Images/Real-Data/ROI3/ROI-3} & \includegraphics[scale=0.29]{Images/Real-Data/ROI3/smaller/VED} & \includegraphics[scale=0.29]{Images/Real-Data/ROI3/smaller/bil} & \includegraphics[scale=0.29]{Images/Real-Data/ROI3/smaller/Gauss} & \includegraphics[scale=0.29]{Images/Real-Data/ROI3/smaller/Maf}\tabularnewline
ROI 2 & $\mathcal{E}=1.31$ & $\mathcal{E}=1.42$ & $\mathcal{E}=1.27$ & $\mathbf{\mathcal{E}=1.04}$\tabularnewline
\includegraphics[scale=0.58]{Images/Real-Data/ROI4/ROI-4} & \includegraphics[scale=0.29]{Images/Real-Data/ROI4/smaller/VED} & \includegraphics[scale=0.29]{Images/Real-Data/ROI4/smaller/Bil} & \includegraphics[scale=0.29]{Images/Real-Data/ROI4/smaller/Gauss} & \includegraphics[scale=0.29]{Images/Real-Data/ROI4/smaller/Maf}\tabularnewline
ROI 3 & $\mathcal{E}=1.43$ & $\mathcal{E}=1.63$ & $\mathcal{E}=1.62$ & $\mathbf{\mathcal{E}=1.15}$\tabularnewline
& $p=89\%$ & $p=80\%$ & $p=88\%$ & $\mathbf{p=92\%}$\tabularnewline
\end{tabular}\protect\caption{In three ROIs of a fundus image,
reconstructed vessel locations (blue) best match a manually marked
ground truth (red) when our MAFOD filter is used.
\label{fig:real-data}}
\end{figure*}
\begin{table}
\centering%
\caption{The average filtering time for a single ROI of size $200\times200$ pixels
in Figure~\ref{fig:real-data}.\label{tab:fundus-time}}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
Filter & VED & Bilateral & Ms Gaussian & MAFOD \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Time (s) & $1251$ & $0.16$ & $2.24$ & $6.51$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
We have proposed a new multi-scale fourth-order anisotropic diffusion (MAFOD) filter to enhance ridges and valleys in images. It uses a fourth-order diffusion tensor which smooths along creases, but sharpens them in the perpendicular direction, and optionally enables enhancing either ridges or valleys only. Our results indicate that the curvature enhancing properties of fourth-order diffusion allow our filter to restore the exact crease locations better than traditional methods. In addition, we found that our filter better preserves vessel occlusions.
In the future, we would like to extend our 2D filter to 3D images,
and to better handle crossings and bifurcations
\cite{hannink2014crossing}.
\bibliographystyle{abbrv}
\section{Introduction}
Winner-take-all-like (WTA-like) circuits constitute a ubiquitous motif of cortical microcircuits \cite{DouglasMartin:04}. Previous models and theories for competitive Hebbian learning in WTA-like circuits, from \cite{RumelhartZisper:85} to \cite{nessler2013bayesian}, were based on the assumption of strong WTA-like lateral inhibition.
Several theoretical studies showed that spike-timing dependent plasticity (STDP) supports the emergence of Bayesian computation in such winner-take-all (WTA) circuits \cite{nessler2013bayesian, habenschuss2013emergence, KlampflETAL:13}. These analyses were based on a probabilistic generative model approach. In particular, it was shown that the network implicitly represents the distribution of input patterns through a generative mixture distribution and that STDP optimizes the parameters of this mixture distribution.
But this analysis assumed that the input to a WTA is explained at any point in time by a single neuron, and that strong lateral inhibition among pyramidal cells ensures a basically fixed total output rate of the WTA. These assumptions, however, may not be suitable in the context of more realistic activity dynamics in cortical networks.
In fact, recent modeling results \cite{avermann2012microcircuits, JonkeETAL:17a} show that the WTA model is not adequate for a softer form of inhibition that has been reported for cortical layer 2/3. This softer form of inhibition is often referred to as feedback inhibition or lateral inhibition and, based on its influence on pyramidal cells, has been termed more abstractly divisive inhibition \cite{WilsonETAL:12,CarandiniHeeger:12}. It stems from dense bidirectional interconnections between layer 2/3 pyramidal cells and nearby Parvalbumin-positive ($\text{PV}^+$) interneurons (often characterized as fast-spiking interneurons, in particular basket cells), see e.g. \cite{PackerYuste:11, FinoETAL:12,avermann2012microcircuits}.
The simulations results in \cite{JonkeETAL:17a} also indicate that blind source separation emerges as the computational function of this microcircuit motif when STDP is applied to the input synapses of the circuit.
The results of \cite{JonkeETAL:17a} raise the question of whether they can be understood from the perspective of a corresponding probabilistic generative model that could replace the mixture model that underlies the analysis of emergent computational properties of microcircuit motifs with hard WTA-like inhibition. We propose here such a model that is based on a Gaussian prior over the number of active excitatory neurons in the network and a noisy-OR-like likelihood term. We develop a novel analysis technique based on the neural sampling theory \cite{BuesingETAL:11} to show that the microcircuit motif model approximates probabilistic inference in this probabilistic generative model. Further, we derive a plasticity rule that optimizes the parameters of this generative model through online expectation maximization (EM), the arguably most powerful tool from statistical learning theory for the optimization of generative models. We show that this plasticity rule can be approximated by an STDP-like learning rule.
This theoretical analysis strengthens the claim that blind source separation \cite{foldiak1990forming} --- also referred to as independent component analysis \cite{HyvaerinenETAL:04} --- emerges as
a fundamental computation on assembly codes through STDP in this microcircuit motif. This computational operation enables a network to disentangle and separately represent superimposed inputs that result from independent assembly activations in different upstream networks. Furthermore, our theoretical analysis reveals that the ability of this cortical microcircuit motif to perform blind source separation is facilitated either by the normalization of activity patterns in input populations, or by homeostatic mechanisms that normalize excitatory synaptic efficacies within each neuron.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{motif_theo.eps}
\end{center}
\caption{{\bf Data-based network model $\mathcal{M}$ for a microcircuit motif.}
\textbf{A}) Network anatomy. Circles denote excitatory (black) and inhibitory (red) pools of neurons. Black arrows indicate excitatory connections. Red lines with dots indicate inhibitory connections. Numbers above connections denote corresponding connection probabilities. \textbf{B}) Network physiology. Same as in (A), but connection delays $\delta$ are indicated. All synapses are modeled with the same PSP shape using a decay time constant of $\tau_\text{f}=10$ ms as indicated on top right. Input synapses are subject to STDP. }
\label{fig:fig_model}
\end{figure}
\section{Results}
A data-based microcircuit motif model for the interaction of pyramidal cells with $\text{PV}^+$ inhibitory neurons in layer 2/3 has been introduced in \cite{avermann2012microcircuits}. Based on this study, \cite{JonkeETAL:17a} analyzed the computational properties that emerge in this microcircuit motif from synaptic plasticity. We first briefly introduce the microcircuit motif model analyzed in \cite{JonkeETAL:17a} and discuss its properties. Subsequently, we present a theoretical analysis of this network motif based on a probabilistic generative model $\mathcal{P}$.
\subsection{A data-based model for a network motif consisting of excitatory and inhibitory neurons}\label{sec:ei-model}
\cite{JonkeETAL:17a} proposed a specific model for interacting populations of pyramidal cells with $\text{PV}^+$ inhibitory neurons in cortical layer 2/3 based on data from the Petersen Lab \cite{avermann2012microcircuits}, see Fig.~\ref{fig:fig_model}A, B. We refer to this specific model as the microcircuit motif model $\mathcal{M}$.
The model $\mathcal{M}$ consists of
two reciprocally connected pools of neurons, an excitatory pool and an inhibitory pool.
$M$ stochastic spiking neurons constitute the excitatory pool. Their dynamics is given by a stochastic version of the spike response model that has been fitted to experimental data in \cite{JolivetETAL:06}.
The instantaneous firing rate $\rho_m$ of a neuron $m$ depends exponentially on its current membrane potential $u_m$,
\begin{align}\label{eq:neuron_firing_prob}
\rho_m(t)\ &= \frac{1}{\tau} \exp(\gamma \cdot u_m(t))\;,
\end{align}
where $\tau=10$ ms and $\gamma=2$ are scaling parameters that control the shape of the response function.
After emitting a spike, the neuron enters a refractory period.
The excitatory neurons are reciprocally connected to a pool of recurrently connected inhibitory neurons. All connection probabilities in the model were taken from \cite{avermann2012microcircuits}.
Excitatory neurons receive excitatory synaptic inputs $\tilde y_1(t),\dots,\tilde y_N(t)$ with corresponding synaptic efficacies $w_{im}$ between input neuron $i$ and neuron $m$. These afferent connections are subject to a standard form of STDP.
Thus, the membrane potential of excitatory neuron $m$ is given by the sum of external inputs, inhibition from inhibitory neurons, and its excitability $\alpha$
\begin{align}
u_m (t) &= \sum_i w_{im} \tilde y_i(t) -\sum_{j \in \mathcal{I}_m} w^{\text{IE}} I_j(t) + \alpha,\label{eq:membrane}
\end{align}
where $\mathcal{I}_m$ denotes the set of indices of inhibitory neurons that project to neuron $m$, and $w^{\text{IE}}$ denotes the weight of these inhibitory synapses. $I_j(t)$ and $\tilde y_i(t)$ denote synaptic input from inhibitory neurons and input neurons respectively, see above.
Inhibitory contributions to the membrane potential of pyramidal cells have in this neuron model a divisive effect on the firing rate. This can be seen by substituting \me{membrane} in \me{neuron_firing_prob}, see also eq.~\eqref{eq:neuron_firing_prob_divisive} in {\em Methods}; the model thus implements divisive inhibition.
Divisive inhibition has been shown to be a ubiquitous computational primitive in many brain circuits (see \cite{CarandiniHeeger:12} for a recent review). In mouse visual cortex, divisive inhibition is implemented through $\text{PV}^+$ inhibitory neurons \cite{WilsonETAL:12}.
Although the inhibitory signal is common to all neurons in the excitatory pool, in contrast to the inhibition modeled in \cite{nessler2013bayesian} it does not normalize the firing rates of neurons exactly; the total firing rate in the excitatory pool is therefore variable and depends on the input strength. Importantly, in contrast to \cite{nessler2013bayesian}, where inhibition strictly enforced that only a single neuron in the excitatory pool is active at any given time, the data-based model $\mathcal{M}$ allows several neurons to be active concurrently.
\begin{figure}
\begin{center}
\includegraphics[width=0.85\textwidth]{swta_tuning_theo.eps}
\end{center}
\caption[]{{\bf Emergent computational properties of the data-based network model $\mathcal{M}$.} {\bf A}) Network inputs are given by images of randomly oriented bars (inputs arranged in 2D for clarity; pixel gray-level indicates effective network input $\tilde y_i(t)$, see eq.~(\ref{eq:output_trace})). {\bf B}) Input neuron spike patterns (every $4^{th}$ neuron shown). Presence of a bar in the input with the orientation indicated in panel A is marked by gray shading. {\bf C, D}) Spike responses of a subset of excitatory neurons in $\mathcal{M}$ to the input in (B) before (C) and after (D) learning (neurons sorted by preferred orientation). {\bf E}) Tuning curves of excitatory neurons with preferred orientations between $90$ and $120$ degrees. {\bf F}) Orientation tuning curves in a WTA model \cite{nessler2013bayesian,habenschuss2013emergence}. Figure modified from Fig.~2 in \cite{JonkeETAL:17a}.
}
\label{fig:fig_tilted}
\end{figure}
\subsection{Emergent properties of the data-based network model: From WTA to k-WTA}\label{sec:emergent-coding-properties}
The computational properties of this data-based network model $\mathcal{M}$ were extensively studied through simulations in \cite{JonkeETAL:17a}.
In order to compare the properties of this data-based network model $\mathcal{M}$ to previously considered WTA models, they examined the emergence of orientation selectivity, which we briefly discuss here. For details, please see \cite{JonkeETAL:17a}. Pixel-representations of noisy bars in random orientations were provided as external spike inputs
(Fig.~\ref{fig:fig_tilted}A). Input spike trains were generated from these pixel arrays by converting pixel values to Poisson firing rates of input neurons (black pixel: $75$ Hz; white pixel: $1$ Hz). Randomly oriented bars were presented to the network for $400$ s where each bar was presented for $50$ ms, see Fig.~\ref{fig:fig_tilted}B (STDP was applied to synapses from input neurons to excitatory neurons). The resulting network response (Fig.~\ref{fig:fig_tilted}D)
shows the emergence of assembly codes for oriented bars. The resulting Gaussian-like tuning curves of excitatory neurons (Fig.~\ref{fig:fig_tilted}E) densely cover all orientations, resembling experimental data from orientation pinwheels (see Fig. 2 d,e in \cite{OhkiETAL:06}). Also consistent with experimental data \cite{KerlinETAL:10,IsaacsonScanziani:11}, inhibitory neurons did not exhibit orientation selectivity (not shown).
In contrast, previously considered models with idealized strong inhibition in WTA-circuits \cite{nessler2013bayesian} show a clearly distinct behavior, see Fig.~\ref{fig:fig_tilted}F. For this model, at most a single neuron could fire at any moment of time, and as a result at most two neurons responded after a corresponding learning protocol with an increased firing rate to a given orientation (see Fig.~\ref{fig:fig_tilted}F and Fig.~5 in \cite{nessler2013bayesian}).
In the simulations of the data-based model $\mathcal{M}$, on average $k=17$ neurons responded to each orientation with an increased firing rate. This suggests that the emergent computational operation of the layer 2/3 microcircuit motif with divisive inhibition is better described as a k-WTA computation, where $k$ winners may emerge simultaneously from the competition. This number $k$ is however not a strict constraint in the data-based model $\mathcal{M}$. The actual number of winners depends on synaptic weights and the external input. The computation is thus better described as an adaptive k-WTA operation.
The k-WTA characteristics of the layer 2/3 microcircuit motif are quite attractive, since it is known from computational complexity theory that the k-WTA computation is more powerful than the simple WTA computation (for $k > 1$) \cite{Maass:00}.
\subsection{Theoretical framework for understanding emergent computational properties of the layer 2/3 microcircuit motif}\label{sec:theory}
Fig.~\ref{fig:fig_tilted} demonstrates that significantly different computational properties emerge in the data-based model $\mathcal{M}$ through STDP as compared to previously considered WTA models \cite{nessler2013bayesian,habenschuss2013emergence}.
The main aim of this article is to understand this different emergent computational capability theoretically, in particular since the analysis from \cite{nessler2013bayesian} and \cite{habenschuss2013emergence} in terms of mixture distributions is only applicable to WTA circuits.
The novel analysis technique that we will use is summarized as follows. First, using some simplifications on the network dynamics, we formulate the network dynamics in the neural sampling framework \cite{BuesingETAL:11}. This allows us to deduce the distribution $p(\v z | \v y, \v W)$ of activities $\v z$ of excitatory neurons in the network for a given input $\v y$ and for the given network weights $\v W$. We then show that this distribution approximates the posterior distribution of a generative probabilistic model $\P$.
This generative model is not a mixture distribution as in the WTA case \cite{nessler2013bayesian}, but a more complex distribution that is based on a noisy-OR-like likelihood.
We make the nature of the approximation explicit and evaluate its severity through simulations. Finally, we derive a plasticity rule that implements online EM in this generative model and thereby performs blind source separation. We find that this plasticity rule can be approximated by an STDP-like learning rule.
\subsubsection{Formulation of the network dynamics of $\mathcal{M}$ in the neural sampling framework}
The neural sampling framework \cite{BuesingETAL:11} provides us with the ability to determine the stationary distribution (defined in the following) of network states for the given network parameters and a given network input.
In order to be able to describe the probabilistic relationships between input and network activity, we
describe network inputs by binary vectors $\v y(t)$ and responses of excitatory neurons in $\mathcal{M}$ by binary vectors $\v z(t)$.
The vectors $\v y(t)$ and $\v z(t)$ capture the spiking activity of ensembles of spiking neurons in continuous time according to the common convention introduced in \cite{BerkesETAL:11} and \cite{BuesingETAL:11}: A spike of the $i^{th}$ neuron in the ensemble at time $t$ sets the corresponding component $y_i(t)$ ($z_i(t)$) of the bit vector $\v y(t)$ ($\v z(t)$) from its default value $0$ to $1$ for some duration $\tau$ (that can be chosen for example to reflect the typical time constant of an EPSP), see {\em Methods}. Note the difference between the vectors $\v y(t)$, $\v z(t)$ and the output traces $\tilde \v y(t)$, $\tilde \v z(t)$ used in eq.~\eqref{eq:membrane} (and defined in eq.~\eqref{eq:output_trace} in {\em Methods}). The former constitute an abstract convention to describe the momentary state of the network based on its current firing activity, while the latter describe the impact that the neurons have on their postsynaptic targets in terms of real-valued double-exponential EPSPs.
We want to describe the distribution of network states $\v z(t)$ for given inputs $\v y(t)$ and network parameters $\v W$ in terms of a probability distribution $p(\v z | \v y, \v W)$.
In this distribution, the activities of network inputs and excitatory neurons in $\mathcal{M}$ are represented by two vectors of binary random variables: $\v y = (y_1, \dots, y_N)$ (termed input variables in the following) and $\v z = (z_1, \dots, z_M)$ (termed hidden variables or hidden causes).
The network state $\v y(t), \v z(t)$ at time $t$ is interpreted as one specific realization of these random variables.
In order to make this mapping between network activity in $\mathcal{M}$ and the distribution of network states feasible, one has to make three simplifying assumptions about the dynamics of the neural network model, similar to those in \cite{BuesingETAL:11}.
First, PSPs of inputs and network neurons are rectangular with length $\tau$ (chosen here to be $\tau=10$ ms) and network neurons are refractory for the same time span $\tau$ after each spike. Second, synaptic connections are idealized in the sense that the synaptic delay is $0$ (i.e., a presynaptic spike leads instantaneously to a PSP in the postsynaptic neuron). And finally, the weights of recurrent synaptic connections are symmetric (i.e., the weight from neuron $i$ to neuron $j$ is identical to the weight from neuron $j$ to neuron $i$). This necessitates that lateral inhibition is not implemented through a pool of inhibitory neurons.
Instead, the network dynamics is defined by only one pool of $M$ network neurons (the same number as the number of excitatory network neurons in $\mathcal{M}$). Since the inhibitory neurons in $\mathcal{M}$ show linear response properties, the inhibition in the network depends linearly on the activity of excitatory neurons in the network.
One can therefore model the inhibition in the network by direct inhibitory connections between excitatory neurons (where synaptic delays are neglected) with weight $\beta$.
For clarity, we provide the full description of the approximate dynamics in the following.
The approximate dynamics is described by $M$ network neurons.
Network neurons have instantaneous firing rates that depend exponentially on their membrane potential, as given in \me{neuron_firing_prob}. Whenever neuron $m$ spikes, the output trace $\tilde z_m(t)$ of neuron $m$ is set to $1$ for a period of duration $\tau$ (this corresponds to a rectangular PSP; the same definition applies to output traces $\tilde y_m(t)$ of input neurons).
After emitting a spike, the neuron enters a refractory period of duration $\tau=10$ ms, during which its instantaneous spiking probability is zero.
Note that for this definition of the output trace, the state vector $\v z(t)$ is identical to the vector of output traces $\tilde \v z(t)$.
Lateral inhibition in the network is established by direct inhibitory connections between excitatory neurons, leading to membrane potentials
\begin{equation}\label{eq:membrane_NS}
u_m(t) = \sum_{i}^N w_{im} \tilde y_i(t) - \sum_{j\neq m} \beta \tilde z_j(t) + c_m ,
\end{equation}
where $c_m$ denotes some neuron-specific excitability of the neuron that is independent of the input and network activity.
Each network neuron receives feedforward synaptic inputs $\tilde y_1(t),\dots ,\tilde y_N(t)$ whose contribution to the membrane potential of a neuron $m$ at time $t$ depends on the synaptic efficacy $w_{im}$ between the input neuron $i$ and the network neuron $m$. Network neurons are all-to-all recurrently connected.
The second term in \eqref{eq:membrane_NS} specifies this recurrent input where $\beta$ is the inhibitory recurrent weight of the connection between network neuron $j$ and network neuron $m$.
It has been shown in \cite{BuesingETAL:11} that for such membrane potentials, the distribution of network states is given by the Boltzmann distribution:
\begin{align}\label{eq:post_NS}
p_\text{Network}(\v z| \v y, \v W) = \frac{1}{Z}\exp\left\{ \gamma \cdot \left( \sum_{i,m} w_{im} y_i z_m - \frac{1}{2}\sum_{m\neq l} \beta z_m z_l + \sum_m c_m z_m \right) \right\} ,
\end{align}
where $Z$ is a normalizing constant.
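Under the simplifying assumptions above, samples from this distribution can be drawn with a standard Gibbs sampler, since the conditional probability of $z_m=1$ is the logistic function of $\gamma u_m$ with $u_m$ as in eq.~\eqref{eq:membrane_NS}. The following sketch abstracts away the spike timing and refractory mechanism and is meant only to illustrate the stationary statistics (all names are ours, not part of the model definition):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sample(y, W, beta, c, gamma=2.0, n_steps=5000):
    """Sample z from the Boltzmann distribution p(z | y, W)."""
    M = W.shape[1]
    z = np.zeros(M)
    for _ in range(n_steps):
        m = rng.integers(M)
        # membrane potential of neuron m, cf. eq. (membrane_NS)
        u_m = W[:, m] @ y - beta * (z.sum() - z[m]) + c[m]
        # p(z_m = 1 | all other z) = sigma_LS(gamma * u_m)
        z[m] = float(rng.random() < 1.0 / (1.0 + np.exp(-gamma * u_m)))
    return z
\end{verbatim}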
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{generative_model_8.eps}
\end{center}
\caption{{\bf Relationship between the data-based model $\boldsymbol{\mathcal{M}}$ and the generative probabilistic model $\boldsymbol{\P}$.}
{\bf A-C}) Schema of the response of model $\mathcal{M}$ to superimposed bars. Network inputs (left) are schematically arranged in a 2D array for clarity of the argument. Black indicates highly active inputs neurons. {\bf A}) A vertical bar with added noise is presented to $\mathcal{M}$. This input activates an excitatory neuron (filled circle, spiking activity is indicated), similar as in hard WTA models. {\bf B}) Another neuron is activated by a horizontal bar, also similar as in hard WTA models. {\bf C}) A combination of these two basic input patterns activates both neurons in $\mathcal{M}$. This response is inconsistent with a WTA model, and with any generative model based on mixture distributions. But it can still be viewed as approximate inference of the posterior distribution $p(\v z|\v y,\v W)$ over hidden causes $\v z$ for the given inputs $\v y$ in the probabilistic model $\P$ shown in D. {\bf D}) Schema of probabilistic model $\P$. The joint $p(\v z, \v y | \v W)$ is defined by the prior $p(\v z)$ and the likelihood $p(\v y|\v z,\v W)$. Synaptic efficacies $\v W$ implicitly define the likelihood over inputs $\v y$ for given hidden causes $\v z$ (probability values for inputs $y_i$ indicated by shading of squares; $\sigma_\text{LS}$ denotes the logistic sigmoid function). In the likelihood model,
a given input $y_i$ is $1$ (corresponding to a black pixel in this example) with high probability if it has a large weight $w_{im}$ to at least one active hidden cause $z_m$. In the depicted example, $y_i$ belongs to two bars (see A, B) with corresponding active hidden causes. Due to the nonlinear behavior of the likelihood, its probability is comparable to the case where only one of the hidden causes $z_m$ is active. The inset on the right depicts the Gaussian prior $p(\v z)$ on hidden causes $\v z$ with $\mu=4$ and $\sigma=2.5$. The prior implicitly incorporates in $\P$ the impact of the inhibitory feedback in the data-based model $\mathcal{M}$ (therefore indicated with dashed lines).
\label{fig:fig_generative_model}
\end{figure}
\subsubsection{A probabilistic model $\boldsymbol{\P}$ for the layer 2/3 microcircuit motif:}
Fig.~\ref{fig:fig_generative_model}A-C illustrates the putative stochastic computation performed by the model $\mathcal{M}$, i.e., how network input leads to network activity in the model.
Assume that we have a probabilistic model $\P$ for the inputs $\v y$ defined by a prior $p(\v z)$ and a likelihood $p(\v y|\v z, \v W)$. These distributions describe how one can generate input samples $\v y$ by first drawing a {\em hidden state} vector $\v z$ from $p(\v z)$
and then drawing an input vector $\v y$ from $p(\v y|\v z, \v W)$. Therefore, such a probabilistic model $\P$ is also called a generative model (for the inputs).
If the distribution of network states \eqref{eq:post_NS} is the posterior distribution given by
\begin{align}\label{eq:posterior-spelled}
p(\v z| \v y, \v W) = \frac{p(\v z) p(\v y| \v z, \v W)}{\sum_{\v z'} p(\v y, \v z'|\v W)},
\end{align}
then the network performs probabilistic inference in this probabilistic model $\P$.
The inference task described by eq.~\eqref{eq:posterior-spelled}
assumes that $\v y$ is given and the hidden causes $\v z$ (such as the basic components of a visual scene) have to be inferred. This inference can intuitively also be described as providing an "explanation" $\v z$ for the current observation $\v y$ according to the generative model $\P$.
In the following, we describe a probabilistic model $\P$ and show that eq.~\eqref{eq:post_NS} approximates the posterior distribution of this model. This implies that the simplified dynamics of the data-based model $\mathcal{M}$ approximate probabilistic inference in the probabilistic model $\P$.
The probabilistic model $\P$ is defined by two distributions, the prior over hidden variables $p(\v z)$ (that captures constraints on network activity imposed for example by lateral inhibition) and the conditional likelihood distribution over input variables $p(\v y|\v z, \v W)$ that describes the probability of input $\v y$ for a given network state $\v z$ in a network with parameters $\v W$. These two distributions define the joint distribution over hidden and visible variables since $p(\v z, \v y | \v W) = p(\v y|\v z, \v W) p(\v z)$, see Fig.~\ref{fig:fig_generative_model}D.
The specific forms of these two distributions in the probabilistic model $\P$ considered for $\mathcal{M}$ are discussed in the following and defined by \mes{genmod_prior}-\eqref{eq:genmod_likefact} below.
In previously considered hard WTA models \cite{nessler2013bayesian,habenschuss2013emergence}, strong lateral inhibition was assumed. This corresponded to a prior where only a single component $z_j$ of the hidden vector $\v z$ can be active at any time. The biologically more realistic divisive inhibition in $\mathcal{M}$ allows several of them to fire simultaneously.
This corresponds to a prior that induces sparse activity
in a soft manner (``adaptive'' k-WTA): It does not enforce a strict ceiling $k$ on the number of $z$-neurons that can fire within a time interval of length $\tau$, but only tries to keep this number within a desired range.
Hence we use as prior in $\P$ a Gaussian distribution
\begin{align}\label{eq:genmod_prior}
p(\v z) &= \frac{1}{Z_{\text{prior}}} \exp\left(- \frac{1}{2\sigma^2}\left(\sum_{m=1}^M z_m - \mu\right)^2\, \right)\quad ,
\end{align}
where $Z_{\text{prior}}$ is a normalizing constant, $\mu \in \mathbb{R}$ is a parameter that shifts the mean of the distribution, and $\sigma^2>0$ defines the variance (see Fig.~\ref{fig:fig_generative_model}D). Note that the Gaussian is
restricted to integers as the sum runs over binary random variables $z_1, \ldots, z_M$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\textwidth]{likelihoods.eps}
\end{center}
\caption{{\bf Likelihood model of $\P$.} {\bf (A)}
Likelihood that an input $y_i$ is $1$ (red) or $0$ (black) for the proposed likelihood model \eqref{eq:genmod_likefact} (full lines) and for the noisy-OR likelihood \eqref{eq:nOR} (broken lines). The likelihood of an input $y_i$ depends on the hidden causes $\v z$ through the weighted contribution $a_i = \gamma \bar{\v w}_i^T \v z$ of the hidden causes to this input. For large $a_i$, the likelihood of $y_i=1$ approaches $1$, while for $a_i=0$ it is $0.5$ in the proposed likelihood model, whereas it is $0$ in the noisy-OR model. {\bf (B)} Approximation of $\log (1+\exp(a_i))$ (black full line) by $a_i$ (red broken line) as used in Eq.~\eqref{eq:appr_A}.
}
\label{fig:likelihood}
\end{figure}
As in other generative models, we assume for the sake of theoretical tractability that the conditional likelihood $p(\v y|\v z, \v W)$ factorizes, so that each input $y_i$ is independently explained by the current network state $\v z$:
\begin{align}\label{eq:genmod_condlike}
p(\v y|\v z, \v W) &= \prod_{i=1}^N p(y_i|\v z, \v W).
\end{align}
A probabilistic model for hard WTA circuits can only explain each input variable $y_i$ by a single hidden cause $z_m$.
In contrast, in probabilistic models with soft inhibition and the prior \eqref{eq:genmod_prior}, several hidden causes can be active simultaneously and explain together an input variable $y_i$. We define $\bar{\v w}_i$ as the vector of weights $\bar{\v w}_i = (w_{i1}, \dots, w_{iM})^T$ that define the likelihood for variable $y_i$. We consider the following likelihood model
\begin{align}
p(y_i|\v z, \v W) &= \frac{\exp({\gamma \bar{\v w}_i^T} \v z)^{y_i}}{1+\exp({\gamma \bar{\v w}_i^T} \v z)} = \frac{\exp(a_i)^{y_i}}{1+\exp(a_i)} = \sigma_\text{LS}\left((2 y_i - 1)\, a_i\right), \label{eq:genmod_likefact}
\end{align}
where we have defined $a_i = \gamma \bar{\v w}_i^T \v z$, and $\sigma_\text{LS}$ is the logistic sigmoid $\sigma_\text{LS}(u) = \frac{1}{1+\exp(-u)}$. This likelihood function is shown in Figure \ref{fig:likelihood}A together with the often used noisy-OR likelihood. Note that if none of the hidden causes $z_m$ is active, i.e., $z_m=0$ for all $m$, then $y_i=1$ with probability $0.5$. Each active hidden cause $z_m=1$ with $w_{im}>0$ increases the probability that input variable $y_i$ assumes the value $1$ (see also Fig.~\ref{fig:fig_generative_model}D). This likelihood allows the generative model to deal with situations where an input neuron can fire in the context of different hidden causes, for example with pixels in the network inputs that lie in the intersection of different patterns (see Fig.~\ref{fig:fig_generative_model}). The soft Gaussian prior \eqref{eq:genmod_prior} allows the internal model to develop modular representations for different components of complex input patterns.
This likelihood is quite similar to the frequently used noisy-OR model (see e.g. \cite{neal1992connectionist,saund1995multiple}):
\begin{align}
p_\text{nOR}(y_i|\v z, \v W) &= \exp(-a_i)^{1-y_i}(1-\exp(-a_i))^{y_i} \label{eq:nOR}
\end{align}
One difference is that (for purely excitatory weights) the probability of an input $y_i$ being one is at least $0.5$ in the proposed likelihood, while it is $0$ in the noisy-OR model when no hidden cause is active (see Fig.~\ref{fig:likelihood}A). Such a model may reflect the situation that network inputs are noisy, so their firing rates are never zero.
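This difference is easy to verify numerically; the following small sketch evaluates both likelihoods (function names are ours):
\begin{verbatim}
import numpy as np

def p_proposed(y_i, a_i):
    """Proposed likelihood: exp(a)^y / (1 + exp(a))."""
    return np.exp(a_i * y_i) / (1.0 + np.exp(a_i))

def p_noisy_or(y_i, a_i):
    """Noisy-OR: p(y = 1) = 1 - exp(-a)."""
    return np.where(y_i == 1, 1.0 - np.exp(-a_i), np.exp(-a_i))

a = np.linspace(0.0, 5.0, 6)
print(p_proposed(1, a))   # starts at 0.5 for a = 0, approaches 1
print(p_noisy_or(1, a))   # starts at 0   for a = 0, approaches 1
\end{verbatim}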
We now analyze the relationship between the probabilistic model $\P$ and the description of the data-based model $\mathcal{M}$ in the neural sampling framework. We will see that $\mathcal{M}$ approximates probabilistic inference in $\P$. Finally, we show that adaptation of network parameters in $\mathcal{M}$ through STDP can be understood as an approximate stochastic expectation maximization (EM) process in the corresponding probabilistic model $\P$.
\subsubsection{Interpretation of the dynamics of $\boldsymbol{\mathcal{M}}$ in the light of $\boldsymbol{\P}$: }
Using the likelihood and prior of $\P$, the posterior of hidden states $\v z$ for given inputs $\v y$ and given parameters $\v W$ is
\begin{align}\label{eq:post}
p(\v z| \v y, \v W) = \frac{1}{Z}\exp\left\{ \gamma \cdot \left(\sum_{i,m} w_{im} y_i z_m - \frac{1}{2} \sum_{m\neq l} \beta z_m z_l + \sum_m \alpha z_m - \frac{1}{\gamma} \sum_i \log (1+\exp(a_i))\right) \right\},
\end{align}
where $Z$ is a normalizing constant, $\beta=\frac{1}{\gamma \sigma^2}$, and $\alpha=\frac{2\mu-1}{2 \gamma \sigma^2}$. The terms including $\beta$ and $\alpha$ stem from the prior, while the last term stems from the normalization of the likelihood. When we compare this posterior to the posterior of the network model eq.~\eqref{eq:post_NS}, we see that they are quite similar with $\beta$ denoting the strength of inhibitory connections and $\alpha$ being the neural excitabilities.
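For completeness, the values of $\beta$ and $\alpha$ follow from expanding the square in the prior \eqref{eq:genmod_prior} and using $z_m^2=z_m$ for binary variables:
\begin{align*}
-\frac{1}{2\sigma^2}\Big(\sum_m z_m - \mu\Big)^{2} &= -\frac{1}{2\sigma^2}\Big(\sum_{m\neq l} z_m z_l + (1-2\mu)\sum_m z_m + \mu^2\Big)\\
&= \gamma\Big(\!-\frac{\beta}{2}\sum_{m\neq l} z_m z_l + \alpha \sum_m z_m\Big) + \text{const}.
\end{align*}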
The last term in eq.~\eqref{eq:post} is problematic since the $a_i$'s depend on $\v z$ and thus the whole posterior is not a Boltzmann distribution and can therefore not be computed by the model $\mathcal{M}$. It turns out however that this last term can be approximated quite well by a term that is linear in $\v z$.
Note that for zero $a_i$ (i.e., for zero weights or for the zero-$\v z$-vector), this last term evaluates to $\log(2)$. But as $a_i$ increases, one can neglect the $1$ in the logarithm and the expression quickly approaches $a_i$. We can thus write
\begin{align}\label{eq:appr_A}
\sum_i \log (1+\exp(a_i)) \approx \sum_i a_i = \gamma \sum_i \sum_m w_{im} z_m = \gamma \sum_m z_m \left(\sum_i w_{im}\right),
\end{align}
where the term in the brackets on the right is just the L1-norm of the weight vector of neuron $m$. Note that for a given weight matrix, an increased $\gamma$ leads to a better approximation. The approximation of $\log (1+\exp(a_i))$ by $a_i$ is illustrated in Fig.~\ref{fig:likelihood}B.
Hence, the first approximate posterior we consider is given by
\begin{align}\label{eq:post_A1}
p_\text{A1}(\v z| \v y, \v W) = \frac{1}{Z}\exp\left\{ \gamma \cdot \left( \sum_{i,m} w_{im} y_i z_m - \frac{1}{2} \sum_{m\neq l} \beta z_m z_l + \sum_m \alpha z_m - \sum_m z_m \sum_i w_{im}\right) \right\}.
\end{align}
This is a Boltzmann distribution of the form \eqref{eq:post_NS}, and the last term amounts to a neuron-specific homeostatic bias that depends on the sum of incoming excitatory weights.
Matching the terms of this equation to the terms in eq.~\eqref{eq:post_NS} and performing the same match in the membrane potential \eqref{eq:membrane_NS}, we see that the membrane potential of neurons in this approximation is given by
\begin{equation}\label{eq:membrane_A1}
u_m(t) = \sum_{i}^N w_{im} \tilde y_i(t) - \sum_{j\neq m} \beta \tilde z_j(t) + \alpha - \sum_i w_{im}.
\end{equation}
If excitatory weight vectors are normalized to an L1-norm of $w_\text{norm}=\sum_i w_{im}$ for all $m$, this simplifies to
\begin{equation}\label{eq:membrane_A2}
u_m(t) = \sum_{i}^N w_{im} \tilde y_i(t) - \sum_{j\neq m} \beta \tilde z_j(t) + \alpha - w_\text{norm}.
\end{equation}
Note that $w_\text{norm}$ can be incorporated into $\alpha$.
Such a constant weight sum could be enforced in a biological network by a synaptic scaling mechanism \cite{turrigiano2004homeostatic,savin2010independent} that normalizes the sum of incoming weights to a neuron. For the data-based model $\mathcal{M}$, \cite{JonkeETAL:17a} used uniform excitabilities $\alpha$ for all excitatory neurons and no homeostasis for simplicity, see \me{membrane}.
We will argue below under which conditions this approximation is justified.
We consider the posterior distribution of such circuits as our second approximation of the exact posterior:
\begin{align}\label{eq:post_A2}
p_\text{A2}(\v z| \v y, \v W) = \frac{1}{Z}\exp\left\{ \gamma \cdot \left( \sum_{i,m} w_{im} y_i z_m - \frac{1}{2} \sum_{m\neq l} \beta z_m z_l + \sum_m z_m (\alpha - w_\text{norm}) \right) \right\}.
\end{align}
Note that in this case, $w_\text{norm}$ effectively leads to a smaller mean of the Gaussian prior. A detailed discussion about how parameters of the generative probabilistic model $\P$ can be mapped to parameters of the data-based microcircuit motif model $\mathcal{M}$ is provided in {\em Network parameter interpretation} in {\em Methods}.
\begin{figure}[tp]
\begin{center}
\includegraphics[width=0.6\textwidth]{approx.eps}
\end{center}
\caption{{\bf Empirical evaluation of approximations in a superposition-of-bars task.}
\textbf{A}) Sample input patterns, depicted on the 8$\times$8 grid. Patterns consist of a varying number of superimposed horizontal or vertical bars.
\textbf{B}) Evolution of the Kullback-Leibler divergence between the exact posterior and the posterior of approximation A1 during learning (red).
As a comparison, the KL-divergence to a uniform distribution is indicated in blue.
\textbf{C}) Example reconstructions of the inputs from panel A, computed from the hidden states with maximum probability in the exact posterior (top row)
and in the posterior of approximation A1 after learning (bottom row). Scale between 0 (white) and 1 (black).
\textbf{D}) Weight vectors of network neurons, depicted on the same 8$\times$8 grid as the input in panel A. Scale between 0 (white) and 6 (black).
\textbf{E}) Evolution of the Kullback-Leibler divergence between the exact posterior and the posterior of approximation A2 (red),
as well as the posterior of approximation A2 with an adjusted sparsity prior (yellow), during learning. As a comparison, the KL-divergence to a uniform distribution is indicated in blue.}
\label{fig:approx}
\end{figure}
We evaluated the impact of these two approximations in F\"oldiak's superposition-of-bars problem \cite{foldiak1990forming}. This is a standard blind-source separation problem that has also been used in \cite{JonkeETAL:17a} to evaluate the data-based network model $\mathcal{M}$. In this problem, input patterns $\v y$ are two-dimensional pixel arrays on which horizontal and vertical bars (lines) are superimposed, see Fig.~\ref{fig:approx}A. Input patterns were generated with a superposition of 1 to 3 bars from the distribution that was used in \cite{JonkeETAL:17a}. We performed inference of hidden causes by sampling from the approximate posterior distribution $p_\text{A1}(\v z| \v y, \v W)$ given by eq.~\eqref{eq:post_A1} for a network of 20 hidden-cause neurons. We performed approximate stochastic online EM in order to optimize the parameters of the model. The synaptic update rule \eqref{eq:update1} used for parameter updates is discussed in detail below. We compared the approximated posterior to the exact one \eqref{eq:post} by computing the Kullback-Leibler (KL) divergence $D_\text{KL}(p(\v z| \v y, \v W) || p_\text{A1}(\v z| \v y, \v W))$. The KL divergence was small throughout learning, with a slight decrease during the process, see Fig.~\ref{fig:approx}B (mean KL divergence during the second half of training was $0.55$). To evaluate what that KL-divergence means for the inference, we considered the hidden state vector $\v z_\text{max}$ with the maximum posterior probability after learning in the exact and approximate posterior and used this to reconstruct the input pattern by computing $\hat{\v y} = \sigma_\text{LS}(\gamma \v W^T \v z_\text{max})$. The reconstructed inputs for the 8 example inputs of Fig.~\ref{fig:approx}A are shown in Fig.~\ref{fig:approx}C for the exact posterior (top) and the approximate posterior (bottom). The approximate reconstructions resemble the exact ones in many cases, with occasional misses of a basic pattern. The final weights of the 20 neurons are shown in Fig.~\ref{fig:approx}D in the two-dimensional layout of the input to facilitate interpretability. Note that all basic patterns were represented by individual neurons with additional neurons that specialized on combined patterns.
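For concreteness, this experiment can be condensed into a short Python sketch that uses the \texttt{gibbs\_sample} routine from above for the stochastic E-step and the local update rule \eqref{eq:update1} for the M-step; the constants besides the network size and $\gamma$ are illustrative, and the bar generator is a simplified stand-in for the input distribution of \cite{JonkeETAL:17a}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, M, gamma, eta = 64, 20, 2.0, 0.01    # 8x8 inputs, 20 hidden causes
W = rng.uniform(0.0, 0.1, size=(N, M))
beta, alpha = 1.0, 2.0                  # from the Gaussian prior (illustrative)

def sample_bars_pattern(rng, side=8, max_bars=3):
    """Superimpose 1-3 random horizontal/vertical bars on a grid."""
    img = np.zeros((side, side))
    for _ in range(rng.integers(1, max_bars + 1)):
        idx = rng.integers(side)
        if rng.random() < 0.5:
            img[idx, :] = 1.0           # horizontal bar
        else:
            img[:, idx] = 1.0           # vertical bar
    return img.ravel()

for step in range(20000):
    y = sample_bars_pattern(rng)
    c = alpha - W.sum(axis=0)                 # bias of p_A1, eq. (post_A1)
    z = gibbs_sample(y, W, beta, c, gamma)    # stochastic E-step
    # approximate M-step: local update rule (update1)
    sig = 1.0 / (1.0 + np.exp(-gamma * W))
    W += eta * z[None, :] * (y[:, None] - sig)
    np.clip(W, 0.0, None, out=W)              # keep weights excitatory
\end{verbatim}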
The approximate posterior $p_\text{A2}$ is equivalent to the approximate posterior $p_\text{A1}$ if the synaptic weights of each neuron are normalized to a common $L1$ norm. It turned out in \cite{JonkeETAL:17a} that such a normalization is not strictly necessary. In that work, the data-based model $\mathcal{M}$ managed to perform blind source separation with a posterior that can be best described by $p_\text{A2}$ without normalization of synaptic efficacies. We found that for the superposition-of-bars problem, the posterior distribution $p_\text{A2}$ differs significantly from the exact posterior if we set $w_\text{norm}=0$ in eq.~\eqref{eq:post_A2}, see red line in Fig.~\ref{fig:approx}E. This difference is mostly induced by the tendency of $p_\text{A2}$ to prefer many hidden causes due to the missing last term in eq.~\eqref{eq:post_A2}. If the prior was corrected to reduce the number of hidden causes, we found that the approximation was significantly improved, in particular as the network weight vectors approached their final norm values, see yellow line in Fig.~\ref{fig:approx}E. A closer inspection of eq.~\eqref{eq:post_A2} in comparison with eq.~\eqref{eq:post_A1} shows that this approximation is effective if basic patterns consist of a similar number of active units, because otherwise patterns with strong activity are preferred over weakly active ones (this is exactly what the last term in eq.~\eqref{eq:post_A1} compensates for).
We conclude from this analysis that the microcircuit motif model $\mathcal{M}$ approximates probabilistic inference in a noisy-OR-like probabilistic model of its inputs. The prior on network activity favors sparse network activity, but does not strictly enforce a predefined activity level. Such a more flexible regulation of network activity is obviously important when the network input is composed of a varying number of basic component patterns.
We show below that this network behavior in combination with STDP allows the microcircuit motif model $\mathcal{M}$ to perform blind source separation of mixed input sources.
Our analysis above has shown that the computation of blind source separation in $\mathcal{M}$ can be facilitated either by the normalization of activity in input populations, or by homeostatic mechanisms that normalize excitatory synaptic efficacies within each neuron.
\subsubsection{STDP in the microcircuit motif model $\mathcal{M}$ creates an internal model of network inputs:}
After we have established a link between a well-defined probabilistic model $\P$ and the spiking dynamics of microcircuit motif model $\mathcal{M}$, we can now analyze plasticity in the network.
The probabilistic model $\P$ defines a likelihood distribution over inputs $\v y$ that depends on the parameters $\v W$ through
\begin{align}\label{eq:genmod_like}
p(\v y|\v W) &= \sum_{\v z} p(\v z) p(\v y| \v z, \v W),
\end{align}
where the sum runs over all possible hidden states $\v z$.
We propose that STDP in $\mathcal{M}$ can be viewed as an adaptation of the parameters $\v W$ so that $p(\v y|\v W)$ as defined by the probabilistic model $\P$ with parameters $\v W$ approximates the actually encountered distribution of spike inputs $p^*(\v y)$
within the constraints of the prior $p(\v z)$. Since the prior typically is defined to favor sparse representations, this tends to extract the hidden sources of these patterns, an operation called blind source separation \cite{foldiak1990forming}.
More precisely, we show that STDP in $\mathcal{M}$ approximates stochastic online EM \cite{Sato:99, Bishop:06} in $\P$. Given some external distribution $p^*(\v y)$ of synaptic inputs, EM adapts the model parameters $\v W$ such that the model likelihood distribution $p(\v y|\v W)$ approximates the given distribution $p^*(\v y)$.
More formally, the Kullback-Leibler divergence between the likelihood of inputs in the internal model $p(\v y|\v W)$ and the empirical data distribution $p^*(\v y)$ is brought to a local minimum.
The theoretically optimal learning rule for EM contains non-local terms which are hard to interpret from a biological point of view.
In the following, we derive a local approximation to yield a simple STDP-like learning rule.
The goal of the EM algorithm is to find parameters that (locally) minimize the Kullback-Leibler divergence between the likelihood $p(\v y|\v W)$ of inputs in the probabilistic model $\P$ and the empirical data distribution $p^*(\v y)$, that is, the distribution of inputs experienced by the network $\mathcal{M}$. This is equivalent to the maximization of the average data log-likelihood $E_{p^*}[\log p(\v y|\v W)]$.
For a given set of training data $\v Y$ and corresponding unobserved hidden variables $\v Z$, this corresponds to maximizing $\log p(\v Y|\v W)$.
The optimization is done by iteratively performing two steps. For given parameters $\v W^{\text{old}}$, the posterior distribution over hidden variables $p(\v Z|\v Y, \v W^{\text{old}})$ is determined (the E-step). Using this distribution, one then performs the M-step where $E_{p(\v Z|\v Y,\v W^{\text{old}})}[\log p(\v Y,\v Z|\v W)]$ is maximized with respect to $\v W$ to obtain better parameters for the model. Each such iteration is guaranteed to increase the data log likelihood (if not already at a local optimum), since $\mathcal{L} = E_{p(\v Z|\v Y,\v W^{\text{old}})} \left[ \log \frac{p(\v Y, \v Z |\v W)}{p(\v Y, \v Z |\v W^{\text{old}})}\right]$ is a lower bound on the improvement of the data log likelihood, that is, $\mathcal{L}\le \log p(\v Y|\v W) - \log p(\v Y|\v W^{\text{old}})$, and the M-step ensures $\mathcal{L}\ge 0$.
These steps are iterated until convergence of parameters to a local optimum \cite{Bishop:06}. Computation of the M-step in the probabilistic model $\P$ is hard. In the generalized EM algorithm, the M-step is replaced by a procedure that just improves the parameters, without necessarily obtaining the optimal ones for a single M-step. This can be done for example by changing the parameters in the direction of the gradient
\begin{equation}
\Delta w_{im} \propto \frac{\partial}{\partial w_{im}} E_{p(\v Z|\v Y,\v W^{\text{old}})}[\log p(\v Y,\v Z|\v W)].
\end{equation}
Since in our model, we assume that synaptic efficacy changes are instantaneous for each pre-post spike pair, we need to consider an online-version of the generalized EM algorithm. In stochastic online EM, for each data example $\v y^{(k)}$, a sample $\v z^{(k)}$ from the posterior is drawn (the stochastic E-step) and parameters are changed according to this sample-pair.
As shown above, the $\mathcal{M}$ network implements an approximation of the stochastic E-step.
In the M-step, each parameter $w_{im}$ is then updated in the direction of the gradient $\Delta w_{im} \propto \frac{\partial}{\partial w_{im}} \log p(\v y^{(k)}, \v z^{(k)}| \v W)$. As the prior $p(\v z)$ in $\P$ does not depend on $\v W$, this is equivalent to $\Delta w_{im} \propto \frac{\partial}{\partial w_{im}} \log p(\v y^{(k)} | \v z^{(k)}, \v W)$.
For the likelihood given by eq.~\eqref{eq:genmod_likefact}, this derivative is given by
\begin{equation}\label{eq:dw}
\frac{\partial}{\partial w_{im}} \log p (\v y^{(k)}|\v z^{(k)}, \v W) =
\gamma z_m \left( y_i - \frac{\exp(a_i)}{1+\exp(a_i)} \right) = \gamma z_m \left( y_i - \sigma_{LS}(a_i) \right),
\end{equation}
where $\sigma_{LS}$ denotes the logistic sigmoid function.
Hence, the synaptic update rule for weight $w_{im}$ is given by
\begin{equation} \label{eq:lr_nonlocal}
\Delta w_{im} = \eta z_m \left( y_i - \sigma_{LS}(a_i) \right),
\end{equation}
where $\eta>0$ is a learning rate.
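For concreteness, the following Python sketch (with variable names of our choosing) checks the gradient \eqref{eq:dw} numerically against a finite-difference approximation. It assumes a Bernoulli likelihood with logits $a_i = \gamma \sum_m w_{im} z_m$; any constant offset in $a_i$ drops out of the derivative with respect to $w_{im}$, so this assumption does not affect the check:
\begin{verbatim}
import numpy as np

# Sketch (our variable names): numerical check of the gradient in eq. (dw),
# assuming a Bernoulli likelihood with logits a_i = gamma * sum_m w_im * z_m.
rng = np.random.default_rng(0)
N, M, gamma = 5, 4, 2.0                   # inputs, hidden causes, scaling
W = rng.uniform(0.0, 1.0, (N, M))
z = np.array([1.0, 0.0, 1.0, 0.0])        # sampled hidden state z^(k)
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0])   # input pattern y^(k)

sigma = lambda a: 1.0 / (1.0 + np.exp(-a))

def log_lik(W):
    a = gamma * W @ z                     # logits a_i
    return np.sum(y * a - np.log1p(np.exp(a)))

grad_analytic = gamma * np.outer(y - sigma(gamma * W @ z), z)   # eq. (dw)

delta = 1e-6
grad_numeric = np.zeros_like(W)
for i in range(N):
    for m in range(M):
        Wp = W.copy(); Wp[i, m] += delta
        grad_numeric[i, m] = (log_lik(Wp) - log_lik(W)) / delta

assert np.allclose(grad_analytic, grad_numeric, atol=1e-4)
\end{verbatim}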
This learning rule is not local as it requires information about the activation of all output neurons as well as values of all synaptic weights originating from input neuron $i$. In order to make this biologically plausible we approximate rule \eqref{eq:lr_nonlocal} by
\begin{equation}\label{eq:update1}
\Delta w_{im} = \eta z_m \left( y_i - \sigma_{LS}(\gamma w_{im}) \right).
\end{equation}
This rule uses only locally available information at the synapse. What are the consequences of this approximation during learning? If only a single neuron in the network is active, then the approximation is exact. Otherwise, the approximation ignores what other neurons contribute to the explanation of input component $y_i$. This means that for $y_i=1$, the weight will be increased further even if $y_i=1$ is already fully explained by the network activity. For $y_i=0$, the decrease will in general be smaller than in the exact rule (since weights are non-negative). Note however that only the magnitudes of weight changes are affected, but not which weights change or the sign of the change. Hence, we can conclude that the angle between the approximate parameter change vector and the exact parameter change vector is between $0$ and $90$ degrees. In other words, the inner product of these two vectors is always non-negative and the updates are performed in the correct direction. This was confirmed in simulations. In the learning experiment described in Fig.~\ref{fig:approx}, we compared the approximate update (that was used to optimize the model) with the update that was proposed by the exact rule \eqref{eq:dw} at every $50^\text{th}$ update step. The angle between the exact and approximate update vector was between $0^\circ$ and $84^\circ$ with a mean of $57^\circ$.
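Continuing the sketch above, the following lines compare the exact update \eqref{eq:lr_nonlocal} with the local approximation \eqref{eq:update1} and measure the angle between the two update vectors:
\begin{verbatim}
# Continuing the sketch above: compare the exact update (eq. lr_nonlocal)
# with the local approximation (eq. update1), with eta = 1 in both cases.
dW_exact = np.outer(y - sigma(gamma * W @ z), z)
dW_approx = z[None, :] * (y[:, None] - sigma(gamma * W))

cos = (np.sum(dW_exact * dW_approx)
       / (np.linalg.norm(dW_exact) * np.linalg.norm(dW_approx)))
angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
print(f"angle between exact and approximate update: {angle:.1f} degrees")
\end{verbatim}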
In the simplified dynamics, a value of $z_m^{(k)}=1$ is indicated by a spike in network neuron $m$ when pattern $\v y^{(k)}$ is presented as input. We therefore map this update to the following synaptic plasticity rule: For each postsynaptic spike, update weight $w_{im}$ according to
\begin{align}\label{eq:STDP_meth}
\Delta w_{im} = \eta \left(y_i(t) - \sigma_{LS}(\gamma w_{im}) \right)\;.
\end{align}
According to this learning rule, when the presynaptic neuron $i$ spikes shortly before the postsynaptic neuron, this results in long-term potentiation (LTP) which is weight dependent according to the term $\sigma_{LS}(\gamma w_{im})$. Due to the weight dependence, large weights lead to small weight changes, with vanishing changes for very large weights. When a post-synaptic spike by neuron $m$ is not preceded by a presynaptic spike by neuron $i$ (e.g.~when the pre-synaptic spike comes after the post-synaptic spike), this results in long-term depression (LTD). LTD is also weight dependent, but to a much lesser extent, as the weight-dependent factor varies only between $0.5$ and $1$. This behavior is mimicked by the standard STDP rule implemented in the data-based model $\mathcal{M}$, which uses a standard weight dependence where updates decrease exponentially with $w_{im}$ for LTP and do not depend on $w_{im}$ for LTD.
Hence, the dynamics and synaptic plasticity of the data-based model $\mathcal{M}$ can be understood as an approximation of EM in the probabilistic model $\P$, that creates an internal model for the distribution $p^*(\v y)$ of network inputs. This internal probabilistic model is defined by a noisy-OR-like likelihood term and a sparse prior on the hidden causes of the current input pattern. Hence, STDP can be understood as optimizing model parameters such that the observed distribution of input patterns can be explained through a set of basic patterns (hidden causes). It is assumed that the input at each time point can be described by a combination of a sparse subset of these patterns. In other words, STDP in the microcircuit motif model $\mathcal{M}$ performs blind source separation of input patterns.
\section{Discussion}
We have provided a novel theoretical framework for analyzing and understanding computational properties that emerge from STDP in a prominent cortical microcircuit motif: interconnected populations of pyramidal cells and $\text{PV}^+$ interneurons in layer 2/3.
The computer simulations in \cite{JonkeETAL:17a}, that were based on the data from \cite{avermann2012microcircuits}, indicate that
the computational operation of this network motif cannot be captured adequately by a WTA model.
Instead, this work suggests a k-WTA model, where a varying number of the most excited neurons become active.
Since the WTA circuit model turns out to be inadequate for capturing the dynamics of interacting pyramidal cells and $\text{PV}^+$ interneurons, one needs to replace the probabilistic model that one had previously used to analyze the impact of STDP on the computational function of the network motif.
Mixture models such as those proposed by \cite{nessler2013bayesian} and \cite{habenschuss2013emergence} are inseparably tied to WTA dynamics: For drawing a sample from a mixture model one first decides stochastically from which component of the mixture model this sample should be drawn (and only a single component can be selected for that). We have shown here that a quite different generative model, similar to the noisy-OR model, captures the impact of soft lateral inhibition on emergent network codes and computations much better (Fig.~\ref{fig:fig_generative_model}). The noisy-OR model is well-known in machine learning \cite{Neal:93,saund1995multiple}, but has apparently not previously been considered in computational neuroscience.
Our probabilistic model $\P$ further suggests that the varying number of active neurons in the circuit may depend both on a prior that is encoded by the network parameters and the familiarity of the network input.
We have shown that the evolution of the dynamics and computational function of the network motif under STDP can be understood from the theoretical perspective as an approximation of expectation maximization (EM) for fitting a noisy-OR based generative model to the statistics of the high dimensional spike input stream. This link to EM is very helpful from a theoretical perspective, since EM is one of the most useful theoretical principles that are known for understanding self-organization processes.
In particular, this theoretical framework allows us to elucidate emergent computational properties of the network motif for spike input streams that contain superimposed firing patterns from upstream networks. It disentangles these patterns and represents the occurrence of each pattern component by a separate sparse assembly of neurons, as already postulated in \cite{foldiak1990forming}.
The established relationship between the network $\mathcal{M}$ and the probabilistic model $\P$ allows us to relate the network parameters $\alpha$ and $w^{\text{IE}}$ (\me{membrane}) of $\mathcal{M}$ to the parameters of the generative model $\P$. Briefly (for a detailed discussion, see {\em Network parameter interpretation} in {\em Methods}), the excitability $\alpha$ of pyramidal cells is proportional to $\frac{2 \mu -1}{2\sigma^2}$, see eq.~\eqref{eq:alpha}. The strength of inhibitory connections $w^{\text{IE}}$ to the pool of pyramidal cells is proportional to $\frac{1}{\sigma^2}$, see eq.~\eqref{eq:beta}. Hence, a large $\mu$ in combination with a small $\sigma^2$ (i.e., a sharp activity prior), leads to a large spontaneous activity that is tightly regulated by strong inhibitory feedback. On the other hand, a broad prior (larger $\sigma^2$) leads to weaker inhibitory feedback, thus allowing the network to attain a broader range of activities.
\subsubsection*{Related work}
A related theoretical study for WTA circuits was performed in \cite{nessler2013bayesian, HabenschussETAL:13, kappel2014stdp} and extended to sheets of WTA circuits in \cite{bill2015distributed}. It was assumed in these models that inhibition normalizes network activity exactly, leading to a strict WTA behavior. The analysis in the present work is much more complex and necessarily has to include a number of approximations. Our analysis reveals that the softer type of inhibition that we studied provides the network with additional computational functionality.
There exists also a structural similarity of the proposed learning rule \eqref{eq:STDP_meth} to those reported in \cite{nessler2013bayesian, HabenschussETAL:13, kappel2014stdp}. This is significant insofar as it raises the question of why the application of almost the same learning rule leads in one motif to the extraction of a single hidden cause and in another to the extraction of multiple causes. The answer most likely lies in the interplay between ``prior knowledge'' in the model (e.g.~in the form of the intrinsic excitability of neurons), the learning rule, and inhibition strength: As there are multiple neurons in the proposed microcircuit motif model which can spike in response to the same input, each one of them can adapt its synaptic weights to increase the likelihood of spiking again whenever the same or a similar input pattern is presented in the future, possibly in conjunction with other input components. This is manifested through increased total input strength to those neurons when the pattern is seen again. But this also results in increased total inhibition to all other neurons, thereby effectively limiting the number of winners. As there is no fixed normalization of firing rates (probabilities), as soon as the input strength caused by a single feature component is strong enough to trigger a spike in some neuron, that neuron will respond to each pattern which contains that particular feature. On average this will force neurons to specialize on a single feature component. Therefore, after learning, each spike can be interpreted as an indication of a particular feature component.
The noisy-OR model \me{nOR} is tightly related to the likelihood model used in this article. It is one of the most basic likelihood models that allows basic patterns to be combined. Noisy-OR and related models have previously been used in the machine learning literature as models for nonlinear component extraction \cite{saund1995multiple,lucke2008maximal}, or as basic elements in belief networks \cite{neal1992connectionist}, but they have so far not been linked to cortical processing.
The extraction of reoccurring components of input patterns is closely related to blind source separation and independent component analysis (ICA) \cite{HyvaerinenETAL:04}. Previous work in this direction includes implementations of ICA in artificial neural networks \cite{hyvarinen1999fast}, see also \cite{lucke2010expectation}. These abstract models are only loosely connected to computation in cortical network motifs.
\cite{savin2010independent} investigated ICA in the context of spiking neurons. Theoretical rules for intrinsic plasticity were derived which enable neurons in combination with input normalization, weight scaling, and STDP, to extract independent components of inputs. An interesting difference is that the inhibition in \cite{savin2010independent} acts to decorrelate neuronal activity. Intrinsic plasticity on the other hand enforces sparse activity (this sparsening has to happen on the time scale of input presentations). In our probabilistic model $\P$, sparse network activity is enforced by a prior over network activities, implemented in $\mathcal{M}$ through the inhibitory feedback that models experimentally found network connectivity \cite{avermann2012microcircuits}. This inhibition naturally acts on a fast time scale \cite{OkunLampl:08}, while the time scale for intrinsic plasticity is unclear \cite{turrigiano2004homeostatic}.
\section{Methods}\label{sec:methods}
\subsection{Definition of $\boldsymbol{\mathcal{M}}$: Data-based network model for a layer 2/3 microcircuit motif}
The layer 2/3 microcircuit motif was modeled in \cite{JonkeETAL:17a} by the data-based model $\mathcal{M}$. The model is described here briefly for completeness. See \cite{JonkeETAL:17a} for a thorough definition. The model $\mathcal{M}$ consists of
two reciprocally connected pools of neurons, an excitatory pool and an inhibitory pool. Inhibitory network neurons are recurrently connected. Excitatory network neurons receive additional excitatory synaptic input from a pool of $N$ input neurons.
Fig.~\ref{fig:fig_model}A summarizes the connectivity structure of the data-based model $\mathcal{M}$ together with connection probabilities.
Connection probabilities have been chosen according to the experimental data described in \cite{avermann2012microcircuits}.
Let $t_i^{(1)}, t_i^{(2)}, \dots$ denote the spike times of input neuron $i$.
The {\em output trace} $\tilde y_i(t)$ of input neuron $i$ is given by the temporal sum of unweighted postsynaptic potentials (PSPs) arising from input neuron $i$:
\begin{align} \label{eq:output_trace}
\tilde y_i(t) = \sum_{f} \epsilon(t-t_i^{(f)}),
\end{align}
where $\epsilon$ is the synaptic response kernel, i.e., the shape of the PSP.
It is given by a double-exponential function with a rise time constant $\tau_r=1$ ms and a fall time constant $\tau_f=10$ ms.
For given spike times, output traces of excitatory network neurons and inhibitory network neurons are defined analogously and denoted by $\tilde z_m(t)$ and $I_j(t)$ respectively.
The network consists of $M=400$ excitatory neurons, modeled as stochastic spike response model neurons \cite{JolivetETAL:06}, see eqs.~\eqref{eq:neuron_firing_prob} and \eqref{eq:membrane}.
See Sec.~\ref{sec:param_mot} for a motivation of network parameter values from a theoretical perspective.
The instantaneous firing rate $\rho_m$ of neuron $m$ can be re-written (by substituting \me{membrane} in \me{neuron_firing_prob}) as:
\begin{align}\label{eq:neuron_firing_prob_divisive}
\rho_m(t) &=
\frac{1}{\tau} \frac{\exp\left(\gamma \sum_i w_{im} \tilde y_i(t) + \gamma \alpha \right)}{\exp\left(\gamma \sum_{j \in \mathcal{I}_m} w^{\text{IE}} I_j(t)\right)}\;\;.
\end{align}
Here, the numerator includes all excitatory contributions to the firing rate $\rho_m(t)$. The denominator in this term for the firing rate describes inhibitory contributions, thereby reflecting divisive inhibition \cite{CarandiniHeeger:12}.
Apart from excitatory neurons there are $M_{\text{inh}}=100$ inhibitory neurons in the network.
Inhibitory neurons are also modeled as stochastic spike response neurons with an instantaneous firing rate given by
\begin{align}\label{eq:inh_firing_prob}
\rho_m^{\text{inh}}(t)\ &= \sigma_\text{rect}(u_m^{\text{inh}}(t)),
\end{align}
where $\sigma_\text{rect}$ denotes the linear rectifying function $\sigma_\text{rect}(u)=u$ for $u\ge 0$ and $0$ otherwise. The membrane potentials of inhibitory neurons are given by
\begin{align}
u_m^{\text{inh}} (t) = \sum_{i \in \mathcal{E}_m} w^{\text{EI}} \tilde z_i(t) -\sum_{j \in \mathcal{II}_m} w^{\text{II}} I_j(t),\label{eq:inh_membrane}
\end{align}
where $\tilde z_i(t)$ denotes synaptic input (output trace) from excitatory network neuron $i$,
$\mathcal{E}_m$ ($\mathcal{II}_m$) denotes the set of indices of excitatory (inhibitory) neurons that project to inhibitory neuron $m$, and
$w^{\text{EI}}$ ($w^{\text{II}}$) denotes the excitatory (inhibitory) weight to inhibitory neurons.
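For illustration, the following Python sketch evaluates eqs.~\eqref{eq:neuron_firing_prob_divisive}--\eqref{eq:inh_membrane} for one set of synaptic traces. The connectivity, the traces, and the time constant {\tt tau} are placeholder values of our choosing (the weight values anticipate the parameter derivation in {\em Network parameter interpretation} below); this is not the full simulation setup of \cite{JonkeETAL:17a}:
\begin{verbatim}
import numpy as np

# Illustrative sketch (not the full simulation of Jonke et al.): one
# evaluation of the excitatory rates (eq. neuron_firing_prob_divisive) and
# the inhibitory rates (eqs. inh_firing_prob, inh_membrane). Connection
# probabilities, traces, and tau are placeholder values of our choosing.
rng = np.random.default_rng(1)
N, M, M_inh = 64, 400, 100
tau, gamma, alpha = 0.01, 2.0, -5.57
w_IE, w_EI, w_II = 1.86, 13.57, 13.57

W = rng.uniform(0.0, 1.0, (M, N))          # input -> excitatory weights
C_IE = rng.random((M, M_inh)) < 0.6        # inhibitory -> excitatory
C_EI = rng.random((M_inh, M)) < 0.575      # excitatory -> inhibitory
C_II = rng.random((M_inh, M_inh)) < 0.575  # inhibitory -> inhibitory

y_trace = 0.05 * rng.random(N)             # output traces of input neurons
z_trace = 0.05 * rng.random(M)             # traces of excitatory neurons
I_trace = 0.05 * rng.random(M_inh)         # traces of inhibitory neurons

# Divisive inhibition: subtracting in the exponent divides the rate.
rho_exc = (1.0 / tau) * np.exp(gamma * (W @ y_trace) + gamma * alpha
                               - gamma * w_IE * (C_IE @ I_trace))

# Inhibitory neurons: linearly rectified membrane potential.
u_inh = w_EI * (C_EI @ z_trace) - w_II * (C_II @ I_trace)
rho_inh = np.maximum(u_inh, 0.0)
\end{verbatim}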
Synaptic connections from input neurons to excitatory network neurons are subject to STDP. A standard version of STDP is employed with an exponential weight dependency for potentiation \cite{habenschuss2013emergence}, see \cite{JonkeETAL:17a}.
The simulations for Fig.~\ref{fig:fig_tilted} are described in detail in \cite{JonkeETAL:17a}.
\subsection{Network parameter interpretation: }\label{sec:param_mot}
In the section {\em Interpretation of the dynamics of $\boldsymbol{\mathcal{M}}$ in the light of $\boldsymbol{\P}$}, we have established a relationship between the parameters of the probabilistic model $\P$ and network parameters. This relationship was however derived based on a simplified network model that included for example rectangular EPSPs and direct inhibitory connections without explicit inhibitory neurons. Nevertheless, one can also determine reasonable parameter settings for the data-based model $\mathcal{M}$ based on a prior on network activity that is defined in the probabilistic model $\P$. These parameters are the excitability $\alpha$ and the synaptic weights between and within excitatory and inhibitory network neurons.
In this section we start by assuming such a prior \me{genmod_prior} with parameters $\mu=-3.4$
(this includes already a correction of the prior for the missing $w_\text{norm}$) and $\sigma^2=0.35$
as well as a fitting parameter $\gamma=2$ (eq.~\ref{eq:neuron_firing_prob}) and deduce the parameters used in the simulations. As shown above, the neural excitability $\alpha$ is then given by
$\alpha=\frac{1}{\gamma} \frac{2\mu-1}{2\sigma^2}$.
We obtain for the chosen $\gamma=2$:
\begin{equation}\label{eq:alpha}
\alpha = \frac{1}{\gamma} \left(\frac{2\mu-1}{2\sigma^2}\right) = -5.57.
\end{equation}
The inhibition strength $\beta$ of the approximate dynamics \me{membrane_NS} is replaced by the weight $w^\text{IE}$ from inhibitory neurons to excitatory neurons in $\mathcal{M}$.
From the probabilistic model, we determined $\beta$ as $\beta=\frac{1}{\gamma \sigma^2}$ under the assumption of rectangular inhibitory PSPs. In $\mathcal{M}$ we use double-exponential PSPs instead of rectangular ones. One therefore has to correct for differences in the PSP integrals. Using this correction, one obtains
\begin{equation}
\beta'= c_{\text{PSP}} \beta = \frac{c_{\text{PSP}}}{\gamma \sigma^2} = 1.114,
\end{equation}
where $c_{\text{PSP}}$ is the ratio between the integrals over the rectangular PSPs used in the approximate dynamics and the double-exponential PSPs used in $\mathcal{M}$ ($c_{\text{PSP}}=0.78$ for the shapes used for $\mathcal{M}$).
The weights $w^\text{IE}$ can be determined by comparing \me{membrane} with \me{membrane_NS} with corrected inhibition strength
\begin{equation}
\sum_{j \in \mathcal{I}_m} w^{\text{IE}} I_j(t) = \sum_{j=1}^M \beta' z_j(t).
\end{equation}
Under the assumption that the number of spikes in the pool of inhibitory neurons is at any time (with a slight delay) approximately equal to the number of spikes in the pool of excitatory neurons, we obtain
\begin{equation}
p^{\text{IE}} w^{\text{IE}} = \beta',
\end{equation}
where $p^{\text{IE}}$ denotes the connection probability from inhibitory to excitatory network neurons. This yields
\begin{equation}\label{eq:beta}
w^{\text{IE}} = \beta' \frac{1}{p^{\text{IE}} } = \frac{c_{\text{PSP}}}{\gamma \sigma^2 p^{\text{IE}} } = 1.86.
\end{equation}
We now consider the weights $w^{\text{EI}}$ from excitatory to inhibitory neurons under the assumption of no I-to-I connections. In this case, in order to obtain the same number of spikes in the inhibitory neurons as in the excitatory neurons, each spike from an excitatory neuron should induce on average one spike within all inhibitory neurons, that is,
\begin{equation}
w^{\text{EI,no II}} \bar \epsilon M_\text{inh} p^{\text{EI}} = 1,
\end{equation}
where $\bar \epsilon=0.01/c_\text{PSP}$ is the integral over the double-exponential PSPs, $p^{\text{EI}}$ is the connection probability from excitatory to inhibitory neurons, and $M_\text{inh}=100$ is the number of inhibitory neurons. We obtain
\begin{equation}
w^{\text{EI,no II}} = c_\text{PSP}/p^{\text{EI}} = 1.357.
\end{equation}
Without I-to-I connections, this guarantees that excitation is balanced by inhibition. However, this single spike will occur on average with a delay of $5$ ms. Interestingly, the I-to-I connections can help to decrease this delay. In particular, if one demands that the inhibitory spike is elicited with a delay of less than one ms on average, then one can simply increase the weights $w^{\text{EI}}$ by some factor $c^\text{EI}=10$, leading to
\begin{equation}
w^\text{EI} = w^{\text{EI,no II}} c^\text{EI} = c_\text{PSP} c^\text{EI} / p^{\text{EI}} = 13.57.
\end{equation}
Now, each spike in the excitatory population induces in the inhibitory population for approximately $10$ ms a total rate of $1000$ Hz, leading to an average delay of $1$ ms. Without I-to-I connections this would however lead to too many successive spikes within these $10$ ms. The I-to-I connections can compensate for this excess excitation. For an approximately correct compensation, the first inhibitory spike has to balance out this excitation, which is approximately achieved by providing exactly the same amount of inhibition to inhibitory neurons, leading to
\begin{equation}
w^\text{II} = c_\text{PSP} c^\text{EI} / p^{\text{II}} .
\end{equation}
Since $p^{\text{II}} \approx p^{\text{EI}}$, we used $w^\text{II}=w^\text{EI}$ for simplicity.
These are the parameter values used for the $\mathcal{M}$ model in \cite{JonkeETAL:17a}.
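The following short Python sketch recomputes these values. The connection probabilities $p^{\text{IE}}$ and $p^{\text{EI}}$ are not quoted explicitly above; the values used below are our assumptions, chosen so that the stated results are reproduced:
\begin{verbatim}
# Sketch recomputing the derived parameter values. p_IE and p_EI are our
# assumptions, chosen to reproduce the numbers quoted above.
mu, sigma2, gamma, c_psp = -3.4, 0.35, 2.0, 0.78
p_IE, p_EI, c_EI = 0.6, 0.575, 10.0

alpha = (2 * mu - 1) / (2 * sigma2) / gamma   # eq. (alpha):   -5.57
beta_prime = c_psp / (gamma * sigma2)         # corrected beta: 1.114
w_IE = beta_prime / p_IE                      # eq. (beta):     1.86
w_EI_no_II = c_psp / p_EI                     # no I-to-I:      1.357
w_EI = w_II = c_EI * w_EI_no_II               # with I-to-I:   13.57
\end{verbatim}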
\subsection{Details to simulations for Figure \ref{fig:approx} }\label{sec:details_approx}
{\bf Creation of basic patterns:}
Basic patterns were $64$-dimensional vectors, each representing a horizontal or vertical bar on an $8 \times 8$ two-dimensional pixel array. We defined $16$ basic patterns $\v x^{(1)}, \dots, \v x^{(16)}$ in total, corresponding to all possible horizontal and vertical bars of width $1$ in this pixel array. For a horizontal (vertical) bar, all pixels of a row (column) in the array attained the value $1$ while all other pixels were set to $0$. The entries of the basic pattern vectors were then defined by the values of the corresponding pixels in the array.
\vspace{0.2cm}
\noindent
{\bf Superposition of basic patterns:}
To generate an input pattern, basic patterns were superimposed as follows. The number of superimposed basic patterns $n_{sup}$ was chosen between $1$ and $3$, drawn from the distribution $p(n_{sup}=k) = \frac{0.9^k 0.1^{3-k}}{\sum_{l=1}^{3}0.9^l 0.1^{3-l}}$. Then each basic pattern to be superimposed was drawn uniformly from the set of basic patterns without replacement. This corresponds to the distribution used in \cite{JonkeETAL:17a}. The input vector $\v y$ was then given by $\v y = \min\{1, \sum_{i=1}^{n_{sup}} \v x^{(bp(i))}\}$, where $bp(i)$ denotes the $i^\text{th}$ basic pattern to be superimposed and the $\min$ operation is performed element-wise.
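A Python sketch of this input generation procedure (our implementation, not the original code) is given below:
\begin{verbatim}
import numpy as np

# Sketch (our implementation) of the input generation described above.
rng = np.random.default_rng(2)

bars = []
for r in range(8):                             # horizontal bars
    p = np.zeros((8, 8)); p[r, :] = 1.0; bars.append(p.ravel())
for c in range(8):                             # vertical bars
    p = np.zeros((8, 8)); p[:, c] = 1.0; bars.append(p.ravel())
bars = np.array(bars)                          # 16 basic patterns, (16, 64)

# p(n_sup = k) proportional to 0.9^k * 0.1^(3-k) for k in {1, 2, 3}
weights = np.array([0.9**k * 0.1**(3 - k) for k in (1, 2, 3)])
probs = weights / weights.sum()

def draw_input():
    n_sup = rng.choice([1, 2, 3], p=probs)
    idx = rng.choice(len(bars), size=n_sup, replace=False)
    return np.minimum(1.0, bars[idx].sum(axis=0))  # element-wise clipping
\end{verbatim}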
\vspace{0.2cm}
\noindent
{\bf Optimization of the generative model:}
The generative model \eqref{eq:genmod_prior}--\eqref{eq:genmod_likefact} with $20$ hidden causes $\v z$ was fitted to this data in an iterative manner. One iteration of the fitting algorithm was performed as follows:
\begin{enumerate}
\item draw an input vector $\v y$ as described above;
\item draw a sample from the approximate posterior $p_\text{A1}(\v z| \v y, \v W)$, eq.~\eqref{eq:post_A1};
\item update the parameters $\v W$ of the model according to eq.~\eqref{eq:update1}.
\end{enumerate}
Since the posterior in step (2) is intractable, it was approximated by assuming that a maximum of $4$ hidden causes are active in the posterior distribution (state vectors with more active hidden causes usually had negligible probabilities). This allowed us to compute the partition function and therefore to sample hidden state vectors in a straightforward manner. Further, we did not consider hidden state vectors with no active hidden cause since those would not lead to any parameter changes.
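The following Python sketch illustrates one such fitting iteration, reusing {\tt draw\_input} from the sketch above and the parameter values listed below. Since the exact forms of eqs.~\eqref{eq:genmod_prior} and \eqref{eq:post_A1} are given earlier in the paper, the sketch assumes a prior $\log p(\v z) = -(\sum_m z_m - \mu)^2/(2\sigma^2) + \text{const}$ and a Bernoulli likelihood with logits $a_i = \gamma \sum_m w_{im} z_m$, consistent with the parameter relations and the gradient \eqref{eq:dw} used above; both forms are assumptions of this sketch:
\begin{verbatim}
from itertools import combinations
import numpy as np

# Sketch of one fitting iteration, reusing draw_input from above. The prior
# and likelihood forms are assumptions of this sketch (see lead-in text).
rng = np.random.default_rng(3)
N, M_hidden = 64, 20
mu, sigma2, gamma, eta = 6.0, 0.35, 1.0, 0.1
W = rng.uniform(0.0, 0.1, (N, M_hidden))

# Enumerate state vectors with 1..4 active hidden causes (states with more
# active causes have negligible probability; the all-zero state would not
# change any parameter).
states = []
for k in range(1, 5):
    for idx in combinations(range(M_hidden), k):
        s = np.zeros(M_hidden); s[list(idx)] = 1.0; states.append(s)
states = np.array(states)

def fit_step(y, W):
    A = gamma * states @ W.T                       # logits, shape (S, N)
    log_p = (A @ y - np.log1p(np.exp(A)).sum(axis=1)
             - (states.sum(axis=1) - mu) ** 2 / (2 * sigma2))
    p = np.exp(log_p - log_p.max()); p /= p.sum()  # tractable normalization
    z = states[rng.choice(len(states), p=p)]       # stochastic E-step
    sigma_ls = 1.0 / (1.0 + np.exp(-gamma * W))
    W += eta * z[None, :] * (y[:, None] - sigma_ls)  # M-step, eq. (update1)
    np.clip(W, 0.0, 6.0, out=W)                    # weight clipping

for _ in range(1000):                              # 15000 updates in the paper
    fit_step(draw_input(), W)
\end{verbatim}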
\vspace{0.2cm}
\noindent
{\bf Parameters of the model and learning rule:}
A prior distribution $p(\v z)$ with parameters $\mu=6$ and $\sigma^2=0.35$ was used.
The scaling parameter $\gamma$ was set to $1$.
Weights $w_{ij}$ were initialized with values drawn from a uniform distribution in $[0,0.1]$. Weights were clipped between a minimal value of $0$ and a maximal value of $6$. A constant learning rate of $\eta=0.1$ was used. Training was performed for $15000$ updates. Network characteristics (such as KL divergences) were computed every $50^\text{th}$ update.
\vspace{0.2cm}
\noindent
{\bf Figure \ref{fig:approx}B:}
Every $50^\text{th}$ update, we computed the KL-divergence between the true posterior and the approximate posterior $D_\text{KL}(p(\v z| \v y, \v W) || p_\text{A1}(\v z| \v y, \v W))$. In addition we also computed the KL-divergence between $p(\v z| \v y, \v W)$ and a uniform distribution over state vectors. In all these divergences, we only considered the distribution over vectors with at most $4$ active hidden causes for tractability (see above). For Fig.~\ref{fig:approx}B, the data was smoothed using a box-car filter of size $10$.
\vspace{0.2cm}
\noindent
{\bf Figure \ref{fig:approx}C:}
We considered the input patterns given in panel A and computed the hidden state $\v z_\text{max}$ with the maximum posterior probability (computed as described above) after learning in the exact and approximate posterior. This hidden state vector was then used to reconstruct the input pattern by computing $\hat y = \sigma_\text{LS}(W^T \v z_\text{max})$. Note that this is not a sample of $\v y$ but it defines the probability that each individual pixel is $1$.
\vspace{0.2cm}
\noindent
{\bf Figure \ref{fig:approx}E:}
Every $50^\text{th}$ update of the simulation described above, we computed the KL-divergence $D_\text{KL}(p(\v z| \v y, \v W) || p_\text{A2}(\v z| \v y, \v W))$ between the true posterior and the posterior according to approximation $A2$. For the red curve we used $p_\text{A2}$ with $w_\text{norm}=0$ and the same $\mu=6$ as given for the original model. For the yellow curve, we corrected the prior of the model to have $\mu=-12$. The KL-divergence to the uniform distribution was computed as described above.
\vspace{2cm}
\noindent
{\em Acknowledgements:} Written under partial support by the Human Brain Project of the European Union \#604102 and \#720270, and the Austrian Science Fund (FWF): I 3251-N33.
\bibliographystyle{apalike}
\section{Introduction}
It has been shown that machine learning models are often vulnerable to
adversarial manipulation of their input intended to cause
incorrect classification \citep{dalvi2004adversarial}.
In particular, neural networks and many other categories of machine
learning models are highly vulnerable to
attacks based on small modifications of the input to the model
at test time
\citep{biggio2013evasion,Szegedy-ICLR2014,Goodfellow-2015-adversarial,Papernot-2016-transferability}.
The problem can be summarized as follows.
Let's say there is a machine learning system $M$ and input sample $C$ which we call a clean example.
Let's assume that sample $C$ is correctly classified by the machine learning system, i.e. $M(C) = y_{true}$.
It's possible to construct an adversarial example $A$ which is perceptually indistinguishable
from $C$ but is classified incorrectly, i.e. $M(A) \ne y_{true}$.
These adversarial examples are misclassified far more often than examples that have been perturbed
by noise, even if the magnitude of the noise is much larger than the magnitude of the adversarial
perturbation \citep{Szegedy-ICLR2014}.
Adversarial examples pose potential security threats for practical machine learning applications.
In particular, \citet{Szegedy-ICLR2014} showed that an adversarial example that was designed
to be misclassified by a model $M_1$ is often also misclassified by a model $M_2$.
This adversarial example transferability property means
that it is possible to generate adversarial examples
and perform a misclassification attack on a machine learning system without access
to the underlying model.
\citet{Papernot-MGJCS16} and \citet{Papernot-2016-transferability} demonstrated such
attacks in realistic scenarios.
It has been shown~\citep{Goodfellow-2015-adversarial,LearningWithStrongAdversary} that
injecting adversarial examples into the training set (also called adversarial training) could
increase robustness of neural networks to adversarial examples.
Another existing approach is to use defensive distillation to train the network~\citep{PapernotDistillation2015}.
However all prior work studies defense measures only on relatively small datasets like MNIST and CIFAR10.
Some concurrent work studies attack mechanisms on ImageNet \citep{rozsa2016accuracy},
focusing on the question of how well adversarial examples transfer between different
types of models, while we focus on defenses and studying how well different types
of adversarial example generation procedures transfer between relatively similar models.
In this paper we studied adversarial training of Inception models trained on ImageNet.
The contributions of this paper are the following:
\begin{itemize}
\item We successfully used adversarial training to train an Inception v3 model~\citep{Inception-v3}
on ImageNet dataset~\citep{russakovsky2014imagenet} and to significantly increase
robustness against adversarial examples generated by the \textit{fast gradient sign method}~\citep{Goodfellow-2015-adversarial}
as well as other one-step methods.
\item We demonstrated that different types of adversarial examples tend to
have different transferability properties between models.
In particular we observed that
those adversarial examples which are harder to resist using adversarial training
are less likely to be transferable between models.
\item We showed that models which have higher capacity (i.e. number of parameters) tend to
be more robust to adversarial examples compared to lower capacity models of the same architecture.
This provides additional cue which could help building more robust models.
\item We also observed an interesting property we call ``label leaking''.
Adversarial examples constructed with a single-step method making use of the true labels
may be easier to classify than clean examples, because an adversarially
trained model can learn to exploit regularities in the adversarial example construction
process.
This suggests using adversarial example construction processes that do not make use of
the true label.
\end{itemize}
The rest of the paper is structured as follows:
In section~\ref{sec:adv-examples} we review different methods to generate adversarial examples.
Section~\ref{sec:adv-training} describes details of our adversarial training algorithm.
Finally, section~\ref{sec:experiments} describes our experiments and results of adversarial training.
\section{Methods generating adversarial examples}\label{sec:adv-examples}
\subsection{Terminology and Notation}
In this paper we use the following notation and terminology regarding adversarial examples:
\begin{enumerate}
\item $\bm{X}$, the \textit{clean image}~--- unmodified image from the dataset (either train or test set).
\item $\bm{X}^{adv}$, the \textit{adversarial image}: the output of any procedure intended to produce an approximate
worst-case modification of the clean image. We sometimes call this a
\textit{candidate adversarial image} to emphasize that an adversarial image is not
necessarily misclassified by the neural network.
\item \textit{Misclassified adversarial image}~--- candidate adversarial image which is misclassified by the neural network.
In addition we are typically interested only in those misclassified adversarial images when
the corresponding clean image is correctly classified.
\item $\epsilon$: The size of the adversarial perturbation. In most cases, we require the
$L_\infty$ norm of the perturbation to be less than $\epsilon$, as done by
\citet{Goodfellow-2015-adversarial}.
We always specify $\epsilon$ in terms of pixel values in the range $[0, 255]$.
Note that some other work on adversarial examples minimizes the size of the perturbation
rather than imposing a constraint on the size of the perturbation~\citep{Szegedy-ICLR2014}.
\item The cost function used to train the model is denoted $J(\bm{X}, y_{true})$.
\item $Clip_{\bm{X}, \epsilon}( \bm{A} )$ denotes element-wise clipping $\bm{A}$, with $A_{i,j}$
clipped to the range $[X_{i,j} - \epsilon, X_{i,j} + \epsilon ]$.
\item {\em One-step} methods of adversarial example generation generate a candidate
adversarial image after computing only one gradient.
They are often based on finding the optimal perturbation of a linear approximation
of the cost or model.
{\em Iterative} methods apply many gradient updates. They typically do not rely
on any approximation of the model and typically produce more harmful adversarial
examples when run for more iterations.
\end{enumerate}
\subsection{Attack methods}
We study a variety of attack methods:
\paragraph{Fast gradient sign method}\label{sec:fast-method}
\citet{Goodfellow-2015-adversarial} proposed the \textit{fast gradient sign method} (FGSM)
as a simple way to generate adversarial examples:
\begin{equation}
\label{eq:fast-adv-examples}
\bm{X}^{adv} = \bm{X} + \epsilon \sign \bigl( \nabla_X J(\bm{X}, y_{true}) \bigr)
\end{equation}
This method is simple and computationally efficient compared to more complex methods like L-BFGS~\citep{Szegedy-ICLR2014},
however it usually has a lower success rate.
On ImageNet, top-1 error rate on candidate adversarial images for the FGSM is about $63\% - 69\%$ for $\epsilon \in [2, 32]$.
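For concreteness, a minimal Python sketch of FGSM is given below; {\tt grad\_loss\_fn} is a placeholder name of ours for any routine that returns $\nabla_X J(\bm{X}, y_{true})$, e.g.\ computed by automatic differentiation:
\begin{verbatim}
import numpy as np

# Minimal sketch of FGSM. grad_loss_fn is a placeholder of ours for any
# routine returning the gradient of the loss J(X, y) with respect to X.
def fgsm(X, y_true, grad_loss_fn, eps):
    X_adv = X + eps * np.sign(grad_loss_fn(X, y_true))
    return np.clip(X_adv, 0.0, 255.0)   # keep pixels in the valid range
\end{verbatim}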
\paragraph{One-step target class methods}\label{sec:step-ll-method}
FGSM finds adversarial perturbations which increase the value of the loss function.
An alternative approach is to maximize probability $p(y_{target} \mid \bm{X})$
of some specific target class $y_{target}$ which is unlikely to be the true class for a given image.
For a neural network with cross-entropy loss this will lead to the following formula for the one-step target class method:
\begin{equation}
\label{eq:stepll-adv-examples}
\bm{X}^{adv} = \bm{X} - \epsilon \sign \bigl( \nabla_X J(\bm{X}, y_{target}) \bigr)
\end{equation}
As a target class we can use the least likely class predicted by the network
$y_{LL} = \argmin_{y} \bigl\{ p( y \mid \bm{X} ) \bigr\}$,
as suggested by \citet{PhysicalAdversarialExamples}.
In such a case we refer to this method as the one-step least likely class method or just ``step l.l.''.
Alternatively we can use a random class as target class.
In such a case we refer to this method as ``step rnd.''.
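A sketch of these target class variants (again with placeholder helper names of ours) could look as follows; {\tt predict\_probs\_fn} stands for any routine returning the model's class probabilities $p(y \mid \bm{X})$:
\begin{verbatim}
# Sketch of the one-step target class methods. predict_probs_fn is a
# placeholder of ours returning the model's class probabilities p(y | X).
def step_target_class(X, y_target, grad_loss_fn, eps):
    X_adv = X - eps * np.sign(grad_loss_fn(X, y_target))
    return np.clip(X_adv, 0.0, 255.0)

def step_ll(X, grad_loss_fn, predict_probs_fn, eps):
    y_ll = np.argmin(predict_probs_fn(X))       # least likely class
    return step_target_class(X, y_ll, grad_loss_fn, eps)
\end{verbatim}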
\paragraph{Basic iterative method}
A straightforward extension of FGSM is to apply it multiple times with small step size:
\[
\bm{X}^{adv}_{0} = \bm{X}, \quad
\bm{X}^{adv}_{N+1} = Clip_{X, \epsilon}\Bigl\{ \bm{X}^{adv}_{N} + \alpha \sign \bigl( \nabla_X J(\bm{X}^{adv}_{N}, y_{true}) \bigr) \Bigr\}
\]
In our experiments we used $\alpha = 1$, i.e. we changed the value of each pixel only by $1$ on each step.
We selected the number of iterations to be $\min(\epsilon + 4, 1.25\epsilon)$.
See more information on this method in~\citet{PhysicalAdversarialExamples}.
Below we refer to this method as ``iter. basic'' method.
\paragraph{Iterative least-likely class method}
By running multiple iterations of the ``step l.l.'' method we can get adversarial examples which are misclassified in more than $99\%$ of the cases:
\[
\bm{X}^{adv}_{0} = \bm{X}, \quad
\bm{X}^{adv}_{N+1} = Clip_{X, \epsilon}\left\{ \bm{X}^{adv}_{N} - \alpha \sign \left( \nabla_X J(\bm{X}^{adv}_{N}, y_{LL}) \right) \right\}
\]
$\alpha$ and number of iterations were selected in the same way as for the basic iterative method.
Below we refer to this method as the ``iter. l.l.''.
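Both iterative methods can be sketched with a single routine; {\tt grad\_loss\_fn} is the same placeholder as above, and the flag {\tt targeted} selects between ascending the loss of the true label (``iter. basic'') and descending the loss of the least likely label (``iter. l.l.''):
\begin{verbatim}
# Sketch of the iterative methods. With targeted=False and the true label
# this is the "iter. basic" method; with targeted=True and the least likely
# label it is the "iter. l.l." method.
def iterative_attack(X, label, grad_loss_fn, eps, targeted=False):
    n_iter = int(min(eps + 4, 1.25 * eps))
    step_sign = -1.0 if targeted else 1.0
    X_adv = X.copy()
    for _ in range(n_iter):
        X_adv = X_adv + step_sign * np.sign(grad_loss_fn(X_adv, label))  # alpha = 1
        X_adv = np.clip(X_adv, X - eps, X + eps)   # Clip_{X, eps}
        X_adv = np.clip(X_adv, 0.0, 255.0)
    return X_adv
\end{verbatim}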
\section{Adversarial training}\label{sec:adv-training}
The basic idea of adversarial training is to inject adversarial examples into the training
set, continually generating new adversarial examples at every step of training \citep{Goodfellow-2015-adversarial}.
Adversarial training was originally developed for small models that did not use
batch normalization.
To scale adversarial training to ImageNet, we recommend using batch
normalization \citep{Ioffe+Szegedy-2015}.
To do so successfully, we found that it was important for
examples to be grouped into batches containing both normal and adversarial examples before
taking each training step, as described in algorithm~\ref{alg:adv-training}.
\begin{algorithm}
\caption{Adversarial training of network $N$.\\
Size of the training minibatch is $m$. Number of adversarial images in the minibatch is $k$.}\label{alg:adv-training}
\begin{algorithmic}[1]
\State Randomly initialize network $N$
\Repeat
\State Read minibatch $B = \{ X^1, \ldots, X^m \}$ from training set
\State Generate $k$ adversarial examples $\{ X^1_{adv}, \ldots, X^k_{adv} \}$ from corresponding
\Statex \hspace{\algorithmicindent} clean examples $\{ X^1, \ldots, X^k \}$ using current state of the network $N$
\State Make new minibatch $B' = \{ X^1_{adv}, \ldots, X^k_{adv}, X^{k+1}, \ldots, X^m \}$
\State Do one training step of network $N$ using minibatch $B'$
\Until training converged
\end{algorithmic}
\end{algorithm}
We use a loss function that allows independent control of the number and relative weight
of adversarial examples in each batch:
\[
Loss = \frac{1}{(m-k) + \lambda k}\left( \sum_{i \in CLEAN} L(X_{i}|y_{i}) + \lambda \sum_{i \in ADV} L(X_{i}^{adv}|y_{i}) \right)
\]
where $L(X|y)$ is a loss on a single example $X$ with true class $y$;
$m$ is total number of training examples in the minibatch;
$k$ is number of adversarial examples in the minibatch and
$\lambda$ is a parameter which controls the relative weight of adversarial examples in the loss.
We used $\lambda=0.3$, $m=32$, and $k=16$.
Note that we \textit{replace} each clean example with its adversarial counterpart,
for a total minibatch size of $32$, which is a departure from previous approaches
to adversarial training.
The fraction and weight of adversarial examples which we used in each minibatch differ
from \citet{LearningWithStrongAdversary}, where the authors replaced the entire minibatch with adversarial examples.
However, their experiments were done on smaller datasets (MNIST and CIFAR-10),
in which case adversarial training does not lead to a decrease of accuracy on clean images.
We found that our approach works better for ImageNet models
(corresponding comparative experiments could be found in Appendix~\ref{app:num-adv-minibatch}).
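A sketch of one training step with this loss is given below; {\tt loss\_fn} and {\tt adv\_fn} are placeholder names of ours for the per-example loss and the chosen one-step attack (e.g.\ ``step l.l.''):
\begin{verbatim}
# Sketch of one training step with the weighted loss above; loss_fn and
# adv_fn are placeholders of ours for the per-example loss and the chosen
# one-step attack.
m, k, lam = 32, 16, 0.3

def minibatch_loss(X_batch, y_batch, loss_fn, adv_fn):
    X_adv = adv_fn(X_batch[:k], y_batch[:k])      # replace first k examples
    loss_adv = sum(loss_fn(x, y) for x, y in zip(X_adv, y_batch[:k]))
    loss_clean = sum(loss_fn(x, y) for x, y in zip(X_batch[k:], y_batch[k:]))
    return (loss_clean + lam * loss_adv) / ((m - k) + lam * k)
\end{verbatim}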
We observed that if we fix $\epsilon$ during training then networks become robust only to
that specific value of $\epsilon$.
We therefore recommend choosing $\epsilon$ randomly, independently for each training example.
In our experiments we achieved best results when magnitudes were drawn from
a truncated normal distribution defined in interval $[0, 16]$
with underlying normal distribution $N(\mu=0, \sigma=8)$.\footnote{
In TensorFlow this could be achieved by {\tt tf.abs(tf.truncated\_normal(shape, mean=0, stddev=8))}.
}
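A NumPy counterpart of this sampling procedure (a sketch of ours, using rejection sampling for the truncation) could look as follows:
\begin{verbatim}
import numpy as np

# NumPy counterpart of the epsilon sampling used during training: the
# absolute value of N(0, 8) truncated to [-16, 16], i.e. values in [0, 16],
# drawn independently per training example (rejection sampling sketch).
def sample_eps(batch_size, rng=np.random.default_rng()):
    eps = np.abs(rng.normal(0.0, 8.0, size=batch_size))
    while np.any(eps > 16.0):
        bad = eps > 16.0
        eps[bad] = np.abs(rng.normal(0.0, 8.0, size=int(bad.sum())))
    return eps
\end{verbatim}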
\section{Experiments}\label{sec:experiments}
We adversarially trained an Inception v3 model~\citep{Inception-v3} on ImageNet.
All experiments were done using synchronous distributed training on $50$ machines,
with a minibatch of $32$ examples on each machine.
We observed that the network tends to reach maximum accuracy at around $130k - 150k$ iterations.
If we continue training beyond $150k$ iterations then eventually accuracy might decrease by a fraction of a percent.
Thus we ran experiments for around $150k$ iterations and then used the obtained accuracy as the final result of the experiment.
Similar to~\citet{Inception-v3} we used RMSProp optimizer for training.
We used a learning rate of $0.045$ except where otherwise indicated.
We looked at the interaction of adversarial training and other forms of regularization (dropout, label smoothing and weight decay).
By default training of Inception v3 model uses all three of them.
We noticed that disabling label smoothing and/or dropout leads to
a small decrease of accuracy on clean examples (by $0.1\%$ - $0.5\%$ for top 1 accuracy)
and a small increase of accuracy on adversarial examples (by $1\%$ - $1.5\%$ for top 1 accuracy).
On the other hand, reducing weight decay leads to a decrease of accuracy on both clean and adversarial examples.
We experimented with delaying adversarial training by $0$, $10k$, $20k$ and $40k$ iterations.
In these experiments we used only clean examples during the first $N$ training iterations and
after $N$ iterations included both clean and adversarial examples in the minibatch.
We noticed that delaying adversarial training has almost no effect on accuracy on clean examples (difference in accuracy within $0.2\%$)
after a sufficient number of training iterations (more than $70k$ in our case).
At the same time we noticed that larger delays of adversarial training might cause
up to a $4\%$ decline of accuracy on adversarial examples with high magnitudes of adversarial perturbation.
For the small $10k$ delay, the change of accuracy was not statistically significant, so we saw no reason to avoid it.
We used a delay of $10k$ because this allowed us to reuse the same partially trained model
as a starting point for many different experiments.
For evaluation we used the ImageNet validation set which contains $50,000$ images and does not intersect with the training set.
\subsection{Results of adversarial training}\label{sec:results-adv-train}
We experimented with adversarial training using several types of one-step methods.
We found that adversarial training using any type of one-step method increases robustness to all types of one-step adversarial examples
that we tested.
However there is still a gap between accuracy on clean and adversarial examples which could vary depending on
the combination of methods used for training and evaluation.
Adversarial training caused a slight (less than $1\%$) decrease of accuracy on clean examples
in our ImageNet experiments.
This differs from results of adversarial training reported
previously, where adversarial training increased accuracy on the test
set~\citep{Goodfellow-2015-adversarial,Takeru-2016,miyato2016virtual}.
One possible explanation is that adversarial training acts as a regularizer.
For datasets with few labeled examples where overfitting is the primary concern,
adversarial training reduces test error.
For datasets like ImageNet where state-of-the-art models typically have high training set
error, adding a regularizer like adversarial training can increase training set error
more than it decreases the gap between training and test set error.
Our results suggest that adversarial training should be employed in two scenarios:
\begin{enumerate}
\item When a model is overfitting, and a regularizer is required.
\item When security against adversarial examples is a concern. In this case,
adversarial training is the method that provides the most security of
any known defense, while losing only a small amount of accuracy.
\end{enumerate}
By comparing different one-step methods for adversarial training
we observed that the best results in terms of accuracy on the test set are achieved using the ``step l.l.'' or ``step rnd.'' method.
Moreover using these two methods helped the model to become robust to adversarial examples
generated by other one-step methods.
Thus for final experiments we used ``step l.l.'' adversarial method.
For brevity we omitted a detailed comparison of different one-step methods here,
but the reader can find it in Appendix~\ref{app:flavors-fast}.
\begin{table}[!h]
\caption{Top~1 and top~5 accuracies of an adversarially trained network on clean images and adversarial images with various
test-time $\epsilon$.
Both training and evaluation were done using ``step l.l.'' method.
Adversarial training caused the baseline model to become robust to adversarial examples, at the cost of some accuracy on
clean examples.
We therefore also trained a deeper model with two additional Inception blocks.
The deeper model benefits more from adversarial training in terms of robustness to adversarial perturbation,
and loses less accuracy on clean examples than the smaller model does.
}
\label{table:adversarial-training}
\begin{center}
\begin{tabular}{|l|l|c|c|c|c|c|}
\hline
& & Clean & $\epsilon=2$ & $\epsilon=4$ & $\epsilon=8$ & $\epsilon=16$ \\
\hline
Baseline & top 1 & 78.4\% & 30.8\% & 27.2\% & 27.2\% & 29.5\% \\
(standard training) & top 5 & 94.0\% & 60.0\% & 55.6\% & 55.1\% & 57.2\% \\
\hline
Adv. training & top 1 & 77.6\% & 73.5\% & 74.0\% & 74.5\% & 73.9\% \\
& top 5 & 93.8\% & 91.7\% & 91.9\% & 92.0\% & 91.4\% \\
\hline
Deeper model & top 1 & 78.7\% & 33.5\% & 30.0\% & 30.0\% & 31.6\% \\
(standard training) & top 5 & 94.4\% & 63.3\% & 58.9\% & 58.1\% & 59.5\% \\
\hline
Deeper model & top 1 & 78.1\% & 75.4\% & 75.7\% & 75.6\% & 74.4\% \\
(Adv. training) & top 5 & 94.1\% & 92.6\% & 92.7\% & 92.5\% & 91.6\% \\
\hline
\end{tabular}
\end{center}
\end{table}
Results of adversarial training using ``step l.l.'' method are provided in Table~\ref{table:adversarial-training}.
As can be seen from the table, we were able to significantly increase
top-1 and top-5 accuracy on adversarial examples (up to $74\%$ and $92\%$ respectively),
making it on par with the accuracy on clean images.
However we lost about $0.8\%$ accuracy on clean examples.
We were able to reduce the gap in accuracy on clean images by slightly increasing the size of the model.
This was done by adding two additional Inception blocks to the model.
For specific details about Inception blocks refer to~\citet{Inception-v3}.
Unfortunately, training on one-step adversarial examples does not confer robustness to iterative
adversarial examples, as shown in Table~\ref{table:adversarial-training-iter-result}.
\begin{table}[!h]
\caption{Accuracy of adversarially trained network on iterative adversarial examples.
Adversarial training was done using ``step l.l.'' method.
Results were computed after $140k$ iterations of training.
Overall, we see that training on one-step adversarial examples does not confer resistance
to iterative adversarial examples.
}
\label{table:adversarial-training-iter-result}
\begin{center}
\begin{tabular}{|l|l|l|c|c|c|c|c|}
\hline
Adv. method & Training & & Clean & $\epsilon=2$ & $\epsilon=4$ & $\epsilon=8$ & $\epsilon=16$ \\
\hline
Iter. l.l. & Adv. training & top 1 & 77.4\% & 29.1\% & 7.5\% & 3.0\% & 1.5\% \\
& & top 5 & 93.9\% & 56.9\% & 21.3\% & 9.4\% & 5.5\% \\ \cline{2-8}
& Baseline & top 1 & 78.3\% & 23.3\% & 5.5\% & 1.8\% & 0.7\% \\
& & top 5 & 94.1\% & 49.3\% & 18.8\% & 7.8\% & 4.4\% \\
\hline
Iter. basic & Adv. training & top 1 & 77.4\% & 30.0\% & 25.2\% & 23.5\% & 23.2\% \\
& & top 5 & 93.9\% & 44.3\% & 33.6\% & 28.4\% & 26.8\% \\ \cline{2-8}
& Baseline & top 1 & 78.3\% & 31.4\% & 28.1\% & 26.4\% & 25.9\% \\
& & top 5 & 94.1\% & 43.1\% & 34.8\% & 30.2\% & 28.8\% \\
\hline
\end{tabular}
\end{center}
\end{table}
We also tried to use iterative adversarial examples during training, however we were unable to gain any benefit from it.
It is computationally costly and we were not able to obtain robustness to adversarial examples or to prevent
the procedure from reducing the accuracy on clean examples significantly.
It is possible that much larger models are necessary to achieve robustness to such a large class of inputs.
\subsection{Label leaking}\label{sec:label-leaking}
We discovered a {\em label leaking} effect:
when a model is trained on FGSM adversarial examples and
then evaluated using FGSM adversarial examples,
the accuracy on adversarial images becomes much higher than the accuracy on clean images
(see Table~\ref{table:label-leaking}).
This effect also occurs (but to a lesser degree) when using other one-step methods
that require the true label as input.
We say that the label for a specific example has been leaked if and only if
the model classifies an adversarial example correctly
when that adversarial example is generated using the true label,
but misclassifies a corresponding adversarial example
that was created without using the true label.
If too many labels have been leaked then the accuracy on adversarial examples
might become higher than the accuracy on clean examples, which is what we observed on the ImageNet dataset.
We believe that the effect occurs because one-step methods that use the true label
perform a very simple and predictable transformation that the model can learn to
recognize.
The adversarial example construction process thus inadvertently leaks information
about the true label into the input.
We found that the effect vanishes if we use adversarial example construction processes
that do not use the true label.
The effect also vanishes if an iterative method is used, presumably because the
output of an iterative process is more diverse and less predictable than the output
of a one-step process.
Overall, due to the label leaking effect, we do not recommend using FGSM or other methods
defined with respect to the true class label to evaluate robustness to adversarial examples;
we recommend using one-step methods that do not directly access the label instead.
One option is to replace the true label with the most likely label predicted by the model.
Alternately, one can maximize the cross-entropy between the full distribution over all
predicted labels given the clean input and the distribution over all predicted labels given
the perturbed input \citep{Takeru-2016}.
\begin{table}[!h]
\caption{Effect of label leaking on adversarial examples.
When training and evaluation were done using FGSM,
accuracy on adversarial examples was higher than on clean examples.
This effect did not occur when training and evaluation
were done using the ``step l.l.'' method.
In both experiments training was done for $150k$ iterations
with initial learning rate $0.0225$.}
\label{table:label-leaking}
\begin{center}
\begin{tabular}{|l|l|c|c|c|c|c|}
\hline
& & Clean & $\epsilon=2$ & $\epsilon=4$ & $\epsilon=8$ & $\epsilon=16$ \\
\hline
No label leaking, & top 1 & 77.3\% & 72.8\% & 73.1\% & 73.4\% & 72.0\% \\
training and eval using ``step l.l.'' & top 5 & 93.7\% & 91.1\% & 91.1\% & 91.0\% & 90.3\% \\
\hline
With label leaking, & top 1 & 76.6\% & 86.2\% & 87.6\% & 88.7\% & 87.0\% \\
training and eval using FGSM & top 5 & 93.2\% & 95.9\% & 96.4\% & 96.9\% & 96.4\% \\
\hline
\end{tabular}
\end{center}
\end{table}
We revisited the adversarially trained MNIST classifier from \citet{Goodfellow-2015-adversarial}
and found that it too leaks labels.
Label leaking is most pronounced with $\epsilon = 0.3$ on MNIST data in $[0, 1]$.
With that $\epsilon$, the model leaks 79 labels on the test set of 10,000 examples.
However, the amount of label leaking is small compared to the amount of error caused
by adversarial examples.
The error rate on adversarial examples exceeds the error rate on
clean examples for $\epsilon \in \{ .05, .1, .25, .3, .4, .45, .5\}$.
This explains why the label leaking effect was not noticed earlier.
\subsection{Influence of model capacity on adversarial robustness}\label{sec:model-capacity}
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{Capacity_NoAdvTrain_Top1}
\caption{No adversarial training, ``step l.l.'' adv. examples}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{Capacity_AdvTrain_Top1}
\caption{With adversarial training, ``step l.l.'' adv. examples}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{Capacity_IterLL_NoAdvTrain_Top1}
\caption{No adversarial training, ``iter. l.l.'' adv. examples}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{Capacity_IterLL_AdvTrain_Top1}
\caption{With adversarial training, ``iter. l.l.'' adv. examples}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{Capacity_IterBasic_NoAdvTrain_Top1}
\caption{No adversarial training, ``basic iter.'' adv. examples}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{Capacity_IterBasic_AdvTrain_Top1}
\caption{With adversarial training, ``basic iter.'' adv. examples}
\end{subfigure}
\caption{Influence of size of the model on top~1 classification accuracy of various adversarial examples.
Left column~--- base model without adversarial training,
right column~--- model with adversarial training using ``step l.l.'' method.
Top row~--- results on ``step l.l.'' adversarial images,
middle row~--- results on ``iter. l.l.'' adversarial images,
bottom row~--- results on ``basic iter.'' adversarial images.
See text of Section~\ref{sec:model-capacity} for explanation of meaning of horizontal and vertical axes.
}\label{fig:capacity-influence}
\end{figure}
We studied how the size of the model (in terms of number of parameters) could affect robustness to adversarial examples.
We picked Inception v3 as a base model and varied its size by changing the number of filters in each convolution.
For each experiment we picked a scale factor $\rho$ and multiplied the number of filters in each convolution by $\rho$.
In other words $\rho=1$ means unchanged Inception v3, $\rho=0.5$ means Inception with half of the usual number of filters in convolutions, etc \ldots
For each chosen $\rho$ we trained two independent models: one with adversarial training and another without.
Then we evaluated accuracy on clean and adversarial examples for both trained models.
We have run these experiments for $\rho \in [0.5, 2.0]$.
In earlier experiments (Table \ref{table:adversarial-training}) we found that {\em deeper} models benefit
more from adversarial training. The increased depth changed many aspects of the model architecture.
These experiments varying $\rho$ examine the effect in a more controlled setting, where the architecture
remains constant except for the number of feature maps in each layer.
In all experiments we observed that accuracy on clean images kept increasing with $\rho$,
though the improvement slowed down as $\rho$ became larger.
Thus as a measure of robustness we used the ratio of accuracy on adversarial images to accuracy on clean images
because an increase of this ratio means that the gap between accuracy on adversarial and clean images becomes smaller.
If this ratio reaches $1$ then the accuracy on adversarial images is the same as on clean ones.
For a successful adversarial example construction technique, we would never expect this ratio
to exceed $1$, since this would imply that the adversary is actually helpful.
Some defective adversarial example construction techniques, such as those suffering from label
leaking, can inadvertently produce a ratio greater than $1$.
Results with ratios of accuracy for various adversarial methods and $\epsilon$ are provided in Fig.~\ref{fig:capacity-influence}.
For models without adversarial training, we observed that there is an optimal value of $\rho$
yielding best robustness. Models that are too large or too small perform worse.
This may indicate that models become more robust to adversarial examples until they become
large enough to overfit in some respect.
For adversarially trained models, we found that robustness consistently increases with
increases in model size.
We were not able to train large enough models to find when this process ends, but we
did find that models with twice the normal size have an accuracy ratio approaching $1$
for one-step adversarial examples.
When evaluated on iterative adversarial examples, the trend toward increasing robustness
with increasing size remains but has some exceptions. Also, none of our models was
large enough to approach an accuracy ratio of $1$ in this regime.
Overall we recommend exploring increases in model capacity (along with adversarial training)
as a measure to improve robustness to adversarial examples.
\subsection{Transferability of adversarial examples}\label{sec:transfer}
\begin{table}[!h]
\caption{Transfer rate of adversarial examples generated using different adversarial methods and perturbation size $\epsilon=16$.
This is equivalent to the error rate in an attack scenario where the attacker prefilters their adversarial examples
by ensuring that they are misclassified by the source model before deploying them against the target.
Transfer rates are rounded to the nearest percent in order to fit the table on the page.
The following models were used for comparison: \textit{A} and \textit{B} are Inception v3 models with different random initializations,
\textit{C} is Inception v3 model with ELU activations instead of Relu, \textit{D} is Inception v4 model.
See also Table \ref{table:transfer-error-rate} for the absolute error rate when the attack is not prefiltered,
rather than the transfer rate of adversarial examples.
}
\label{table:transfer-rate}
\begin{center}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{|l|l|cccc|cccc|cccc|}
\hline
& & \multicolumn{4}{|c|}{FGSM} & \multicolumn{4}{|c|}{basic iter.} & \multicolumn{4}{|c|}{iter l.l.} \\ \cline{3-14}
& source & \multicolumn{4}{|c|}{target model} & \multicolumn{4}{|c|}{target model} & \multicolumn{4}{|c|}{target model} \\
& model & A & B & C & D & A & B & C & D & A & B & C & D \\
\hline
top 1 & A (v3) & 100 & 56 & 58 & 47 & 100 & 46 & 45 & 33 & 100 & 13 & 13 & 9 \\
& B (v3) & 58 & 100 & 59 & 51 & 41 & 100 & 40 & 30 & 15 & 100 & 13 & 10 \\
& C (v3 ELU) & 56 & 58 & 100 & 52 & 44 & 44 & 100 & 32 & 12 & 11 & 100 & 9 \\
& D (v4) & 50 & 54 & 52 & 100 & 35 & 39 & 37 & 100 & 12 & 13 & 13 & 100 \\
\hline
top 5 & A (v3) & 100 & 50 & 50 & 36 & 100 & 15 & 17 & 11 & 100 & 8 & 7 & 5 \\
& B (v3) & 51 & 100 & 50 & 37 & 16 & 100 & 14 & 10 & 7 & 100 & 5 & 4 \\
& C (v3 ELU) & 44 & 45 & 100 & 37 & 16 & 18 & 100 & 13 & 6 & 6 & 100 & 4 \\
& D (v4) & 42 & 38 & 46 & 100 & 11 & 15 & 15 & 100 & 6 & 6 & 6 & 100 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{TransferTop1}
\caption{Top 1 transferability.}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{TransferTop5}
\caption{Top 5 transferability.}
\end{subfigure}
\caption{Influence of the size of the adversarial perturbation on the transfer rate of adversarial examples.
The transfer rate was computed using two Inception v3 models with different random initializations.
As can be seen from these plots, increasing $\epsilon$ leads to an increase of the transfer rate.
Note that the transfer rate is the ratio of the number of transferred adversarial examples to the
number of successful adversarial examples for the source network.
Both the numerator and the denominator of this ratio grow with $\epsilon$;
however, we observed that the numerator (i.e. the number of transferred examples)
grows much faster than the denominator.
For example, when $\epsilon$ increases from $8$ to $16$,
the relative increase of the denominator is less than $1\%$ for each of the considered methods,
while the relative increase of the numerator is more than $20\%$.}\label{fig:transferability-eps}
\end{figure}
From a security perspective, an important property of adversarial examples is
that they tend to transfer from one model to another, enabling an attacker
in the black-box scenario
to create adversarial examples for their own substitute model, then deploy
those adversarial examples to fool a target model \citep{Szegedy-ICLR2014,Goodfellow-2015-adversarial,Papernot-2016-transferability}.
We studied transferability of adversarial examples between the following models:
two copies of the normal Inception v3 (with different random initializations and order of training examples),
Inception v4~\citep{Inception-v4} and
Inception v3 which uses ELU activations~\citep{EluActivation} instead of ReLU\footnote{
We achieved $78.0\%$ top 1 and $94.1\%$ top 5 accuracy on Inception v3 with ELU activations,
which is comparable to the accuracy of the Inception v3 model with ReLU activations.}.
All of these models were independently trained from scratch
until they achieved maximum accuracy.
In each experiment we fixed the source and target networks,
constructed adversarial examples from $1000$ randomly sampled clean images from the test set
using the source network and performed classification of all of them using both source and target
networks. These experiments were done independently for different adversarial methods.
We measured transferability using the following criterion.
Among the $1000$ images we kept only the adversarial examples that were misclassified by the source model
(i.e. the clean image was classified correctly and the adversarial one was misclassified)
and measured what fraction of them were also misclassified by the target model.
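In code, this criterion amounts to the following sketch (again with assumed \texttt{predict} and \texttt{attack} interfaces standing in for our actual pipeline):
\begin{verbatim}
import numpy as np

def transfer_rate(source, target, attack, images, labels):
    # Craft adversarial examples on the source model.
    adv = attack(source, images, labels)
    clean_pred = source.predict(images).argmax(axis=1)
    adv_pred_src = source.predict(adv).argmax(axis=1)
    # Keep only successful attacks on the source: clean image classified
    # correctly, adversarial image misclassified.
    mask = (clean_pred == labels) & (adv_pred_src != labels)
    if not np.any(mask):
        return 0.0
    adv_pred_tgt = target.predict(adv[mask]).argmax(axis=1)
    # Fraction of those that also fool the target model.
    return float(np.mean(adv_pred_tgt != labels[mask]))
\end{verbatim}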
Transferability results for all combinations of models and $\epsilon = 16$ are provided in Table~\ref{table:transfer-rate}.
Results for various $\epsilon$ but fixed source and target model are provided in Fig.~\ref{fig:transferability-eps}.
As can be seen from the results, FGSM adversarial examples are the most transferable,
while ``iter l.l.'' are the least.
On the other hand, the ``iter l.l.'' method is able to fool the network in more than $99\%$ of cases (top 1 accuracy),
while FGSM is the least likely to fool the network.
This suggests that there might be an inverse relationship between the transferability of a specific method and its ability to fool the network.
We have not studied this phenomenon further, but one possible explanation could be the fact that
iterative methods tend to overfit to the specific network parameters.
In addition, we observed that for each of the considered methods the transfer rate increases with $\epsilon$ (see~Fig.~\ref{fig:transferability-eps}).
Thus a potential adversary performing a black-box attack has an incentive to use a higher $\epsilon$ to increase the chance of a successful attack.
\section{Conclusion}\label{sec:conclusion}
In this paper we studied how to increase the robustness to adversarial examples of large models (Inception~v3)
trained on a large dataset (ImageNet).
We showed that adversarial training provides robustness to adversarial examples generated using one-step methods.
While adversarial training did not help much against iterative methods, we observed that
adversarial examples generated by iterative methods are less likely to transfer between networks,
which provides indirect robustness against black-box adversarial attacks.
In addition, we observed that an increase of model capacity can also help to increase robustness to adversarial examples,
especially when used in conjunction with adversarial training.
Finally, we discovered the effect of label leaking, which results in higher accuracy on
FGSM adversarial examples than on clean examples when the network is adversarially trained.
\section{Introduction}
The existence, stability and properties of nonlinear localized
excitations (NLEs) in Hamiltonian lattices with discrete
translational symmetry have been subjects of growing
research interest (see e.g.
\cite{st88},\cite{cp90},\cite{jbp90},\cite{st92} and references therein).
NLEs can be viewed as generalized discrete analogs to the breather
solution in a sine-Gordon equation \cite{cp90}. They are characterized
by a localized vibrational state of the lattice.
There are two basic reasons for the generic
NLE existence on Hamiltonian lattices: i) the lattice forces
acting on a given particle are { \sl nonlinear} (thus one
can tune oscillation frequencies by varying the energy) and
ii) the {\sl discrete translational symmetry} of the lattice
(in contrast to the continuous translational symmetry of Hamiltonian
field equations) provides a {\sl finite} upper phonon band edge
of the spectrum of extended small amplitude oscillations of the
lattice around its groundstate \cite{fw2},\cite{fw7}.
There are a few known rigorous NLE existence proofs.
First, NLEs are exact solutions of
the {\sl integrable} Ablowitz-Ladik lattice
\cite{al76}.
In fact they form a three-parameter family of solutions.
Secondly, NLEs are exact solutions for
the Fermi-Pasta-Ulam chain with box-like interaction potential \cite{sps92}.
In this case the NLEs are of compact support.
Finally, and most importantly, MacKay and Aubry have derived
an existence proof for NLEs in an array of weakly coupled
anharmonic oscillators \cite{ma94}. Remarkably this existence proof works
independently of the lattice dimension.
In this contribution we will first deal with the existence of NLEs
in {\sl nonintegrable} generic one-dimensional Hamiltonian
lattices. In the second part we will investigate a class of
Fermi-Pasta-Ulam chains and give rigorous proofs for the
NLE existence.
Let us briefly outline the main steps in the approach presented
below. We will assume the existence of a time-periodic NLE
on a one-dimensional lattice. We represent
the lattice displacements at each lattice site in the NLE ansatz
in a Fourier series with respect to time. When we insert the NLE ansatz
into the lattice equations of motion, we obtain a set of coupled
algebraic equations for the Fourier components of the NLE ansatz,
which form an infinite-dimensional map. The NLE solution has to correspond
to a common point of two separatrix manifolds in the phase space
of the map, or a homoclinic point.
Analyzing the map in the (linearizable) tails of the NLE,
we can derive the dimension of the separatrix manifolds. Consequently
we show that if a homoclinic point exists for a given system,
then generically the homoclinic point will survive
under perturbations of the system. We then consider
a subclass of Hamiltonian chains and rigorously prove the existence
of two different (with respect to symmetry) NLE solutions.
For this particular example we show the emergence of horseshoe patterns
- a consequence of the existence of homoclinic points.
\section{Stability of NLE solutions under Hamiltonian perturbations}
We consider a classical one-dimensional Hamiltonian lattice
of interacting particles (perhaps feeling an external
field periodic with the lattice) with lattice spacing $a=1$.
The displacements of the particles from their groundstate
(equilibrium) positions are given by a $n$-dimensional
vector $\vec{X}_l$, where $n$ is the number of components
per unit cell ($n \leq n_0$, $n_0$ finite) and the integer
$l$ marks the number of the unit cell. The range of the interaction
$r$ is considered to be finite: $r \leq r_0$, $r_0$ finite.
Here $n_0$ and $r_0$ are positive integers.
The potential energy of the system is required to have anharmonic
terms in the displacements if expanded in a Taylor series around
the groundstate (minimum of potential energy) of the system.
Furthermore the potential energy should become a positive definite
quadratic form in the limit of infinitely small displacements.
The Hamilton function $H$ is given by the sum of the kinetic
energy of all particles and the potential energy.
As it was shown in \cite{fw2},\cite{fw7},
the only possible { \sl exact} NLE solution
on an arbitrary lattice (by that we mean no additional
symmetries are present) has to have the form:
\begin{equation}
\vec{X}_l(t)=\vec{X}_l(t+2\pi/\omega_1)\;\;,\;\;
\vec{X}_{l \rightarrow \pm \infty} \rightarrow 0 \;\;. \label{1}
\end{equation}
Then one can avoid resonance conditions of multiples of the
fundamental frequency $\omega_1$ (as they appear in the Fourier
transformed functions in (\ref{1}) with respect to time because
of the nonlinearity of the system) with phonon frequencies
of the linearized (around the groundstate) system. Since the
motion of assumed existent NLEs requires the
excitation of at least a second fundamental frequency
in the ansatz (\ref{1}) \cite{fw6}, we can exclude moving NLEs from the consideration
and search for {\sl stationary time-periodic} NLEs as given in (\ref{1}).
Note that in the case of the Ablowitz-Ladik lattice additional symmetries
are present (the lattice is integrable) and thus the above statement
does not hold - moving NLEs are exact solutions in this
nongeneric case.
Because of the assumed time-periodicity of all displacements in (\ref{1})
we obtain for the assumed NLE solution
\begin{equation}
\vec{X}_l = \sum_{k=0,\pm 1, \pm 2, ...} \vec{A}_{(l,k)}{\rm e}
^{i\omega_1 k t}\;\;. \label{2}
\end{equation}
Now we can insert this ansatz (\ref{2}) into the Newtonian equations
of motion $\ddot{\vec{X}}_l = -\partial H / \partial \vec{X}_l$.
The left and right hand expressions of the equation of motion
are represented again as a general Fourier series.
Equating the prefactors of identical exponential terms
we finally obtain a coupled set of algebraic equations for
the unknown Fourier coefficients $\vec{A}_{(l,k)}$. Because of (\ref{1})
the Fourier coefficients have to satisfy the boundary condition
$\vec{A}_{(l \rightarrow \pm \infty,k)} \rightarrow 0$.
In the following we will study properties of this algebraic set
of equations.
First we can note that because of the $d=1$ dimensionality of the
considered lattice the coupled set of algebraic equations for
the Fourier coefficients $\vec{A}_{(l,k)}$ can be represented
as a discrete map of a $d_M=(2r_0n_0k_{max})$-dimensional phase space,
where the integer $k_{max}$ represents the number of
considered higher harmonics in (\ref{2}) and has to tend
to infinity.
In other words, given the Fourier coefficients
at $2r_0$ neighboring lattice sites completely determines
the Fourier coefficients to the left and right of the specified
chain segment.
Secondly because of the required asymptotic vanishing of the Fourier
coefficients for $l \rightarrow \pm \infty$ we can linearize
the $d_M$-dimensional map in the tails of the assumed NLE solution
with respect to the variables $\vec{A}_{(l,k)}$.
The linearized map will decouple into a set of $k_{max}$ independent
$d_s=2r_0n_0$-dimensional linear submaps \cite{fw7}. In each of these submaps
Fourier components with only one Fourier integer $k$ will appear.
Each linear submap is equivalent to the problem of finding
solutions of the linearized (around the groundstate) lattice equations
of motion using the ansatz $\vec{X}_l(t)= A_l \exp{i\tilde{\omega}t}$.
Here for every submap one has to substitute $\tilde{\omega}= k\omega_1$.
If $\tilde{\omega}$ equals an arbitrary phonon frequency of
the linearized lattice equations, then it follows that the
corresponding linear submap is characterized by a matrix ${\bf M}_k$
of dimension $d_s \times d_s$ with ${\rm det}\,{\bf M}_k=1$ and
$\lambda_i \lambda_{i+d_s/2} =1$, where $\lambda_i$ and $\lambda_{i+d_s/2}$ are two
of the $d_s$ eigenvalues
of ${\bf M}_k$. Since these properties do not depend on the specific
value of the considered phonon frequency, it follows that they
are independent of $\tilde{\omega}$ and thus independent of $k$
and $\omega_1$. Consequently we find that every linear submap
is volume preserving and symplectic.
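These properties are easily checked numerically in the simplest case $n_0=r_0=1$ (a harmonic chain with unit masses and springs), where the linear submap reduces to a $2\times 2$ transfer matrix; the following sketch is such a consistency check and not part of the derivation itself.
\begin{verbatim}
import numpy as np

def submap_eigenvalues(omega):
    # Transfer matrix of A_{l+1} = (2 - omega^2) A_l - A_{l-1},
    # the linearized map of a harmonic chain (n0 = r0 = 1).
    M = np.array([[2.0 - omega**2, -1.0],
                  [1.0,             0.0]])
    lam = np.linalg.eigvals(M)
    assert np.isclose(np.linalg.det(M), 1.0)  # volume preserving
    assert np.isclose(lam[0] * lam[1], 1.0)   # reciprocal eigenvalue pair
    return lam

# Inside the phonon band (0 < omega < 2): |lambda| = 1 (elliptic point).
print(np.abs(submap_eigenvalues(1.0)))
# Outside the band (omega > 2): real pair (lambda, 1/lambda) (saddle point).
print(submap_eigenvalues(3.0))
\end{verbatim}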
Finally we note that for every considered linear submap (and thus
also for the original $d_M$-dimensional map) the phase space point of
zero Fourier components is a fixed point of the map.
If $k\omega_1$ equals a phonon frequency, the fixed point is
of elliptic character. If however $k \omega_1$ does not equal
any phonon frequency, the fixed point has to be a saddle point
(because it can not be an elliptic fixed point and the map is
symplectic). That means that $r_0n_0$ eigenvalues of the matrix ${\bf M}_k$
are real and of absolute value lower than one. Consequently
the separatrix manifold of all points in the space of the
submap attracted by the saddle point is of dimension $d_{sm}=r_0n_0$.
If we require that neither of the multiples of the fundamental
frequency $k\omega_1$ resonates with the phonon band, the
corresponding original $d_M$-dimensional map in the phase space
of the Fourier coefficients has a separatrix manifold
of dimension $d_M/2$. To get a NLE solution
we have to find a point in the space of the original map which
belongs simultaneously to the separatrix manifold $S_-$ attracting the
solution to zero for $l \rightarrow - \infty$ and to the
separatrix manifold $S_+$ attracting the solution
to zero for $l \rightarrow +\infty$ (cf e.g. \cite{jd93},\cite{ekns84}).
This point then has to be a homoclinic point \cite{ll92}. If a homoclinic
point exists, there exists an infinite number of homoclinic points, which
can be obtained by subsequent mappings of a given homoclinic point.
As a result the horseshoe structure of intersections between the
stable and unstable manifolds has to emerge \cite{ll92}.
Because of the assumed nonresonance condition (see above) every manifold
has dimension $d_M/2$. Let us choose one of the homoclinic points.
Then we can define the tangent planes to each of the two manifolds
in this point.
There are two possibilities for the topology of these two planes.
Either i) these two planes span the whole phase space
of the map (of dimension $d_M$) or ii) the two planes span a
space of lower dimension than $d_M$. In case i) any small perturbation
of the original system (consequently of the map, consequently
of the two manifolds and consequently of the two tangent planes)
will only shift the homoclinic point smoothly. Also in this case i)
all other homoclinic points will have the same topology with
respect to the tangent planes. In case ii) there exist perturbations
of the system which will lead to a vanishing of the homoclinic point
(and thus of all other homoclinic points too). Still there will
exist a perturbation for case ii) such that the homoclinic point
is smoothly shifted {\sl and} simultaneously the two tangent planes
will span the whole phase space of dimension $d_M$.
Consequently we can conclude, that if we have a NLE solution which
corresponds to a set of homoclinic points of type i) then any small
perturbation of the system will smoothly transform
the NLE solution into a new NLE solution.
If we have a NLE solution which corresponds to a set of homoclinic
points of type ii), we can always find a perturbation such that
we transform the NLE solution into a new NLE solution which corresponds
to a set of homoclinic points of type i) and is then stable under
subsequent small perturbations. From the above it follows
that NLE solutions, if they appear, are generically not isolated
objects, in the sense that small perturbations of the system
either do not destroy them at all, or that a proper perturbation
of the system will transform them into nonisolated solutions.
We have operated with finite phase space dimensions
$d_M$. Of course $d_M$ was not bounded from above, so that the
limit $d_M \rightarrow \infty$ can be considered without altering
the arguments.
Let us summarize the results obtained so far. If we assume that
for a given Hamiltonian chain a NLE solution exists, then it
corresponds to an infinite number of homoclinic points in the
map as introduced above. If neither of the multiples of the
frequency of the NLE solution resonate with the phonon band
(nonresonance condition),
then the dimension of the stable and unstable manifolds is
exactly one half of the map's phase space. From topological
arguments it follows then, that either the NLE solution is structurally
stable - i.e. it is smoothly transformed under {\sl any} perturbation
of the Hamiltonian of the system, or there exists at least one perturbation
of the Hamiltonian such, that the NLE solution is smoothly transformed
into a structural stable one. Of course a perturbation of the Hamiltonian
has to preserve the general structure of the map, i.e. we are
not allowed to consider e.g. a two-dimensional perturbation, which
would lead to the impossibility of defining the map.
A central point in the consideration so far has been the nonresonance
condition. If this condition is violated for $n$ values of
$k\omega_1$, then the combined dimension of the stable and unstable manifolds
is lowered by $2n$ to $(d_M - 2n)$. If an NLE solution exists
for such a case, then the tangent planes of the two manifolds
in a given homoclinic point can not span the whole phase space
of the map. Consequently there will be always perturbations of
the Hamiltonian such that the NLE solution will vanish.
At this point it also becomes clear that in the analogous problem
of a Hamiltonian field equation, where resonances can never be
avoided \cite{ekns84}, systems which allow for NLE solutions
become isolated (see also \cite{jd93}).
If a system is given with a NLE solution which fulfills the
nonresonance condition, it could be possible that a given
perturbation of the Hamiltonian would lead to a violation
of the nonresonance condition. Consequently one has to check in
a given case, whether the nonresonance condition survives under
a perturbation.
\section{Proof of existence of NLE solutions for Fermi-Pasta-Ulam
chains}
In the last part of this work we will prove rigorously the
existence of NLE solutions for a class of Fermi-Pasta-Ulam
chains governed by the following equations of motion:
\begin{equation}
\ddot{X}_l=-(X_l-X_{l-1})^{2m-1}-(X_l-X_{l+1})^{2m-1}\ \label{3}
\end{equation}
with $m=2,3,4,...$.
As was shown in \cite{fw7},\cite{ysk93} we can consider a time-space
separation ansatz $X_l(t)=A_l G(t)$. Inserting the separation
ansatz into (\ref{3}) we get a differential equation for $G(t)$:
$\ddot{G}(t)=-\kappa G^{2m-1}(t)$ and a two-dimensional map
for the amplitudes $A_l$:
\begin{equation}
\kappa A_l= (A_l-A_{l-1})^{2m-1}+(A_l-A_{l+1})^{2m-1}\;.\label{4}
\end{equation}
Here $\kappa > 0$ is required in order to get a bounded oscillatory
solution for $G(t)$. The phase space properties of map (\ref{4})
are shown in Fig.1 for the case $m=2$ and $\kappa=1$. Below we
will refer to particular patterns observed in Fig.1.
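A phase portrait of the kind shown in Fig.1 can be generated by directly iterating the map (\ref{4}); the short sketch below (our own illustration, using the same values $m=2$ and $\kappa=1$ would also work) solves (\ref{4}) for $A_{l+1}$ via the real odd root.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def iterate_map(a_prev, a_cur, kappa=1.0, m=2, steps=300):
    # kappa*A_l = (A_l - A_{l-1})^(2m-1) + (A_l - A_{l+1})^(2m-1),
    # solved for A_{l+1} via the real (2m-1)-th root.
    points = []
    for _ in range(steps):
        rhs = kappa * a_cur - (a_cur - a_prev) ** (2 * m - 1)
        root = np.sign(rhs) * np.abs(rhs) ** (1.0 / (2 * m - 1))
        a_next = a_cur - root
        if not np.isfinite(a_next) or abs(a_next) > 10.0:
            break  # orbit escapes; stop iterating
        points.append((a_cur, a_next))
        a_prev, a_cur = a_cur, a_next
    return np.array(points)

for a0 in np.linspace(-1.0, 1.0, 25):  # scan of initial conditions
    pts = iterate_map(a0, 0.4)
    if len(pts):
        plt.plot(pts[:, 0], pts[:, 1], '.', markersize=1)
plt.xlabel('$A_l$'); plt.ylabel('$A_{l+1}$'); plt.show()
\end{verbatim}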
We consider cases when $A_l/A_{l-1} < 0$
and introduce $f_l=(-1)^{l}A_l$ with
\begin{equation}
\kappa f_l = (f_l+f_{l-1})^{2m-1}+(f_l + f_{l+1})^{2m-1}\;. \label{5}
\end{equation}
Equation (\ref{5}) can be viewed as a two-dimensional map $\bf M$
of a vector $\vec{f}_l=(f_l,f_{l-1})$: $\vec{f}_{l+1}={\bf M} \vec{f}_l$.
The
task is then to show the existence of at least one
value of $\kappa > 0$ such that (\ref{5})
yields $f_{|l|} \geq f_{|l'|} > 0$ for $|l| < |l'|$ and $f_{l \rightarrow
\pm \infty} \rightarrow 0$.
First we note that for any value of $\kappa$ the map (\ref{5})
has a fixed point $\vec{f}_{fp1}=(0,0)$. Adding weak harmonic
nearest neighbour interactions to (\ref{3}) and considering
the limit of vanishing harmonic interactions yields that the
fixed point $(0,0)$ is a saddle point with eigenvalues
$\lambda_{1,fp1}=0$ and $\lambda_{2,fp1}= \infty$.
Consequently we find that there exist two one-dimensional
separatrix manifolds $S_+$ and $S_-$ of the map (\ref{5}).
All points in the two-dimensional phase space of the map
belonging to $S_{\pm}$ are attracted to the saddle point
after an infinite number of iterations for $l \rightarrow \pm \infty$.
Secondly
it follows that for $\kappa_{fp2}=2^{2m}f^{2m-2}$
the point $\vec{f}_{fp2}=(f,f)$ is also a fixed point (see also Fig.1).
Linearizing the map around
$\vec{f}_{fp2}$ yields the elliptic character of $\vec{f}_{fp2}$.
The two eigenvalues of the linearized map $\lambda_{1,fp2}$ and
$\lambda_{2,fp2}$ obey the relations $|\lambda_{1,fp2}|=|\lambda_{2,fp2}|=1$
and are given by the expression $\lambda_{12,fp2}=\tilde{\kappa}-1
\pm i (1-(1-\tilde{\kappa})^2)^{1/2}$ with $\tilde{\kappa}=2/(2m-1)$.
Let us mention another useful property of the map (\ref{5}),
which holds due to the discrete translational symmetry
of the considered system (\ref{3}).
If we generate a sequence of $f_2,f_3,...$ starting with a vector
$\vec{f}_1=(a,b)$ and iterating (\ref{5}) towards growing integers $l$,
then we can generate the same sequence of numbers
by iterating (\ref{5}) towards decreasing integers $l$ starting
with the vector $\vec{f}_1=(b,a)$.
We consider an initial vector $\vec{f}_l=(a,b)$ with $0 < a \leq b$
and iterate towards increasing $l$. If we require $0 \leq f_{l+1} \leq a$,
the parameter $\kappa$ is confined to $\kappa_l^- \leq \kappa
\leq \kappa_l^+$
with $\kappa_l^- = ((f_l+f_{l-1})^{2m-1} + f_l^{2m-1})/f_l$ and $\kappa_l^+
= \kappa_l^- + (2^{2m-1}-1)f_l^{2m-2}$. For any of the obtained values
of $f_{l+1}=c$ we can consider the next step requiring $0 \leq f_{l+2}
\leq c$. By itself this requirement again yields a confinement
of $\kappa$ to $\kappa_{l+1}^- \leq \kappa \leq \kappa_{l+1}^+$.
If we choose $c=0$ then $\kappa_{l+1}^- = +\infty$. If we choose
$c=a$ it follows $\kappa_{l+1}^+ \leq \kappa_l^+$. Because $c$ is
a monotonic function of $\kappa$ we can satisfy $a \geq f_{l+1} \geq
f_{l+2} \geq 0$ only if $\kappa_{l}^- < \kappa \leq \kappa_l^+$ for
$a=b$ and $\kappa_l^- < \kappa < \kappa_l^+$ for $a < b$.
Repeating this procedure for every following lattice site
we get a sequence of monotonically increasing lower bounds on
$\kappa$ defined by the
vanishing of $f_{l+j}$ in the $j$-th step. In the limit $j \rightarrow
\infty$ we thus obtain exactly one value of $\kappa$, which is
the limit of the mentioned sequence of lower bounds. This limiting
value is finite, because it is always smaller than $\kappa_l^+$.
It can not be equal to $\kappa_l^+$ either, since we would
then obtain an infinitesimally weakly perturbed elliptic fixed point of the
map. Because of the elliptic character of this fixed point
all amplitudes $f_l$ have to stay infinitely close to their
fixed point values which are finite. Thus we have found, that
for any initial vector $\vec{f}_l=(a,b)$ with $a \leq b$ there exists
exactly one value of $\kappa$ such that the initial vector
belongs to $S_+$. The point $\vec{f}_l=(b,a)$ belongs then
to $S_-$.
In the case $a=b$ it follows that it is always possible
to find exactly one value of $\kappa$ such that the initial
vector $\vec{f}_l=(a,a)$ belongs to {\sl both} $S_+$ and $S_-$.
Consequently we proved rigorously the existence of one type
of NLEs, which are known as 'even-parity-mode' \cite{jbp90}.
Their characteristic
feature is that the center of energy density of the corresponding
solution is located between two lattice sites $l$ and $(l-1)$ at
$(l-0.5)$ \cite{fw6}.
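The constructive proof above translates directly into a shooting algorithm: iterate (\ref{5}) outward from the symmetric initial vector $(a,a)$ and bisect in $\kappa$ between the bounds $\kappa_l^-$ and $\kappa_l^+$, raising the lower bound whenever the amplitudes overshoot below zero and lowering the upper bound whenever they fail to decay. The following sketch (our own illustration of the procedure, not taken from the cited literature) computes the even-parity NLE for $m=2$.
\begin{verbatim}
import numpy as np

def shoot(kappa, a, m=2, l_max=60):
    # Iterate kappa*f_l = (f_l+f_{l-1})^(2m-1) + (f_l+f_{l+1})^(2m-1)
    # from f_{l-1} = f_l = a.  Returns -1 if the amplitudes turn
    # negative (kappa too small), +1 if they stop decaying (too large),
    # and 0 if they decay monotonically to (numerical) zero.
    f_prev, f_cur = a, a
    for _ in range(l_max):
        rhs = kappa * f_cur - (f_cur + f_prev) ** (2 * m - 1)
        if rhs < 0.0:
            return -1
        f_next = rhs ** (1.0 / (2 * m - 1)) - f_cur
        if f_next < 0.0:
            return -1
        if f_next > f_cur:
            return +1
        f_prev, f_cur = f_cur, f_next
        if f_cur < 1e-14:
            return 0
    return 0

def find_kappa(a=1.0, m=2, iters=60):
    lo = (2 ** (2 * m - 1) + 1) * a ** (2 * m - 2)  # kappa_l^-
    hi = 2 ** (2 * m) * a ** (2 * m - 2)            # kappa_l^+
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shoot(mid, a, m) < 0:
            lo = mid  # amplitudes undershoot: kappa too small
        else:
            hi = mid  # amplitudes fail to decay: tighten from above
    return 0.5 * (lo + hi)

print(find_kappa())  # unique kappa of the even-parity NLE for a = 1
\end{verbatim}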
Finally let us consider the case $0 \leq f_{l-2}=f_{l}=a \leq f_{l-1}=b $.
Solving the equation (\ref{5}) for $l \rightarrow (l-1)$ we get
allowed values of $\kappa$ in an interval defined by $b$. Appending
the above described procedure for the sequence of lower bounds
of $\kappa$ we again find that for exactly one value of $\kappa$
such that $a < b$ the vectors $(a,b)$ and $(b,a)$ belong
to $S_+$. By the symmetry of the assumed initial configuration they
also belong to $S_-$. Consequently we proved rigorously the
existence of a second type of NLEs, which are known as 'odd-parity mode'
\cite{jbp90}.
Their characteristic feature is that the center of energy density of
the solution is located on the lattice site $(l-1)$ \cite{fw6}.
Due to the radius of interaction $r_0=1$ it follows that there
are no other allowed stationary time-periodic NLE solutions
in lattices of type (\ref{3}). In the limit $m \rightarrow \infty$
the two NLE solutions become compact and can be calculated
analytically \cite{sps92}.
It would be now very interesting to analyse the map (\ref{5})
in order to observe the previously discussed stable and unstable
manifolds, and consequently the homoclinic points. However
this map (\ref{5}) is not linearizable around the fixed point $f_l=0$ -
some matrix elements of the Jacobian diverge. Consequently
it is impossible to determine the structure of the manifolds
near the fixed point numerically. In order to proceed we
add to the right hand side of (\ref{5}) the terms
\begin{equation}
C(f_l+f_{l-1}) + C(f_l + f_{l+1}) \;\;. \label{10}
\end{equation}
In this case the modified map becomes linearizable around
the fixed point $f_l=0$. We can then visualize the manifolds,
and hope that in the limit $C \rightarrow 0$ the structure
of the manifolds does not change with respect to their intersections.
In Fig.2 we show the stable and unstable manifolds for
$m=2$, $C=2$ and $\kappa = 10$, where we used the variables $g_l=f_l+f_{l-1}$
instead of the original ones.
We computed the unstable manifold using standard procedures \cite{ll92}
and indicated the position of the stable manifold as it can be
evaluated out of the linearization around the fixed point.
Clearly the horseshoe patterns
are visible together with the homoclinic points. If one chooses one
homoclinic point and iterates, then the image is the next-to-next
homoclinic point. Thus we indeed observe the two possible
even and odd parity solutions. If we lower the value of $C$, then
it becomes increasingly harder to find the unstable manifold.
Still the structure of the unstable manifold can be seen partially, which
indicates the persistence of the horseshoe patterns in phase space regions
where the linear terms (\ref{10}) are small perturbations.
Then we can use the results of Section II and conclude that NLE solutions
will survive under perturbations of the considered system. In particular
it follows that NLEs are exact solutions for Fermi-Pasta-Ulam chains
with additional linear and cubic spring forces, as often studied
in the literature \cite{st88},\cite{st92}.
\section{Conclusion}
In conclusion we have shown that if a one-dimensional
system is known where
NLE solutions exist and obey the nonresonance condition, then
they are stable under perturbations of the Hamiltonian of the
system or they can be perturbed in such a way that they
become stable. Consequently we can use exact proofs of the
existence of NLE solutions, e.g. the one given in this work
for (\ref{3}) or the one in \cite{ma94} and obtain new NLE solutions
for the corresponding perturbed system. Remarkably we are not
restricted in the choice of the perturbation as long as the
map for the perturbed system has the same phase space dimension.
Let us discuss the problems of the presented approach which
appear when we try to consider lattices of dimensionality higher
than $d=1$. We can still make the ansatz of an existing time-periodic
NLE solution and will again obtain a coupled set of algebraic
equations for the Fourier coefficients. However this set can not
be viewed as a discrete map of a certain phase space of the
Fourier components, if $d > 1$. Consequently we do not know
how to formulate the conditions of the decay of the solution
at infinity by imposing certain constraints on the set of equations.
Still it is well known from numerical simulations that periodic
NLEs exist \cite{fw8}.
\\
\\
\\
Acknowledgements
\\
\\
I wish to thank C. R. Willis for his continuous support of
these studies and for the many helpful discussions. I
also thank M. Peyrard, D. K. Campbell, K. Kladko,
U. Bahr and B. Flach for
interesting discussions and comments and S. Aubry for
sending a preprint prior to publication. This work was supported
by the Deutsche Forschungsgemeinschaft (Fl 200/1-1).
\newpage
\section{Introduction}\label{sec:intro}
Understanding the impact of inter-particle repulsions in fermionic quantum systems remains one of the most challenging problems in physics.
As was shown by Landau, electron-electron repulsions in a solid may often be treated by redefining the excitations as quasiparticles. The resulting physics is thus essentially that of free particles (the so-called Landau quasiparticles), with interactions mostly leading to
a renormalization of some physical parameters such as the mass. This Fermi liquid (FL) approach has seen very broad success and has made it possible to understand the effects caused by perturbations on top of the Fermi liquid, such as instabilities towards e.g. a Bardeen-Cooper-Schrieffer (BCS) superconducting (SC) state, a magnetically ordered (MO) state or a charge density wave (CDW) state~\cite{BookZiman1972}.
There are however situations where the FL approach breaks down, leading to so-called ``non-Fermi liquid'' behavior. At high spatial dimensionality, this can happen if the interactions are particularly strong, or if the filling of the systems is commensurate with the lattice, with e.g. one particle per
site leading to a Mott (MI) state. Reduced dimensionality of the system can further enhance the effect of interactions. In one dimension in particular, the interactions always have drastic effects and lead to physical properties very different from the ones of a Fermi liquid. The FL is replaced by another set of universal features, called the Tomonaga-Luttinger liquid (TLL), where the elementary excitations are the collective excitations of charge and spin in the system.
Such systems are critical and at zero temperature $T=0$ possess various competing quasi-long range orders, ranging from antiferromagnetism to superconductivity~\cite{BookGiamarchi2003}.
Non-FLs have thus been prime candidates to search for novel physical phases, in particular unconventionally superconducting (USC) phases mediated by excitations other than the electron-phonon one, of which the high-temperature (high-$T_c$) superconductors~\cite{BookAnderson1997,Orenstein2000} are just one example. The observed proximity between USC and MO phases has made fluctuations around the MO state prominent candidates for such pairing mechanisms. On the experimental side, many systems ranging from heavy fermions~\cite{Varma1985,Si2010}, organic superconductors~\cite{Bourbonnais2007,Jerome2012}, high-$T_c$ superconductors and pnictides~\cite{Damascelli2003,Fischer2007,Scalapino2012a} show USC phases.
Yet, obtaining any widely accepted theory on the nature of such exotic pairing for any of these material-groups has proven to be difficult; even the simplest physical models, the so-called \textit{minimal models}, which are abstracted from the materials' full physical structure, are fundamentally hard to solve: the 2D \mbox{U-V} model at half filling (organics), and the weakly doped 2D Hubbard model (cuprates). So far, any solution of these minimal models has almost always necessitated additional technical approximations that introduce errors of unknown magnitude. It is thus difficult to determine whether phases predicted for such models in the framework of a given approximation scheme are truly present or an artefact of the
approximation, or whether to explain the experiments, different, more extended models are required, such as e.g. in the three-band models of the high-$T_c$ cuprates~\cite{Varma1991}, or in those approaches that investigate the role of phonons in USC materials~\cite{BookSalje1996}.
Even when moving to the more tractable regime of weak repulsion, SC and MO instabilities interfere at all orders of the perturbative renormalization group (RG) treatments applied so far~\cite{Schulz1987,Furukawa1998,Lauchli2004}, necessitating complex numerical methods~\cite{Deng2015} or sophisticated, and still approximate, functional RG procedures~\cite{Nickel2005,Nickel2006,Bourbonnais2011,Sedeki2012,Metzner2012a}.
More uncertain yet is the situation in the most important regime, actually relevant to the models' underlying materials: when repulsion becomes large and competes with kinetic energy. For this regime, numerical methods are a route of choice. However, in two or more spatial dimensions these too are faced with steep challenges. Quantum Monte Carlo (QMC) treatments suffer from the so-called sign problem for fermions leading to an exponential degradation of the signal to noise ratio as e.g. the temperature is lowered~\cite{LeBlanc2015}. And while innovative techniques like diagrammatic QMC have delivered intriguing insights into USC pairing for the 2D Hubbard model, these are still constrained to intermediate interactions at most and doping very far from unity~\cite{Deng2015}.
Our numerical approach here is different, and rests on two observations: (1) The fact that for the Density Matrix Renormalization Group (DMRG) technique, and other tensor-network methods such as PEPS, any possible bias can be controlled against via extrapolation, and that these methods are free of the sign problem~\cite{Schollwock2011,Stoudenmire2012}. (2) That the one minimal 2D model in which a USC ground state from strong electron repulsions can unambiguously be shown to exist is composed of quasi-1D electrons (doped \mbox{2-leg} Hubbard ladders) weakly coupled in parallel~\cite{Carlson2000,Karakonstantakis2011}. This minimal model, however, as yet does not correspond to any extant material.
These two observations have led us to conclude that the minimal 2D \mbox{U-V} model of the organic superconductors - which has an analogous structure, of many 1D systems coupled weakly in parallel to each other - offers a unique opportunity for insight on the strongly interacting ground state of a candidate model for USC pairing. This would be done by replacing the perturbative theory used on the Hubbard-ladders in the low-energy field-theory limit, which would not be applicable to the \mbox{U-V} model setting, with quantitatively reliable DMRG.
Yet, using DMRG in this particular setting comes with issues to resolve first. While DMRG delivers highly accurate results both for the statics and the dynamical correlations in 1D systems while using only modest computational resources, the 2D setting of the \mbox{U-V} model is different. On such lattices, DMRG is known to require resources that increase quasi-exponentially with lattice-width when trying to maintain any pre-set accuracy~\cite{Stoudenmire2012}.
Tractable sizes for systems of itinerant electrons so far usually have been on the order of about 100 sites, depending on the specific problem~\cite{White2009,Scalapino2012,Liu2012,LeBlanc2015,Ehlers2017,Zheng2017} - pushing towards around 200 sites, while done, is generally challenging and goes along with a decrease in accuracy. As the sum of the evidence indicates that states with SC order are in close energetic competition with MO ones, even changes in geometry (i.e. boundary effects) could easily result in different orders winning out for lattices of such size. As a result, no unbiased approach has so far been able to show unambiguously that a ground state of either of the two minimal models may exhibit USC order when repulsion is non-perturbatively large.
In the present work, we take a significant step in that direction. We build on our development of the numerical parallelized density matrix renormalization group (pDMRG), a distributed-memory version of one of the standard DMRG formulations~\cite{Schollwock2011} capable of
exploiting modern supercomputer architectures.
With this method we investigate the properties of the 2D \mbox{U-V} Hamiltonian at half filling of the bands. We consider an array of parallel chains of strongly correlated 1D electrons, each described by such a U-V Hamiltonian and weakly coupled to each other by transverse tunnelling.
In addition to being the model Hamiltonian for organic superconductors~\cite{Giamarchi2004}, and thus potentially containing the unconventionally superconducting phase observed in this system,
this Hamiltonian has two critical advantages for numerical study compared to the doped 2D Hubbard model~\cite{LeBlanc2015}: (1) it is trivial to maintain fixed ratios of electrons to lattice sites and thus to extrapolate to infinite system size here, since no doping is required. The pDMRG then allows doing this with the accuracy required to extrapolate energies of large systems, such as to enable the reliable calculation of energy differences, as appearing in e.g. the most important observable we use, the \textit{spin gap}. Underlying this accuracy is that pDMRG can exploit the strong anisotropy of electron tunneling in the 2D \mbox{U-V} model by spreading out a single ground state calculation across many supercomputer nodes. This anisotropy ties directly into the \mbox{U-V} model's second advantage: (2) the strip or cylinder geometry, which DMRG is especially good at handling intrinsically, suits the \mbox{U-V} model naturally if aligned with the strong-tunneling direction, as correlations across the strip's width will be naturally weak due to the small perpendicular tunneling, which is not the case for the doped 2D Hubbard model.
We show that in the thermodynamic limit this system's averaged spin gap $\overline{\Delta}_s$ is incompatible with either a FL or an MO ground state, for strips of finite width. The behaviour of $\overline{\Delta}_s$ as $L\rightarrow\infty$ appears most compatible with $\overline{\Delta}_s>0$ at infinite length, and being either non-decaying or even increasing with the strips' width, i.e. pointing towards a fully gapped spin excitation spectrum. But we also consider the alternate possibility that $\overline{\Delta}_s$ might ultimately scale to zero beyond the system sizes we can access. However, we show that even this alternative is at most compatible with a spin excitation spectrum with isolated zero-energy points, i.e. in line with weak-coupling theories of USC systems. Either scenario is thus incompatible with an antiferromagnetically ordered or Fermi liquid-like ground state.
Together with the strong enhancement of correlations with increasing strip width that we find exclusively in the $d_{xy}$-channel, our results make the case for SC singlet pairing with $d_{xy}$-order being the dominant component in the ground state of the \mbox{U-V} model at half filling when repulsion is competitive with kinetic energy.
The structure of the paper is as follows: in section~\ref{sec:over} we outline the \mbox{U-V} model and our approach to analysing the system and choosing the model parameters; in section~\ref{sec:method} we describe the key features of pDMRG relevant to this work; in section~\ref{sec:scale} we detail our scaling procedure; in section~\ref{sec:res} we analyse the behaviour of the spin gap and the correlation functions as the width of the 2D \mbox{U-V} strips is increased, making the case for a SC ground state with $d_{xy}$-order and against any MO or FL competitor-order; in section~\ref{sec:disc} we discuss these results and how they connect to experiment; in Appendix~\ref{app:scale} we provide details for the used scaling procedure; finally, in Appendix~\ref{app:dmrg_tech} we provide a broader technical overview on pDMRG.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{Fig_over.png}
\caption{
\textbf{(a)} Overview of the 2D \mbox{U-V} model. Conduction electrons tunnel along effective 1D chains with amplitude $t$, where electrons are subject to strong on-site repulsion $U$ and a weaker nearest-neighbour repulsion $V$. Inter-chain tunneling $\ensuremath{t_\perp}$ is much weaker than $t$.
\textbf{(b)} Schematic parameter space of the 2D \mbox{U-V} model at half filling, for fixed $U=4t$. The only limit in which all possible ground states
and the transitions between them are understood is the 1D limit, i.e. at $\ensuremath{t_\perp}=0$, when the chains decouple and are either in a TLL or MI phase, depending
on whether $V<V_c$ or $V>V_c$. The red dots show the parameters at which we work in the present article.}
\label{fig:over}
\end{figure}
\section{Model and physical observables}\label{sec:over}
As depicted in Fig.~\ref{fig:over}a, the \mbox{U-V} model in 2D is formed by a parallel array of $\ensuremath{N_{\rm ch}}$ 1D chains of lattice electrons, each of length $L$, with inter-chain tunneling.
Its lattice-Hamiltonian is
\begin{align}\label{eq:ham}
\hat{H} & = -t \sum_{i=1}^{L-1}\sum_{n=1}^{\ensuremath{N_{\rm ch}}} \sum_{\sigma=\ensuremath{\uparrow},\ensuremath{\downarrow}} \left(\Cdag{i}{n}{\sigma} \C{i+1}{n}{\sigma} +\mbox{h.c.}\right) \nonumber \\
& -\ensuremath{t_\perp} \sum_{i=1}^{L}\sum_{n=1}^{\ensuremath{N_{\rm ch}}-1} \sum_{\sigma=\ensuremath{\uparrow},\ensuremath{\downarrow}} \left(\Cdag{i}{n}{\sigma} \C{i}{n+1}{\sigma} + \mbox{h.c.}\right) \nonumber \\
& +U \sum_{i=1}^{L}\sum_{n=1}^{\ensuremath{N_{\rm ch}}} \N{i}{n}{\ensuremath{\uparrow}} \N{i}{n}{\ensuremath{\downarrow}} + V \sum_{i=1}^{L-1}\sum_{n=1}^{\ensuremath{N_{\rm ch}}} \Ntot{i}{n}\Ntot{i+1}{n}
\end{align}
Here, $\Cdag{i}{n}{\sigma}$ ($\C{i}{n}{\sigma}$) denotes the electron creation (annihilation) operator on site $i$ of chain $n$ of spin $\sigma=\ensuremath{\uparrow},\ensuremath{\downarrow}$, and $\N{i}{n}{\sigma}:=\Cdag{i}{n}{\sigma}\C{i}{n}{\sigma}$, $\Ntot{i}{n}:=\N{i}{n}{\ensuremath{\uparrow}}+\N{i}{n}{\ensuremath{\downarrow}}$ denote local electron density for spin $\sigma$ and total electron density operators respectively. The tunneling amplitude along the chains is given by $t$, while in-between adjacent chains it is given by $\ensuremath{t_\perp}$, and generally $t\gg\ensuremath{t_\perp}$. We point out that the $\ensuremath{t_\perp}$-term above obeys open boundary conditions, just as the $t$-term does. We have made this choice because obtaining ground state energies accurate enough to compute reliable differences between them is an indispensable requirement to extrapolate this system's spin gaps in the limit $\ensuremath{L\rightarrow\infty}$. Cylindrical 2D-lattices however are well known to immediately suffer from lower accuracy, and are thus avoided here. Finally, the onsite Coulomb repulsion is given by $U$, while intrachain nearest-neighbour repulsion is $V$, with $V<U/2$. The entire system is at half-filling, i.e. $N_\ensuremath{\uparrow} = N_\ensuremath{\downarrow} = L\ensuremath{N_{\rm ch}}/4$, and thus $\ensuremath{k_F}^\uparrow=\ensuremath{k_F}^\downarrow=\ensuremath{k_F}=\pi/4a$, where $a$ is the lattice spacing.
This minimal model originally arose from the study of the organic Bechgaard and Fabre salts (`the organics'), the first materials discovered to support a USC phase~\cite{Jerome1989,Bourbonnais2007}. These compounds show a very rich phase diagram, and especially a change in effective dimensionality with temperature - 1D TLL-like at high temperature, then 2D-like and finally 3D-like near zero - as quantum coherence can increasingly be established along the three orthogonal and successively weaker directions for electron-tunneling. Their most striking feature is the phase transition from a magnetically ordered to an unconventionally superconducting phase at low temperature, as the proximity, and thus the inter-chain tunneling $\ensuremath{t_\perp}$, between the strong-coupling chains of the \mbox{U-V} model is increased~\footnote{Experimentally, this distance is controlled either by applying external pressure to the sample, or chemically (choice of anions between the cationic stacks of molecules that make up the 1D chains)}.
To understand this competition, and whether the minimal \mbox{U-V} model~(\ref{eq:ham}) of the organics can actually capture it at least in a weak-interaction scenario (which does not correspond to the actual materials), perturbative RG has been repeatedly applied~\cite{Nickel2005,Nickel2006,Bourbonnais2011,Sedeki2012}. While beset with the same technical limitations this approach has for the doped 2D Hubbard-model of the cuprates (c.f. Sec.~\ref{sec:intro}), it does suggest that the minimal 2D \mbox{U-V} model might indeed support a USC phase transition from a MO phase when extended with e.g. a next-to-nearest neighbour tunneling perpendicular to the chains, and that pairing would be in the $d_{x^2-y^2}$-channel (gapless superconductivity would be compatible with experiments on the organics~\cite{Shinagawa2007,Yonezawa2012}). However, so far there has been no complementary method to validate these results, and especially no numerics that could deal with the \mbox{U-V} model~(\ref{eq:ham}) for the experimentally relevant case of strong interactions.
In order to be able to address the case $\ensuremath{t_\perp}\neq 0$, $\ensuremath{N_{\rm ch}}>2$ for this Hamiltonian with reliable numerics, we have developed pDMRG to exploit the distributed-memory architectures of modern supercomputers. Based on the tensor-network formulation widely used in modern DMRG, the code has major advantages over other quantitative numerical approaches for this problem: it inherits the lack of sign-problem and the ability to use extrapolation in the known error to filter out possible implicit bias towards any particular order (which is inherent e.g. to any wavefunction ansatz of variational quantum Monte Carlo) from conventional DMRG. And further, as described in Sec.~\ref{sec:method} and Appendix~\ref{app:dmrg_tech}, the parallelized nature of pDMRG makes the necessary ground state calculations tractable in the first place, by distributing the many non-local tunneling terms of the model across different nodes of the supercomputer. As laid out in Sec.~\ref{sec:method}, the large number of such terms becomes unavoidable when keeping the amount of bipartite entanglement at manageable levels, which is performance-critical for any 2D model.
As detailed in Sec.~\ref{subsec:obs}, the spin gap is the most important observable for our study of this system, as its scaling as $\ensuremath{L\rightarrow\infty}$ allows the crucial elimination of possibilities for ground state ordering in the thermodynamic limit that has so far proven elusive for any minimal model of USC systems. This is because ground states either with MO order or in a conventional FL state will show a linear vanishing of any spin-gap. In contrast, standard analytical theories of USC systems combine weak-coupling RG, for identifying the leading instability and its symmetry, with mean-field descriptions of the ordered phase. For these theories, the unavoidable zero-nodes of the order parameter result in sublinear scaling to zero for a spin gap~\cite{Sigrist1991,Hasegawa1996,Moriya2003,Shinagawa2007,Chubukov2008,Vladimirov2011}, while experiments on the actually strongly coupled regime in USC materials have found finite spin gaps (c.f. Sec.~\ref{sec:disc}). It is therefore vital that the known maximal value of the approximation error of the DMRG technique allows us to extrapolate the computed spin gaps to zero truncation error for any given lattice size, as described in Sec.~\ref{subsec:truncerr}.
To outline our strategy of analysis, we now discuss in Sec.~\ref{subsec:obs} the concrete ground state observables we considered, providing the roadmap for ruling out the main competitor states to any superconducting ground states, the FL and the MO states. In Sec.~\ref{subsec:parms} we then discuss our choice of parameters for Hamiltonian~(\ref{eq:ham}), and why these should reduce the competition with charge-ordered ground states.
\subsection{Observables}\label{subsec:obs}
One of our key priorities is to establish whether any set of parameters for Hamiltonian~(\ref{eq:ham}) could or could not result in one of the main competitor orders to a USC ground state in the 2D limit, a MO or FL state, being realized. One quantity that would allow making these distinctions is the
spin susceptibility at zero temperature
\begin{widetext}
\begin{equation} \label{eq:spinsuc}
\chi^{-1}_s(L,\ensuremath{N_{\rm ch}}) = L \frac{\E{L,\ensuremath{N_{\rm ch}}}(S_z = s) + \E{L,\ensuremath{N_{\rm ch}}}(S_z = -s) - 2 \E{L,\ensuremath{N_{\rm ch}}}(S_z = 0)}{(2 s)^2}
\end{equation}
\end{widetext}
where $\E{L,\ensuremath{N_{\rm ch}}}$ denotes the ground state energy of (\ref{eq:ham}) at half-filling, and $S_z=N_\uparrow-N_\downarrow$ is the difference between spin-populations.
Here, $s$ can be any spin number, as long as it is negligible compared to a macroscopic number of spins (proportional to $L$).
Typically, $s$ is chosen so that other quantum numbers in the system can be kept identical between $s$ and $s=0$. The usual
choice is $s=1$, corresponding to the spin flip of a single spin. However, as discussed in subsection \ref{subsec:remosc}, the choice
$s=\ensuremath{N_{\rm ch}}$ has a crucial technical advantage for extrapolating to infinite length $L\rightarrow\infty$.
When $L\to\infty$, $\chi_s(L,\ensuremath{N_{\rm ch}})$ becomes the thermodynamic spin susceptibility.
In order to get a finite susceptibility, as expected for a FL or a MO state, one would thus need the
quantity
\begin{eqnarray} \label{eq:endif}
A(L,\ensuremath{N_{\rm ch}},s) & = & \E{L,\ensuremath{N_{\rm ch}}}(S_z = s) + \E{L,\ensuremath{N_{\rm ch}}}(S_z = -s) \nonumber \\
&& - 2 \E{L,\ensuremath{N_{\rm ch}}}(S_z = 0)
\end{eqnarray}
to scale as $1/L$. Other scalings of this quantity
thus directly signal a non-FL, non-MO behavior for the spin susceptibility. The simplest case is that the \textit{averaged spin gap}
\begin{equation}\label{eq:Ds}
\Dsl{L,\ensuremath{N_{\rm ch}}} := \frac{ \E{L,\ensuremath{N_{\rm ch}}}(S_z=\ensuremath{N_{\rm ch}}) - \E{L,\ensuremath{N_{\rm ch}}}(S_z=0)}{\ensuremath{N_{\rm ch}}},
\end{equation}
remains finite when $L\to \infty$, where $\Dsl{L,\ensuremath{N_{\rm ch}}}$ is the generalization of the standard spin gap
\begin{equation}\label{eq:ds}
\dsl{L,\ensuremath{N_{\rm ch}}} := \E{L,\ensuremath{N_{\rm ch}}}(S_z=1) - \E{L,\ensuremath{N_{\rm ch}}}(S_z=0).
\end{equation}
Then the spin susceptibility would scale as ${(L\Dsl{L\rightarrow\infty,\ensuremath{N_{\rm ch}}})^{-1}}$, leading to zero susceptibility in the thermodynamic limit, as expected for fully gapped systems.
For SC systems with unconventional electron pairing, such fully gapped spin-excitations have actually been found experimentally (see Section~\ref{sec:disc}).
More complex behavior can be expected if (\ref{eq:endif}) has a different scaling with the size of the system such as e.g. $1/L^\alpha$ with $0 < \alpha < 1$. The thermodynamic spin susceptibility remains zero in this case, yet the systems' spin excitations are not fully gapped. In mean-field theories of USC systems all this stems from the line- or point-nodes of the SC gap~\cite{Sigrist1991,Hasegawa1996,Moriya2003,Shinagawa2007,Chubukov2008,Vladimirov2011}. At finite temperatures, the scaling with $L$ would be converted into a non-trivial temperature dependence of the spin susceptibility corresponding to a pseudogap behavior~\cite{James2013,Chen2017a}.
Establishing the $L$-dependence of $\Dsl{L\rightarrow\infty,\ensuremath{N_{\rm ch}}}$ through our numerical results therefore allows us to differentiate between three possibilities for the ground state of the two-dimensional system in the $T=0$ limit: FL/MO, or fully spin gapped USC, or mostly spin-gapped USC - to the extent that we can extrapolate trends as $\ensuremath{N_{\rm ch}}$ is increased.
The use of pDMRG allows meaningful extrapolation of $\Dsl{L\rightarrow\infty,\ensuremath{N_{\rm ch}}}$ in $1/L$ for the first time, enabling the calculation of ground states of multiple coupled, long \mbox{U-V} chains ($\ensuremath{N_{\rm ch}} \leq 8$, $L\leq 64$)
with quantitatively accurate numerics (c.f. Sec.~\ref{sec:method})~\footnote{ $\ensuremath{N_{\rm ch}}$ larger than presented in this work can be handled by pDMRG. As a single calculation can be spread out across dozens of nodes, i.e. thousands of CPU cores, a given problem is less limited by the capacity of the computer as such than by the available time on the supercomputer.}. As discussed in Sec.~\ref{subsec:spingap}, this allows us to make a strong case that for our choice of Hamiltonian parameters any magnetic order or Fermi liquid behaviour is suppressed in the ground state of the 2D \mbox{U-V} model, due to the way $\Dsl{L,\ensuremath{N_{\rm ch}}}$ behaves as $L\rightarrow\infty$, and to show that this behaviour is robust as $\ensuremath{N_{\rm ch}}$ grows.
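Schematically, the extrapolation pipeline can be summarized as follows, with the dictionary of ground-state energies standing in for the actual pDMRG output (all names here are illustrative, and the full scaling procedure of Appendix~\ref{app:scale} is more elaborate):
\begin{verbatim}
import numpy as np

def averaged_spin_gap(E, L, n_ch):
    # Eq. (eq:Ds): E maps (L, n_ch, S_z) to the pDMRG ground-state energy.
    return (E[(L, n_ch, n_ch)] - E[(L, n_ch, 0)]) / n_ch

def extrapolate_gap(E, lengths, n_ch):
    # Linear fit of the averaged spin gap in 1/L; the intercept
    # estimates the L -> infinity value.
    inv_L = np.array([1.0 / L for L in lengths])
    gaps = np.array([averaged_spin_gap(E, L, n_ch) for L in lengths])
    slope, intercept = np.polyfit(inv_L, gaps, 1)
    return intercept, slope

# Hypothetical usage, for chain lengths L = 16, 32, 48, 64 and 4 chains:
# gap_inf, slope = extrapolate_gap(energies, [16, 32, 48, 64], 4)
\end{verbatim}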
Our point of reference for this is the case of 2D arrays of coupled spin-chains, which demonstrates the opposite scenario: the spin gap decaying with $\ensuremath{N_{\rm ch}}$. If infinitely many 1D Heisenberg-chains were to be coupled in parallel, the result would be a gapless 2D Heisenberg model - but when only two such chains are coupled the system exhibits a finite spin gap~\cite{Haldane1983a,Dagotto1996}. Multiple techniques have since shown how these two extremes are bridged: as the number of coupled Heisenberg chains increases, the spin gap remains finite (for an even number of chains) but continuously decreases as $\ensuremath{N_{\rm ch}}$ grows, scaling to zero in the limit of infinitely many chains~\cite{Schulz1986,Affleck1989}.
The information gained from spin gaps can be supplemented by studying the change in the short-range behaviour of certain correlation functions
as $\ensuremath{N_{\rm ch}}$ is increased. When magnetic orders can be ruled out, the functions of interest here are those of the two other main competitor groups: SC-ordering on the one hand - with the possibility of pairing in the $d_{x^2-y^2}$-, $d_{xy}$- and extended $s$-wave channels - and CDW-ordering on the other hand. The corresponding correlation functions considered for this are
\begin{widetext}
\begin{eqnarray}
\label{eq:dx2y2corr} d_{x^2-y^2}(r) & := & \left\langle \left( \hat{D}^\dagger_{L/2,\ensuremath{N_{\rm ch}}/2,0,1} - \hat{D}^\dagger_{L/2,\ensuremath{N_{\rm ch}}/2+1,1,0} \right) \times \left( \hat{D}_{L/2+1+r,\ensuremath{N_{\rm ch}}/2,0,1} - \hat{D}_{L/2+1+r,\ensuremath{N_{\rm ch}}/2+1,1,0} \right)\right\rangle \\
\label{eq:dxycorr} d_{xy}(r),s(r) & := & \left\langle \left( \hat{D}^\dagger_{L/2+1,\ensuremath{N_{\rm ch}}/2,-1,1} \mp \hat{D}^\dagger_{L/2+1,\ensuremath{N_{\rm ch}}/2,1,1} \right) \times\left( \hat{D}_{L/2+3+r,\ensuremath{N_{\rm ch}}/2,-1,1} \mp \hat{D}_{L/2+3+r,\ensuremath{N_{\rm ch}}/2+1,1,1} \right)\right\rangle \\
\label{eq:ddcorr} C(r) & := & \left\langle \hat{n}_{L/2,\ensuremath{N_{\rm ch}}/2} \hat{n}_{L/2+r,\ensuremath{N_{\rm ch}}/2} \right\rangle - \left\langle \hat{n}_{L/2,\ensuremath{N_{\rm ch}}/2} \right\rangle \left\langle\hat{n}_{L/2+r,\ensuremath{N_{\rm ch}}/2} \right\rangle,
\end{eqnarray}
\end{widetext}
where $\hat{D}$ denotes the nearest-neighbour spin-singlet operators $\hat{D}_{i,n,j,m}:=\C{i}{n}{\uparrow}\C{i+j}{n+m}{\downarrow}-\C{i}{n}{\downarrow}\C{i+j}{n+m}{\uparrow}$. As illustrated in Fig.~\ref{fig:corrdecay}b, we consider the SC correlation functions
in the two central chains $\ensuremath{N_{\rm ch}}/2$, $\ensuremath{N_{\rm ch}}/2+1$, and $C(r)$ on chain $\ensuremath{N_{\rm ch}}/2$, in each case at distance $r$ from the middle of the chain(s).
Based on these correlation functions, we present further evidence in Sec.~\ref{subsec:corrs} beyond the behaviour of the spin gap that for the \mbox{U-V} model we study here, electrons do in fact pair, with a dominant component in the $d_{xy}$-channel.
\subsection{Choosing model parameters - $U=4t$, $V/t = 0.5,1$}\label{subsec:parms}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{Fig_parms.png}
\caption{
\mbox{U-V} parameter space. Green shaded area: single chain enters MI-regime. Red shaded area: two coupled chains can enter MI-regime~\cite{Kantiana}. To be certain we are outside the regime where this counter-intuitive enlargement of the MI parameter space takes place in the few-chain regime, we work within the blue-shaded area of parameter space. For reference, the yellow shaded area shows the parameter regime realized in the Bechgaard and Fabre salts. Black lines show lines of constant $K_\rho$ for the single \mbox{U-V} chain (data and figure reprinted from~\cite{Ejima2005}).
}
\label{fig:parms}
\end{figure}
The basic results serving as background to any choice of Hamiltonian parameters are summarized in Fig.~\ref{fig:over}b. In the limit $\ensuremath{t_\perp}=0$, the chains decouple and are described by TLL-theory for sufficiently small $V$. As long as $U$ is fixed to any value $\geq 2t$, there exists a value $V_c$ above which a charge gap opens in the system, turning it into a Mott insulator (MI)~\cite{Mila1993,Ejima2005}. The phase boundary at $\ensuremath{t_\perp}=0$ is shown as a green line in Fig.~\ref{fig:parms}. Now, in order to maximise the chance of finding SC order, we attempt to target a parameter regime in which the decoupled chains would be in the TLL phase at the same time as the coupled chains ($\ensuremath{t_\perp}>0$) would remain in a delocalized regime. Ultimately, only the computational results can indicate whether one succeeds with the latter condition. But given the limited supercomputing time at hand, we try to maximise our chances for this to happen from the outset, as described within the rest of this section.
A look at the minimal \mbox{U-V} model of the organic superconductors (Bechgaard and Fabre salts) is instructive here. Experimental probes of these materials show that the 1D chains of the \mbox{U-V} model have a Luttinger-liquid parameter $K_\rho$ (which characterizes interactions and correlations of the charge mode of an isolated chain) in the vicinity of $0.25$~\cite{Giamarchi2004}. When $K_\rho= 0.25$, theory predicts the single chain at half-filling to develop a charge gap and become an MI. In terms of the microscopic parameters, this vicinity to $K_\rho=0.25$ for the single chain would correspond to $U/t=10$, $V/t\approx 2 - 3$, c.f. yellow area in Fig.~\ref{fig:parms}. When the ratio $\ensuremath{t_\perp}/t$ is small enough, experiments show the physics of the single chain extending to the whole material, i.e. the system is an MI as well, and magnetically ordered on top of that. Only as $\ensuremath{t_\perp}/t$ grows and $\ensuremath{t_\perp}$ becomes competitive with the Mott gap of the single chain is superconductivity found in experiments, typically at $\ensuremath{t_\perp}/t\approx 0.1$.
It may be tempting to try to straightforwardly replicate this setting for our calculations. However, there has recently been new insight into the two-chain limit by two co-authors of the present work~\cite{Kantiana}, which cautions against such an approach. Combining analytical RG treatment of the bosonized chains with DMRG, we find that finite $\ensuremath{t_\perp}$ for two chains \textit{increases} the \mbox{U-V} parameter space in which the system develops a Mott gap (red line in Fig.~\ref{fig:parms}). While early exploratory work indicates that this parameter space might shrink again for $\ensuremath{N_{\rm ch}}=4$, it is beyond the scope of this work and of Ref.~\onlinecite{Kantiana} to make definite statements about how exactly the few-chain regime evolves in this regard.
In the following, we therefore focus on a regime in which these complications of few-chain physics are unlikely to apply, at $U/t=4$ and $V/t=0.5,1$, and with $\ensuremath{t_\perp}/t=0.1$ (blue shaded region in Fig.~\ref{fig:parms}). This regime realizes the essential physics present
in the superconductivity of the organics, i.e. a regime where inter-chain tunneling is not effectively suppressed by a single-chain gap. A priori, this regime thus appears to offer the highest chance of finding a superconducting ground state due to repulsively mediated pairing. We have targeted two values of $V/t$ to be certain our results will be representative for a finite area of parameter space.
\section{The Method: parallel DMRG}\label{sec:method}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{Fig_pdmrg.png}
\caption{
\textbf{(a)} Mapping the physical 2D lattice to the 1D DMRG chain, along the $t$-direction.
Many more long-range tunneling terms appear than would be present when mapping along the $\ensuremath{t_\perp}$-direction.
\textbf{(b)} How the two different mappings perform: mapping along $t$ (blue) rapidly converges to the exact solution, while
the $\ensuremath{t_\perp}$-mapping is trapped in a metastable state, even at $\chi=5000$ and after many sweeps.
{\bf (c)} The primary parallelization layer of pDMRG used for the present work, using the left boundary $L$ as an example. Each element $L[b_l]$ of $L$ is dispatched to an MPI-process, and each such MPI-process may control a number of physical nodes. The process then locally handles contractions of $L[b_l]$ with the wavefunction tensors to be optimized, requesting the results of other MPI-processes for different $b_l$ as necessary.
}
\label{fig:pdmrg}
\end{figure}
As stated in Sec.~\ref{sec:over}, exploiting the physical structure of the 2D \mbox{U-V}-model is essential for being able to handle the bipartite entanglement of this system when mapping the physical lattice to the DMRG-chain. Since the system is composed of many strongly correlated 1D chains in parallel with weak coupling in between, it is intuitive that performing the mapping along the direction of strong tunneling (c.f. Fig.~\ref{fig:pdmrg}a) delivers lower overall bipartite entanglement in the 1D DMRG chain. This argument is complemented by theoretically estimating the performance of the alternate mapping along the $\ensuremath{t_\perp}$-direction that is used conventionally when applying DMRG to what is effectively a long and narrow 2D-strip. The \textit{bond dimension}, denoted as $\chi$ in the following, is the central quantity that controls the accuracy of DMRG. It denotes the number of optimally chosen basis states in which any bipartition of the system is represented. If $\chi_{\rm chain}$ were required to represent a single \mbox{U-V}-chain of a given length with truncation error $\epsilon$, achieving the same accuracy for $\ensuremath{N_{\rm ch}}$ chains would require $\chi=\chi_{\rm chain}^{\ensuremath{N_{\rm ch}}}$ - even if $\chi_{\rm chain}=10$, this scaling would be hopeless beyond $\ensuremath{N_{\rm ch}}=2$. A simple test comparing the two mappings for a representative system of free fermions, summarized in Fig.~\ref{fig:pdmrg}b, confirms the total disparity between the performance of these mappings. Shown is the energy vs. the number of DMRG iterations for $L=20$, $\ensuremath{N_{\rm ch}}=5$, $N_\ensuremath{\uparrow}=N_\ensuremath{\downarrow}=50$, $\ensuremath{t_\perp}/t=0.2$, $U=V=0$, $\chi=5000$, once with mapping along $t$ (blue line), once with mapping along $\ensuremath{t_\perp}$ (green line). The $t$-mapping converges quickly to the exact ED solution, while the $\ensuremath{t_\perp}$-mapping flounders even for this small system.
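The exact reference energy for this test is inexpensive to reproduce: at $U=V=0$ the ground state energy is simply the sum of the lowest single-particle levels, filled once per spin species. The following minimal Python sketch (assuming open boundary conditions, which is our reading of the setup) illustrates this:
\begin{verbatim}
# Exact free-fermion reference energy for the L x Nch strip:
# diagonalize the single-particle hopping matrix and fill the
# lowest levels once per spin species (U = V = 0).
import numpy as np

L, Nch, t, tperp = 20, 5, 1.0, 0.2
Nup = Ndown = 50

H = np.zeros((L * Nch, L * Nch))
idx = lambda i, n: n * L + i        # site i of chain n
for n in range(Nch):
    for i in range(L - 1):          # strong hopping t along each chain
        H[idx(i, n), idx(i + 1, n)] = H[idx(i + 1, n), idx(i, n)] = -t
for n in range(Nch - 1):
    for i in range(L):              # weak hopping t_perp between chains
        H[idx(i, n), idx(i, n + 1)] = H[idx(i, n + 1), idx(i, n)] = -tperp

levels = np.linalg.eigvalsh(H)
E_exact = levels[:Nup].sum() + levels[:Ndown].sum()
print(f"exact ground-state energy: {E_exact:.6f} t")
\end{verbatim}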
The price paid for the $t$-direction mapping, necessary to have manageable $\chi$ in the first place, is the very large number of long-range tunneling terms. In the two-site DMRG we use, for every adjacent pair of sites ${(i,i+1)}$ on the DMRG-chain for which the tensors are jointly optimized, the Hamiltonian is represented as a matrix product operator (MPO)~\cite{Schollwock2011}:
\begin{equation}\label{eq:mpo}
\hat{H}=\sum_{b_l,b_c,b_r=1}^D \hat{H}_{\rm left}[b_l] \otimes \hat{h}_{i}[b_l,b_c] \otimes \hat{h}_{i+1}[b_c,b_r] \otimes \hat{H}_{\rm right}[b_r]
\end{equation}
Here, each sum runs from $1$ to $D$, the MPO bond-dimension, which for the $t$-direction mapping scales as $D\approx 4L$. For the range of system lengths we treat, $L=20$ to $64$, this amounts to $D\approx 80$ to $256$, implying an equal number of Hamiltonian contributions $\hat{H}_{\rm left}[b_l]$ ($\hat{H}_{\rm right}[b_r]$). These describe the action of the Hamiltonian to the left (right) of the sites $i$, $i+1$, while for every $b_l$, $b_c$ ($b_c$, $b_r$) $\hat{h}_{i}[b_l,b_c]$ ($\hat{h}_{i+1}[b_c,b_r]$) is a purely local operator on site $i$ ($i+1$). Combined with the fact that we utilize $\chi=10000$ to $18000$, this large number of Hamiltonian contributions results in memory requirements on the order of several TB for any single ground state calculation, and a commensurate amount of computational effort.
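A rough estimate makes these memory requirements plausible; the following sketch gives a dense upper bound, ignoring the savings from the block-sparse structure won through conserved quantum numbers:
\begin{verbatim}
# Back-of-the-envelope boundary-storage footprint implied by
# D ~ 4L and chi = 10000-18000, assuming dense complex-double
# storage (an upper bound on the block-sparse reality).
L, chi = 64, 18000
D = 4 * L                            # MPO bond dimension
bytes_per_entry = 16                 # one complex double
one_boundary = D * chi**2 * bytes_per_entry
print(f"D = {D}, dense upper bound per boundary: "
      f"{one_boundary / 1e12:.1f} TB")
# -> of order a TB per boundary, consistent with the
#    several-TB totals quoted in the text.
\end{verbatim}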
Parallelized DMRG was developed to make calculations of such magnitude tractable~\cite{Bauera}. We provide a more comprehensive overview of its technical features in Appendix~\ref{app:dmrg_tech}, and highlight here its first parallelization layer, which is the most relevant to this work, shown schematically in Fig.~\ref{fig:pdmrg}c: the largest objects of DMRG are the \textit{boundary terms}. These are the expressions of $\hat{H}_{\rm left}[b_l]$, $\hat{H}_{\rm right}[b_r]$ in the $\chi$ optimal basis states of the system to the left or right of sites $i$, $i+1$ respectively. Denoting these basis states as $|\alpha_l\rangle$, $|\alpha_r\rangle$ respectively, the left boundary is $L[b_l]:=\sum_{\alpha_l,\alpha_l'} \left( \langle \alpha_l' | \hat{H}_{\rm left}[b_l] | \alpha_l \rangle \right) |\alpha_l'\rangle \langle\alpha_l |$, and the right boundary $R$ is defined analogously using $\hat{H}_{\rm right}[b_r]$ and $|\alpha_r\rangle$. The parallelization layer distributes each of the $D$ elements of both these vectors (each element being a block-sparse matrix due to the use of conserved quantum numbers to enhance performance) to the nodes of a parallel supercomputer. For this project, the computer cluster was the 'Piz Daint' system of the Swiss National Supercomputing Center (CSCS). In this manner, we have spread the calculations across up to many dozens of nodes (for the largest lattices).
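Schematically, this first layer amounts to dealing out the $D$ boundary elements to the available MPI ranks. The following fragment is a hypothetical illustration of such a round-robin layout, not an excerpt of the pDMRG source:
\begin{verbatim}
# Hypothetical sketch of the first parallelization layer:
# the D boundary elements L[b_l] are dealt out round-robin
# to MPI processes; each rank contracts only its own share
# and requests remote elements when needed.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

D = 256                              # number of boundary elements
my_elements = [b for b in range(D) if b % size == rank]
# ... each rank now builds and contracts L[b] for b in
# my_elements, exchanging results for other b-values with
# their owners via point-to-point messages.
\end{verbatim}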
\section{Scaling analysis}\label{sec:scale}
As discussed in Sec.~\ref{sec:over}, our indirect strategy for arguing that the ground state of the 2D \mbox{U-V} model has SC order, by eliminating competitor phases, relies on obtaining the correct behaviour of the system's spin gap as $\ensuremath{L\rightarrow\infty}$.
In the following we discuss the two steps required to achieve this goal: sharply reducing the oscillations of the spin gap as $1/L$ decreases, and extrapolating
ground state energies to zero error in the DMRG-approximation.
\subsection{Removing oscillations of $\dsl{L}$ }\label{subsec:remosc}
The standard definition of the spin gap, eq.~(\ref{eq:ds}), is based on the difference between two ground state energies.
However, due to the small value of $\ensuremath{t_\perp}/t$ in the regime of interest, one encounters oscillations of $\dsl{L}$ with $L$,
which presents a complication for obtaining clean extrapolation of $\dsl{L}$ in $L$, as shown in Fig.~\ref{fig:scalex}a.
That weak-interchain coupling is evidently the cause is demonstrated by studying the \mbox{U-V} model in the
non-interacting limit $U=V=0$. While clearly $\ensuremath{\Delta_s}=0$ for $\ensuremath{L\rightarrow\infty}$ (at any $\ensuremath{N_{\rm ch}}$) in this case,
the $\ensuremath{N_{\rm ch}}$ bands are only weakly split. Then, the shifting positions of both the highest occupied as well as of the lowest unoccupied states in each band
with changing $L$ (c.f. Fig.~\ref{fig:scalex}b)
will invariably cause such oscillatory behaviour in the definition~(\ref{eq:ds}).
Based on this analysis, clearly one should smooth this finite-size
effect by using a more general definition of the spin gap. The natural generalization, given by eq.~(\ref{eq:Ds}), is the one
that averages over all the $\ensuremath{N_{\rm ch}}$ bands close to the Fermi-level in the non-interacting limit, and that applies unchanged to the interacting regime.
As depicted in Fig.~\ref{fig:scalex}a and b, we find this definition removes the oscillatory behaviour to a large degree
for all our data. As discussed in Appendix~\ref{app:scale}, this definition of the averaged spin gap results in a linear polynomial
in $1/L$ as the only consistent method for extrapolating $\Dsl{L,\ensuremath{N_{\rm ch}}}$ to $L^{-1}=0$; linear extrapolation can recover physically expected behaviour correctly, while
the oscillatory remnant overwhelms any vestiges of quadratic dependency in $1/L$ that might remain after the averaging.
We further note that this procedure is similar to adding two particles instead of one when computing the compressibility of spinless
particles, to avoid the unwanted oscillations provoked by the breaking of $k \to -k$ symmetry that adding a single particle would entail.
Analogously, one usually adds four particles to stay in the sector of total spin zero for spinful systems.
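In code, the difference between the two estimators is a single normalization; the sketch below reflects our reading of definitions~(\ref{eq:ds}) and (\ref{eq:Ds}) in terms of ground state energies, with the normalization by $\ensuremath{N_{\rm ch}}$ being an assumption of this illustration:
\begin{verbatim}
# Two spin-gap estimators, as we read eqs. (ds)/(Ds):
# the plain gap uses a single spin flip, the averaged gap
# spreads N_ch flips over the N_ch bands near the Fermi level.
def spin_gap(E0, E1):
    """Plain gap: E(S_z = 1) - E(S_z = 0)."""
    return E1 - E0

def averaged_spin_gap(E0, EN, Nch):
    """Averaged gap: [E(S_z = N_ch) - E(S_z = 0)] / N_ch."""
    return (EN - E0) / Nch
\end{verbatim}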
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{Fig_scalex_plus_scalen.png}
\caption{
\textbf{(a)} and \textbf{(b)}: Averaging eliminates oscillations in the spin gap.
\textbf{(a)} Comparing $\dsl{L,\ensuremath{N_{\rm ch}}=2}$ (blue line) against $\Dsl{L,\ensuremath{N_{\rm ch}}=2}$ (red line) for
$U/t=4$, $V/t=1$, $\ensuremath{t_\perp}/t=0.1$. Using
the averaged version of the spin gap, $\ensuremath{\overline{\Delta}_s}$, nearly eliminates the oscillations, which are a strong impediment
to a clean extrapolation to $1/L\rightarrow 0$.
\textbf{(b)} Illustrating the cause of oscillatory behaviour in $\ensuremath{\Delta_s}$ is straightforward for $U=V=0$.
Plotting the single-particle bands for $L=60$ (left), $L=64$ (right) ($\ensuremath{N_{\rm ch}}=6$ - only four bands can be seen as two are doubly degenerate each) around the Fermi level (red line), one sees how the distance between highest
occupied/lowest unoccupied levels (pairs marked with blue ellipses) shifts with $L$, actually increasing here as $L$ increases.
The oscillations vanish almost completely with the averaging over the $\ensuremath{N_{\rm ch}}$ bands (red boxes).
\textbf{(c)} and \textbf{(d)}: Scaling ground state energies to zero truncation error $\ensuremath{\epsilon_{\rm max}}$, for the example of $L=40$, $\ensuremath{N_{\rm ch}}=4$,
$N_\ensuremath{\uparrow}=N_\ensuremath{\downarrow}=40$.
\textbf{(c)} Plotting $E_{\rm GS}$ against $\ensuremath{\epsilon_{\rm max}}$, computed for $\chi=12000,14000,16000,18000$.
\textbf{(d)} Plotting the resulting $\Dsl{40,4}$ at $1/\chi=0$, and the corresponding values at the four finite
values of $1/\chi$.
}
\label{fig:scalex}
\end{figure}
\subsection{Extrapolating in the DMRG truncation error}\label{subsec:truncerr}
In DMRG, one attempts to compute the ground state wavefunction $|\psi_{\rm GS}\rangle$
of a lattice-Hamiltonian, by mapping the physical lattice to a 1D chain. At
every bipartition of this DMRG chain, the algorithm achieves the required reduction of the exponentially-scaling
full Hilbert space by approximating $|\psi_{\rm GS}\rangle$ through two
sets of $\chi$ optimally chosen basis states, yielding a wavefunction $|\psi^\chi_{\rm GS}\rangle$.
A given $\chi$ corresponds to a particular truncation
error $\epsilon:=\langle\delta\psi|\delta\psi\rangle$, ${|\delta\psi\rangle:=|\psi_{\rm GS}\rangle-|\psi^\chi_{\rm GS}\rangle}$,
where $\epsilon$ is known in principle in DMRG.
The controlled accuracy and known error $\epsilon$ enables local quantities, and thus energies,
to be extrapolated to zero error~\cite{White2007}. Specifically for energies, it is known that the approximation
error in the energy, $\delta E$, is $\propto\epsilon$. As we obtain $\Dsl{L,\ensuremath{N_{\rm ch}}}$ from the difference
between the $S_z=\ensuremath{N_{\rm ch}}$ and $S_z=0$ ground state energies, wherever we have at least two ground states with different $\chi$ we use this technique to extrapolate
the energy to zero $\epsilon$ before computing $\ensuremath{\overline{\Delta}_s}$. In this, we always employ $\ensuremath{\epsilon_{\rm max}}$,
the maximal truncation error committed in the final sweep as a proxy for $\epsilon$. For $V/t=1$, $\ensuremath{N_{\rm ch}}=4$ we have
obtained multiple energies, by computing ground states for $\chi=10000$ and $18000$ independently, then
obtained $\chi=12000,14000$ and $16000$ using the $\chi=18000$ ground state as an initial state
and applying two complete sweeps at the lower $\chi$. An example of this is shown in Fig.~\ref{fig:scalex}c.
As illustrated in Fig.~\ref{fig:scalex}d, the change in $\Dsl{\ensuremath{L\rightarrow\infty},\ensuremath{N_{\rm ch}}}$ relative to values at finite-$\chi$
is typically small anyway, on the order of a few percent at most.
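The extrapolation itself is a plain linear fit of $E_{\rm GS}$ against $\ensuremath{\epsilon_{\rm max}}$; the following sketch uses illustrative placeholder numbers rather than our data:
\begin{verbatim}
# Linear extrapolation to zero truncation error: since the
# energy error is proportional to epsilon, fit E_GS against
# eps_max and evaluate at eps_max = 0 (placeholder values).
import numpy as np

eps_max = np.array([3.1e-6, 2.2e-6, 1.5e-6, 9.0e-7])
E_gs    = np.array([-81.4102, -81.4110, -81.4117, -81.4123])

slope, intercept = np.polyfit(eps_max, E_gs, 1)
print(f"E(eps -> 0) = {intercept:.6f} t")
\end{verbatim}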
Outside these parameters, we have aimed to produce both $\chi=10000$ and $\chi=18000$
ground states in independent simulations wherever possible, but particularly for $V/t=0.5$ the available
computing resources ultimately proved insufficient. Thus, many results for this parameter are based
partly or fully on a single $\chi=10000$ ground state.
In order to obtain any particular value of $\Dsl{L,\ensuremath{N_{\rm ch}}}$ as the basis for an extrapolation
to $\ensuremath{L\rightarrow\infty}$ at fixed $\ensuremath{N_{\rm ch}}$, we therefore pursue two separate protocols:
\indent \textbf{Straight extrapolation} - Here, we extrapolate both $S_z=0$ and $S_z=\ensuremath{N_{\rm ch}}$
ground state energies to zero $\ensuremath{\epsilon_{\rm max}}$ when possible. If scaling is only possible
in one of the spin sectors, we form $\ensuremath{\overline{\Delta}_s}$ from the two lowest-energy ground
states with comparable $\epsilon$.
\indent \textbf{Extrapolation plus estimates} - Whenever a ground state is available
at $\chi=10000$ exclusively, we estimate the correction heuristically. We have done
this based on the following observations, made on states where extrapolations
were possible: (i) the fractions by which energies further decrease
upon extrapolation seem to roughly double going from one $L$ to the next, (ii) they
also seem to roughly double with every increase of $\ensuremath{N_{\rm ch}}$, and (iii) for the same parameters,
the fractions seem to be about a factor of $1.3$ larger in the $S_z=\ensuremath{N_{\rm ch}}$ sector compared to the $S_z=0$ sector.
We then assume an uncertainty of $25\%$ in these estimated correction fractions.
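A sketch of this heuristic follows; the grid steps \texttt{dL} and \texttt{dNch}, which define what counts as ``one step'' in $L$ and $\ensuremath{N_{\rm ch}}$, are assumptions of the illustration:
\begin{verbatim}
# Heuristic correction for single-chi states: the fractional
# energy gain from extrapolation is assumed to double per step
# in L, double per step in N_ch, and be 1.3x larger in the
# S_z = N_ch sector, with a 25% uncertainty on the result.
# f_ref is a reference fraction observed at (L_ref, Nch_ref).
def estimated_fraction(L, Nch, sector, f_ref, L_ref, Nch_ref,
                       dL=4, dNch=2):
    f = (f_ref * 2.0 ** ((L - L_ref) / dL)
               * 2.0 ** ((Nch - Nch_ref) / dNch))
    if sector == "Sz=Nch":
        f *= 1.3
    return f, 0.25 * f      # central value, assumed uncertainty
\end{verbatim}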
\section{Extrapolated spin gaps and inferring SC-order from correlation functions}\label{sec:res}
The exact behaviour of $\Dsl{L,\ensuremath{N_{\rm ch}}}$ as $1/L$ scales to zero is critical to our approach. In Sec.~\ref{subsec:spingap},
we take the data as they appear to be, linearly increasing in $1/L$, with only weak remnants of the oscillatory behaviour
that $\ensuremath{\overline{\Delta}_s}$ was defined to eliminate (c.f. Appendix~\ref{app:scale} for details). It then follows that the spin gap is finite for the infinitely long strips, and non-decaying
or even increasing in $\ensuremath{N_{\rm ch}}$. In Sec.~\ref{subsec:spingap_alt}, we consider the possibility that the apparent remnants of oscillations
are actually the onset of a scaling of $\Dsl{L,\ensuremath{N_{\rm ch}}}$ to zero as $1/L$ decreases. In Sec.~\ref{subsec:corrs}, we then analyse the corollary
information that correlation functions offer.
\subsection{The spin gap as a function of $\ensuremath{N_{\rm ch}}$} \label{subsec:spingap}
\begin{table}[t]
\begin{tabular}{ c || c | c || c | c |}
& \multicolumn{4}{|c|}{ $\Dsl{\ensuremath{N_{\rm ch}}}$ [$t\times 10^{-3}$] } \\ \cline{2-5}
& \multicolumn{2}{|c||}{ Straight extrapolation } & \multicolumn{2}{|c|}{ Extr. + estimates } \\ \hline
$\ensuremath{N_{\rm ch}} $ & $V/t=0.5$ & $V/t=1$ & $V/t=0.5$ & $V/t=1$ \\ \hline\hline
$2$ & $5.59$ & $7.59$ & $5.36$ & $7.31$ \\ \hline
$4$ & $9.68$ & $10.2$ & $^{(a)}10.77(10)$ & $10.268(11)$ \\ \hline
$6$ & $9.37$ & $11.4$ & $^{(b)}4.3(1.4)$ & $11.20(97)$ \\ \hline
$8$ & $9.45$ & & $^{(c)}7.6(1.3)$ & \\ \hline
\end{tabular}
\caption{
Summary of spin gaps at $\ensuremath{L\rightarrow\infty}$ for different $\ensuremath{N_{\rm ch}}$, assuming the $1/L$-scaling of $\Dsl{L,\ensuremath{N_{\rm ch}}}$ we appear to observe continues to hold beyond the strip lengths we computed. ``Extr. + estimates'' is based on estimating
ground state energies at zero truncation error ($\epsilon=0$) for cases where ground states cannot be systematically extrapolated to $\epsilon=0$, because converged simulations were only available for a single $\chi$-value (see text for details). $^{(a)}$:~only one ground state energy of the finite-$L$ ensemble used to obtain this value via linear fit was estimated. $^{(b)}$:~multiple finite-$L$ ground state energies were estimated. $^{(c)}$:~All finite-$L$ ground state energies were estimated.
}
\label{tab:spingaps}
\end{table}
\begin{figure*}
\includegraphics[width=\textwidth]{Fig_scale_in_Linv.png}
\caption{
Scaling $\Dsl{L,\ensuremath{N_{\rm ch}}}$ to $\ensuremath{L\rightarrow\infty}$ using the straight extrapolation
protocol (c.f. Sec.~\ref{subsec:truncerr}).
\textbf{(a)} Scaling at $V/t=0.5$, for $\ensuremath{N_{\rm ch}}=2$ (blue, crosses), $\ensuremath{N_{\rm ch}}=4$ (red, squares), $\ensuremath{N_{\rm ch}}=6$ (black, diamonds),
$\ensuremath{N_{\rm ch}}=8$ (green, circles).
\textbf{(b)} Scaling at $V/t=1$, symbols and colours for $\ensuremath{N_{\rm ch}}=2,4,$ and $6$ as in (a).
In both subfigures, $\dsl{L,\ensuremath{N_{\rm ch}}=1}$ has been inserted for comparison purposes (grey stars).
The raw data is summarized in Tab.~\ref{tab:Es_raw} of Appendix~\ref{app:scale}.
}
\label{fig:scalgap}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{Fig_scalgap.png}
\caption{
Plotting $\Dsl{\ensuremath{L\rightarrow\infty},\ensuremath{N_{\rm ch}}}$ vs. $\ensuremath{N_{\rm ch}}$ for
\textbf{(a)}~$V/t=0.5$.
Red crosses denote outcomes of the straight extrapolation protocol, while grey lines with uncertainty ranges
show outcomes of the extrapolation plus estimates protocol (c.f. Sec.~\ref{subsec:truncerr} and Tab.~\ref{tab:spingaps}).
\textbf{(b)}~$V/t=1$. Symbols as in (a).
}
\label{fig:gapNch}
\end{figure}
Applying the two distinct protocols for computing $\Dsl{L,\ensuremath{N_{\rm ch}}}$ outlined in Sec.~\ref{subsec:truncerr},
the behaviour of the spin gap under increasing $\ensuremath{N_{\rm ch}}$ emerges.
In Figs.~\ref{fig:scalgap}a and b, we show how we obtain spin gaps in the thermodynamic limit $\ensuremath{L\rightarrow\infty}$
through linear fits, for both $V/t=0.5$ and $V/t=1$, using the straight extrapolation protocol.
Not shown are the equivalent plots for the extrapolation plus estimates protocol. There, we also
employ linear fits, but now we obtain the maximal range of possible outcomes of $\Dsl{\ensuremath{L\rightarrow\infty},\ensuremath{N_{\rm ch}}}$
by performing separate linear fits for every possible combination of extremal values of the
$\Dsl{L,\ensuremath{N_{\rm ch}}}$ at different $L$. The resulting $\Dsl{\ensuremath{L\rightarrow\infty},\ensuremath{N_{\rm ch}}}$, respectively the mean values
and maximal/minimal values, of both protocols are summarized in Tab.~\ref{tab:spingaps}
and shown in Figs.~\ref{fig:gapNch}a and b.
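Operationally, these uncertainty bands follow from enumerating all extremal combinations, as in the following sketch (the arrays are illustrative placeholders):
\begin{verbatim}
# Uncertainty band for the extrapolation-plus-estimates
# protocol: perform a linear fit in 1/L for every combination
# of extremal gap values and record the spread of intercepts.
import numpy as np
from itertools import product

Ls = np.array([32, 40, 48, 64])
lo = np.array([0.020, 0.017, 0.015, 0.013])  # lower bounds
hi = np.array([0.024, 0.020, 0.017, 0.015])  # upper bounds

intercepts = []
for choice in product(*zip(lo, hi)):  # all extremal combinations
    slope, icpt = np.polyfit(1.0 / Ls, choice, 1)
    intercepts.append(icpt)
print(f"gap(L->inf) in [{min(intercepts):.4f}, "
      f"{max(intercepts):.4f}] t")
\end{verbatim}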
Using the straight extrapolation protocol, there seems to be little room for doubt that $\Dsl{\ensuremath{N_{\rm ch}}}$
is at least a non-decaying function beyond $\ensuremath{N_{\rm ch}}=2$ for $V/t=0.5$, while it is unambiguously
a monotonically increasing function at $V/t=1$. At this latter value of the nearest-neighbour repulsion,
the extrapolation plus estimates protocol benefits from relatively small uncertainty, as here
we have a greater collection of states that we could extrapolate to $\ensuremath{\epsilon_{\rm max}}=0$, and thus heuristic extrapolations
were only necessary for a few states. With these smaller uncertainties, this second protocol supports the
straight extrapolation protocol. While the non-decaying nature of $\Dsl{\ensuremath{N_{\rm ch}}}$ beyond $\ensuremath{N_{\rm ch}}=6$ cannot be guaranteed, a scenario
in which the spin gap suddenly reverses at larger $\ensuremath{N_{\rm ch}}$ and starts decaying towards zero
seems highly unlikely, and we are not aware of any mechanism that would support such a reversal.
We find further evidence for this view with the significant enhancement of short-range $d_{xy}$-correlations as $\ensuremath{N_{\rm ch}}$
increases, which we discuss further in Sec.~\ref{subsec:corrs}.
For $V/t=0.5$ the situation is possibly more uncertain when comparing the $\Dsl{\ensuremath{L\rightarrow\infty},\ensuremath{N_{\rm ch}}}$
resulting from the two different extrapolation protocols. The deviations are noteworthy for $\ensuremath{N_{\rm ch}}=6$, but it is quite
possible that our heuristic model of the corrections (described in Sec.~\ref{subsec:truncerr}) is simply off in this case.
We note, however, that neither protocol is compatible with a steady decrease of the spin gap with
increasing $\ensuremath{N_{\rm ch}}$. And, as for $V/t=1$, a significant and consistent enhancement of short-range
$d_{xy}$ correlations at $\ensuremath{N_{\rm ch}}>2$ (see below) further supports this reading of the spin gap behaviour.
We thus can summarize that for $V/t=1$ the extended \mbox{U-V} model is highly likely
to have a finite spin gap for $\ensuremath{N_{\rm ch}}\rightarrow\infty$, while for $V/t=0.5$ this likelihood,
while reduced, remains high.
\subsection{Testing the alternative: could $\Dsl{\ensuremath{L\rightarrow\infty},\ensuremath{N_{\rm ch}}}$ scale to zero with $\ensuremath{N_{\rm ch}}$?}
\label{subsec:spingap_alt}
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig_altscaling.png}
\caption{
\textbf{(a)} and \textbf{(b)} Testing the viability of a MO/FL state in the 2D limit as an alternative explanation
of the spin gap data, by evaluating the stability of the scaling ansatz ${B[\ensuremath{N_{\rm ch}}]/L+C[\ensuremath{N_{\rm ch}}]/L^2}$ with $1/\ensuremath{N_{\rm ch}}$
for both $V/t=1$ (blue crosses) and $V/t=0.5$ (red crosses).
The strong growth of $C[\ensuremath{N_{\rm ch}}]$ indicates the instability of the quadratic term, making the MO/FL-hypothesis
an unsuitable one, except potentially at $V/t=0.5$ (c.f. text for details).
\textbf{(c)} and \textbf{(d)} Testing the viability of a nodal USC state in the 2D limit as an alternative explanation
of the spin gap data, by evaluating the stability of the scaling ansatz ${A[\ensuremath{N_{\rm ch}}]/L^{\alpha[\ensuremath{N_{\rm ch}}]}}$ with $1/\ensuremath{N_{\rm ch}}$
(symbols matching (a) and (b)). For $V/t=1$ there is a quick saturation to seemingly stationary values, indicating that
nodal USC cannot definitely be ruled out as an alternative in the 2D limit (c.f. text for details).
}%
\label{fig:altscalings}
\end{figure}
As discussed, the results for $\Dsl{L,\ensuremath{N_{\rm ch}}}$ summarized in Figs.~\ref{fig:scalgap}a,b appear to be most congruent with a linear scaling
in $1/L$ to a finite intersect at $1/L=0$, with weak oscillations surviving the averaging inherent to definition~(\ref{eq:Ds}) (c.f. also Appendix~\ref{app:scale}).
Returning to the discussion of Sec~\ref{subsec:obs}, the main follow-up question then must be: could there be alternate
fits \textit{that make physical sense}, associated with either nodal superconductivity, or with a Fermi liquid, or a magnetically ordered state?
The first possibility would be marked by $\Dsl{L,\ensuremath{N_{\rm ch}}}$ scaling to zero as $A/L^\alpha$, $\alpha<1$ in the \mbox{large-$\ensuremath{N_{\rm ch}}$} regime,
while for the latter two possibilities $\Dsl{L,\ensuremath{N_{\rm ch}}}$ should also scale to zero in the \mbox{large-$\ensuremath{N_{\rm ch}}$} regime, but as $B/L+C/L^2$. We allow for a quadratic correction term
to the ideal linear vanishing of $\Dsl{L,\ensuremath{N_{\rm ch}}}$ expected for a FL or MO state (c.f. Sec~\ref{subsec:obs}) because our results from Sec.~\ref{subsec:spingap} show that a
purely linear scaling to zero is impossible given our data.
Analyzing our data under these alternative scenarios requires recognizing that at $\ensuremath{N_{\rm ch}}=2$ we start in a regime
where we are guaranteed a finite spin gap~\cite{BookGiamarchi2003} (c.f. also Appendix~\ref{app:scale}). That makes it extremely likely
that some finite spin gap could survive for at least some larger $\ensuremath{N_{\rm ch}}$ - our linear-polynomial ansatz of the previous sections yields just that.
For this reason one cannot just blindly fit the data in Tab.~\ref{tab:Es_raw} with these two alternate scaling ansatzes; these ansatzes will always ``fit'' \textit{any} data like those in these tables, appearing linear within the fitting region via parameter-tuning, then bending down to zero outside of it. They easily ``fit'' even the $\ensuremath{N_{\rm ch}}=2$ data, a nonsensical result considering these systems are known to have a finite spin gap.
Thus, the only meaningful question is whether or not these fits converge on a physically consistent scenario as $1/\ensuremath{N_{\rm ch}}$ decreases.
For this purpose, we plot the fitting parameters of both ansatzes to the raw data of Tab.~\ref{tab:Es_raw}
against $1/\ensuremath{N_{\rm ch}}$ in Fig.~\ref{fig:altscalings}. For the FL/MO-state ansatz $B[\ensuremath{N_{\rm ch}}]/L+C[\ensuremath{N_{\rm ch}}]/L^2$ in Figs.~\ref{fig:altscalings}a and b, neither coefficient shows
discernible convergence: $B[\ensuremath{N_{\rm ch}}]$ drops monotonically, albeit slowly, while $C[\ensuremath{N_{\rm ch}}]$ increases monotonically, at a much faster rate.
Thus, within the range of accessible $\ensuremath{N_{\rm ch}}$-values, the quadratic correction term appears unstable. Rather, the rapid growth of $C[\ensuremath{N_{\rm ch}}]$ with growing $\ensuremath{N_{\rm ch}}$, coupled with the slow flattening of the linear slope controlled by $B[\ensuremath{N_{\rm ch}}]$, is characteristic of an ansatz that increasingly struggles to approximate a behaviour of $\Dsl{L,\ensuremath{N_{\rm ch}}}$ that is actually linear or near-linear in $1/L$.
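For reference, fit coefficients of this kind can be produced with a standard least-squares routine; the sketch below uses placeholder data in place of the raw values of Tab.~\ref{tab:Es_raw}:
\begin{verbatim}
# Fitting the two alternative scaling ansatzes to gap(L) data
# at fixed N_ch, producing the coefficients plotted against
# 1/N_ch (data arrays are illustrative placeholders).
import numpy as np
from scipy.optimize import curve_fit

def fl_mo(L, B, C):          # Fermi-liquid / magnetic-order
    return B / L + C / L**2

def nodal_usc(L, A, alpha):  # nodal unconventional SC
    return A / L**alpha

Ls  = np.array([32.0, 40.0, 48.0, 64.0])
gap = np.array([0.022, 0.019, 0.016, 0.014])

(B, C), _     = curve_fit(fl_mo, Ls, gap)
(A, alpha), _ = curve_fit(nodal_usc, Ls, gap, p0=(0.5, 0.8))
print(f"B={B:.3f}, C={C:.3f};  A={A:.3f}, alpha={alpha:.3f}")
\end{verbatim}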
The linear-polynomial ansatz was already treated in Sec.~\ref{subsec:spingap}, yielding a finite spin gap.
There is a motivated alternative, however: a near-linear ansatz $A[\ensuremath{N_{\rm ch}}]/L^{\alpha[\ensuremath{N_{\rm ch}}]}$ with $\alpha<1$, scaling to zero gap, would be suggested by earlier
theory, in contradiction to our result. These prior works were based on analytical perturbative RG and mean-field theory; for our 2D U-V model here,
a $d_{x^2-y^2}$-order has been predicted~\cite{Nickel2005,Nickel2006,Bourbonnais2011,Sedeki2012}. Any order other
than isotropic \mbox{$s$-wave} results in nodal lines intersecting with the Fermi surface (c.f. Fig.~\ref{fig:ordpar}).
These intersections result in gap nodes, isolated points in the Brillouin zone where the SC gap closes.
At these nodes zero-energy quasiparticles may thus be produced, according to the mean-field treatment of these systems, and
these theories thus predict an ungapped spin excitation spectrum. This nodal structure results
in the spin susceptibility, eq.~(\ref{eq:spinsuc}), scaling as $1/L^\alpha$, with $\alpha<1$~\cite{Sigrist1991,Hasegawa1996,Moriya2003,Shinagawa2007,Chubukov2008,Vladimirov2011}.
The results of fitting $A[\ensuremath{N_{\rm ch}}]/L^{\alpha[\ensuremath{N_{\rm ch}}]}$ to our spin-gap data, depicted in Figs.~\ref{fig:altscalings}c and d, then show a difference between $V/t=0.5$ and $V/t=1$ as $\ensuremath{N_{\rm ch}}$ increases. In the latter case both coefficients saturate quickly to nearly stable values. For $\alpha[\ensuremath{N_{\rm ch}}]$, this is crucially well below $1$. A ground state with nodal USC in the 2D regime could not be completely ruled out on this basis. To what extent this is an artefact of our data being confined to smaller $L$-values at larger $\ensuremath{N_{\rm ch}}$ must remain as a key question to be answered in future work.
For $V/t=0.5$ on the other hand, at $\ensuremath{N_{\rm ch}}\geq 4$ we cannot find saturation of the growing $\alpha[\ensuremath{N_{\rm ch}}]$ within the accessible $\ensuremath{N_{\rm ch}}$. Taken together with the linear polynomial fitting done in Sec.~\ref{subsec:spingap} and the behaviour of $C[\ensuremath{N_{\rm ch}}]$ in the FL/MO-scaling ansatz above (Fig.~\ref{fig:altscalings}b), it may not be possible to rule out that the $V/t=0.5$ data could move towards a fully linear scaling to zero in the large-$\ensuremath{N_{\rm ch}}$ limit. Three conjoined caveats apply, however: firstly, as $\ensuremath{N_{\rm ch}}$ is limited to $\leq 8$ here, it is clearly possible that $\alpha[\ensuremath{N_{\rm ch}}]$ stabilizes below $1$ at larger $\ensuremath{N_{\rm ch}}$. Secondly, the behaviour of $\alpha[\ensuremath{N_{\rm ch}}]$ with $\ensuremath{N_{\rm ch}}$ could easily be an artefact of the oscillatory remnant in $1/L$ that $\Dsl{L,\ensuremath{N_{\rm ch}}}$ retains despite the averaging, analogous to the discussion in Appendix~\ref{app:scale} for the case of a full second-order polynomial fit to the raw data. Thirdly, the results of the straight extrapolation protocol for the spin gap at $\ensuremath{L\rightarrow\infty}$ (Tab.~\ref{tab:spingaps}, Fig.~\ref{fig:gapNch}a) would point in the opposite direction.
We thus conclude that our data most strongly supports a fully gapped spin spectrum. As laid out in Appendix~\ref{app:scale}, the averaged spin gap $\Dsl{L,\ensuremath{N_{\rm ch}}}$ appears to exhibit a linear structure in $1/L$ with a superimposed weak oscillatory remnant, of unknown analytical form, with its amplitude slowly decreasing with $1/L$. It is to those weak downwards oscillations that the two alternate scaling ansatzes studied here are, erroneously, susceptible. A conservative reading of our results would still mandate that at $V/t=1$ we cannot completely rule out a spin excitation spectrum with nodal zeros, which would be compatible with the nodal USC systems described by mean-field theories. At the same time, within the confines of the $\ensuremath{N_{\rm ch}}$-values that we have treated in this work, there is no consistent scenario for a FL or MO state at $V/t=1$, due to the apparent instability of the quadratic correction as shown in Fig.~\ref{fig:altscalings}b.
For $V/t=0.5$ the situation is less clear. The upward growth of $\alpha[\ensuremath{N_{\rm ch}}]$ parametrizing the fit to a nodal USC-regime, combined with the instability of the quadratic correction term $C[\ensuremath{N_{\rm ch}}]$ in the fit to the FL/MO-regime opens up the possibility that there may not be enough data to reliably exclude any of the three regimes for this parameter, when viewed in conjunction with the outcome of the extrapolation plus estimates protocol for the linear-polynomial extrapolation (Fig.~\ref{fig:gapNch}a and Tab.~\ref{tab:spingaps}).
However, viewed together with the enhancement exclusive to the $d_{xy}$-correlation channel discussed in the next section, it turns out that either the nodal USC scenario or the fully spin-gapped one should be favored for $V/t=0.5$ as well.
\subsection{Correlation functions: pairing in the $d_{xy}$-channel as likely candidate-order in the 2D limit}\label{subsec:corrs}
\begin{table*}[t]
\begin{tabular}{ c || c | c || c | c || c | c || c ||}
& \multicolumn{2}{|c||}{ $\ensuremath{N_{\rm ch}}=2$ } & \multicolumn{2}{|c||}{ $\ensuremath{N_{\rm ch}}=4$ } & \multicolumn{2}{|c||}{ $\ensuremath{N_{\rm ch}}=6$ } & \multicolumn{1}{|c||}{ $\ensuremath{N_{\rm ch}}=8$ }\\ \hline
\mbox{short-range decay } & $V/t=0.5$ & $V/t=1$ & $V/t=0.5$ & $V/t=1$ & $V/t=0.5$ & $V/t=1$ & $V/t=0.5$ \\ \hline\hline
$\mbox{\Large\( \frac{\max_{|r-r_s|\leq 1}[d_{xy}(r)]}{d_{xy}(1)} \)}$ & $0.17$ & $0.15$ & \cellcolor{red!25} $1.08$ & \cellcolor{red!25} $0.55$ & \cellcolor{red!25} $0.67$ & \cellcolor{red!25} $0.79$ & \cellcolor{red!25} $1.05$ \\ \hline
$\mbox{\Large\( \frac{\max_{|r-r_s|\leq 1}[d_{x^2-y^2}(r)]}{d_{x^2-y^2}(1)} \)}$ & $0.09$ & $0.08$ & $0.08$ & $0.09$ & $0.09$ & $0.07$ & $0.09$ \\ \hline
$\mbox{\Large\( \frac{\max_{|r-r_s|\leq 1}[s(r)]}{s(1)} \)}$ & $0.97$ & $0.87$ & $0.22$ & $0.09$ & $0.10$ & $0.16$ & $0.14$ \\ \hline
$\mbox{\Large\( \frac{\max_{|r-r_s|\leq 1}[C(r)]}{C(1)} \)}$ & $0.06$ & $0.07$ & $0.06$ & $0.06$ & $0.06$ & $0.06$ & $0.06$ \\ \hline
\end{tabular}
\caption{Decay of correlations at short range, $r\approx r_s=4a$, in the effective 2D regime (c.f. main text and example shown in Fig.~\ref{fig:corrdecay}a). The strong enhancement in the $d_{xy}$-channel compared to all other channels for $\ensuremath{N_{\rm ch}}\geq 4$ is highlighted in red. }%
\label{tab:corrdecay}
\end{table*}
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig_corr_decay_normalized.png}
\caption{
\textbf{(a)} Correlation functions for $\ensuremath{N_{\rm ch}}=8$, $V/t=0.5$, normalized to their value at $r=1$. Dash-dotted lines show relative decay at first maximum within $r\lesssim r_s$ for $d_{xy}$ (red), $d_{x^2-y^2}$ (green), $s$ (blue) and $C$ (black). This behaviour of the normalized correlation functions is representative for all $\ensuremath{N_{\rm ch}}\geq 4$.
\textbf{(b)}~How the correlation functions $d_{x^2-y^2}$, $d_{xy}$ and $s(r)$, defined in eqs.~(\ref{eq:dx2y2corr})~-~(\ref{eq:ddcorr}), are evaluated on the two
central legs of the lattice.
}%
\label{fig:corrdecay}
\end{figure}
From the ground states, any desired correlation function between site ${\bf r}_1$ and site ${\bf r}_2$ in the lattice can in principle be computed. With the form-factor of the lattices we consider, strips of length $L$ and width $\ensuremath{N_{\rm ch}}$, where $L\gg\ensuremath{N_{\rm ch}}$, generally a cross-over of behaviours is to be expected with growing $r:=|{\bf r}_1-{\bf r}_2|$. As $\ensuremath{N_{\rm ch}}$ increases, at short distances the physics will become increasingly dominated by that of the 2D limit ($\ensuremath{N_{\rm ch}}\rightarrow\infty$). Conversely, at sufficiently large distances, any correlation function will exhibit the typical behaviour of a 1D-system, i.e. it will decay algebraically as a function of $r$ - as long as no discrete symmetry breaks spontaneously~\footnote{a transition to a CDW would happen in our systems eventually if $V/t$ were to be increased further}. This behaviour is generic to the ground state of any 1D quantum system, even one of finite width~\cite{BookGiamarchi2003}. The length-scale at which the cross-over between the two regimes takes place will increase as $\ensuremath{N_{\rm ch}}$ is raised, with the 1D-regime starting at larger and larger distances, until it disappears completely when $\ensuremath{N_{\rm ch}}\rightarrow\infty$.
Even with pDMRG we cannot access the full 2D regime for the time being, but we can get new and valuable insight into how the various potential channels for ordering of the \mbox{U-V} model in the 2D limit respond to the systematic increase of $\ensuremath{N_{\rm ch}}$. As we already evaluate the possibility of magnetic order or a Fermi liquid based on the scaling of $\Dsl{L,\ensuremath{N_{\rm ch}}}$, we limit ourselves to the main competitor-orders, the SC $d_{x^2-y^2}(r)$-, $d_{xy}(r)$-, $s(r)$- as well as the charge-density-wave channel. We compute the correlation functions~(\ref{eq:dx2y2corr})~-~(\ref{eq:ddcorr}) connected to each of these, as defined in Sec.~\ref{subsec:obs}.
We then specifically study the dependency of the short-range behaviour, $r \lesssim r_s:=\pi / \ensuremath{k_F}=4a$ of the above correlation functions on $\ensuremath{N_{\rm ch}}$. An example that is highly representative of the general behaviour for $\ensuremath{N_{\rm ch}}\geq 4$ is shown in Fig.~\ref{fig:corrdecay}: when each correlation function is normalized to its value at $r=1$, the first maximum within the short-distance quasi-2D regime decays strongly in all channels \textit{except} in the $d_{xy}$ channel. Table~\ref{tab:corrdecay}, which summarizes the short-range decay for all studied $\ensuremath{N_{\rm ch}}$-values, confirms this short-range enhancement as exclusive to the $d_{xy}$ channel. As is to be expected, once $r$ increases beyond the short-range regime, $d_{xy}(r)$, like all the other correlation functions, behaves according to 1D physics, i.e. it decays algebraically.
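The tabulated decay measure is simple to state in code; in the following sketch, \texttt{corr} stands for a placeholder array of correlation values at $r=1,\dots,R$:
\begin{verbatim}
# Short-range decay measure of Tab. IV: normalize each
# correlation function to its r = 1 value and take the maximum
# within one site of r_s = 4a.
import numpy as np

def short_range_ratio(corr, r_s=4):
    r = np.arange(1, len(corr) + 1)
    window = np.abs(r - r_s) <= 1
    return np.max(corr[window]) / corr[0]  # corr[0] is r = 1
\end{verbatim}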
We do not take the above short-range behaviour of these systems at $\ensuremath{N_{\rm ch}}\leq 8$ to constitute definite proof of $d_{xy}$-ordering in the full 2D-limit. At the same time, however, we observe that it would align with a simple order-of-magnitude estimate, namely that the diagonal singlet pairing of $d_{xy}$-order is energetically advantageous at strong coupling - naive perturbative analysis suggests that the gain in energy should be $\mathcal{O}(\ensuremath{t_\perp}^2/V)$ across a diagonal, while it is $\mathcal{O}(\ensuremath{t_\perp}^2/U)$ for singlet pairing across a rung in $d_{x^2-y^2}$ order. And of course, this would be in line with the high likelihood of a finite spin gap and the attendant elimination of competing MO or FL phases in these systems that we discussed in Secs.~\ref{subsec:spingap} and \ref{subsec:spingap_alt}; the open questions raised by a system having a finite spin gap jointly with SC $d_{xy}$-order are discussed next.
\section{Discussion and connections to experiment}\label{sec:disc}
Our results show a fully gapped spin excitation spectrum for finite-width strips of the 2D \mbox{U-V} model at the given
parameters, as evidenced by the finite ${\Dsl{\ensuremath{L\rightarrow\infty},\ensuremath{N_{\rm ch}}}}$ we find.
We then show that the behaviour of the gap with the strip-width is most consistent with a fully-gapped spin
spectrum also in the 2D limit when taking the apparent structure of $\Dsl{L,\ensuremath{N_{\rm ch}}}$ as consisting of a linear part
with a superimposed weak oscillatory remnant into account (c.f. Sec.~\ref{subsec:spingap_alt} and Appendix~\ref{app:scale}).
At the same time, the behaviour of the various correlation functions at short range
with increasing width, indicative of the behaviour of the 2D system, lends support to pairing happening predominantly
in the $d_{xy}$-channel, rather than in the extended $s$- or $d_{x^2-y^2}$-channel.
Such a conclusion, however, is at variance with existing analytical descriptions of USC systems, such as those obtained by combining
the results of perturbative functional RG with mean-field descriptions. Besides these methods predicting $d_{x^2-y^2}$
symmetry for the order parameter instead~\cite{Nickel2005,Nickel2006,Bourbonnais2011,Sedeki2012}, the more far-reaching difference to our findings lies in the effects these theories predict from the nodal lines,
which any plausible order parameter will exhibit (these originate from the system's attempt to minimize the Coulomb repulsion).
Specifically, they forecast that the nodal lines' intersections with the Fermi surface will always create point nodes for
gapless quasiparticle excitations (c.f. Fig.~\ref{fig:ordpar}). In the mean-field theory, these point nodes in turn necessitate an ungapped spin excitation spectrum.
There are two ways to look at the apparent discrepancy between this mean-field treatment and our findings of a finite spin gap in Sec.~\ref{subsec:spingap}.
We considered the first way in Sec.~\ref{subsec:spingap_alt}, observing that disregarding the apparent structure of $\Dsl{L,\ensuremath{N_{\rm ch}}}$ (linear with oscillatory remnant, c.f. Appendix~\ref{app:scale}) allows
for sublinear scaling of $\Dsl{L,\ensuremath{N_{\rm ch}}}$ to be a physically consistent fit within the accessible $\ensuremath{N_{\rm ch}}$-range, at least for $V/t=1$. As explained there, this scaling of
$\Dsl{L,\ensuremath{N_{\rm ch}}}$, taken together with the boost to the $d_{xy}$-correlations from Sec.~\ref{subsec:corrs}, would be consistent with the prediction of nodal USC derived from mean-field theory; only the exact symmetry of the order parameter would be different.
The second way is to consider the structure of the $\Dsl{L,\ensuremath{N_{\rm ch}}}$-data as linear + weak oscillations. Then one has to reconcile the finite spin gap in the 2D limit that ultimately results
from our numerics at strong coupling with the weak-coupling analytical theory. Taking stock of the experimental situation allows just such a reconciliation. Firstly, we notice that our result of dominant $d_{xy}$ correlations in the 2D regime of the strips' ground state
would be in line with the general argument that it is energetically advantageous for the system to minimize the number of nodes, which $d_{xy}$ would achieve better than a
$d_{x^2-y^2}$-order given the Fermi-surface topology (c.f. Fig.~\ref{fig:ordpar}). Secondly, measurements on actual USC materials, which are certainly in the
strongly interacting regime, provide evidence for USC order with zero-gap point nodes that nonetheless exhibits \textit{fully gapped} spin excitations at the same time. This has been demonstrated for LSCO~\cite{Lake1999} close to and away from optimal doping, as well as for YBCO~\cite{Dai2001} over a range of doping. To our knowledge, this fundamental discrepancy between the standard mean-field descriptions of USC models with zero-gap nodes and actual measurements has so far neither been theoretically explained nor even replicated. Thirdly, strongly underdoped yet still superconducting LSCO
has been shown to have not just gapped spin excitations but to be fully gapped overall~\cite{Razzoli2013}. The possibilities discussed for this observation are either another phase
coexisting with USC order or topological superconductivity of the chiral $d_{x^2-y^2}\pm id_{xy}$-type, which the authors show to fit the measured gap. Fourthly, the loss of
$D_4$ symmetry in the anisotropic 2D lattice results in a mixing of orbitals, such as $d_{xy}$ and $g_{xy(x^2-y^2)}$ (c.f. Fig.~\ref{fig:ordpar})~\cite{Cho2013}. When viewed in light of
the numerous proposals to explain USC phases at low or zero temperature via mixing of different order symmetries, it cannot be ruled out that the remaining nodes resulting from a pure
$d_{xy}+\alpha g_{xy(x^2-y^2)}$ orbital, sketched in Fig.~\ref{fig:ordpar}, are eliminated when admixed with yet other orders, even without invoking the possibility of topological superconductivity.
Given the available data we cannot currently distinguish among these possibilities, but provide them to make it plain
that the most consistent interpretation of our results, a fully spin-gapped USC order (with dominant $d_{xy}$ symmetry), has clear precedents,
the predictions of weak-coupling analytics notwithstanding.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{Fig_orderpara.png}
\caption{
Fermi surface of the noninteracting lattice electrons for the 2D \mbox{U-V} model (solid black lines) and
possible symmetry of the order parameter, shown as a generic mixture between $d_{xy}$ and $g_{xy(x^2-y^2)}$
(see text for details). This symmetry minimizes, but does not eliminate intersections of nodal lines and Fermi surface
(red circles), where the gap of USC order would exhibit zero-nodes in standard mean field treatments. From this, these theories
predict a non-gapped spectrum for spin excitations, i.e. zero spin gap.
}
\label{fig:ordpar}
\end{figure}
\section{Conclusion}\label{sec:out}
We have performed large-scale calculations with parallel DMRG to obtain ground states
of the 2D \mbox{U-V} Hubbard model, the presumed minimal model of unconventional superconductivity
in the organic Bechgaard and Fabre salts. The use of parallel DMRG here allows us to study large systems at sufficient
accuracy to reliably compute ground state energy differences between different quantum number sectors, i.e. the spin gap.
Considering strips of increasing width up to $8$ chains, we have computed
this model's spin gap as a function of strip-length and -width, averaged so as to strongly reduce oscillations that
would preclude extrapolations to infinite strip-length. After extrapolating the ground state energies in the discarded weight,
we arrived at data consistent with a finite spin gap in the thermodynamic limit (i.e. at infinite length). Taking the apparent structure
of the computed spin gaps as a function of inverse length into account, we conclude that the spin gap of the infinite strips
is non-decreasing or even increasing with the strip width. A finite spin gap in the full 2D limit is therefore the most likely
possibility, thus making the case for singlet pairing. Disregarding key features of the computed spin gaps (i.e. their decomposition into
a linear part and a weak oscillatory remnant) still yields scaling results at most consistent with nodal unconventional superconductivity,
where the spin excitation spectrum would touch zero at isolated points. Even then, however, our results do not
admit a physically consistent scaling solution in line with a Fermi liquid or magnetically ordered states.
As we steer away from model parameters that could result in charge density wave order of the ground state, the remaining
candidate by exclusion is unconventional (i.e. repulsively mediated) superconducting order of singlet type. This reading is further supported by the
enhancement of short range correlations (i.e. where the strips resemble a 2D system) with the strips' width taking place only in the $d_{xy}$-channel.
These results suggest several lines for further inquiry. One would be to test for the possibility of topological $d_{x^2-y^2}\pm id_{xy}$ superconductivity
that is raised by the presence of the spin gap at strong coupling.
Another would be to move closer to the parameter regime of the Bechgaard and Fabre salts, i.e. to increase both $U/t$ and $V/t$ further yet. This will have to entail
a careful study of the enlarged parameter space for the onset of the Mott-insulating state, which we discussed in Sec.~\ref{subsec:parms} for the case $\ensuremath{N_{\rm ch}}=2$ and whose behaviour at $\ensuremath{N_{\rm ch}}>2$ is currently unclear.
\section{Acknowledgements}\label{sec:ack}
A.K. would like to thank Ronny Thomale for useful discussions.
This work was supported in part by the Swiss NSF under MaNEP and Division II, and has further received funding through an ERC Starting Grant from the European
Union's Horizon 2020 research and innovation programme under grant agreement No. 758935. The development
of pDMRG was made possible by support of the Swiss government under its HP2C initiative.
Carrying out the large-scale ground state calculations was made possible through two grants of compute time
on the ``Piz Daint'' cluster of the Swiss National Supercomputing Center
(CSCS), proposal IDs 20131011-18256183-84 and 20141010-173654696-445. The authors
would like to thank CSCS for supporting this work. Calculations for $\ensuremath{N_{\rm ch}}=1$ and for some values at $\ensuremath{N_{\rm ch}}=2$
were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the PDC, KTH Stockholm.
\section{Introduction}
At least since Ap\'ery's proof of the irrationality of $\zeta(3)$ there has
been a great deal of interest in identities such as
\begin{equation}\label{1.1}
\zeta(3) = \frac{5}{2}\sum_{k=1}^\infty\frac{(-1)^{k-1}}{\binom{2k}{k}k^3}.
\end{equation}
This identity was actually first obtained by Markov in 1890; see, e.g.,
\cite{KS}, which provides a historical perspective. Following Ap\'ery's work,
numerous other related identities were obtained or rediscovered, some of them
of a more general nature. For instance, Koecher \cite{Ko} used the generating
function
\begin{equation}\label{1.2}
\sum_{k=0}^\infty\zeta(2k+3)x^{2k} = \sum_{n=1}^\infty\frac{1}{n(n^2-x^2)}
\qquad (|x|<1)
\end{equation}
as motivation to prove the identity
\begin{equation}\label{1.3}
\sum_{n=1}^\infty\frac{1}{n(n^2-x^2)}
= \frac{1}{2}\sum_{k=1}^\infty\frac{(-1)^{k-1}}{\binom{2k}{k}k^3}\cdot
\frac{5k^2-x^2}{k^2-x^2}\prod_{m=1}^{k-1}\big(1-\frac{x^2}{m^2}\big).
\end{equation}
The generating function \eqref{1.2} is easy to obtain by expanding each summand
on the right as a geometric series and changing the order of summation.
Koecher obtained \eqref{1.1} by expanding the right-hand side of \eqref{1.3}
in powers of $x$ and equating the constant coefficients in \eqref{1.2} and
\eqref{1.3}. Similarly, by equating coefficients of $x^2$, he obtained
\begin{equation}\label{1.4}
\zeta(5) = 2\sum_{k=1}^\infty\frac{(-1)^{k-1}}{\binom{2k}{k}k^5}
-\frac{5}{2}\sum_{k=1}^\infty\frac{(-1)^{k-1}H_{k-1}^{(2)}}{\binom{2k}{k}k^3},
\end{equation}
where $H_n^{(2)}=\sum_{j=1}^nj^{-2}$ is a generalized harmonic number.
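Given the rapid convergence of the central binomial sums, both \eqref{1.1} and \eqref{1.4} are easily verified numerically to high precision, for instance in Python with \texttt{mpmath} (truncation at $k=200$ is ample):
\begin{verbatim}
# Numerical sanity check of (1.1) and (1.4); both printed
# differences should vanish to working precision.
from mpmath import mp, binomial, zeta, mpf

mp.dps = 30
S = lambda p: sum((-1)**(k-1) / (binomial(2*k, k) * mpf(k)**p)
                  for k in range(1, 201))
H2 = lambda n: sum(mpf(1)/j**2 for j in range(1, n+1))
T = sum((-1)**(k-1) * H2(k-1) / (binomial(2*k, k) * mpf(k)**3)
        for k in range(1, 201))

print(zeta(3) - mpf(5)/2 * S(3))           # ~ 0
print(zeta(5) - (2*S(5) - mpf(5)/2 * T))   # ~ 0
\end{verbatim}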
Koecher also gave a corresponding formula for $\zeta(7)$, and it is clear that
such identities can be obtained from \eqref{1.2} and \eqref{1.3} for any
further odd argument of the zeta function. It should also be noted that
independently, and using a different method, Leshchiner \cite{Le} obtained the
same expanded identities as did Koecher.
The current paper is based on the observation that Koecher's proof is
``structural", that is, its main ideas can be generalized. It is the purpose
of this paper to prove such a generalization, and to consider some special
cases. These, in turn, will lead to various identities or classes of identities,
some of which are new.
This paper is structured as follows. After we prove our main result in
Section~2, we derive two different instances in Section~3, one of which includes
Koecher's original result \eqref{1.3} as a special case. The remainder of the
paper is devoted to applications of these results: In Section~4 we
obtain a general result on alternating Euler sums, and in Section~5 we derive
an infinite class of Markov-Ap\'ery type formulas which generalize the
identity \eqref{1.1}. Finally, in Section~6, we derive a class of identities
for positive even powers of $\pi$.
\section{The main result}
The basic idea in our approach is to replace the sequence of positive integer
squares, which plays an important role in Koecher's proof, by an arbitrary
increasing sequence
\begin{equation}\label{2.1}
{\bf z} := (z_1, z_2, z_3,\ldots),\quad 0 < z_1 < z_2 <\cdots
\end{equation}
of positive real numbers, with the additional condition
\begin{equation}\label{2.2}
z_n \geq \varepsilon\,n, \quad n\geq 1,\quad\hbox{for some}\quad \varepsilon >0.
\end{equation}
Extending Koecher's notation in \cite{Ko}, we denote
\begin{equation}\label{2.3}
(n;k)_{{\bf z},\alpha} := z_n^\alpha\prod_{i=1}^k(z_n-z_i),\quad n\geq 1,\quad
k\geq 0,
\end{equation}
where $n$ and $k$ are integers and $\alpha\geq 0$ is a real constant. We then
define the zeta function belonging to the sequence $\bf z$ by
\begin{equation}\label{2.4}
\zeta_{\bf z}(s) := \sum_{n=1}^\infty\frac{1}{z_n^s},\quad {\rm Re}(s)>1.
\end{equation}
Absolute convergence of this series for ${\rm Re}(s)>1$ is guaranteed by the
condition \eqref{2.2}. It is now easy to derive an analogue of the generating
function \eqref{1.2}.
\begin{lemma}\label{lem:2.1}
Let $\bf z$ be a sequence satisfying \eqref{2.1} and \eqref{2.2}, and
$\alpha\in{\mathbb R}$ be such that either $\alpha>0$, or $\alpha=0$ and
$\sum z_n^{-1}$ converges. Then
\begin{equation}\label{2.5}
\sum_{n=1}^\infty\frac{1}{(z_n-x)z_n^\alpha}
= \sum_{k=0}^\infty\zeta_{\bf z}(k+\alpha+1)x^k,\quad |x| <\min\{1, z_1\}.
\end{equation}
\end{lemma}
\begin{proof}
By the condition on $\alpha$, and since $|x|<\min\{1, z_1\}$, both sides of
\eqref{2.5} are
absolutely convergent, and the following series operations are legitimate.
Beginning with the left-hand side and using geometric series expansions, we get
\begin{align*}
\sum_{n=1}^\infty\frac{1}{(z_n-x)z_n^\alpha}
&=\sum_{n=1}^\infty\frac{1}{z_n^{\alpha+1}(1-\frac{x}{z_n})}
=\sum_{n=1}^\infty\frac{1}{z_n^{\alpha+1}}\sum_{k=0}^\infty\frac{x^k}{z_n^k}\\
&=\sum_{k=0}^\infty\bigg(\sum_{n=1}^\infty\frac{1}{z_n^{k+\alpha+1}}\bigg)x^k.
\end{align*}
This, with \eqref{2.4}, gives \eqref{2.5} as required.
\end{proof}
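As a quick numerical illustration of Lemma~\ref{lem:2.1}, one may take $z_n=n^2$ and $\alpha=1/2$, so that $\zeta_{\bf z}(s)=\zeta(2s)$ and \eqref{2.5} reproduces the generating function \eqref{1.2} with $x^2$ renamed to $x$; the following sketch confirms the identity numerically:
\begin{verbatim}
# Check of (2.5) for z_n = n^2, alpha = 1/2, where
# zeta_z(k + 3/2) = zeta(2k + 3); the difference ~ 0.
from mpmath import mp, zeta, mpf, nsum, inf

mp.dps = 25
x = mpf("0.3")
lhs = nsum(lambda n: 1 / ((n**2 - x) * n), [1, inf])
rhs = nsum(lambda k: zeta(2*k + 3) * x**k, [0, inf])
print(lhs - rhs)
\end{verbatim}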
We are now ready to state and prove our main result. The proof follows
Koecher's ideas in \cite{Ko}.
\begin{theorem}\label{thm:2.2}
Let $\bf z$ and $\alpha$ be as in Lemma~\ref{lem:2.1}, and suppose that
\begin{equation}\label{2.5a}
P_N:=\frac{z_1+z_N}{z_{N+1}-z_N}\cdot\frac{z_1+z_{N-1}}{z_{N+1}-z_{N-1}}\cdots
\frac{z_1+z_1}{z_{N+1}-z_1},\qquad N=1, 2,\ldots,
\end{equation}
is a bounded sequence. If we set
\begin{equation}\label{2.6}
\gamma_k(x) := \frac{1}{z_k-x}\cdot\frac{1}{(k;k-1)_{{\bf z},\alpha}}
+ \sum_{n=k+1}^\infty\frac{1}{(n;k)_{{\bf z},\alpha}},
\end{equation}
then for all complex $x$ with $|x|<z_1$ we have
\begin{equation}\label{2.7}
\sum_{n=1}^\infty\frac{1}{(z_n-x)z_n^\alpha}
= \sum_{k=1}^\infty\gamma_k(x)\prod_{\ell=1}^{k-1}(x-z_\ell).
\end{equation}
\end{theorem}
\begin{proof}
For greater ease of notation, we suppress the subscripts ${\bf z}, \alpha$.
We consider the sequence of functions $\varphi_k(x)$, defined by the series
\begin{equation}\label{2.8}
\varphi_k(x) := \sum_{n=k+1}^\infty\frac{1}{(z_n-x)(n;k)},\quad k=0, 1, 2,\ldots
\end{equation}
Since $(n;0)=z_n^\alpha$ by \eqref{2.3}, we have
\begin{equation}\label{2.9}
\varphi_0(x) := \sum_{n=1}^\infty\frac{1}{(z_n-x)z_n^\alpha},
\end{equation}
which is the left-hand side of \eqref{2.7}. Now we claim that the sequence
$\varphi_k(x)$ satisfies the recurrence relation
\begin{equation}\label{2.10}
\varphi_{k-1}(x)-(x-z_k)\varphi_k(x) = \frac{1}{(z_k-x)(k;k-1)}
+\sum_{n=k+1}^\infty\frac{1}{(n;k)},\quad k\geq 1.
\end{equation}
To prove this identity, we first note that from \eqref{2.3} we get
\begin{equation}\label{2.11}
\frac{1}{(n;k-1)} = \frac{z_n-z_k}{(n;k)}.
\end{equation}
Now we have by \eqref{2.8},
\begin{align*}
\varphi_{k-1}(x)&-(x-z_k)\varphi_k(x)
=\sum_{n=k}^\infty\frac{1}{(z_n-x)(n;k-1)}
+(z_k-x)\sum_{n=k+1}^\infty\frac{1}{(z_n-x)(n;k)}\\
&= \frac{1}{(z_k-x)(k;k-1)}+\sum_{n=k+1}^\infty\frac{1}{(z_n-x)(n;k-1)}\\
&\quad+(z_k-x)\sum_{n=k+1}^\infty\frac{1}{(z_n-x)(n;k)}\\
&= \frac{1}{(z_k-x)(k;k-1)}
+\sum_{n=k+1}^\infty\frac{1}{(z_n-x)}\bigg(\frac{1}{(n;k-1)}
+\frac{z_k-x}{(n;k)}\bigg).
\end{align*}
With \eqref{2.11} we get
\[
\frac{1}{(n;k-1)}+\frac{z_k-x}{(n;k)} = \frac{z_n-z_k+z_k-x}{(n;k)}
= \frac{z_n-x}{(n;k)},
\]
which then leads to \eqref{2.10}, as claimed.
To solve the recurrence \eqref{2.10}, we denote its right-hand side by
$\gamma_k(x)$, as in \eqref{2.6}. For each $k\geq 1$, we multiply both sides of
\eqref{2.10} by $(x-z_1)\cdots(x-z_{k-1})$ (an empty product when $k=1$) and add the resulting identities for
$k=1,2,\ldots,N$, for some positive integer $N$. This is a telescoping sum,
giving
\begin{equation}\label{2.12}
\varphi_0(x)-(x-z_1)\cdots(x-z_N)\varphi_N(x)
= \sum_{k=1}^N\gamma_k(x)\prod_{\ell=1}^{k-1}(x-z_\ell).
\end{equation}
With \eqref{2.8} and \eqref{2.3} we get
\begin{equation}\label{2.13}
(x-z_1)\cdots(x-z_N)\varphi_N(x)
= \sum_{n=N+1}^{\infty}\frac{1}{(z_n-x)z_n^\alpha}\cdot
\frac{(x-z_1)\cdots(x-z_N)}{(z_n-z_1)\cdots(z_n-z_N)}.
\end{equation}
Using the fact that $|x|<z_1$, we get the estimate
\[
\left|\frac{(x-z_1)\cdots(x-z_N)}{(z_n-z_1)\cdots(z_n-z_N)}\right|
\leq \frac{(z_1+z_1)\cdots(z_1+z_N)}{(z_{N+1}-z_1)\cdots(z_{N+1}-z_N)},
\]
where in the denominator we have used the fact that $z_n\geq z_{N+1}$.
By the condition \eqref{2.5a}, the large fraction on the right
of \eqref{2.13} is bounded, and due to the
conditions on $\alpha$, the series on the right converges.
Therefore
\[
\lim_{N\rightarrow\infty}(x-z_1)\cdots(x-z_N)\varphi_N(x) = 0,
\]
and this means that \eqref{2.12}, with \eqref{2.9}, implies \eqref{2.7} as
$N\rightarrow\infty$. The proof is now complete.
\end{proof}
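To see Theorem~\ref{thm:2.2} in action, here is a small numerical experiment (ours; a rough Python sketch with truncated series, not part of the proof) for $z_n=n^2$, $\alpha=1/2$ and $x=0.3$.
\begin{verbatim}
# Sanity check of (2.7) with z_n = n^2, alpha = 1/2, x = 0.3.
from math import prod

x, N, K, M = 0.3, 10**5, 20, 2000

def nk(n, k):   # (n;k) of (2.3) for z_n = n^2, alpha = 1/2
    return n * prod(n * n - i * i for i in range(1, k + 1))

lhs = sum(1.0 / ((n * n - x) * n) for n in range(1, N))
rhs = 0.0
for k in range(1, K + 1):
    gamma = 1.0 / ((k * k - x) * nk(k, k - 1)) \
            + sum(1.0 / nk(n, k) for n in range(k + 1, k + M))
    rhs += gamma * prod(x - l * l for l in range(1, k))
print(lhs, rhs)   # agree to ~6 decimal places
\end{verbatim}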
\section{Two particular cases}
In this paper we will mainly consider two instances of Theorem~\ref{thm:2.2},
which we state as further theorems. The first one will have Koecher's original
result as a special case.
\begin{theorem}\label{thm:2.3}
Let the sequence ${\bf z}$ be given by
\begin{equation}\label{2.14}
z_n = (n+c)^\beta+d,\qquad c>-1,\; d\geq 0,\; \beta>1,
\end{equation}
for all integers $n\geq 1$. If $\alpha\geq 0$ and $\gamma_k(x)$ is as defined
in \eqref{2.6}, then for all complex $x$ with $|x|<\min\{1,z_1\}$ we have
\begin{equation}\label{2.15}
\sum_{m=0}^\infty\zeta_{\bf z}(m+\alpha+1)x^m
= \sum_{k=1}^\infty\gamma_k(x)\prod_{\ell=1}^{k-1}(x-z_\ell).
\end{equation}
\end{theorem}
\begin{proof}
The sequence ${\bf z}$ in \eqref{2.14} clearly satisfies \eqref{2.1} and
\eqref{2.2}. It remains to verify that the sequence $P_N$ in \eqref{2.5a} is
bounded. We first consider the denominator and note that
\[
z_{N+1}-z_{N+1-j}=(N+1+c)^\beta-(N+1+c-j)^\beta,\qquad j=1,2,\ldots,N.
\]
The Mean Value Theorem gives us
\[
\frac{(N+1+c)^\beta-(N+1+c-j)^\beta}{j} = \beta x_j^{\beta-1},\qquad
N+1+c-j < x_j < N+1+c,
\]
so that
\[
z_{N+1}-z_{N+1-j} > j\,\beta(N+1+c-j)^{\beta-1},\qquad j=1,2,\ldots,N.
\]
Hence the denominator of \eqref{2.5a} is larger than
\begin{align*}
N!\beta^N&\big((N+c)(N-1+c)\cdots(1+c)\big)^{\beta-1} \\
&=\frac{\Gamma(N+1)\Gamma(c+1)\beta^N}{\Gamma(N+1+c)}
\big((N+c)(N-1+c)\cdots(1+c)\big)^{\beta},
\end{align*}
and with \eqref{2.14} and \eqref{2.5a} we get, with $\tilde{d}:=d+z_1$,
\begin{equation}\label{2.16}
P_N < \frac{1}{\Gamma(c+1)}\cdot\frac{\Gamma(N+1+c)}{\Gamma(N+1)\beta^N}
\cdot\frac{(N+c)^\beta+\tilde{d}}{(N+c)^\beta}
\cdot\frac{(N-1+c)^\beta+\tilde{d}}{(N-1+c)^\beta}
\cdots\frac{(1+c)^\beta+\tilde{d}}{(1+c)^\beta}.
\end{equation}
We now consider two different limits for the right-hand side of \eqref{2.16}.
First,
\begin{align*}
\lim_{N\to\infty}\frac{\Gamma(N+1+c)}{\Gamma(N+1)\beta^N}
&= \lim_{N\to\infty}\left(\frac{\Gamma(N+c)}{\Gamma(N)\cdot N^c}
\cdot\frac{N^c}{\beta^{N-1}}\right) \\
&= \lim_{N\to\infty}\frac{\Gamma(N+c)}{\Gamma(N)\cdot N^c}
\cdot\lim_{N\to\infty}\frac{N^c}{\beta^{N-1}} = 0
\end{align*}
since, by a well-known property of the gamma function, the first limit on the
right-hand side is 1, while the second limit is 0 for any real $c>-1$ since
$\beta>1$.
Next, the remaining fractions on the right of \eqref{2.16} can be rewritten,
in reverse order, as
\[
\bigg(1+\frac{\tilde{d}}{(1+c)^\beta}\bigg)
\bigg(1+\frac{\tilde{d}}{(2+c)^\beta}\bigg)\cdots
\bigg(1+\frac{\tilde{d}}{(N+c)^\beta}\bigg).
\]
The limit of this expression, as $N\to\infty$, is an infinite product which
converges since
\[
\sum_{j=1}^\infty\frac{\tilde{d}}{(j+c)^\beta},\qquad \beta>1, c>-1,
\]
is a convergent series. Hence the limit of the right of \eqref{2.16} is 0,
as $N\to\infty$. This, in turn means that $P_N$ is a bounded sequence.
All conditions of Theorem~\ref{thm:2.2} are therefore satisfied, and
\eqref{2.15} follows from \eqref{2.5} and \eqref{2.7}.
\end{proof}
For applying Theorem~\ref{thm:2.2} or Theorem~\ref{thm:2.3}, the infinite
series on the right of \eqref{2.6} is usually the most difficult part to
evaluate. However, the evaluation required for deriving Koecher's result
\eqref{1.3} is quite straightforward, and is given in the following lemma. We
state a more general version, which will be useful later in this paper.
\begin{lemma}\label{lem:2.4}
Let $(n;k)=(n+r+k)(n+r+k-1)\cdots(n+r-k)$, where $r\geq 0$ is an integer. Then for each $k\geq 1$ we have
\begin{equation}\label{2.17}
\sum_{n=k+1}^\infty\frac{1}{(n;k)} = \frac{r!}{2k(2k+r)!}.
\end{equation}
\end{lemma}
\begin{proof}
The following partial fraction expansion is easy to find and to verify:
\begin{align*}
&\frac{1}{(n+r+k)(n+r+k-1)\cdots(n+r-k)} \\
&\qquad =\frac{1/2k}{(n+r+k-1)\cdots(n+r-k)}
-\frac{1/2k}{(n+r+k)\cdots(n+r-k+1)}.
\end{align*}
Summing this over all $n\geq k+1$ gives a telescoping series on the right, with
only the first term on the right remaining for $n=k+1$. That is, we are left
with
\[
\frac{1}{2k}\cdot\frac{1}{(2k+r)(2k+r-1)\cdots(r+1)} = \frac{r!}{2k(2k+r)!},
\]
as claimed.
\end{proof}
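A direct numerical confirmation of \eqref{2.17} (our own quick check in Python, with the series truncated at $n=5000$) is immediate:
\begin{verbatim}
# Check of (2.17): sum_{n>k} 1/(n;k) = r!/(2k(2k+r)!).
from math import factorial, prod

for k in (1, 2, 3):
    for r in (0, 1, 4):
        s = sum(1.0 / prod(n + r + k - i for i in range(2 * k + 1))
                for n in range(k + 1, 5000))
        print(k, r, s, factorial(r) / (2 * k * factorial(2 * k + r)))
\end{verbatim}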
To obtain Koecher's identity \eqref{1.3}, we set $c=d=0$ and $\beta=2$ in
\eqref{2.14}, so that $z_n=n^2$. Furthermore, we replace $x$ by $x^2$ and set
$\alpha=1/2$; then by \eqref{2.4} with $r=0$ the left-hand side of \eqref{2.15}
becomes the left-hand side of \eqref{1.2}. Next, by \eqref{2.3} we have
\[
(n;k)_{{\bf z},1/2} = (n+k)\cdots(n+1)n(n-1)\cdots(n-k)
\]
and $(k;k-1)_{{\bf z},1/2}=(2k-1)!$. Hence with \eqref{2.6} and \eqref{2.17}
we have
\[
\gamma_k(x^2) = \frac{1}{k^2-x^2}\cdot\frac{1}{(2k-1)!}+\frac{1}{2k(2k)!}
= \frac{1}{2k(2k)!}\frac{5k^2-x^2}{k^2-x^2},
\]
and the right-hand side of \eqref{2.15} becomes
\[
\sum_{k=1}^\infty\frac{1}{2k(2k)!}\frac{5k^2-x^2}{k^2-x^2}(-1)^{k-1}(k-1)!^2
\prod_{\ell=1}^{k-1}\bigg(1-\frac{x^2}{\ell^2}\bigg).
\]
This is the same as the right-hand side of \eqref{1.3}, as claimed.
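Setting $x=0$, the series reduces to $\frac{5}{2}\sum_{k\geq 1}\frac{(-1)^{k-1}}{k^3\binom{2k}{k}}$, that is, the Markov-Ap\'ery identity \eqref{1.1}; a few lines of Python (our own sanity check, not part of the derivation) confirm this numerically.
\begin{verbatim}
# The x = 0 case: zeta(3) = (5/2) sum (-1)^(k-1) / (k^3 binom(2k,k)).
from math import comb

apery = 2.5 * sum((-1) ** (k - 1) / (k**3 * comb(2 * k, k))
                  for k in range(1, 40))
zeta3 = sum(1.0 / n**3 for n in range(1, 10**6))
print(apery, zeta3)   # both 1.2020569031...
\end{verbatim}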
\medskip
Since the proof of Theorem~\ref{thm:2.3} required $\beta > 1$, the situation in
the case $\beta=1$ in \eqref{2.14} will be quite different. The next result
deals with this case.
\begin{theorem}\label{thm:2.5}
Let the sequence ${\bf z}$ be given by $z_n = n+c$ for all integers $n\geq 1$,
where $c>-1$ is a real constant. If $\alpha>\max\{0,c\}$ and $\gamma_k(x)$ is
as defined in \eqref{2.6}, then for all complex $x$ with $|x|\leq\varepsilon$,
where $\varepsilon>0$ is sufficiently small, we have
\begin{equation}\label{2.18}
\sum_{m=0}^\infty\zeta_{\bf z}(m+\alpha+1)x^m
= \sum_{k=1}^\infty\gamma_k(x)\prod_{\ell=1}^{k-1}(x-z_\ell).
\end{equation}
\end{theorem}
\begin{proof}
The sequence ${\bf z}$ once again satisfies \eqref{2.1} and \eqref{2.2}. By
the proof of Theorem~\ref{thm:2.2} we are done if we can show that the
right-hand side of \eqref{2.13} approaches 0 as $N\to\infty$. To do so, we
set $z_n=n+c$, and using $|x|\leq\varepsilon$ and $n\geq N+1$, we get
\begin{align}
\left|\frac{(x-z_1)\cdots(x-z_N)}{(z_n-z_1)\cdots(z_n-z_N)}\right|
& \leq \frac{(1+c+\varepsilon)\cdots(N+c+\varepsilon)}{N!}\label{2.19} \\
& =\frac{\Gamma(N+1+c+\varepsilon)}{\Gamma(N+1)\Gamma(1+c+\varepsilon)}.\nonumber
\end{align}
Since
\[
\lim_{N\to\infty}\frac{\Gamma(N+1+c+\varepsilon)}{\Gamma(N+1)(N+1)^{c+\varepsilon}} = 1
\]
by a well-known property of the gamma function, we have
\begin{equation}\label{2.20}
\frac{\Gamma(N+1+c+\varepsilon)}{\Gamma(N+1)\Gamma(1+c+\varepsilon)}
\sim \frac{1}{\Gamma(1+c+\varepsilon)}\cdot N^{c+\varepsilon}.
\end{equation}
On the other hand,
\[
\sum_{n=N+1}^{\infty}\frac{1}{(z_n-x)z_n^\alpha}
\leq \sum_{n=N+1}^{\infty}\frac{1}{(n+c-\varepsilon)(n+c)^\alpha}
\leq \delta\sum_{n=N+1}^{\infty}\frac{1}{(n+c)^{\alpha+1}},
\]
for some constant $\delta>1$. Now, using an integral estimate, we get for
$\alpha>0$,
\begin{equation}\label{2.21}
\sum_{n=N+1}^{\infty}\frac{1}{(z_n-x)z_n^\alpha}
< \delta\int_N^\infty\frac{dx}{(x+c)^{\alpha+1}}
= \frac{\delta}{\alpha}\cdot\frac{1}{(N+c)^\alpha}.
\end{equation}
Combining \eqref{2.19}--\eqref{2.21}, we see that the right-hand side of
\eqref{2.13} is dominated by a sequence that is asymptotically equivalent to
\[
\frac{\delta}{\alpha\Gamma(1+c+\varepsilon)}\cdot N^{c+\varepsilon-\alpha}.
\]
Hence the right-hand side of \eqref{2.13} approaches 0 as $N\to\infty$
when $\alpha>c+\varepsilon$. This is the case whenever $\alpha>c$ and
$\varepsilon$ is chosen sufficiently small, for instance
$\varepsilon=(\alpha-c)/2$. The proof of Theorem~\ref{thm:2.5} is now complete.
\end{proof}
\section{Results on Euler sums}
One of the first and most famous results linking a Riemann zeta value with
values of multiple zeta functions is Euler's identity
\[
\sum_{n=1}^\infty\frac{1}{n^3}
= 8\sum_{n=1}^\infty\frac{(-1)^n}{n^2}\sum_{j=1}^{n-1}\frac{1}{j},
\]
which can be written as
\begin{equation}\label{4.1}
\zeta(3) = 8\zeta(\overline{2},1);
\end{equation}
see, e.g., \cite{BB}. In \eqref{4.1} and in what follows we use the standard
notation
\begin{equation}\label{4.2}
\zeta(s_1,\ldots,s_m)
= \sum_{k_1>\cdots>k_m\geq 1}\frac{1}{k_1^{s_1}}\cdots\frac{1}{k_m^{s_m}},
\end{equation}
with the further convention that an overlined variable, e.g., $\overline{s}_j$,
indicates that $1/k_j^{s_j}$ is replaced by $(-1)^{k_j}/k_j^{s_j}$ in
\eqref{4.2}. It is also customary to denote $r$ repetitions of a substring $S$
by $\{S\}^r$, with $\{S\}^0$ being the empty string. Thus, for example,
\[
\zeta(\overline{2},\{1\}^3)
= \sum_{k_1>\cdots>k_4\geq 1}\frac{(-1)^{k_1}}{k_1^2k_2k_3k_4}.
\]
We are now ready to state and prove the main result of this section.
\begin{theorem}\label{thm:4.1}
For any integer $n\geq 2$ we have
\begin{equation}\label{4.3}
\zeta(n) = (-2)^{n-1}\bigg(2\zeta(\overline{2},\{1\}^{n-2})
+\sum_{j=3}^{n-1}(-1)^j\zeta(\overline{j},\{1\}^{n-j})\bigg).
\end{equation}
\end{theorem}
Before proving this identity, we note that for $n=2$ it is trivially true,
and for $n=3, 4, 5$ we get
\begin{align*}
\zeta(3) &= 8\zeta(\overline{2},1),\\
\zeta(4) &= -16\zeta(\overline{2},1,1)+8\zeta(\overline{3},1),\\
\zeta(5) &= 32\zeta(\overline{2},1,1,1)-16\zeta(\overline{3},1,1)
+16\zeta(\overline{4},1).
\end{align*}
The identity \eqref{4.3} could therefore be considered a generalization of
Euler's identity \eqref{4.1}.
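These small cases are easily tested numerically. The following Python snippet (our own rough check; the alternating Euler sums converge slowly, so truncated sums agree only approximately) verifies the $n=3$ and $n=4$ cases directly from the definition \eqref{4.2}.
\begin{verbatim}
# Check of the n = 3 and n = 4 cases of (4.3).
K = 200000
H1 = H2 = 0.0               # H_{k-1}(1) and H_{k-1}(2) at step k
za21 = za211 = za31 = 0.0   # zeta(2bar,1), zeta(2bar,1,1), zeta(3bar,1)
for k in range(1, K):
    s = (-1) ** k
    za21 += s * H1 / k**2
    za211 += s * H2 / k**2
    za31 += s * H1 / k**3
    H2 += H1 / k            # update after use: indices strictly below k
    H1 += 1.0 / k
print(8 * za21, sum(1.0 / n**3 for n in range(1, 10**6)))       # zeta(3)
print(-16 * za211 + 8 * za31, sum(1.0 / n**4 for n in range(1, 10**5)))
\end{verbatim}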
\begin{proof}[Proof of Theorem~\ref{thm:4.1}]
We use Theorem~\ref{thm:2.5} with $c=0$ and $\alpha=1$. Then $\zeta(k+2)$ will
be the coefficient of $x^k$ on the right-hand side of \eqref{2.18}.
To evaluate $\gamma_k(x)$ in \eqref{2.6}, we first note that by \eqref{2.3}
we have
\[
(n;k)_{{\bf z},1}=n(n-1)\cdots(n-k),
\]
and in particular
$(k;k-1)_{{\bf z},1}=k!$. Furthermore, with a partial fraction expansion similar
to the one in the proof of Lemma~\ref{lem:2.4}, namely
\[
\frac{1}{n(n-1)\cdots(n-k)}
=\frac{1/k}{(n-1)\cdots(n-k)}-\frac{1/k}{n\cdots(n-k+1)},
\]
we get a telescoping series which sums as
\[
\sum_{n=k+1}^\infty\frac{1}{(n;k)_{{\bf z},1}} = \frac{1}{k\cdot k!}.
\]
Hence
\begin{align}
\gamma_k(x) &= \frac{1}{k-x}\cdot\frac{1}{k!}+\frac{1}{k\cdot k!}
= \frac{1}{k\cdot k!}\bigg(\frac{1}{1-\frac{x}{k}}+1\bigg)\label{4.4}\\
&=\frac{1}{k\cdot k!}\bigg(2+\sum_{j=1}^\infty\frac{x^j}{k^j}\bigg).\nonumber
\end{align}
To deal with the right-hand side of \eqref{2.18}, which we call $R(x)$, we
use the finite multiple sums
\[
H_K(m):=\sum_{K\geq k_1>\cdots>k_m\geq 1}\frac{1}{k_1\cdots k_m}\qquad(m\geq 1),
\]
which are sometimes called hyperharmonic numbers, with the convention
$H_K(0)=1$ for all integers $K\geq 0$. Note that $H_K(1)=H_K$, the $K$th
harmonic number. With this notation we have
\begin{align*}
\prod_{\ell=1}^{k-1}(x-z_\ell) &= \prod_{\ell=1}^{k-1}(x-\ell)
= (-1)^{k-1}(k-1)!\prod_{\ell=1}^{k-1}\left(1-\tfrac{x}{\ell}\right)\\
&= (-1)^{k-1}(k-1)!\sum_{\nu=0}^{k-1}(-1)^\nu H_{k-1}(\nu)x^\nu.
\end{align*}
Upon multiplying this by \eqref{4.4}, we get
\begin{align*}
R(x) &= \sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^2}
\bigg(2+\sum_{j=1}^\infty\frac{x^j}{k^j}\bigg)
\bigg(\sum_{\nu=0}^{k-1}(-1)^\nu H_{k-1}(\nu)x^\nu\bigg)\\
&= \sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^2}S_k(x),
\end{align*}
where
\[
S_k(x) = 2\sum_{\nu=0}^{k-1}(-1)^\nu H_{k-1}(\nu)x^\nu
+\sum_{\mu=1}^\infty
\bigg(\sum_{j=1}^\mu\frac{(-1)^{\mu-j}}{k^j}H_{k-1}(\mu-j)\bigg)x^\mu.
\]
Hence, by equating coefficients of powers of $x$ on both sides of \eqref{2.18},
we get
\[
\zeta(\mu+2) = \sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^2}
\bigg(2(-1)^\mu H_{k-1}(\mu)
+\sum_{j=1}^{\mu-1}\frac{(-1)^{\mu-j}}{k^j}H_{k-1}(\mu-j)+\frac{1}{k^\mu}\bigg).
\]
So with this and the definition of Euler sums we get
\begin{align}
\zeta(\mu+2) &= 2(-1)^{\mu-1}\zeta(\overline{2},\{1\}^\mu)\label{4.5}\\
&\quad+\sum_{j=1}^{\mu-1}(-1)^{\mu-j-1}\zeta(\overline{j+2},\{1\}^{\mu-j})
+\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^{\mu+2}}.\nonumber
\end{align}
The last sum on the right of \eqref{4.5} can be rewritten as
\[
\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^{\mu+2}}
=\sum_{k=1}^\infty\frac{1}{k^{\mu+2}}-2\sum_{k=1}^\infty\frac{1}{(2k)^{\mu+2}}
= \bigg(1-\frac{1}{2^{\mu+1}}\bigg)\zeta(\mu+2).
\]
Combining this with \eqref{4.5}, we get
\begin{equation}\label{4.6}
\zeta(\mu+2) = (-1)^{\mu-1}2^{\mu+2}\zeta(\overline{2},\{1\}^\mu)
+2^{\mu+1}\sum_{j=1}^{\mu-1}(-1)^{\mu-j-1}\zeta(\overline{j+2},\{1\}^{\mu-j}).
\end{equation}
Finally, we replace $\mu$ by $n-2$ ($n\geq 2$) and shift the summation on the
right of \eqref{4.6} by 2. Then \eqref{4.6} immediately gives \eqref{4.3}, and
the proof is complete.
\end{proof}
We now present an alternative proof of Theorem~\ref{thm:4.1}, based on a
recent result of Xu \cite[Theorem~2.1]{Xu} who showed that
\begin{equation}\label{4.8}
\zeta\big(\overline{k+2},\{1\}^{m-1}\big)=\frac{(-1)^{m+k}}{m!k!}
\int_0^1(\log x)^k(\log(1+x))^{m}\frac{dx}{x}.
\end{equation}
For $n\geq 2$ we denote by $S_n$ the expression in large parentheses in
\eqref{4.3}, namely
\begin{equation}\label{4.9}
S_n := 2\zeta(\overline{2},\{1\}^{n-2})
+\sum_{j=3}^{n-1}(-1)^j\zeta(\overline{j},\{1\}^{n-j}),
\end{equation}
and we prove the following result.
\begin{theorem}\label{thm:4.2}
We have the generating function
\begin{equation}\label{4.10}
H(z) := \sum_{n=2}^\infty S_nz^{n-1} = \psi(1)-\psi(1+\tfrac{z}{2}),
\end{equation}
where $\psi(z)=\Gamma'(z)/\Gamma(z)$ is the digamma function.
\end{theorem}
Before we prove this, we note that a well-known generating function for the
Riemann zeta function implies
\begin{equation}\label{4.11}
\sum_{n=2}^\infty\zeta(n)(-\tfrac{z}{2})^{n-1} = \psi(1)-\psi(1+\tfrac{z}{2});
\end{equation}
see, e.g., \cite[Eq.~25.8.5]{DLMF}. Comparing \eqref{4.10} with \eqref{4.11},
we immediately obtain Theorem~\ref{thm:4.1}. We mention in passing that
$\psi(1)=-\gamma$, where $\gamma$ is the Euler-Mascheroni constant.
For the proof of Theorem~\ref{thm:4.2} we require two lemmas; the first one
is the evaluation of an integral which we were unable to find in the literature.
\begin{lemma}\label{lem:4.3}
For any $z\in{\mathbb R}$ with $z>0$ we have
\begin{equation}\label{4.12}
\int_0^1\left(\frac{1+x^z}{(1+x)^z}-1\right)\frac{dx}{x}
= \psi(1)-\psi(z).
\end{equation}
\end{lemma}
\begin{proof}
We denote the integral on the left of \eqref{4.12} by $I(z)$. Then it is easy
to verify that
\begin{equation}\label{4.13}
I(z+1)-I(z) = - \int_0^1\frac{1+x^{z-1}}{(1+x)^{z+1}}dx = -\frac{1}{z},
\end{equation}
where the second equality comes from a special case of the identity 2.2.2.14
in \cite{PrE}.
We now compare \eqref{4.13} with the well-known functional equation
\begin{equation}\label{4.14}
\psi(z+1)-\psi(z) = \frac{1}{z},
\end{equation}
(see, e.g., \cite[Eq.~5.5.2]{DLMF}) and apply the following uniqueness result:
{\it Suppose that $\varphi$ is defined on $[N,\infty)$ for some non-negative
integer $N$ and that $\varphi(z)\to 0$ as $z\to\infty$. Then if there exist two
nondecreasing solutions of the equation $f(z+1)-f(z) = \varphi(z)$, they must
differ by a constant.}
This can be found in \cite[Lemma~1.1]{Mu}, where further references are given.
To apply this result, we set $f(z)=-I(z)$ and note that for $z>0$ and $x>0$,
\[
\frac{d}{dz}\left(\frac{1+x^z}{(1+x)^z}-1\right)
=\frac{x^z}{(1+x)^z}\big(\log{x}-\log(1+x)\big)-\frac{\log(1+x)}{(1+x)^z} < 0.
\]
Hence $f(z)$ is nondecreasing, and comparing \eqref{4.13} with \eqref{4.14} we
therefore have $f(z)=-I(z)=\psi(z)+C$ for some constant $C$. We determine $C$
by setting $z=1$; since $I(1)=0$, we have $C=-\psi(1)$, which completes the
proof.
\end{proof}
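Although we could not find \eqref{4.12} in the literature, it is easily probed numerically; for instance (our check, assuming SciPy is available):
\begin{verbatim}
# Numerical check of (4.12) against psi(1) - psi(z).
from scipy.integrate import quad
from scipy.special import digamma

for z in (1.0, 1.5, 2.5, 6.0):
    val, _ = quad(lambda x: ((1 + x**z) / (1 + x)**z - 1) / x, 0, 1)
    print(z, val, digamma(1) - digamma(z))
\end{verbatim}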
\begin{lemma}\label{lem:4.4}
For any integer $n\geq 2$ we have
\begin{align}
S_n = \frac{1}{(n-1)!}\int_{0}^{1}\bigg(&\left(\log\frac{1}{1+x}\right)^{n-1}
-(\log x)^{n-1}+\left(\log\frac{x}{1+x}\right)^{n-1}\label{4.15}\\
&+(n-1)(\log x)^{n-2}\log(1+x)\bigg)\frac{dx}{x}.\nonumber
\end{align}
\end{lemma}
\begin{proof}
Substituting \eqref{4.8} into \eqref{4.9}, we get
\begin{align}
S_n &= \frac{(-1)^{n-1}}{(n-1)!}\bigg(2\int_0^1\big(\log(1+x)\big)^{n-1}\frac{dx}{x}\label{4.16}\\
&+\int_0^1\bigg(\sum_{j=3}^{n-1}(-1)^j\binom{n-1}{j-2}(\log{x})^{j-2}
\big(\log(1+x)\big)^{n-j+1}\bigg)\frac{dx}{x}\bigg).\nonumber
\end{align}
The sum in the second term is now seen to be
\begin{align*}
\sum_{j=1}^{n-3}&(-1)^j\binom{n-1}{j}(\log{x})^j\left(\log(1+x)\right)^{n-1-j}
= \big(\log(1+x)-\log{x}\big)^{n-1} \\
&-\big(\log(1+x)\big)^{n-1}-(-1)^n\left((n-1)(\log{x})^{n-2}\log(1+x)-(\log{x})^{n-1}\right),
\end{align*}
and combining this with \eqref{4.16} we obtain \eqref{4.15}.
\end{proof}
We are now ready to prove Theorem~\ref{thm:4.2}.
\begin{proof}[Proof of Theorem~\ref{thm:4.2}]
We substitute \eqref{4.15} into the infinite series \eqref{4.10} and interchange
the order of summation and integration, which is easy to justify. Using the
evaluations
\begin{align*}
\sum_{n=2}^\infty\frac{z^{n-1}}{(n-1)!}\left(\log\frac{1}{1+x}\right)^{n-1}
&=\frac{1}{\left(1+x\right)^{z}}-1,\\
\sum_{n=2}^\infty\frac{z^{n-1}}{(n-1)!}\left(\log x\right)^{n-1} & =x^{z}-1,\\
\sum_{n=2}^\infty\frac{z^{n-1}}{(n-1)!}\left(\log\frac{x}{1+x}\right)^{n-1}
&=\left(\frac{x}{1+x}\right)^{z}-1,
\end{align*}
and
\[
\sum_{n=2}^\infty\frac{z^{n-1}}{(n-1)!}(n-1)(\log x)^{n-2}\log(1+x)
=z\log(1+x)x^{z},
\]
we find
\begin{equation}\label{4.17}
H(z)=\int_{0}^{1}\left(\frac{1+x^{z}}{(1+x)^z}-1+x^z\big(z\log(1+x)-1\big)\right)\frac{dx}{x}.
\end{equation}
Splitting this integral into three parts, we use Lemma~\ref{lem:4.3} as well as
the integral
\[
\int_0^1 x^{z-1}\log(1+x)dx
=\frac{\log{2}}{z}-\frac{1}{2z}\left(\psi(\tfrac{z+2}{2})-\psi(\tfrac{z+1}{2})\right)
\]
(see \cite[Eq.~4.293.1]{GR}) and the obvious integral
\[
\int_0^1 x^{z-1}dx = \frac{1}{z}.
\]
Substituting all this into \eqref{4.17}, we get
\[
H(z) = \psi(1)-\psi(z)+\log{2}-\tfrac{1}{2}\psi(\tfrac{z}{2}+1)
+\tfrac{1}{2}\psi(\tfrac{z+1}{2})-\tfrac{1}{z}.
\]
Finally, using the identities \eqref{4.14} and
$\psi(2y)=\frac{1}{2}\big(\psi(y)+\psi(y+\frac{1}{2})\big)+\log{2}$
(see \cite[Eq.~5.5.8]{DLMF}) with $y=(z+1)/2$, we get
\[
H(z) =
\psi(1)-\psi(z+1)+\left(\psi(z+1)-2\cdot\tfrac{1}{2}\psi(\tfrac{z}{2}+1)\right),
\]
which is the right-hand side of \eqref{4.10}, as required.
\end{proof}
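The classical generating function \eqref{4.11}, on which the comparison with \eqref{4.10} rests, is itself easy to test numerically (our check, assuming SciPy; the series is truncated):
\begin{verbatim}
# Check of the generating function (4.11) at z = 1.3.
from scipy.special import digamma, zeta

z = 1.3
lhs = sum(zeta(n) * (-z / 2) ** (n - 1) for n in range(2, 200))
print(lhs, digamma(1) - digamma(1 + z / 2))
\end{verbatim}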
\section{Generalizing the Markov-Ap\'ery identity}
We recall that Koecher's identity \eqref{1.3} was obtained from
Theorem~\ref{thm:2.3} as a special case by taking $z_n=n^2$. The identity
\eqref{1.1} then followed by setting $x=0$. In this section we will again use
Theorem~\ref{thm:2.3}, with $x=0$ from the beginning but with $z_n=(n+c)^2$
for integers $c\geq 0$. For the remainder of this section we set
\begin{equation}\label{5.0}
\zeta_c(s):= \zeta(s)-\sum_{j=1}^c\frac{1}{j^s},\qquad c=0,1,2,\ldots;
\end{equation}
in particular, $\zeta_0(s)=\zeta(s)$ and $\zeta_1(s)=\zeta(s)-1$.
Also, $\zeta_c(s)$ is in fact the Hurwitz zeta function $\zeta(s,c+1)$.
We are now ready to state the main result of this section.
\begin{theorem}\label{thm:5.1}
If $c\geq 0$ is an integer, then
\begin{equation}\label{5.1}
\zeta_c(3) = \frac{1}{2c!^2}\sum_{k=1}^\infty
\frac{(-1)^{k-1}}{\binom{2k+2c}{k+c}}\cdot\frac{P_c(k)}{(k+c)^2k(k+1)\cdots(k+c)},
\end{equation}
where
\begin{align}
P_c(k)&=\frac{4(k+2c)!(k+c-1)!}{(k+c)(k-1)!^2}
+\frac{2(k+c)!(2k+2c)!}{(k-1)!}\sum_{j=0}^{2c}\frac{(2c-j)!}{j!(2k+2c-j)!}\label{5.2}\\
&\quad\times\sum_{\nu=0}^j\frac{(-1)^\nu}{k+c-\nu}\binom{j}{\nu}
\frac{(2k+j-1-\nu)!(k+2c-\nu)!}{(k-1-\nu)!(2k+2c-\nu)!}.\nonumber
\end{align}
\end{theorem}
\noindent
{\bf Remark.} We can use \eqref{5.2} to compute
\begin{align*}
P_0(k)&=5,\\
P_1(k)&=5k^3+12k^2+4k+2,\\
P_2(k)&=5k^6+49k^5+171k^4+271k^3+232k^2+128k+48,\\
P_3(k)&=5k^9+111k^8+1011k^7+4935k^6+14262k^5+25734k^4\\
&\quad+30190k^3+24048k^2+13248k+4320,\\
P_4(k)&=5k^{12}+198k^{11}+3409k^{10}+33650k^9+211731k^8+894834k^7\\
&\quad+2613523k^6+5362734k^5 + 7817348k^4 + 8176552k^3 + 6167424k^2\\
&\quad+3244032k+967680,\\
P_5(k)&=5k^{15}+310k^{14}+8625k^{13}+142600k^{12}+1564435k^{11}+12049820k^{10}\\
&\quad+67279375k^9+277409600k^8+853390140k^7+1968104030k^6\\
&\quad+3407457500k^5+4426865800k^4+4304943120k^3+3095389440k^2\\
&\quad+1556582400k+435456000.
\end{align*}
Based on these evaluations and on further computations for several $c\geq 6$,
we conjecture that all $P_c(k)$ are polynomials in $k$ of degree $3c$, with
integer coefficients, including leading coefficient 5 and, for $c\geq 1$,
constant coefficient $c!(2c)!$.
\medskip
Before proving Theorem~\ref{thm:5.1}, we note that, since $P_0(k)=5$,
for $c=0$ the identity \eqref{5.1} reduces to \eqref{1.1}. Similarly, for
$c=1$ and 2 and using $P_1(k)$ and $P_2(k)$ as above, we get after some minor
manipulations,
\begin{align}
\zeta(3) &= 1+\frac{1}{4}\sum_{k=1}^\infty\frac{(-1)^{k-1}}{\binom{2k}{k}}
\cdot\frac{5k^3+12k^2+4k+2}{k(k+1)^2(2k+1)},\label{5.3}\\
\zeta(3) &= 1+\frac{1}{8}+\frac{1}{16}\sum_{k=1}^\infty
\frac{(-1)^{k-1}}{\binom{2k+2}{k+1}}
\cdot\frac{P_2(k)}{k(k+1)(k+2)^2(2k+3)}.\label{5.4}
\end{align}
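Both series converge quickly; \eqref{5.3} and \eqref{5.4} are easily confirmed in a few lines of Python (our own numerical check, using the polynomials $P_1(k)$ and $P_2(k)$ listed in the Remark above).
\begin{verbatim}
# Numerical check of (5.3) and (5.4).
from math import comb

P1 = lambda k: 5 * k**3 + 12 * k**2 + 4 * k + 2
P2 = lambda k: (5 * k**6 + 49 * k**5 + 171 * k**4 + 271 * k**3
                + 232 * k**2 + 128 * k + 48)
s1 = 1 + sum((-1) ** (k - 1) * P1(k)
             / (comb(2 * k, k) * k * (k + 1) ** 2 * (2 * k + 1))
             for k in range(1, 40)) / 4
s2 = 1 + 1 / 8 + sum((-1) ** (k - 1) * P2(k)
                     / (comb(2 * k + 2, k + 1) * k * (k + 1)
                        * (k + 2) ** 2 * (2 * k + 3))
                     for k in range(1, 40)) / 16
print(s1, s2, sum(1.0 / n**3 for n in range(1, 10**6)))   # all ~ zeta(3)
\end{verbatim}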
The main ingredient in the proof of Theorem~\ref{thm:5.1} is the evaluation
of the infinite series on the right of \eqref{2.6}. We state this as a lemma.
\begin{lemma}\label{lem:5.2}
Let $\alpha=\frac{1}{2}$ and let $\bf z$ be the sequence given by $z_n=(n+c)^2$.
Then
\begin{align}
\sum_{n=k+1}^\infty\frac{1}{(n;k)_{{\bf z},1/2}}
&=\sum_{j=0}^{2c}\frac{(2c-j)!}{(2k+2c-j)!}\label{5.5}\\
&\quad\times\sum_{\nu=0}^j\frac{(-1)^\nu}{k+c-\nu}
\cdot\frac{(2k+j-1-\nu)!(k+2c-\nu)!}{(j-\nu)!(k-1-\nu)!(2k+2c-\nu)!\nu!}.\nonumber
\end{align}
\end{lemma}
\begin{proof}
We fix the integers $k$ and $c$. By the definition \eqref{2.3} we have
\[
(n;k)_{{\bf z},1/2} = (n+c)\prod_{i=1}^k\left((n+c)^2-(i+c)^2\right),
\]
and writing the factors in descending order, we get
\begin{equation}\label{5.5a}
(n;k)_{{\bf z},1/2} = (n+k+2c)\cdots(n+2c+1)(n+c)(n-1)\cdots(n-k),
\end{equation}
so that
\begin{equation}\label{5.6}
\sum_{n=k+1}^\infty\frac{1}{(n;k)_{{\bf z},1/2}} = \sum_{n=1}^\infty Q_n,
\end{equation}
where
\begin{equation}\label{5.7}
Q_n :=\frac{1}{n+k+c}\cdot
\frac{(n+k+2c)(n+k+2c-1)\cdots(n+k)}{(n+2k+2c)(n+2k+2c-1)\cdots(n+1)n}.
\end{equation}
It is our goal to find a partial fraction expansion of \eqref{5.7} to make it
possible to apply Lemma~\ref{lem:2.4}. For this purpose we consider $n$, for
the moment, as a variable and set
\begin{equation}\label{5.8}
Q_n = \sum_{j=0}^{2c}\frac{A_j}{(n+2k+2c-j)\cdots(n+2c-j)},
\end{equation}
where the coefficients $A_0,\ldots,A_{2c}$ are to be determined. To do so, we
fix $j$, $0\leq j\leq 2c$, multiply the right-hand sides of \eqref{5.7} and
\eqref{5.8} by $n+2k+2c-j$, and formally set $n=-2k-2c+j$. Then it is
straightforward to see that the right-hand side of \eqref{5.7} becomes
\[
\frac{(-1)^j}{k+c-j}\cdot\frac{(k+2c-j)!}{(2k+2c-j)!(k-1-j)!j!},
\]
while the right-hand side of \eqref{5.8} will give
\[
\frac{(-1)^jA_0}{j!(2k-j)!}+\frac{(-1)^{j-1}A_1}{(j-1)!(2k-j+1)!}
+\cdots+\frac{A_j}{(2k)!}.
\]
Equating the two gives, for $j=0,1,\ldots, 2c,$
\begin{equation}\label{5.9}
\binom{2k}{j}A_0-\binom{2k}{j-1}A_1+\cdots+(-1)^j\binom{2k}{0}A_j = B_j,
\end{equation}
where
\begin{equation}\label{5.10}
B_j := \frac{1}{k+c-j}\cdot\frac{(k+2c-j)!(2k)!}{(k-1-j)!(2k+2c-j)!j!}.
\end{equation}
The linear system \eqref{5.9} has a $(2c+1)\times(2c+1)$ lower triangular
matrix $M$ with diagonal $(1,-1,1,-1,\ldots,1)$, and thus determinant $(-1)^c$.
We claim that its inverse is the matrix $\widetilde{M}$, given by the column
vectors
\begin{equation}\label{5.11}
\left(0,\ldots,0,(-1)^\ell\binom{2k-1}{0},(-1)^\ell\binom{2k}{1},\ldots,
(-1)^\ell\binom{2k-1+2c-\ell}{2c-\ell}\right),
\end{equation}
($\ell=0,1,\ldots,2c$) with $\ell$ initial zeros. Thus, the matrix
$\widetilde{M}$ is also lower triangular. To show that this is indeed the
inverse of $M$, we take the inner product of the $j$th row of $M$, as given by
the coefficient sequence in \eqref{5.9}, and the $\ell$th column of
$\widetilde{M}$ in \eqref{5.11}. When $j<\ell$, this is obviously zero. When
$j\geq\ell$, this inner product is
\begin{align*}
(-1)^\ell\sum_{i=\ell}^j&(-1)^i\binom{2k}{j-i}\binom{2k-1+i-\ell}{i-\ell}\\
&=(-1)^{\ell-j}\sum_{s=0}^{j-\ell}(-1)^s\binom{2k}{s}\binom{2k-1+j-\ell-s}{2k-1},
\end{align*}
where we obtained the right-hand side by setting $s=j-i$ and then slightly
manipulating the second binomial coefficient. Now, when $\ell=j$, the
right-hand side is obviously 1. When $\ell<j$, a known binomial coefficient
identity, namely Equation 4.2.5.50 in \cite[p.~619]{PrE}, shows that the
right-hand side vanishes. Hence we have shown that $M\widetilde{M}=I$, and thus
$\widetilde{M}=M^{-1}$, as claimed.
Now, solving the system \eqref{5.9} with the help of the matrix $\widetilde{M}$,
we get
\[
A_j = \sum_{\nu=0}^j(-1)^\nu\binom{2k+j-1-\nu}{j-\nu}B_\nu,
\]
and thus, with \eqref{5.10},
\begin{equation}\label{5.12}
A_j = \sum_{\nu=0}^j\frac{(-1)^\nu\cdot 2k}{k+c-\nu}
\cdot\frac{(2k+j-1-\nu)!(k+2c-\nu)!}{(j-\nu)!(k-1-\nu)!(2k+2c-\nu)!\nu!}.
\end{equation}
Next, with \eqref{5.6}, \eqref{5.8} and Lemma~\ref{lem:2.4} we have
\begin{align*}
\sum_{n=k+1}^\infty\frac{1}{(n;k)_{{\bf z},1/2}}
&= \sum_{j=0}^{2c}A_j\sum_{n=1}^\infty\frac{1}{(n+2k+2c-j)\cdots(n+2c-j)}\\
&= \sum_{j=0}^{2c}A_j\cdot\frac{(2c-j)!}{2k(2k+2c-j)!}.
\end{align*}
Finally, this combined with \eqref{5.12} gives the desired evaluation
\eqref{5.5}.
\end{proof}
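Since \eqref{5.5} is somewhat involved, a direct numerical check is reassuring; the Python snippet below (ours) compares the two sides for $k=2$, $c=1$. Note that terms of the inner sum with $\nu>k-1$ vanish because of the factor $(k-1-\nu)!$ in the denominator, which we implement by the usual convention $1/m!=0$ for negative integers $m$.
\begin{verbatim}
# Check of (5.5) for k = 2, c = 1.
from math import factorial, prod

def inv_fact(m):                      # 1/m!, with 1/m! = 0 for m < 0
    return 0.0 if m < 0 else 1.0 / factorial(m)

k, c = 2, 1
direct = sum(1.0 / ((n + c) * prod((n + c)**2 - (i + c)**2
                                   for i in range(1, k + 1)))
             for n in range(k + 1, 20000))
closed = sum(factorial(2*c - j) / factorial(2*k + 2*c - j)
             * sum((-1)**nu / (k + c - nu)
                   * factorial(2*k + j - 1 - nu) * factorial(k + 2*c - nu)
                   * inv_fact(j - nu) * inv_fact(k - 1 - nu)
                   / (factorial(2*k + 2*c - nu) * factorial(nu))
                   for nu in range(j + 1))
             for j in range(2*c + 1))
print(direct, closed)   # both ~ 0.0039352
\end{verbatim}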
\begin{proof}[Proof of Theorem~\ref{thm:5.1}]
We use Theorem~\ref{thm:2.3} with $\beta=2$ and $d=0$, so that $z_k=(k+c)^2$.
Furthermore, we set $x=0$ and $\alpha=\frac{1}{2}$, and we let $c\geq 0$ be
an integer. From \eqref{5.5a} we have
\begin{align}
(k;k-1)_{{\bf z},1/2} &= (2k-1+2c)\cdots(k+1+2c)(k+c)(k-1)\cdots 2\cdot 1\label{5.13}\\
&= (k+c)\frac{(2k-1+2c)!(k-1)!}{(k+2c)!}.\nonumber
\end{align}
Based on the observation that the right-hand side of \eqref{5.5}, when
multiplied by $2(k+c)!(2k+2c)!/(k-1)!$, seems to be a polynomial in $k$ with
integer coefficients, we denote
\begin{equation}\label{5.14}
\widetilde{P}_c(k):=\frac{2(k+c)!(2k+2c)!}{(k-1)!}
\sum_{n=k+1}^\infty\frac{1}{(n;k)_{{\bf z},1/2}}.
\end{equation}
Hence with \eqref{2.6}, \eqref{5.13}, and \eqref{5.14} we get
\[
\gamma_k(0)=\frac{1}{(k+c)^2}\cdot\frac{(k+2c)!}{(k+c)(2k-1+2c)!(k-1)!}
+\frac{(k-1)!\widetilde{P}_c(k)}{2(k+c)!(2k+2c)!}.
\]
Rewriting this, we find
\begin{equation}\label{5.15}
\gamma_k(0)=\frac{(k-1)!}{4(k+c)^2(k+c-1)!(2k+2c-1)!}P_c(k),
\end{equation}
where
\begin{equation}\label{5.16}
P_c(k) = \frac{4(k+2c)!(k+c-1)!}{(k+c)(k-1)!^2} + \widetilde{P}_c(k).
\end{equation}
This last identity, with \eqref{5.14} and \eqref{5.5}, can be seen to be the
same as \eqref{5.2}.
Next we note that
\[
\prod_{\ell=1}^{k-1}\left(-z_\ell\right)
= (-1)^{k-1}\prod_{\ell=1}^{k-1}(\ell+c)^2 = (-1)^{k-1}\frac{(k+c-1)!^2}{c!^2},
\]
and so, with \eqref{5.15} and \eqref{2.15} we get
\begin{align}
\zeta_{\bf z}(\tfrac{3}{2})
&=\sum_{k=1}^\infty(-1)^{k-1}\frac{(k-1)!(k+c-1)!}{4(k+c)^2c!^2(2k+2c-1)!}P_c(k)\label{5.17}\\
&=\frac{1}{2c!^2}\sum_{k=1}^\infty(-1)^{k-1}\frac{(k-1)!(k+c)!}{(k+c)^2(2k+2c)!}P_c(k).\nonumber
\end{align}
Finally, by \eqref{2.4} and \eqref{5.0} we have
$\zeta_{\bf z}(\tfrac{3}{2})=\zeta_c(3)$, and upon rewriting the right-most
term in \eqref{5.17}, we get the desired identity \eqref{5.1}.
\end{proof}
\section{Identities for even powers of $\pi$}
In this section we use Theorem~\ref{thm:2.3} again, but this time with
$z_n=(n+\frac{1}{2})^2$ and $\alpha=0$. While in Section~5 we considered only
the case $x=0$, we will now make full use of the identity \eqref{2.15}. Before
we can state the main result of this section, we need the following definition.
For integers $K\geq 1$ and $\nu\geq 1$ we denote
\begin{equation}\label{6.1}
\widetilde{H}_K(\nu)
:=\sum_{K\geq k_1>\cdots>k_{\nu}\geq 1}\frac{1}{(2k_1+1)^2\cdots(2k_{\nu}+1)^2},
\end{equation}
and we set $\widetilde{H}_K(0)=1$ for all integers $K\geq 0$. These numbers can
be seen as a type of generalized harmonic numbers.
\begin{theorem}\label{thm:6.1}
For any integer $\mu\geq 0$ we have
\begin{align}
&\big(1-4^{-\mu-1}\big)\zeta(2\mu+2)
=1+\sum_{k=1}^\infty\frac{(-1)^{k+\mu-1}}{16^k(2k+1)^2}\binom{2k}{k}\label{6.2}\\
&\times\bigg(\frac{10k^3+9k^2-k+1}{2k-1}\widetilde{H}_{k-1}(\mu)
+4k(k+1)\sum_{j=1}^{\mu}\frac{(-1)^j}{(2k+1)^{2j}}\widetilde{H}_{k-1}(\mu-j)\bigg).\nonumber
\end{align}
\end{theorem}
Before proving this result, we state the two smallest cases separately. To do
so, we use the fact that $\zeta(2)=\pi^2/6$ and $\zeta(4)=\pi^4/90$.
\begin{corollary}\label{cor:6.2}
\begin{align}
\frac{\pi^2}{8} &= 1+\sum_{k=1}^\infty\frac{(-1)^{k-1}}{16^k}\binom{2k}{k}
\frac{10k^3+9k^2-k+1}{(2k-1)(2k+1)^2},\label{6.3}\\
\frac{\pi^4}{96} &= 1+\sum_{k=1}^\infty\frac{(-1)^{k-1}}{16^k}\binom{2k}{k}
\frac{1}{(2k+1)^2}\label{6.4}\\
&\qquad\times\bigg(\frac{4k(k+1)}{(2k+1)^2}-\frac{10k^3+9k^2-k+1}{2k-1}
\sum_{j=1}^{k-1}\frac{1}{(2j+1)^2}\bigg).\nonumber
\end{align}
\end{corollary}
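Both \eqref{6.3} and \eqref{6.4} converge rapidly and are easily confirmed numerically, e.g., in Python (our own check, not part of the formal development):
\begin{verbatim}
# Numerical check of (6.3) and (6.4).
from math import comb, pi

t = lambda k: (10 * k**3 + 9 * k**2 - k + 1) / (2 * k - 1)
s3 = 1 + sum((-1) ** (k - 1) / 16**k * comb(2 * k, k)
             * t(k) / (2 * k + 1) ** 2 for k in range(1, 40))
s4 = 1 + sum((-1) ** (k - 1) / 16**k * comb(2 * k, k) / (2 * k + 1) ** 2
             * (4 * k * (k + 1) / (2 * k + 1) ** 2
                - t(k) * sum(1.0 / (2 * j + 1) ** 2 for j in range(1, k)))
             for k in range(1, 40))
print(s3, pi**2 / 8)    # 1.2337005501...
print(s4, pi**4 / 96)   # 1.0146780316...
\end{verbatim}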
More generally, using Euler's formula
\[
\zeta(2n) = \frac{(-1)^{n-1}2^{2n-1}B_{2n}}{(2n)!}\cdot\pi^{2n},
\]
where $B_{2n}$ is the $2n$th Bernoulli number, we can write the left-hand side
of \eqref{6.2} as a rational multiple of $\pi^{2\mu+2}$.
For the proof of Theorem~\ref{thm:6.1} we require the following series
evaluation.
\begin{lemma}\label{lem:6.3}
For any integer $k\geq 1$ we have
\begin{equation}\label{6.5}
\sum_{n=k+1}^\infty\frac{n(n+1)(n-k-1)!}{(n+k+1)!}
=\frac{2k^3+5k^2+3k+1}{(2k-1)(2k+1)(2k+1)!}.
\end{equation}
\end{lemma}
\begin{proof}
We denote the series in \eqref{6.5} by $S$ and shift the summation by $k+1$
units, obtaining
\begin{align}
S &= \sum_{n=0}^\infty(n+k+1)(n+k+2)\frac{n!}{(n+2k+2)!}\label{6.6}\\
&= \sum_{n=0}^\infty\frac{n^2\cdot n!}{(n+2k+2)!}
+(2k+3)\sum_{n=0}^\infty\frac{n\cdot n!}{(n+2k+2)!} \nonumber\\
&\qquad\qquad+(k+1)(k+2)\sum_{n=0}^\infty\frac{n!}{(n+2k+2)!}\nonumber\\
&= S_2 + (2k+3)S_1 + (k+1)(k+2)S_0.\nonumber
\end{align}
To evaluate the series $S_0, S_1, S_2$, we use the Gaussian hypergeometric
series,
\begin{equation}\label{6.7}
_2F_1(a,b;c;z) = \sum_{n=0}^\infty\frac{(a)_n(b)_n}{(c)_n}\cdot\frac{z^n}{n!},
\end{equation}
where $(x)_n=x(x+1)\cdots(x+n-1)$ for $n\geq 1$ and $(x)_0=1$, and apply the
well-known identity
\begin{equation}\label{6.8}
_2F_1(a,b;c;1) = \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}\qquad
({\rm Re}(c-a-b)>0),
\end{equation}
known as Gauss's theorem;
see, e.g., \cite[Eq.~15.4.20]{DLMF}. Applying \eqref{6.7} and then \eqref{6.8},
we first see that
\begin{equation}\label{6.9}
S_0=\sum_{n=0}^\infty\frac{n!}{(n+2k+2)!}
=\frac{1}{(2k+2)!}\,_2F_1(1,1;2k+3;1) =\frac{1}{(2k+1)(2k+1)!}.
\end{equation}
Next we note that
\[
\sum_{n=0}^\infty\frac{n\cdot n!}{(n+2k+2)!}
= \sum_{n=0}^\infty\frac{(n+1)!}{(n+2k+2)!} - S_0,
\]
and with \eqref{6.7} and \eqref{6.8} we get
\[
\sum_{n=0}^\infty\frac{(n+1)!}{(n+2k+2)!}
=\frac{1}{(2k+2)!}\,_2F_1(1,2;2k+3;1) =\frac{1}{2k(2k+1)!},
\]
so that
\begin{equation}\label{6.10}
S_1 = \frac{1}{2k(2k+1)!} - \frac{1}{(2k+1)(2k+1)!} = \frac{1}{2k(2k+1)(2k+1)!}.
\end{equation}
Next we have
\begin{align}
S_2 &= \sum_{n=0}^\infty\frac{n^2\cdot n!}{(n+2k+2)!}
= \sum_{n=0}^\infty\frac{\big(1-3(n+1)+(n+1)(n+2)\big)\cdot n!}{(n+2k+2)!}\label{6.11}\\
&= S_0-3(S_0+S_1)+\sum_{n=0}^\infty\frac{(n+2)!}{(n+2k+2)!}.\nonumber
\end{align}
Using \eqref{6.7} and \eqref{6.8} again, we get
\[
\sum_{n=0}^\infty\frac{(n+2)!}{(n+2k+2)!}
=\frac{2}{(2k+2)!}\,_2F_1(1,3;2k+3;1) =\frac{2}{(2k-1)(2k+1)!}.
\]
Finally, combining this with \eqref{6.11} and then with \eqref{6.6}, \eqref{6.9}
and \eqref{6.10}, we see after some routine manipulations that $S$ equals the
right-hand side of \eqref{6.5}.
\end{proof}
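For small $k$, \eqref{6.5} is easy to verify numerically (our quick check below; since the $k=1$ series converges only like $1/N$, truncation leaves roughly five correct digits there, while larger $k$ do much better):
\begin{verbatim}
# Check of (6.5); the summand is n(n+1)/((n-k)(n-k+1)...(n+k+1)).
from math import factorial, prod

for k in (1, 2, 3):
    s = sum(n * (n + 1) / prod(range(n - k, n + k + 2))
            for n in range(k + 1, 10**5))
    print(k, s, (2 * k**3 + 5 * k**2 + 3 * k + 1)
          / ((2 * k - 1) * (2 * k + 1) * factorial(2 * k + 1)))
\end{verbatim}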
\begin{proof}[Proof of Theorem~\ref{thm:6.1}]
We begin with using the definition \eqref{2.3}, obtaining
\begin{align}
(n;k)_{{\bf z},0}
&= \prod_{\ell=1}^k\left((n+\tfrac{1}{2})^2-(\ell+\tfrac{1}{2})^2\right)
=\frac{1}{4^k}\prod_{\ell=1}^k\left((2n+1)^2-(2\ell+1)^2\right)\label{6.12}\\
&= \frac{1}{4^k}(2(n+k)+2)(2(n+k))(2(n+k)-2)\cdots(2n+4)\nonumber\\
&\qquad\qquad\times(2n-2)(2n-4)\cdots(2n-2k)\nonumber\\
&= \frac{1}{4^k}\cdot 2^{2k}\cdot\frac{(n+k+1)!}{(n+1)!}\cdot\frac{(n-1)!}{(n-k-1)!}
= \frac{(n+k+1)!}{n(n+1)(n-k-1)!},\nonumber
\end{align}
and by Lemma~\ref{lem:6.3} we have
\begin{equation}\label{6.13}
\sum_{n=k+1}^\infty\frac{1}{(n;k)_{{\bf z},0}}
=\frac{2k^3+5k^2+3k+1}{(2k-1)(2k+1)(2k+1)!}.
\end{equation}
By \eqref{6.12} we also have
\begin{equation}\label{6.14}
(k;k-1)_{{\bf z},0} = \frac{(k+k)!}{k(k+1)\cdot 0!} = \frac{2(2k-1)!}{k+1}.
\end{equation}
These identities, combined with \eqref{2.6}, give
\[
\gamma_k(x)=\frac{k+1}{2(2k-1)!}\cdot\frac{1}{(k+\tfrac{1}{2})^2-x}
+\frac{2k^3+5k^2+3k+1}{(2k-1)(2k+1)(2k+1)!},
\]
and since
\[
\frac{1}{(k+\tfrac{1}{2})^2-x}
= \frac{4}{(2k+1)^2}\cdot\frac{1}{1-\frac{4x}{(2k+1)^2}},
\]
we get
\begin{equation}\label{6.15}
\gamma_k(x)=\frac{10k^3+9k^2-k+1}{(2k-1)(2k+1)(2k+1)!}
+\frac{4k(k+1)}{(2k+1)(2k+1)!}
\sum_{j=1}^\infty\left(\frac{2}{2k+1}\right)^{2j}x^j.
\end{equation}
Next we have
\begin{align*}
\prod_{\ell=1}^{k-1}\left(x-z_\ell\right)
&=\frac{1}{4^{k-1}}\prod_{\ell=1}^{k-1}\left(4x-(2\ell+1)^2\right) \\
&=(-1)^{k-1}\frac{(2k)!^2}{4^{2k-1}k!^2}
\prod_{\ell=1}^{k-1}\left(1-\frac{4x}{(2\ell+1)^2}\right),
\end{align*}
and thus, by \eqref{6.1}, we get
\begin{equation}\label{6.16}
\prod_{\ell=1}^{k-1}\left(x-z_\ell\right) = (-1)^{k-1}\frac{(2k)!^2}{4^{2k-1}k!^2}
\sum_{\nu=0}^{k-1}(-4)^\nu\widetilde{H}_{k-1}(\nu)x^{\nu}.
\end{equation}
Now we proceed in analogy to the proof of Theorem~\ref{thm:4.1} and let $R(x)$ be
the right-hand side of \eqref{2.15}. Multiplying \eqref{6.15} and \eqref{6.16}
and summing over $k$, we get
\begin{align}
R(x)&=\sum_{k=1}^\infty(-1)^{k-1}\frac{(2k)!^2}{4^{2k-1}k!^2}
\cdot\frac{1}{(2k+1)(2k+1)!}\cdot S_k(x)\label{6.17}\\
&= \sum_{k=1}^\infty\frac{(-1)^{k-1}}{16^k}\binom{2k}{k}\frac{4}{(2k+1)^2}\cdot S_k(x)\nonumber,
\end{align}
where
\begin{align}
S_k(x)&=\frac{10k^3+9k^2-k+1}{2k-1}\sum_{\nu=0}^{k-1}(-4)^\nu\widetilde{H}_{k-1}(\nu)x^\nu\label{6.18}\\
&\qquad+4k(k+1)\sum_{\mu=1}^\infty\bigg(\sum_{j=1}^{\mu}\left(\frac{2}{2k+1}\right)^{2j}(-4)^{\mu-j}\widetilde{H}_{k-1}(\mu-j)\bigg)x^\mu.\nonumber
\end{align}
Equating coefficients of $x^\mu$ in \eqref{6.17}, \eqref{6.18} with the
left-hand side of \eqref{2.15}, we get
\begin{align}
\zeta_{\bf z}(\mu+1)
&=\sum_{k=1}^\infty\frac{(-1)^{k-1}\cdot 4}{16^k(2k+1)^2}\binom{2k}{k}
\bigg(\frac{10k^3+9k^2-k+1}{2k-1}(-4)^\mu\widetilde{H}_{k-1}(\mu)\label{6.19}\\
&\qquad+4^{\mu+1}k(k+1)\sum_{j=1}^{\mu}\frac{(-1)^{\mu-j}}{(2k+1)^{2j}}\widetilde{H}_{k-1}(\mu-j)\bigg).\nonumber
\end{align}
Finally, by the definition \eqref{2.4} we have
\begin{align*}
\zeta_{\bf z}(\mu+1)
&=\sum_{n=1}^\infty\frac{1}{(n+\tfrac{1}{2})^{2\mu+2}}
=4^{\mu+1}\sum_{n=1}^\infty\frac{1}{(2n+1)^{2\mu+2}}\\
&=4^{\mu+1}\bigg(\sum_{n=1}^\infty\frac{1}{n^{2\mu+2}}
-\sum_{n=1}^\infty\frac{1}{(2n)^{2\mu+2}} - 1\bigg) \\
&= \big(4^{\mu+1}-1)\zeta(2\mu+2) - 4^{\mu+1}.
\end{align*}
Combining this with \eqref{6.19} and dividing both sides by $4^{\mu+1}$, we
finally get \eqref{6.2}, as desired.
\end{proof}
In closing, we note that an identity very similar to \eqref{6.2} was earlier
obtained
by Leshchiner \cite[Eq.~(4b)]{Le}. One basic difference lies in the fact that
the analogue of the multiple generalized harmonic sum \eqref{6.1} used by
Leshchiner has $k_\nu\geq 0$ in the summation (using our notation, which differs
from Leshchiner's). In analogy to the identities \eqref{6.3} and \eqref{6.4}
above, the two smallest cases of Eq.~(4b) in \cite{Le} are
\begin{align*}
\frac{\pi^2}{10}
&= 1+\sum_{k=1}^\infty\frac{(-1)^k}{16^k(2k+1)^2}\binom{2k}{k},\\
\frac{\pi^4}{96}
&= 1+\sum_{k=1}^\infty\frac{(-1)^k}{16^k(2k+1)^2}\binom{2k}{k}
\bigg(\frac{1}{(2k+1)^2}-\frac{5}{4}\sum_{j=0}^{k-1}\frac{1}{(2j+1)^2}\bigg).
\end{align*}
In spite of the similarities, Theorem~\ref{thm:6.1} above appears to be new.
Genetic evolution appears to be open-ended. Taking advantage of environmental regularities, gene expression and regulation can generate a potentially infinite number of traits and trait variations. Such evolutionary open-endedness has been characterized by a constellation of overlapping features, yet can generally be understood as the ability of an evolutionary system to produce a continuous stream of novel units \citep{taylor2016open}. For those trying to create and understand open-ended evolutionary systems, the goal is to uncover the underlying principles and dynamics of evolutionary systems in general. Such understanding is based upon knowledge of the best explored and understood open-ended evolutionary system: genetic evolution. But it also can, and should, draw upon the development of artificial evolutionary systems that explore the principles of life-as-it-could-be \citep{langton1989artificial}. Such artificial evolutionary systems depart from the rules and principles of Darwinian genetic evolution while still meeting the general requirements of an evolving system. The interaction between the two can be consilient. Darwinian genetic evolution provides a source of valuable ideas and inspiration as well as justification for the designs of artificial systems. Despite this positive interplay, having only one concrete instance of an open-ended system is a problem. Such a sparse epistemological situation can limit our ability to discern alternate possibilities, detect generalizable features, and develop robust theories and models.
It is increasingly being recognised, however, that there is another evolutionary system from which one can find inspiration: cultural evolution \citep{marriott2018social,borg2021evolved, bedau2013minimal, bedau2019open, bedau2019patented}. Minimally characterized, culture is information transmitted through mechanisms of social learning \citep{boyd1985culture, CES, whiten2022emergence}. And while this minimal characterization leaves out many distinctive features of human and non-human cultural groups (as e.g. different species differ in the types of information they can transmit) \citep{whiten2022emergence,buskell2019systems}, and leaves open precisely how `social learning' should be construed \citep{lewens2015culture}, its abstractness makes it exceptionally useful for designing models of cultural change and describing general evolutionary dynamics. On this characterization, cultural evolution is the change in frequency -- or, of special interest here -- the form of cultural traits over time, where these changes are at least in part influenced by social learning \citep{neadle2017food}.
Although cultural evolution is often described as being analogous to genetic evolution \citep{cavalli1981cultural}, there are clear differences in the way culture is inherited: 1. while genetic evolution relies on typically two (sometimes one) parent(s), there are potentially unlimited numbers of cultural ``parents''; 2. while genetic transmission is almost exclusively transmitted vertically from parent to child, cultural transmission can involve substantial amounts of horizontal or oblique transmission; 3. while genetic changes generally occur between generations, cultural change generally occurs within generations \citep{mesoudi2011cultevol,mesoudi2006unified}. While these features distinguish cultural from genetic change, these do not imply that cultural inheritance is in any sense less (or not) ``evolutionary'' -- only that its dynamics frequently differ.
Over the past 40 years there has been increasing recognition that culture and cultural evolution exist within non-human animal populations (most prominently in birds and mammals) \citep{whiten2021burgeoning,whiten2021psychological,whiten2019cultevol}, and that culture not only exists as a result of genetic adaptation but also plays an important co-evolutionary role in guiding genetic evolution \citep{whitehead2019reach,uchiyama2021}. This co-evolutionary relationship between genes, culture, and the environment is sometimes known as ``triple inheritance" \citep{laland2000triple}. Nonetheless, while many animal species exhibit culture, human cultural evolution appears both quantitatively and qualitatively distinct. Several dividing lines between human and animal cultures have been proposed, but the most prominent of recent formulations holds that human culture is distinctive in virtue of its cumulative nature -- with human culture accumulating modifications over time, and with these modifications building upon one another \citep{tomasello1999origins}. However, as more observations of cultural evolution in other species have been made, it has become increasingly apparent that cumulative cultural evolution is actually not unique to human culture \citep{mesoudi2018cumulative}. This raises the following question: what, if anything, is unique about human cultural evolution?
We think issues about the distinctiveness of human culture and the nature of open-ended evolution are overlapping -- and that explorations of the two will be mutually illuminating, with potential downstream consequences for Artificial Life. Here we situate cultural evolution within a broader framework of open-ended evolution and argue that:
\begin{enumerate}
\item Culture is an evolving system, co-evolving alongside genetic evolution.
\item That within cultural species there are a range of ``types'' of cultural evolutionary patterns; cumulative and non-cumulative, tall and wide, unbounded and bounded.
\item That recognizing these ``types'' of cultural evolution allows Artificial Life researchers to better understand evolutionary dynamics and provides new perspectives from which to explore open-ended evolution.
\item That only humans demonstrate open-ended cultural evolution and that human cultural evolution has transitioned from a bounded to an unbounded evolutionary system in recent evolutionary history, thus providing a second instance of ``evolved open-endedness.''
\item That existing Artificial Life methods can be fruitfully applied to the study of cultural evolution.
\end{enumerate}
To develop these points, we outline a number of core concepts from the wider study of cultural evolution. We then analyze ``open-ended evolution'' and explore how such analyses might improve our understanding of evolutionary dynamics and the emergence of evolved open-ended evolutionary systems. A table of definitions for the key terms used here can be found in table \ref{tab:definitions}.
\begin{table}[h]
\centering
\begin{tabulary}{1.0\textwidth}{LLL}
Term & Definition & See\\
\hline
\hline
Culture & Information transmitted through mechanisms of social learning & \citet{boyd1985culture, CES, whiten2022emergence} \\
\hline
Cultural Evolution & The change in frequency or the form of cultural traits over time, where these changes are at least in part influenced by social learning & \citet{neadle2017food} \\
\hline
Open-Ended Evolution & An evolutionary process that is capable of producing a continuous stream of new adaptive novel units, with no \emph{a priori} limitations on the generation of such novelty & \citet{taylor2016open, gabora2017autocatalytic} \\
\hline
Cumulative Culture & A process whereby a culturally transmitted trait accumulates modifications over time with a ratchet-like effect & \citet{tomasello1999origins,boyd1985culture} \\
\hline
Unbounded Evolution & A continuous demonstration of new adaptive novelty and/or the ongoing growth in trait diversity. Term used interchangeably with open-ended evolution, but often used to contrast with bounded evolution & \citet{bedau1998classification, channon2006unbounded} \\
\hline
Evolved Open-Endedness & Open-endedness as the outcome of an evolutionary process as opposed to an assumed pre-condition & \citet{pattee2019evolved}\\
\hline
Wide Evolution & A characterization of the disparity of traits and traditions; increased through processes of recombination, innovation, or the exploration of previously underappreciated affordances & \citet{buskellForthcomplex, derex2022exploit} \\
\hline
Tall Evolution & A characterization of the typical length (measured in relevant changes generated through cumulative evolution) of independent trait traditions &
\end{tabulary}
\caption{Reference table of definitions for key terms}
\label{tab:definitions}
\end{table}
\section{Cultural Evolution}
What is culture and how does it evolve? As suggested above, culture can be minimally defined as the transmission of information -- traits -- through mechanisms of social learning \citep{boyd1985culture}. This minimal and abstract characterization of culture permits ``information'' and ``traits'' to be read in an encompassing way to include a wide variety of techniques, technology, and behavior. Examples of such traits include the extractive foraging techniques among chimpanzees \citep{sanz2010chimpanzees} or methods for lighting a fire \citep{macdonald2021middle}. It may also incorporate behaviors with communicative effects such as warning calls \citep{griffin2004social}, bird-song, or language \citep{janik2000different}. The definition also incorporates population-level conventions among conspecifics for greeting and leave-taking \citep{duranti1997universal,baehren2022saying} as well as normative behaviors such as styles of dress or decoration \citep{baehren2022saying, richerson2009tribal}. Again, the key is that the acquisition of these behavioral traits or beliefs is, and must be, influenced by social learning -- when it is not, the traits are not cultural.
\subsection{Does Culture Evolve?}
An evolutionary process does not require a \emph{particular} kind of physical instantiation or biological substrate. While familiar processes of biological evolution are mainly grounded in the manipulation and modification of genes, cultural evolution (and evolution more generally) is under no such obligation. Consider Dennett’s \citeyear{dennett1996darwin} conception of evolution as being both algorithmic and substrate neutral. Evolution is algorithmic in the sense that if certain conditions are met, a certain sort of outcome is necessarily produced \citep[p.~48]{dennett1996darwin}. Where there is reproduction with variation under selection at a population level, a certain kind of outcome is produced -- in this case, the frequency of adaptive outcomes is increased in the population over time. In cultural evolution, `adaptive' may refer to the cultural trait and the success the trait has in spreading from mind to mind \citep{rosenberg2017social}, or it may refer to the effects the trait has on its bearers' (adaptive) behavior, though the co-evolutionary nature of culture and biology, and the effect culture has on biology, cannot (and should not) be understated \citep{henrich2007dual}.
Ultimately, the target of reproduction is the informational content carried by some vehicle -- whether this vehicle is expressed behavior, an artefact, or the instructions of a written account (though it is of course the case that the vehicle can itself have ``fitness''). Artificial Life has often equated such a characterization with the idea of a ``meme'' \citep{bull2000meme, bullinaria2010memes, bedau2013minimal}: a discrete, particulate unit of information that is copied intact between brains, analogous to the way that genes are copied between parents and offspring \citep{dawkins1976selfish}. Cultural evolution, however, does not require the process of reproduction and cultural inheritance to be understood in terms of strict copying. While the literature on this point is vast, \citet{rosenberg2017social} provides a clear summary of the arguments:
\begin{enumerate}
\item Replication in biology has not always involved high-fidelity replicators -- the ``major transitions in evolution'' literature explains how evolution itself has gradually generated higher fidelity transmission processes. While the first replicating molecules were not DNA, nor did they have accurate copying mechanisms, fidelity increases are evolutionary achievements that could and can be selected for over time \citep{maynardsmith1995major}.
\item Even in genetic evolution, a single gene can rarely be equated with a single trait -- the vast majority of biological traits result from complex interactions between the proteins expressed and regulated by many genes, so why should one demand in cultural evolution that a trait is the product of one discrete meme?
\item Many features of human institutions are adapted to preserve and proliferate cultural traits even under low individual copying fidelity. Variation is introduced in the form of the (re)combination of existing traits, innovation of new traits by individuals (which may involve rational thought), or copying error (loosely analogous to mutation in genetic evolution). Meanwhile, selection may occur in multiple ways. This includes biological selection -- that is, the effect that cultural traits have on biological fitness (for instance, being led to believe that something is safe to eat when it is not).
\end{enumerate}
If we accept that evolution is algorithmic (i.e. it follows a series of processes to produce a certain outcome; selection + reproduction + variation = evolution), it follows that we are not bound to particular features of biological processes (e.g. sexual reproduction), nor are we bound to a specific substrate (e.g. DNA). Though Dennett’s conception of cultural evolution is considered out-of-date by some modern scholars of cultural evolution \citep{uhlivr2012needs}, his fundamental argument applies to it nonetheless: the idea of an algorithmic process makes it all the more powerful, since the substrate neutrality it thereby possesses permits us to consider its applications to just about anything \citep{dennett1996darwin}. And this is, of course, true: that one can create an evolutionary process within a computer is evidence that the process itself need not be strictly biological, merely algorithmic \citep{lehman2020surprising}.
\subsection{Co-Dependent Evolutionary Systems}
Cultural evolution is deeply intertwined with biological evolution. While these evolutionary processes and their products can generate complicated co-evolutionary feedback loops, each evolutionary system can be understood, studied, and modelled separately \citep{boyd1985culture, mesoudi2011cultevol}. For instance, as we suggest in more detail below, pre-modern hominin cultural evolution contributed to biological fitness in the form of ecological knowledge and technological production. Nonetheless, over time, cultural evolution has become increasingly unmoored from genetic fitness effects, producing a wide range of behavioral, social, and technological change \citep{henrich2015secret}. The reason for both the intimacy and relative independence of the two systems should be evident. The substrate of culture is biological: the brain.
Culture is bound to a biological substrate, but a substrate which is different from the classical understanding of genetic evolution in which traits are encoded (directly or indirectly) by genes. Gene expression may produce brains and (some) brains may acquire culture, but one cannot skip the middle step and claim that genes produce culture. While humans may be biologically prepared to acquire language \citep{fitch2011unity}, they are not biologically determined to learn English, Farsi, or Korean. Clearly, accessibility and exposure to certain kinds of inputs -- the presence of English, Farsi, or Korean language cues -- determine what language any given human ultimately produces. Or put another way, the acquisition, production, and transmission of language is largely influenced by social learning. So one cannot simply claim that the process of cultural evolution is independent from biology. Biological and cultural evolution are interdependent.
The idea that cultural species, and particularly cumulatively cultural species such as \emph{Homo sapiens}, have two interdependent systems of inheritance has been labelled `dual inheritance' (`triple inheritance' if the environment is also included \citep{laland2000triple}). On this account, human offspring inherit a genotype from their parents through sexual reproduction and they inherit a body of cultural information over the course of their post-natal lives via processes of social learning \citep{henrich2007dual} -- processes that themselves may be culturally evolved tools \citep{heyes2018gadgets}. Just as one’s genotype has been dictated by a history of selection pressures acting on genetic variation, one’s cultural inheritance is similarly shaped by selective pressures and the variation introduced through innovation, recombination, and error involved in social learning. Thus, in the same way that certain phenotypic features are adaptations -- increasing the biological fitness of individuals -- elements of culture may also be adaptations. Consider food taboos present in Fijian society \citep{henrich2010evolution,mckerracher2016food} which apply exclusively to pregnant women. Despite the causal opacity of the underlying process, these taboos protect women from miscarriage. Alternatively, consider the ritualized process of cassava production. Again, despite the causal opacity of the underlying process, populations have developed practices that remove toxic cyanogenic elements which would have long-term health consequences if regularly consumed \citep{bradbury2011mild,cardoso2005processing,mckerracher2016food,banea1992shortcuts}. Of course, it can also be adaptive to acquire cultural elements idiosyncratic to local cultures. Regardless of whether the practice of female or male circumcision has biological benefits, within a circumcising culture, it can be adaptive to demonstrate commitment to the group by engaging in such a costly signal. This can ensure inclusion and support by the group as well as prevent ostracism \citep{sosis2004adaptive, howard2017frequency} -- thus enhancing reproductive outcomes.
Cultural organisms do not only inherit genes and cultural information, but also an environment: that is, a habitat that has been selected, modified, and partly created by their ancestors. All organisms change their habitats through their actions -- of which spiderwebs, termite mounds, or human-made earthworks are just a few notable examples -- with more or less transitory effects. Such organism-modified environments are evolutionarily relevant insofar as they modify selection pressures or transmission opportunities -- what the evolutionary literature calls niche construction \citep{laland2000triple}. Systematic and long-lasting modifications, such as beaver dam-building or human agriculture can have profound effects on both biological and cultural evolutionary processes of the species producing these modifications as well as others in the habitat.
While niche construction is not uniquely human, humans are distinctive in that most of their niche construction activities are cultural (e.g., making dams, fences, bridges, schools, roads, clothes). Over evolutionary time, the hominin lineage has created a cultural niche that has not only affected their biological and cultural evolution by creating new selection pressures, but which has increasingly become crucial for their survival \citep{laland2011cultural,uchiyama2021}. For example, the use of fire and cooking may have facilitated selection for larger brains alongside smaller guts and jaws. Lacking fire or cooking, hominins would have been poorly adapted to their environments \citep{aiello1995expensive}. The second inheritance system -- culture -- can thus indirectly affect the first -- genes -- through niche construction. Genes and culture have co-evolved: cultural activities such as tool use and tool making have generated selection pressures for social tolerance and cognitive skills such as social learning, attention, working memory, and language, which in turn have opened up ever greater capacities for cultural innovations, social learning, and large-scale cooperation \citep{henrich2015secret}, creating the biological and cultural conditions for the emergence of open-ended cultural evolution.
Cultural evolution is often faster than genetic evolution: a cultural variant can emerge and recombine quickly and repeatedly within the lifetime of its carrier, and can die independently of the death of the individual \citep{boyd2013cultural}. Alongside the speed of cultural evolution, humans' capacity for planning and foresight suggests that many human adaptations are cultural or have cultural origins \citep{uchiyama2021}. Thus, cultural evolution can not only produce solutions to (ecological) problems, but also create new opportunities and niches that cultural evolution itself can exploit -- an autocatalytic process, resulting in the emergence of unbounded cumulative culture.
\section{Open-Ended Cultural Evolution}
As noted in the introduction, open-ended evolution is an umbrella term for a constellation of features associated with evolutionary change. These include the ongoing generation of novelties, adaptations, and evolutionarily salient entities \citep{taylor2016open}. For simplicity, we hold that an evolutionary system can generate open-ended evolutionary change if it is able to produce a continuous stream of novel units (evolutionary individuals, traits) with no \emph{a priori} limits to the generation of such novelties \citep{gabora2017autocatalytic,taylor2016open}. As several commentators have noted \citep{pattee2019evolved,tennie2018culture,bedau2019open,bedau2019patented}, human cultural evolution appears to be just such an open-ended evolutionary system.
More recently, cultural evolution researchers have used the term ``open-ended'' to describe what is unique about human culture \citep{tennie2018culture}. This acknowledges that human culture frequently involves processes of cumulative cultural evolution -- processes that generate traits (e.g. behaviour, beliefs) that build upon previous traits, perhaps also making them more complex, efficient, and adaptive. But calling human culture ``open-ended'' is also meant to suggest that cultural solutions to problems do not need to be stuck at local optima, but can break free and further improve, for instance, by the harnessing of new affordances \citep{arthur2009nature,derex2022exploit}. Focusing on this putative ``uniqueness'' of human culture, researchers have identified important transitions, cognitive capacities, and patterns of cultural evolution as hominins have evolved and changed over the past 8 million years.
In the next three subsections we make distinctions between patterns of cultural evolutionary change: between cumulative and non-cumulative cultural traditions; between ``building-up'' or \emph{tall} traditions and the ``building-out'' of \emph{wide} repertoires of traditions; and between bounded and unbounded evolution. These patterns capture important differences in cultural evolutionary dynamics. Though these patterns are distinct, they likely overlap in any given instance. In the final subsection we turn to consider how these distinct kinds of evolutionary patterns help characterize and explain the evolution of open-ended cultural evolution in hominins.
In focusing on distinct kinds of evolutionary patterns, and tracing these patterns back to concrete changes in selection pressures, cognitive mechanisms, and social arrangements, the approach taken here differs from recent attempts at describing hallmarks of open-ended evolution \citep{taylor2016open}. Hallmarks are signals such that, if one encounters them, one has good evidence that the evolutionary system is capable of open-ended evolution. By contrast, our approach distinguishes patterns that are associated with processes supporting cultural evolutionary change. These processes are critical to, but not necessarily sufficient for, open-ended evolution -- and thus are poor candidates for a hallmark approach. Nonetheless, distinguishing these processes helps to identify those important for evolving open-endedness, as well as how the interaction between such processes may be important to the eventual emergence of a system supporting full-blown open-ended evolution.
\subsection{Cumulative vs. Non-Cumulative}
A key distinction drawn by cultural evolution researchers is that between cumulative and non-cumulative culture. As many researchers see it, cumulative culture is central to explaining how human beings could have developed the sophisticated technical toolkits that allowed them to survive and thrive across varying -- and sometimes extreme -- ecologies \citep{richerson2005notbygenes, henrich2015secret, potts2013hominin,grove2011speciation}. Based on extensive human and non-human experiments, and a number of computational and mathematical models, \citet{mesoudi2018cumulative} have suggested ``core'' criteria that cultural evolutionary processes would have to satisfy in order to be classified as cumulative:
\begin{enumerate}
\item a change in behavior, followed by ...
\item ... transfer of the modified or novel trait via social learning, where ...
\item ... the learned trait results in an ``improvement'' in performance/fitness (cultural or genetic), with ...
\item ... the previous steps repeated in a manner that results in (sequential) modification and improvement over time.
\end{enumerate}
However, we follow recent work in denying that ``improvement'' over time is a necessary feature of cumulative cultural evolution, and instead favor a minimal formulation that sheds this requirement \citep{buskellinpressmere}.
On this minimal formulation, cumulative culture is simply the modification to, and retention of, socially transmitted cultural traits \citep{buskellinpressmere}. What we have called processes of cumulative culture in the above discussion are whatever cognitive and social capacities are sufficient to bring about trait modification and retention over time. But these \emph{processes} generate \emph{patterns} in the evolutionary record. Because cumulative culture involves retained modifications, cumulative cultural traits have histories -- and can be considered ``traditions''. The histories of such traditions can, at least in principle, be reconstructed as sequences of step-by-step changes (akin to what \citet{calcott2009lineage} calls ``lineage explanations''). This minimal formulation better aligns cumulative culture with evolutionary theory, such that cumulative changes can generate not only adaptive traditions, but also neutral and maladaptive ones \citep{buskellinpressmere}.
Contrasting with cumulative culture is non-cumulative cultural evolution. The latter is a process of cultural change that does not retain modifications for one reason or another. This might be because there is no retention of past behavior, no introduction of modifications, or no social learning sophisticated enough to pick up on relevant modifications. These situations might occur if individuals can only innovate new traits, cycle through a set of traits, or do not learn from one another. In these cases, histories of modifications will be non-existent, uninformative, or based in non-cultural inheritance systems.
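To make the contrast concrete, the following minimal sketch -- our own illustrative toy, not a model drawn from the cited literature -- implements the minimal formulation directly: in the cumulative regime each learner copies the previous generation's trait and adds a retained modification (neutral here, in line with the minimal formulation's rejection of an ``improvement'' requirement), while in the non-cumulative regime each learner reinnovates from the same baseline, so nothing accumulates:

\begin{verbatim}
import random

def simulate(generations=100, retain_modifications=True, seed=1):
    # Cumulative: copy the previous trait and add a small modification,
    # so modifications are retained and the trait acquires a
    # reconstructible step-by-step history.
    # Non-cumulative: reinnovate from the same baseline each generation.
    rng = random.Random(seed)
    baseline, trait = 0.0, 0.0
    history = [trait]
    for _ in range(generations):
        start = trait if retain_modifications else baseline
        trait = start + rng.gauss(0.0, 0.1)  # modification / innovation
        history.append(trait)
    return history

cumulative = simulate(retain_modifications=True)
non_cumulative = simulate(retain_modifications=False)
# Accumulated modification: distance of the final trait from baseline.
print(abs(cumulative[-1]), abs(non_cumulative[-1]))
\end{verbatim}

Only the cumulative run drifts arbitrarily far from the baseline, and only its history can be read as a lineage of step-by-step changes.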
\subsection{Tall vs. Wide Evolution}
\begin{figure}[tp]
\centering
\includegraphics[width=0.8\textwidth]{BORG.fig1.png}
\caption{ This figure illustrates our conception of \textit{tall} and \textit{wide} evolution. Full details are available in SUPP MATERIAL X. Each box represents some kind of technology or cultural practice. Each pixel within each box represents a piece of discrete (but arbitrary) information. The eight squares in row 1 were generated by setting each pixel to black or white with probability 0.5. Thus, all initial configurations of aggregate information are equiprobable. Thereafter one of eight arbitrary rules was applied over ten iterations. These rules are not grounded, but represent either changes of `information' within the aggregate or the introduction of structure (such as symmetry) into it. As can be seen, as the aggregate information cumulatively changes over time, it becomes more complex and more structured, and increasingly dissimilar from other traditions. Each column is independent of all other columns, and `movement' along the \textit{wide} axis is not possible without violating the cumulative principle of \textit{tall} evolution.}
\label{fig:tallvswide}
\end{figure}
Recent work has built upon analyses of cumulative culture to distinguish further cultural evolutionary patterns that had been unhelpfully lumped together. This work distinguishes between patterns involving an increasing stock of cultural traditions (``cultural disparity'') and important aspects of cumulative cultural traditions (e.g. increases in adaptiveness, efficacy, or complexity) \citep{buskell2018religion, buskellForthcomplex}. This and other work \citep{dean2014human, tennie2009ratcheting} points to a helpful distinction between cultural evolutionary patterns: between ``building upon'' traditions and ``building out'' to generate new traditions -- or just \emph{tall} versus \emph{wide} evolution.
Figure \ref{fig:tallvswide} provides a visual example of both tall and wide evolution, with tall evolution displaying a series of path-dependent adaptations within a single tradition. Each step in the sequence could only have occurred if the previous evolutionary steps had already arisen. While tall traditions need not be path-dependent -- for instance, if evolution is highly constrained -- it is a common assumption that evolutionary change is so, and we emphasize path-dependency here. Wide evolution, by contrast, is about the novel instancing of new traits. Paradigmatically, this involves the innovation of completely new traditions that need not follow any \emph{a priori} sequence. Of course, some new traditions may only arise through path-dependent cumulative evolution and recombination -- but we put those instances to the side in this illustration. Thus, in this figure, one could re-arrange the wide axis (since new traditions need not appear in any sequence), but not the tall (since each step is strongly determined by the one prior).
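For concreteness, the sketch below is a minimal reconstruction of the generating recipe described in the caption of Figure~\ref{fig:tallvswide}; the box size, the particular structuring rules, and the random seed are illustrative assumptions on our part (the exact procedure is given in the supplementary material):

\begin{verbatim}
import random

def initial_box(size=8, rng=random):
    # A box: size x size grid of 0/1 pixels, each set independently with
    # probability 0.5, so all initial configurations are equiprobable.
    return [[int(rng.random() < 0.5) for _ in range(size)]
            for _ in range(size)]

def flip_random_pixel(box, rng=random):
    # One arbitrary rule: change a single pixel of 'information'.
    n = len(box)
    out = [row[:] for row in box]
    i, j = rng.randrange(n), rng.randrange(n)
    out[i][j] = 1 - out[i][j]
    return out

def symmetrise(box):
    # Another arbitrary rule: mirror the left half onto the right,
    # introducing structure (bilateral symmetry) into the aggregate.
    n = len(box)
    return [row[:n // 2] + list(reversed(row[:n // 2])) for row in box]

random.seed(0)
tradition = [initial_box()]          # row 1: one equiprobable start
for _ in range(10):                  # ten iterations of a fixed rule
    tradition.append(symmetrise(flip_random_pixel(tradition[-1])))
# Each such column is one independent ('tall') tradition; running this
# again with fresh initial boxes populates the 'wide' axis.
\end{verbatim}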
By way of example, let us consider some kind of adaptive problem that may have multiple starting points -- starting points which are either equiprobable (equally likely to occur in the same environment), or equally efficient at solving the problem but the product of different affordances due to different environments. This might include capturing fish or preserving meat, or could include the production of housing or clothing, refining ore into more valuable products, or skinning cats. The specifics matter less than the principle being illustrated. Along the x-axis we have multiple starting points. Let us consider the fishing example. One equiprobable starting point may be to wait in the shallows and bash a fish with a rock as it swims by, or to bash a fish with a stick. Another example may be to wait at a certain point on the beach which, at low tide, forms a natural pool from which fish cannot escape. Yet another may involve poisoning the water with certain plant foliage. These starting points may 1) all be equally likely due to the affordances of the environment, or 2) all be arrived at by different groups who live in different environments with different affordances. Whether either is true in any given situation is less important than accepting that these are (some of) the starting points for acquiring fish.
Tall evolution may involve the rock culture innovating upon the basic rock-bashing behaviour: perhaps first by throwing the rock, then by tying a fiber to the rock before throwing (so as to recover the rock more quickly through a pulling motion), and then by using multiple rock-fiber devices to expand the range of striking. Later innovations might eschew the bashing/throwing motion for connecting the fibers together to make a rake or net. Further innovation might then improve the netting technology or the casting technique, and so on.
Meanwhile, the stick culture may innovate upon the bashing motion by innovating a sharp point -- now preferring to pierce rather than to bash. Later innovations might make spears much longer than would ever be practical for bashing, so as to stand further away from the fish without scaring them. Then, perhaps, innovations might lead to a stone-tip for the spear. And later still, a spear-throwing device like an atlatl or woomera to bring down larger prey, and so on.
It may be the case that, in the first instance, stick bashing and rock bashing are equally (in)efficient, and that -- assuming an abundance of rocks and sticks -- one individual or one culture may switch between techniques with little cost. However, once groups begin to innovate upon their starting point, horizontal movement comes with greater cost, and relies upon different principles. A raking technique does not beget a spear-thrower, and vice versa. After `tall' evolution has progressed beyond a certain point, horizontal movement cannot be integrated/combined with the existing `advanced' approach, and switching comes at greater cost to the individual or culture.
Another case study is the tool use of chimpanzees. Chimpanzees are capable of spontaneously innovating tools given available resources, such as using blades of grass for termite fishing, sticks for obtaining out-of-reach objects, or branches for scooping algae out of water \citep{boesch1990tool,sanz2010chimpanzees,bandini2017scooping}. Each and all of these innovations can exist within a population of individuals, but the existence of one need not depend on the existence of any other. Theoretically, any of these innovations can be selected for and spread within the population independently of the others. This is wide evolution. Nonetheless, modifications could be added to these innovations -- introducing an anvil-prop to nut-cracking, chewing and stripping the grass to produce ant-catching bristles -- that put them on the vertical road to becoming a tall cultural evolutionary tradition.
This example also points to an important corollary of the distinction between tall and wide evolution. The capacities underlying each plausibly come apart. This seems clear when one looks at hominin evolution, where early capacities for social learning led to wide knowledge bases of disparate ecological traditions prior to the building up of any particular tradition into more complex forms \citep{buskellinpressmere, sterelny2021pleistocene} (more on this below).
More generally, we want to resist identifying tall or wide evolutionary patterns as hallmarks of open-ended evolution. It is an open question how tall (or short) and wide (or narrow) evolutionary patterns relate to open-ended evolution, as well as to the transition to open-ended evolution. As examples above and below suggest, capacities that support tall and wide evolutionary patterns likely existed well before ecological and evolutionary circumstances permitted their expression. And indeed, open-endedness most likely emerged from the gradual accumulation of new traditions, their elaboration into tall, path-dependent traditions, and their recombination and exaptation into bushy, wide, and novel traditions -- we can see this visually in the patent record genealogies produced by \citet{bedau2013minimal, bedau2019patented}, where both the gradual accumulation of new patent traditions and long sequences of traditions building upon one another are easy to identify. There is thus no reason to take either tall or wide evolution as a hallmark of open-ended evolution: ultimately, they just describe the patterns of change that underpin the emergence of open-ended evolutionary processes. Both seem necessary for open-ended evolution to emerge, but only further empirical analysis of the patterns of change found in open-ended evolutionary systems will allow us to ascertain whether a common pattern exists or whether a multitude of patterns can ultimately underpin open-endedness. This should be unsurprising. Both formal modelling \citep{enquist2010one,kolodny2015evolution,winters2020comb} and cultural evolutionary theory \citep{richerson2005notbygenes,buskell2019systems,charbonneau2016modularity} emphasize the role of cultural recombination as a potent force in generating new innovations: this occurs when distinct cultural traditions (or their constituent elements) are combined, and potentially exapted \citep{mesoudi2018cumulative}, to generate new traits. We expand upon this line of thinking below and go on to ask whether these variations in the progression of evolution (tall, wide, recombinative, exapted) are detectable within the ``ALife test'' introduced by \citet{bedau1998classification} (see also \citet{channon2001passing,channon2003improving,channon2006unbounded}).
\subsection{Bounded vs. Unbounded Evolution}
A conceptually distinct and contrasting set of evolutionary patterns is that between bounded and unbounded evolution. Bounded evolution occurs when abilities for transmission, retention, or the production of modifications are limited or absent. This leads to evolutionary exploration of a parochial, bounded space of traits. Unbounded evolution, by contrast, occurs when the above abilities for transmission, retention, or the production of modifications are present and when the environment facilitates evolutionary exploration. This might occur, for instance, when the environment is rich in natural resources which can be exploited in technological production \citep{derex2022exploit}.
To get a grasp on this distinction, it is useful to look at a domain in cultural evolutionary research where issues of boundedness or unboundedness arise. A good example is work on the Zone of Latent Solutions (ZLS) Theory \citep{tennie2009ratcheting}, which analyses the cultural and putative cumulative cultural traditions of non-human animals. \emph{Putative}, because while several species have capacities for social learning, they appear to have minimal capacities for building upon previous traits. Speaking generally, the ZLS theory suggests that the cultural capacities of non-human animal species are ``bounded'', limited to a range of possible features. Explanations for why this might be the case have mainly centred on the great apes (hereafter ``apes''), but developing work suggests similar explanations may hold true with other animals, such as some birds and whales \citep{aplin2019birds,perry2011traditions,vanschaik2003orangculture,whitehead2015whales,whiten1999cultures}.
According to the ZLS theory, many putative instances of ape (and perhaps other animals') cumulative culture are not, in fact, instances of cumulative culture. The ZLS theory argues that apes lack (or have minimal, or rarely expressed) capacities for transmitting and retaining trait modifications. What appears to be cumulative culture is instead likely to be socially-influenced \emph{reinnovation}. When apes reinnovate, they draw on a baseline repertoire of behaviours -- behaviours that any able-bodied ape would be able to express -- to individually strike upon the trait of interest. Though this reinnovation may be socially facilitated, in the sense that other apes may draw attention to relevant or highly salient environments or objects, the trait is developed by each learner anew.
The basic idea of the ZLS is that this baseline repertoire -- and the artful combinations thereof -- largely set the bounds of possible cultural evolution (together, perhaps, with other cognitive features). Absent more sophisticated forms of social learning, apes are unable to add novel traits, or to build cumulative traditions that progress beyond the boundary of `latent solutions'. Apes, unlike humans, do not seem to copy -- or transmit -- traits beyond their ZLS (be it in the technical \citep{tennie2009ratcheting}, or social domain \citep{clay2017overimitation}). As said above, the appearance of cumulative culture can largely be accounted for by socially-facilitated reinnovation \citep{tennie2020zone}. That being said, there is some contrary evidence that suggests apes have limited capacities for some cumulative culture. Yet these capacities seem restricted to particular domains, with specific kinds of knowledge, and often in highly structured learning environments \citep{cladiere2014baboon,sasaki2017cumulative}. Absent these conditions, ape cultural evolution may be robustly bounded.
What might explain the transition between bounded ape culture and unbounded human culture? Though a full catalogue of important underlying processes has not yet been completed, a key capacity seems to be abilities for copying ``know-how'' -- that is, capacities for attending to, perhaps understanding, and copying/reconstructing the elements and interrelationships of \emph{any} particular behavior (including the making of artefacts; and of artefact structures themselves). Other relevant capacities -- at least for modern humans -- plausibly include language, and special types of teaching (especially those types of teaching that can transmit know-how).
ZLS research thus helps the current project in two ways. First, it helps to sharpen the notion of cultural evolutionary boundedness. Boundedness involves a limited exploration of cultural evolutionary space, due to minimal, lacking, or rarely expressed capacities for transmission, retention, or the production of modifications. Second, it helps to illuminate the devilish empirical issues involved in understanding the transition from boundedness to unboundedness. Focusing on the tall, wide, and unbounded cultural evolution of humans alone may not be helpful for understanding this transition \citep{buskellinpressmere}, but a combined focus that also includes understanding the patterns of change in evolutionary systems that ultimately fail to break away from boundedness may.
\subsection{Evolved Open-Endedness in Action}
According to \citeauthor{pattee2019evolved}, ``conditions for increased open-endedness must have been gradually acquired in the course of evolution'' (\citeyear[p.~5]{pattee2019evolved}). In justifying this claim, \citeauthor{pattee2019evolved} point not only to concepts from the foundations of the modern synthesis \citep{haldane1932causes} and other more recent attempts to frame evolution as a progression of steps towards increased evolvability \citep{wagner1996perspective,wilson1997altruism,maynardsmith1995major,szathmary2015toward}, but also to numerous examples of evolved mechanisms that have ``significantly facilitated the open-endedness in the evolution of life'' \citep[p.~6]{pattee2019evolved}. Notable amongst these examples are:
\begin{itemize}
\item the evolution of the symbolic languages spoken by humans, which are noted as being ``evolved from simpler, less open-ended languages'' \citep[p.~6]{pattee2019evolved}.
\item the formation of co-operative groups of increasing scale and complexity (colonies $\rightarrow$ societies), with higher levels of organisational and institutional formation requiring the evolution of new mechanisms not previously seen in lower-level organisational entities.
\item the evolution of new information-processing abilities, sensory modalities, and the brain, all providing organisms with new possibilities to explore and exploit.
\end{itemize}
From these examples it is clear that \citet{pattee2019evolved} consider what we describe as the evolution of culture (e.g. languages and social institutions), and the biological mechanisms that support culture (e.g. the brain and culture-supporting sensory modalities), as clear examples of evolved open-endedness. Therefore, we believe that in human cultural evolution (including ``dual-inheritance'' and ``triple-inheritance'') we have a real (and recent) example of evolved open-endedness in action. Below, we outline the case for human cultural evolution as just such an instance.
Within cultural species more broadly, we can differentiate between different types of cultural evolution: bounded non-cumulative, bounded cumulative, unbounded non-cumulative, and unbounded cumulative. While cumulative culture may or may not be uniquely human \citep{mesoudi2018cumulative}, unbounded cumulative culture plausibly is. Indeed, human cultural evolution appears to be the only instance of unbounded cumulative cultural evolution.
Evidence suggests that the transition towards unbounded cumulative cultural evolution has taken place over the last few hundred thousand years with the origin and evolution of \emph{Homo sapiens} \citep{stringer2016origin, stringer2017origin}, or even over the last few million years with the advent of stone tool use in early \emph{Homo} \citep{lewis2016earlier}. We thus have, in both archaeological remains and in our genes, the record of this transition into open-ended cultural evolution. Exploring this transition is valuable, for it offers a compelling insight into the problems, solutions, processes and complex evolutionary dynamics that can jointly explain the emergence of a new open-ended evolutionary system. Though this is a particular instance, we suspect the concepts, tools, and ideas can be generalised.
This is not to say explaining the transition from primate ancestors to fully-fledged cultural hominins is easy. Anything but. Contemporary narratives point to a number of important changes that might have facilitated the evolution of a robust, quasi-independent system for cultural inheritance. These include changes in morphology (the bipedal stance, decreased gut size, increased cranial capacity), life history and population structure (social affiliation, intergenerational care, long developmental periods, extended family groups and social institutions), and cognitive attributes and machinery (greater executive control, social tolerance and attentiveness) \citep{sterelny2012evolved, sterelny2021pleistocene, klein2008career, kaplan2000theory, aiello1995expensive, grove2017environmental, anton2014evolution, ostrom1990governing, powers2013co,powers2016institutions}.
Just as important were cultural evolutionary feedback loops where early culture could facilitate selection for more and more effective social learning. Pre-modern hominin culture, for instance, generated an information environment seeded with cues as to how one should live. This includes ``scaffolded'' learning environments, where juveniles can learn in a relatively safe and low cost manner by interacting with the products of adult cooperation. These low-cost and safe learning environments could be increasingly supplemented with real-world experience, perhaps teaching, and experimentation as learners developed. Selection to improve capacities to navigate and explore this informational domain would in turn lead to greater informational structure in the world -- and thus to further selection. This general story is one of humans as ``evolved apprentices'' \citep{sterelny2012evolved}.
The story of how hominins escaped the ``boundedness'' of their primate relatives exploits this evolutionary feedback loop, increasing capacities for both tall and wide culture, and abilities to recognize ``task-independent'' properties of artefacts and behaviors that could be transferred and combined with other behaviors to generate new kinds of cultural traditions. These cognitive and cultural capacities could open up new evolutionary domains by exploiting novel affordances \citep{arthur2009nature,derex2022exploit}. As a result, human technologies capture and put to use a collection of phenomena: for example, a car not only exploits the phenomenon that rolling objects produce much less friction than sliding ones (resulting in the use of wheels), but it also exploits the phenomenon that chemical substances (diesel, say) produce energy when burned \citep{arthur2009nature}. This discovery and exploitation of new solutions to old problems allows a potentially unbounded form of cumulative culture. We see evidence for the opening-up of new evolutionary search spaces, and the exploitation of new solutions in numerous domains within patent records \citep{bedau2019open, bedau2019patented, bedau2013minimal}.
Equally important is the way that human groups can support the increasing specialisation of skills and knowledge, the circulation of knowledge, and participation in collective endeavours -- pitching in on large or temporally distributed projects that could never be completed by a single agent in their own lifetime. These social features in turn could contribute to the changes in cognition, life history, and information dynamics discussed above. This is part of what some have called -- with various slight differences -- the cultural intelligence hypothesis \citep{herrmann2007specialized,vanschaik2011social,muthukrishna2018cultural}.
As this makes clear, the transition between a limited type of social learning and the more complex and open-ended form currently enjoyed by humans is a complex story. Despite this, researchers in archaeology, comparative psychology, paleoanthropology, psychology, philosophy, and many other fields have been able to make progress on disentangling distinct causal pathways, and to show how these can be put together again to explain the evolution of a distinct system of open-ended evolution: human cultural evolution \citep{tomasello1999origins,boyd1985culture}.
\section{Cultural Evolution, Open-Ended Evolution and Artificial Life}
Culture and cultural evolution have a long tradition in Artificial Life, appearing amongst both the grand challenges \citep{taylor1993artificial} and open problems \citep{bedau2000open} of the field, and spawning a regular workshop series at the Artificial Life conference \citep{marriott2018social}. It is therefore curious that open-ended cultural evolution has received relatively little attention as a possible avenue for fruitful research until recently (see \citet{bedau2019open}).
In the previous sections of this paper we have outlined many of the arguments and factors that we feel place cultural evolution firmly within the domain of open-ended evolution research. However, we also note a curious parallel between the work already taking place within the Artificial Life open-ended research community and the broader study of culture as an evolving system. A particular example of this can be seen in \citet{taylor2019evolutionary}, where three classes of novelty, all capable of generating open-ended evolution, are introduced: 1) \emph{exploratory novelty}, whereby existing traits are recombined to produce novel adaptations, 2) \emph{expansive novelty} resulting from the discovery and exploitation of new affordances, and 3) \emph{transformative novelty} resulting from the discovery of new state spaces, possibly via the exaptation of current traits. Within the cultural evolution literature we can see clear parallels with each of these classes: \emph{exploratory novelty} can be seen as a restricted process of cultural variation and accumulated modification within one domain or affordance (described as Type I cumulative cultural evolution by \citet{derex2022exploit}); \emph{expansive novelty} can be interpreted as an exploration of new affordances, expanding cultural evolution into new domains (described as Type II cumulative cultural evolution by \citet{derex2022exploit}); and \emph{transformative novelty} can be viewed as movement into an n-dimensional state-space through the recombination and exaptation of existing cultural traits, enabling the creation and exploitation of new cultural and ecological niches. Examples of cultural exaptation abound in numerous domains, technology \citep{boyd2013cultural, bedau2019open, bedau2019patented} and pharmaceuticals \citep{andriani2015measuring} being two such domains.
It is evident that open-ended evolution research in artificial life and cultural evolution research have been speaking about very similar things; the types of novelty discussed by \citet{taylor2019evolutionary} and core aspects of cumulative cultural evolution outlined by \citet{derex2022exploit} and \citet{mesoudi2018cumulative} demonstrate such similarities. It should therefore be uncontroversial to suggest an open-ended evolutionary synthesis that combines genetic evolution, cultural evolution, and artificial evolution within a single theoretical framework. Combined with the exploratory work on open-ended technological innovation of \citet{bedau2019open, bedau2019patented}, the inclusion of social and cultural transitions emerging from earlier biological transitions within the major transitions framework \citep{maynardsmith1995major, szathmary2015toward, calcott2011major}, and the clear articulation of evidence for both biological and cultural mechanisms for the facilitation of evolved open-endedness \citep{pattee2019evolved}, we see a strong argument for the inclusion of cultural evolution within the broader framework of open-ended evolution.
In the sections below we argue that the transition from bounded to unbounded evolution that is evident within recent hominin evolutionary history shines an important light on how evolved open-endedness might be achieved. We go on to consider tall and wide evolution within the context of the \citet{bedau1998classification} ``ALife Test'' and provide some initial thoughts on how this test could be further expanded to detect tall and wide adaptations in order to better delineate between the mechanisms driving (and halting) artificial evolutionary systems. Finally, we introduce a raft of new questions that the inclusion of cultural evolution under the framework of evolved open-endedness allows us to ask.
\subsection{Transitions from Bounded to Unbounded Evolution}
As we saw in section two, it is common to operationalize culture in informational terms: culture is information, embedded (or carried) by heterogeneous vehicles, that can be transmitted between agents \citep{richerson2005notbygenes}. On this understanding, one thread tying together the evolutionary history of hominin populations is an increase in and improvement of culturally transmitted information \citep{boyd1985culture}. This general observation has led some researchers to claim that culture represents a ``major transition'' in the sense of \citet{maynardsmith1995major} and \citet{szathmary2015toward}, building off the idea that such transitions involve changes in the quality and reliability of information transfer. For instance, \citet{waring2021longterm} argue that human cultural groups are a new kind of evolutionary individual, suggesting that cultural selection pressures now vastly outweigh biological selection pressures in determining the course of human diversification and change.
Waring and Wood’s arguments interpret the major transitions framework in a particularly strong way. This takes transitions to involve the stabilization of a new evolutionary individual, here, a cultural group \citep{mcshea2011miscellaneous}. But one need not understand the framework in this ``unified'' way \citep{michod1999dynamics}. Instead, transitions may involve modifications of the ``core elements of the evolutionary process itself'' \citep[p.~4]{calcott2011major}, irrespective of introducing a new level or kind of selection process \citep{godfreysmith2009darwinian}. Thus, even if one is sceptical about cultural group selection (see, for instance, \citet{chellappoo2022selection}) one can usefully understand the introduction and refinement of cultural evolution using the ideas and machinery of the major transition literature \citep{maynardsmith1995major,szathmary2015toward, calcott2011major}.
We conceive ``open-endedness'' through this more expansive understanding. It characterises an increase of informational content that can be (or is) transmitted in a given domain, potentially reflecting coordinated or piecemeal changes to the rate, quantity, or kind of variation that can be generated. In so doing, we follow \citet{pattee2019evolved}:
``[o]ver time both biological adaptations that enable more complex and open-ended social and cultural behaviors (bigger brains, opposable thumbs, changes in the shape of the larynx, ...), and cultural adaptations that open up access to new domains of knowledge (symbolic language, the scientific method, music and art, complex social institutions, ...) have been selected for in a clear demonstration of selection in favour of open-endedness, with this same selection pressure being seemingly absent in our closest genetic relatives''.
\subsection{Cultural Evolution and the ``ALife Test'' for Open-Endedness}
Determining whether an evolutionary system exhibits unbounded evolutionary dynamics is still arguably the primary concern of open-ended evolution research. Without the ability to judge whether a system is open-ended, how can open-endedness be understood to any useful degree? Despite a general lack of use, we are of the opinion that the classification system of long-term evolutionary dynamics devised by \citet{bedau1998classification} (sometimes known as the ``ALife Test'' for open-endedness) provides us with the best method for determining whether an evolutionary system exhibits unbounded evolutionary dynamics. However, we believe some of the key features of cultural evolution -- wide vs. tall evolution, transition from bounded to unbounded evolution, and evolved open-endedness -- may necessitate some refinement of the ``ALife Test''.
The three primary measures of evolutionary activity described in \citet{bedau1998classification} are 1) the diversity of traits within the system at any given time, 2) the amount of ``new evolutionary activity'' observed in the system over time (i.e., the creation and maintenance of new adaptive traits), and 3) the mean cumulative activity of traits (i.e., the number of traits observed to date divided by the current diversity of traits in the system). For a system to exhibit unbounded evolutionary dynamics it would need to always demonstrate positive new evolutionary activity (i.e. new traits are being created and maintained), alongside unbounded diversity (as time progresses the number of traits maintained in the system continues to grow) and/or unbounded mean cumulative activity.
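As a schematic transcription of these three statistics -- following the gloss above; the original definitions in \citet{bedau1998classification} additionally accumulate per-component activity counters and screen them against a neutral ``shadow'' model, both of which are omitted here -- one might compute:

\begin{verbatim}
def activity_statistics(populations):
    # 'populations' is a time series: one set of trait identifiers
    # per time step.
    seen = set()                   # every trait observed to date
    stats = []
    for present in populations:
        new_traits = present - seen
        seen |= present
        diversity = len(present)                        # measure (1)
        new_activity = len(new_traits)                  # measure (2)
        mean_cum = len(seen) / diversity if diversity else 0.0  # (3)
        stats.append((diversity, new_activity, mean_cum))
    return stats

# Toy run: trait 'a' persists while new traits keep arriving.
series = [{"a"}, {"a", "b"}, {"a", "b", "c"}, {"a", "c", "d"}]
for t, row in enumerate(activity_statistics(series)):
    print(t, row)
\end{verbatim}

On this reading, unbounded dynamics correspond to the second number remaining positive while the first and/or third grow without bound.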
What these measures of evolutionary activity do not take into account is whether the new activity is a result of cumulative evolutionary processes, non-cumulative evolutionary processes, or recombinative processes. These distinctions matter because they can begin to shed light on \emph{how} a system has progressed toward, and ultimately achieved, open-endedness. For instance, would we expect to see a ``building-out'' of wide adaptations (as seems to be the case in hominin cultural evolution) before the emergence of tall accumulated modifications, ultimately leading to the combination of traits from disparate evolutionary lineages forming recombinative adaptations (wide evolution providing the raw material for exploratory and expansive evolution as per \citet{taylor2019evolutionary})? Or are there numerous different pathways to open-endedness which can only be understood by breaking down the nature of the evolutionary patterns of change, adaptive processes, substrate and mechanisms underpinning these evolutionary systems?
\subsection{New Questions in Open-Endedness}
Once we consider the implications and nature of cultural evolution from an open-ended evolution perspective we can begin to ask new and important questions about evolved open-endedness, human cultural evolution, and the underpinning dynamics of all evolutionary systems. These questions include, but are not limited to:
\begin{itemize}
\item Do the mechanisms underpinning cultural evolution more easily lead to open-endedness than those underpinning genetic evolution? Or vice-versa?
\item What happens when a bounded aspect of an evolutionary system (e.g. animal cultural evolution) comes up against an unbounded aspect of the same evolutionary system (e.g. human open-ended cultural evolution)? Is there a sudden pressure for evolved open-endedness to emerge amongst species that have so far only exhibited bounded cultural evolution? And does the emergence of open-endedness always lead to the extinction of its bounded counterpart?
\item Are there any bounded aspects of human cultural evolution? And could there also be bounded aspects of genetic evolution?
\item Does an evolutionary system need to be cumulative to be open-ended, or is it possible to have non-cumulative open-ended evolution? Note: If major transitions are one of the primary behavioral hallmarks of an open-ended evolutionary system \citep{taylor2016open}, and major transitions build up incrementally from one another (each transition is dependent on preceding levels), this would imply that open-ended evolution must result from a cumulative evolutionary process. But is it possible to generate open-ended evolution without cumulative major transitions and could major transitions be the result of numerous independent innovations?
\item Are cumulative evolutionary systems always open-ended? The numerous cases outlined in \citet{mesoudi2018cumulative} would suggest not, nor do the criteria for cumulative cultural evolution necessitate an open-ended system (or logically lead to the conclusion that open-ended evolution is an unavoidable end point).
\item What features of cultural evolution are common to all evolutionary systems capable of generating the open-ended evolution of novelty?
\item Is an open-ended evolutionary synthesis which accommodates cultural evolution alongside genetic evolution and artificial evolution viable and/or desirable?
\item Is niche construction necessary for open-ended evolution? And are the autocatalytic processes resulting from the interplay between numerous interdependent evolutionary systems (see ``triple inheritance'' \citep{laland2000triple}) necessary for open-endedness?
\end{itemize}
\section{Conclusion}
In this paper we set out to outline culture as an evolutionary system and argue for its inclusion within the broader framework of evolved open-endedness. In order to make these arguments we provided numerous examples of the unique aspects of cultural evolution that make it a fascinating counterpoint to biological evolution, while also maintaining a direct link between the core algorithmic features of biological evolution and cultural evolution. We went on to discuss the key features and dynamics of cultural evolution, including: tall, wide, cumulative and non-cumulative evolution, transitions from bounded to unbounded evolution, dual and triple inheritance, evolved open-endedness, major transitions, and the ZLS theory. Each of these features provides new insights into the nature of culture as another model evolutionary system.
Going forward we believe two lines of enquiry are necessary to fully develop cultural evolution as an integral part of open-ended evolution research. 1) Following on from the work of \citet{bedau2019open}, we believe an application of the ``ALife Test'' to the vast number of available cultural evolution datasets, across numerous species, would be informative for both the open-ended evolution community and the cultural evolution community. 2) Including mechanisms of cultural transmission and the unique features of cultural evolution within artificial evolutionary models aimed at addressing the question of open-endedness -- this may involve the modelling of culture as an independent system, or the inclusion of culture alongside genetic (and environmental) inheritance. To enable these two lines of enquiry we believe some work on the refinement of the ``ALife Test'' is necessary, as is the development of theory covering tall, wide, and recombinative evolution, and more interdisciplinary dialogue between the fields of Cultural Evolution and Artificial Life.
\section{Acknowledgements}
We would like to thank the members of the Cultural Evolution Online (CEO) Discord group who have provided advice and insights that have helped us form the views contained in this paper, and James Winters and Mathieu Charbonneau for their involvement in regular discussions with the authors on the topics addressed in this paper. We would also like to thank our two reviewers; both provided a series of constructive comments that have unquestionably improved this paper. Finally we would like to thank the organisers of the OEE4 workshop and the guest editors of this special issue for providing us with the opportunity to present this work.
\printbibliography
\end{document}
\section{Introduction}
The interest in magneto-elastic materials finds its motivation in the growing
variety of new materials among which magneto-rheological elastomers or
magneto-sensitive polymeric composites \cite{Hossain-et-al-2015a, Hossain-et-al-2015b}
may be mentioned. A whole Special Issue devoted to {\it Magnetoelastic Materials}
is going to be published soon \cite{SI2020} in the Journal {\it Materials}.
Many applications of magneto-elastic materials can be listed, covering a wide area of interest
ranging from technological to biomedical devices; see e.g. \cite{Ren}.
In particular, two-dimensional
problems are also the subject of applied investigations \cite{Hadda}.
The model we consider is a simplified two-dimensional one; however, we believe that it might
open the way to further applications, possibly via perturbative methods \cite{Bernard}.
We study the functional energy of a magneto-elastic material, that is
a material which is capable of deformation and magnetisation. The
magnetisation is a phenomenon that does not appear at a macroscopic
level; it is characterised by the {\it magnetisation vector}, whose magnitude
is independent of the position while its direction can vary from one point to another.\\
In this context, the magnetisation vector ${\bf m}$ is a map from $\Omega$
(a bounded open set of $\mathbb{R}^2$) to $S^2$ (the unit sphere of
$\mathbb{R}^3$). In particular, here we assume $\Omega$ is the unit
disk of $\mathbb{R}^2$. The magnetisation distribution is well
described by a free energy functional which we assume to be composed of
three terms, namely the {\it exchange} energy ${E}_{\rm ex}$, the
{\it elastic} energy $E_{\rm el}$ and the {\it elastic-magnetic}
energy $E_{\rm em}$. In Section~\ref{model} we detail the three
energetic terms and, after some simplifications, derive the proposed
functional for describing some phenomena. Under the hypothesis of
radially symmetric maps, i.e.
$$ {\bf m} = ( \cos \theta \sin h(r), \sin \theta \sin h(r), \cos h(r)
),$$ we get to the analysis of a one-dimensional energy functional
that can be expressed in terms of the scalar function $h$ alone. The
effect of the elastic deformation is revealed through a positive
parameter $\mu$ which characterises the connection between the
magnetic and elastic processes. In Section~\ref{minproblem}
we address the minimisation of the energy functional, namely
$$E(h)= \pi \int_0^1 \left[h_r^2+\left(\frac{\sin h}{r}\right)^2 -
\frac{\mu}{2} (\sin2h )^2 \right] rdr,$$ which is the aim of our paper.
In particular, we prove that there exists a critical value $\mu^0$
such that for $\mu \le \mu^0$ the energy functional is nonnegative
and the only global minimiser is the trivial solution
$h\equiv 0$; for $\mu
> \mu^0$ other nontrivial minimisers appear and, moreover, the energy
takes negative values. The local
bifurcation analysis is carried out. More precisely we prove that at
the point $\mu^0$, two branches of minimisers, with small norm,
bifurcate from the trivial stable solution. This local analysis does
not exclude the existence of other solutions of the minimisation
problem even for $\mu = 0$ (see also the results by Brezis and Coron
in \cite{BC} concerning the solutions of harmonic maps from the unit
disk in $\mathbb{R}^2$ to the sphere $S^2$).
For the modelling of magneto-elastic interactions see also \cite{BPGV}, \cite{Bw},
\cite{CPV}, \cite{csvv}, \cite{csvv1}, \cite{He}, \cite{VV}. Magneto-viscoelastic problems are studied in \cite{GVS2010}, \cite{GVS2012} and \cite{MGVS2017}.
Moreover we recall that the phenomenon of bifurcation of minimising
harmonic maps has been studied by Bethuel, Brezis, Coleman, and
H\'{e}lein (see \cite{BBCH}) in a different physical context.
\medskip
\section{The model} \label{model}
\setcounter{equation}{0}
We start with the general three-dimensional theory.
We assume $\Omega \subset \mathbb{R}^3$ is the volume of the
magneto-elastic material and $\partial \Omega$ its boundary. Let
$x_i,\, i=1,2,3$ be the position of a point ${\bf x}$ of $\Omega$ and
denote by
$$u_i=u_i({\bf x}),\qquad i=1,2,3$$
the components of the displacement vector ${\bf u}$ and by
$$\varepsilon_{kl}({\bf u})=\frac{1}{2}(u_{k,l}+u_{l,k}),\qquad k,l=1,2,3$$
the deformation tensor where, as is common practice, $u_{k,l}$ stands
for ${\frac{\partial u_k}{\partial x_l}}$. Moreover we denote by
$$m_{j}=m_{j}({\bf x}),\qquad j=1,2,3$$
the components of the magnetisation vector ${\bf m}$ that we assume of
unit modulus, i.e. $|{\bf m}|=1$. \\
In the sequel, where not specified,
the Latin indices vary in the set \{1,2,3\} and the summation over repeated indices is assumed.
We first define the exchange energy which arises from exchange
neighbourhood interactions as
\begin{equation} \label{Eex}
E_{\rm ex}({\bf m}) =\frac{1}{2} \int_{\Omega} a_{ijkl}\, m_{i,j}\,
m_{k,l}\,d\Omega
\end{equation}
where $a_{ijkl} = a_1\delta_{ijkl} + a_2\delta_{ij}\delta_{kl}$ with $a_1,a_2\ge0$ and $\delta_{ijkl}=\delta_{ik}\delta_{jl}$ is the fourth-order identity tensor.
This integral represents the interface energy between magnetised domains with different orientations. For most magnetic materials $div\, {\bf m}=\delta_{ij}{m}_{i,j}=0$, so hereafter we assume $a_1=a>0$ and $a_2=0$ (see \cite{LL}).
The magneto-elastic energy is due to the coupling between the magnetic moments and the elastic lattice. For cubic crystals it is assumed to be
\begin{equation} \label{Eem}
E_{\rm em}({\bf m},{\bf u}) =\frac{1}{2} \int_{\Omega} \lambda_{i j k l}
m_{i} m_{j} \varepsilon_{kl}({\bf u})d\Omega
\end{equation}
where $\mathbb{L}= \{{\lambda}_{ijkl}\}$ denotes the \textit{magneto-elasticity tensor}, given by $\lambda_{i j k l}=\lambda_{1} \delta_{i j k l}+ \lambda_{2}
\delta_{i j }\delta_{kl}+ \lambda_{3}(\delta_{i k}\delta_{j
l}+\delta_{i l}\delta_{j k})$ with entries $ \lambda_1, \lambda_2, \lambda_3\ge0$, where here $\delta_{i j k l}=1$ if
$i=j=k=l$ and $\delta_{i j k l}=0$ otherwise. Moreover we
introduce the elastic energy
\begin{equation} \label{Eel}
E_{\rm el}({\bf u})= \frac12 \int_\Omega \sigma_{ijkl}
\varepsilon_{ij}({\bf u})\varepsilon_{kl}({\bf u})\,d\Omega
\end{equation}
where $\{\sigma_{ijkl}\}$ denotes the \textit{elasticity tensor},
satisfying the following symmetry property
$$\sigma_{ijkl}=\sigma_{klij}=\sigma_{jilk}$$
and moreover the inequality
$$\sigma_{ijkl}\varepsilon_{ij}\varepsilon_{kl}\ge\beta\varepsilon_{ij}\varepsilon_{ij}$$
holds for some $\beta>0$. In the isotropic case
$$\sigma_{ijkl} = \tau_1\delta_{ijkl} + \tau_2\delta_{ij}\delta_{kl},\qquad \tau_1, \tau_2\ge0.$$
The resulting energy functional ${E}$ is given by
\begin{equation}
E({\bf m},{\bf u})= E_{\rm ex} ({\bf m}) + E_{\rm em} ({\bf m},{\bf u}) + E_{\rm el} ({\bf u}),
\end{equation}
which, after some manipulations \cite{BPGV, GVS2012}, under the assumption that the material is isotropic, reads
\begin{multline}
E({\bf m},{\bf u})=
\frac{1}{2}\int_{\Omega} a|\nabla {{\bf m}}|^2 d\Omega +
\frac{1}{2}\int_{\Omega} \left[\tau_1
|\nabla {{\bf u}}|^2+\tau_2 (div\, {{\bf u}})^2\right] d\Omega +\\
+\frac{1}{2}\int_{\Omega} \left[\lambda_{1}\delta_{k l i j}\,
u_{j,i}
m_{k}m_{l} +\lambda_{2} |{{\bf m}}|^2 div \,{{\bf u} }+2\lambda_3(\nabla
u_i\cdot {{\bf m}}) m_{i} \right] d\Omega~.
\end{multline}
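To make the step from (\ref{Eem}) to the expression above explicit, note for instance how the $\lambda_3$ part of the magneto-elasticity tensor contracts (we add this short computation for the reader's convenience; the $\lambda_2$ term is handled analogously):
\begin{equation*}
\lambda_{3}(\delta_{i k}\delta_{j l}+\delta_{i l}\delta_{j k})\, m_i m_j\, \varepsilon_{kl}({\bf u})
= 2\lambda_3\, m_k m_l\, \varepsilon_{kl}({\bf u})
= \lambda_3\, m_k m_l\,(u_{k,l}+u_{l,k})
= 2\lambda_3 (\nabla u_k\cdot {\bf m})\, m_k~.
\end{equation*}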
\subsection{A simplified 2D model}
To get the proposed model we make some approximations. First of all
we assume
$\Omega \subset \mathbb{R}^2$
and neglect the in-plane components of the
displacement vector ${\bf u}$, i.e. we assume ${{\bf u}}=(0,0,w)$, which implies $div\, u=0$ since $w$ depends only on the plane coordinates. Let $\lambda_3= \lambda$ be a positive constant,
setting\footnote{There is no need to prescribe either $ \lambda_2$ or $\tau_2$ since they both appear only as factors of $div\, {\bf u}$; also $\lambda_1$ can be left arbitrary: indeed, $\delta_{klij}u_{j,i}=0$ since $u_{j,i}\neq0$ only if $j=3$ and $i=1,2$, but $\delta_{klij}=0$ when $ j\neq i$.}
$\tau_1= 1$ and $a=1$,
the functional
${E}$ reduces to
\begin{equation} \label{E}
\displaystyle{{E}({\bf m},w)= \frac{1}{2}\int_{\Omega} \left( |\nabla {{\bf m}}|^2
+ 2\lambda m_3 (m_{\alpha} w_{,\alpha})+|\nabla w|^2\right)d\Omega}
\end{equation}
where the Greek indices vary in the set \{1,2\}.
Setting $\Omega \equiv D = \{ (x,y) \in \mathbb{R}^2 : \, x^2+y^2 <
1\}$ and assuming radial symmetry, so that in particular $w=w(r)$, we can express the components of the vector ${\bf m}$
in terms of $r$, that is, of the form
$$ {\bf m} = \left( \frac{x}{r} \sin h(r), \frac{y}{r} \sin h(r), \cos h(r)
\right), \qquad r=\sqrt{x^2+y^2},$$
where $h: (0,1)\subset \mathbb{R} \to \mathbb{R}$ is an unknown regular function.
Using the fact that $\partial_x r = \frac{x}{r}$ and $\partial_y r = \frac{y}{r}$, and denoting by $h_r:=\displaystyle{{ d h} \over {dr}}$ the derivative of $h$ with respect to the variable $r$, the chain rule yields
\begin{equation*}
\partial_x {\bf m} = \left(\frac{\sin h}{r}+ \frac{x^2}{r}\left(\frac{\sin h}{r}\right)_{\!r}, ~\frac{xy}{r}\left(\frac{\sin h}{r}\right)_{\!r} , ~\frac{x}{r}\left(\cos h\right)_{\!r}\right)
\end{equation*}
\begin{equation*}
\partial_y {\bf m} = \left({\frac{xy}{r}\left(\frac{\sin h}{r}\right)_{\!r} ,~ \frac{\sin h}{r}+ \frac{y^2}{r}\left(\frac{\sin h}{r}\right)_{\!r} , ~\frac{y}{r}\left(\cos h\right)_{r} }\right).
\end{equation*}
Thus we get
\begin{equation*}\displaystyle
\begin{aligned}
|\nabla {\bf m} |^2&= \left[\frac{\sin h}{r}+ \frac{x^2}{r}\left(\frac{\sin h}{r}\right)_{\!r} \right]^2 +\left[{\frac{\sin h}{r}+ \frac{y^2}{r}\left(\frac{\sin h}{r}\right)_{\!r} }\right]^2 +
2\left[{\frac{xy}{r}\left(\frac{\sin h}{r}\right)_{\!r}}\right]^2 +[\left(\cos h\right)_{\!r} ]^2\\
&=2\left(\frac{\sin h}{r}\right)^2 +
\frac{x^4+2x^2y^2 +y^4}{r^2}\left[{\left(\frac{\sin h}{r}\right)_{\!r}}\right]^2 +2 \frac{x^2+y^2}{r^2}\sin h \left(\frac{\sin h}{r}\right)_{\!r} + h_{r}^2 (\sin h)^2 \\
&= 2\left(\frac{\sin h}{r}\right)^2 + {r^2}\left[\left(\frac{\sin h}{r}\right)_{\!r}\right]^2 +2 \sin h\left(\frac{\sin h}{r}\right)_{\!r} + h_{r}^2(\sin h)^2 \\
&= \left(\frac{\sin h}{r}\right)^2 + \left[\frac{\sin h}{r}+{r}\left(\frac{\sin h}{r}\right)_{\!r}\right]^2
+h_{r}^2 (\sin h)^2 \\
&= \left(\frac{\sin h}{r}\right)^2 + \left[\frac{\sin h}{r}+{r}\left(\frac{r h_{\!r} \cos h -
\sin h}{r^2}\right)\right]^2 + h_{r}^2 (\sin h)^2 \\
&= \left(\frac{\sin h}{r}\right)^2 +h_r^2.
\end{aligned}
\end{equation*}
So, recalling that the assumed radial symmetry also implies $w=w(r)$, and adopting the notation $w_r:=\displaystyle{{ d w} \over {dr}}$, the energy (\ref{E}) becomes
$$
E(h,w)= \pi \int_0^1 \left[h_r^2 +\left(\frac{\sin h}{r}\right)^2
+\lambda \sin2h \, w_r + w_r^2\right] rdr$$
and from that we deduce the governing
equations
\begin{equation} \label{eqs}
\left \{ \begin {array}{l}\displaystyle{ h_{rr} +\frac{h_r}{r}
-\frac{\sin2h}{2r^2} -\lambda \cos2h \,w_r =0}\\
\\
\displaystyle{ w_{rr} + \frac{w_r}{r} + \frac{\lambda}{2} \left[
(\sin 2h)_r + \frac{\sin 2h}{r}\right]=0}.
\end{array}\right.
\end{equation}
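For the reader's convenience we sketch how the first of these equations follows from the energy; the second is obtained in the same way. Writing the integrand as $F(r,h,h_r,w_r)= r\big[h_r^2+\big(\frac{\sin h}{r}\big)^2+\lambda \sin 2h\, w_r + w_r^2\big]$, the Euler--Lagrange equation in $h$ reads
\begin{equation*}
\frac{d}{dr}\left(\frac{\partial F}{\partial h_r}\right)-\frac{\partial F}{\partial h}
= 2\left(r h_r\right)_r-\frac{\sin 2h}{r}-2\lambda r \cos 2h\, w_r =0,
\end{equation*}
which, after division by $2r$, is the first equation of (\ref{eqs}); varying $w$ gives $\left(\lambda r \sin 2h + 2 r w_r\right)_r=0$, which upon division by $2r$ is the second.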
We prescribe the following boundary conditions
\begin{equation} \label{bcw}
w_r(0)=0,\,\,w(1)=0,
\end{equation}
where the first condition is motivated by the symmetry assumptions, while the second one corresponds
to prescribing that the boundary of $\Omega$ is fixed, and
\begin{equation} \label{bch}
h_r(1)=0.
\end{equation}
We now solve the second equation of (\ref{eqs}), which can be written as
\begin{equation*}
(r w_r)_{r} + \frac{\lambda}{2}(r \sin 2h)_{r} =0 \Leftrightarrow w_r = -\frac{\lambda}{2}\sin 2h
\end{equation*}
where, upon integrating once in $r$, the double implication is guaranteed when we set $h(0)=0$. Then, letting
$\mu= \lambda^2/2$ we get the equation
\begin{equation} \label{eqh}
h_{rr} +\frac{h_r}{r} -\frac{\sin 2h}{2r^2} +\mu \sin 2h \cos2h =0
\end{equation}
and the energy E becomes
\begin{equation} \label{Eh}
E(h)= \pi \int_0^1 \left[h_r^2+\left(\frac{\sin h}{r}\right)^2 -
\frac{\mu}{2} (\sin2h )^2 \right] rdr.
\end{equation}
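For the reader's convenience, the reduction leading to (\ref{Eh}) can be verified directly: substituting $w_r = -\frac{\lambda}{2}\sin 2h$ into the integrand of $E(h,w)$, the last two terms combine as
\begin{equation*}
\lambda \sin 2h \, w_r + w_r^2
= -\frac{\lambda^2}{2} (\sin 2h)^2 + \frac{\lambda^2}{4} (\sin 2h)^2
= -\frac{\lambda^2}{4} (\sin 2h)^2
= -\frac{\mu}{2} (\sin 2h)^2 ,
\end{equation*}
where the last equality uses $\mu = \lambda^2/2$.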
The variational analysis of the functional $E(h)$ is the objective
of the following section.
\section{The minimisation problem} \label{minproblem}
\begin{lemma} \label{lemma1}
Let us define
\begin{equation}
V=\{v ~\vert ~ v_r, {v\over r}\in L^2(0,1;r dr)\}.
\end{equation}
$V$ is a Hilbert space equipped with the norm
\begin{equation}
\displaystyle{\vert\!\vert v \vert\!\vert^2=
\int_0^1{ ({ v_r}^2+ {v^2\over r^2}) r dr~.}}
\end{equation}
\end{lemma}
\noindent\begin{proof}
Let $\{v_n\}$ be a Cauchy sequence in $V$. Then $\{(v_n)_r\}$ and $\displaystyle{\left\{{v_n\over r}\right\}}$ are Cauchy sequences in
$$L^2(r dr)=L^2(0,1;r dr)$$ and there exist $g, h$ such that
\begin{equation}\label{m1}
(v_n)_r \to g, \quad {v_n\over r} \to h~~\text{in}~~L^2(r dr).
\end{equation}
Set $\tilde h=h r$. Since
\begin{equation}\label{1m}
\int_0^1 \left({v_n\over r}-h\right)^2 r dr~\to 0
\end{equation}
one has
\begin{equation}
{v_n\to rh ~~\text{in}~~L^2 \big(0,1;{dr\over r}\big)~.}
\end{equation}
but also in ${\cal D}^{\prime}(0,1)$ so that
\begin{equation}
{(v_n)_r\to \tilde h_r ~~\text{in}~~{\cal D}^{\prime}~.}
\end{equation}
We deduce from \eqref{1m} that $\tilde h_r=g$ and thus $ \tilde h\in V$ and since
\begin{equation}
(v_n)_r, {v_n\over r} \to \tilde h_r, {\widetilde h\over r}~~\text{in}~~L^2(r dr)
\end{equation}
one has $v_n\to \tilde h \in V$. This completes the proof of Lemma \ref{lemma1}.
\end{proof}\hfill $\Box$
\begin{lemma} \label{lemma2}
$$ V \subset \{v\in C([0,1]) ~\vert ~v(0)=0 \} $$
\end{lemma}
\noindent\begin{proof}
For $x,y\in (0,1]$ one has
\begin{equation}
\vert x v(x)- y v(y) \vert = \left\vert \int_x^y {(r v)_r dr }\right\vert = \left\vert \int_x^y { r (v_r +{v\over r}) dr }\right\vert
\\ \le \int_x^y { r (|v_r| +{{|v|}\over r}) dr ~.}
\end{equation}
Using the Cauchy-Young inequality $a \le {1\over 2} a^2 + {1\over 2}$ one gets
\begin{equation}
\vert x v(x)- y v(y) \vert \le \int_x^y \left\{ { {r\over 2} \left({v_r}^2 +{{v^2}\over {r^2}}\right) }
+ r \right\}dr \to 0 ~~
\text{when}~~ y \to x.
\end{equation}
It follows that $r\mapsto r\,v(r)$ is continuous, and hence $v$ is continuous at every point $r\neq0$ of $[0,1]$.
Now, one has also
\begin{equation}
\begin{array}{cl@{\hspace{0.5ex}}c@{\hspace{1.0ex}}l}
\displaystyle v(x)^2 - v(y)^2 =\!\! \int_x^y \!\! { {d\over {dr}} {v(r)}^2 } dr = \!\! \int_x^y \!\! { 2 {v_r \, v} }~ dr =
\!\!
\int_x^y \!\! {2 \sqrt{r} ~v_r {v\over {\sqrt{r}}} }dr \\ \\
\displaystyle \le \int_x^y\!\! \left\{ { {r} ({v_r}^2 +{{v^2}\over {r^2}})} \right\} ~dr \le \varepsilon
\end{array}
\end{equation}
for $x,y$ small enough (we used again the Cauchy-Young inequality).
\smallskip
Thus, $v(x)^2$ satisfies the Cauchy criterion as $x\to0$, and there exists $l\ge 0$ such that
\begin{equation}
\displaystyle{\lim_{x\to 0} {v(x)^2}=l}.
\end{equation}
If $l>0$ one has, for $\varepsilon$ small enough, $v^2\ge l/2$ on $(0,\varepsilon)$ and thus
\begin{equation}
\Vert v \Vert^2\ge \int_0^1{ {v^2\over r} dr }\ge
\int_{\varepsilon^2}^\varepsilon{ {l\over 2}\,{{dr}\over r} }
={l\over 2}(\ln \varepsilon- 2 \ln \varepsilon) = - {l\over 2} \ln \varepsilon
\end{equation}
and a contradiction when $\varepsilon\to 0$. Thus, $l=0$ and this completes the proof of Lemma \ref{lemma2}.
\end{proof} \hfill $\Box$
\bigskip
\noindent{\bf Remark}
Since $V\subset H^1( \varepsilon,1)$, it follows that $V \subset C^{1/2}( \varepsilon,1)$ for every $\varepsilon>0$.
One sets
\begin{equation}
E(h)= \pi \int_0^1 \left\{ h_r^2+ ({{\sin h}\over{r}})^2 - \frac{\mu}{2} (\sin 2h)^2 \right\} rdr.
\end{equation}
\bigskip\noindent
One would like to show that $E(h)$ possesses a minimiser on $V$ for any $\mu$.
\noindent\begin{lemma} \label{lemma3}
The energy $E(h)$ is bounded from below on $V$ and one can find a minimising sequence $v_n$ such that
\begin{equation}\label{2m}
0\le v_n\le {\pi \over 2}~.
\end{equation}
\end{lemma}
\noindent\begin{proof}
One has clearly for every $h\in V$
\begin{equation}
E(h)\ge -\pi {|\mu|\over 2}\int_0^1 r dr = -\pi {|\mu|\over 4}~.
\end{equation}
Thus $$ I= \inf_{h\in V} E(h) $$
exists. Let us denote by $v_n$ a sequence such that
$$ E(v_n) \to I~. $$
If $v_n\in V$, then also $\vert v_n \vert\in V$ and one has $$ E(v_n)= E(|v_n|)$$
so, without loss of generality, we assume $v_n\ge 0$.
\bigskip
\begin{figure}[H]
\centering
\epsfig{file=FIG-1-ver8.png, scale=0.31}
\caption{Graphical representation of the truncation $v_n \mapsto \tilde v_n$, which reflects $v_n$ about ${\pi\over 2}$ on the set $\{v_n> {\pi \over 2}\}$.}
\label{fig4}
\end{figure}
\noindent
Then, on the set $\{v_n> {\pi \over 2}\}$, we replace $v_n$ by $-v_n +\pi$ (cf. Fig. 1).
It is clear that
\begin{equation}
\tilde v_n=v_n X_{\{v_n\le {\pi \over 2}\}} + (-v_n +\pi)X_{\{v_n> {\pi \over 2}\}}
\end{equation}
satisfies $\tilde v_n\in V$ and $$E(\tilde v_n)=E(v_n).$$
This completes the proof of the Lemma \ref{lemma3}.
\end{proof}
\hfill\hfill $\Box$
\begin{remark}\label{rem-p4} It could be that $-v_n +\pi$ achieves negative values, but clearly, after a finite number of operations like the one just performed, we get a $v_n$ satisfying \eqref{2m}.
\end{remark}
\begin{lemma} \label{Lemma 4} There exists a minimiser $\tilde h$ of $E$ in $V$ satisfying
\begin{equation}
0 \le \tilde h\le {\pi \over 2}~.
\end{equation}
\end{lemma}
\noindent\begin{proof}
We consider the sequence $\{v_n\}$ constructed in Lemma \ref{lemma3}.
We claim that $\{v_n\}$ is bounded in $V$ independently of $n$. Indeed, since
for some constant $c_0>0$ one has $\displaystyle{\left({{\sin x}\over{x}}\right)^2\ge c_0}$,
$\forall x\in[0,{\pi \over 2}]$, it holds that
\begin{equation} \label{m3}
\begin{array}{cl@{\hspace{0.5ex}}c@{\hspace{1.0ex}}l}
\displaystyle \int_0^1 {r \left\{ {({v_n})_r}^2 +{{{v_n}^2}\over {r^2}} \right\} dr \le
\int_0^1 {r \left\{ {({v_n})_r}^2 +{1\over c_0}\left({{\sin v_n}\over {r}}\right)^2 \right\} dr }}\\
\displaystyle \le \left(1 \vee {1\over c_0}\right) \int_0^1 {r \left\{ {({v_n})_r}^2 +
\left({{\sin v_n}\over {r}}\right)^2 \right\} dr \le C}
\end{array}
\end{equation}
where $C$ is a constant independent of $n$ and $\vee$ denotes the maximum of two numbers.
Recall that since ${v_n}$ is a minimising sequence one has, for $n$ large enough,
$$ E({v_n}) \le E(0) =0$$
i.e., by the definition of $E$,
\begin{equation}
\pi \int_0^1 {\left\{ {({v_n})_r}^2 + \left({{\sin v_n}\over {r}}\right)^2 \right\} r dr}\le
\pi {|\mu|\over 2}\int_0^1{{\sin^2 (2 v_n)} r dr} \le \pi {|\mu|\over 4}~.
\end{equation}
Since $\displaystyle{\{({v_n})_r}\}, \{{{v_n}\over r}\}$ are bounded in $L^2(r dr)$ one finds a subsequence, still labelled by $n$, such that
$$ {{v_n}\over r} {{\rightharpoonup h}} ~~, ~~ {({v_n})_r} \rightharpoonup g ~ \text{in} ~ L^2(r dr)~.$$
Set $\tilde h =h r$. The first weak convergence above reads
\begin{equation*}
\displaystyle \int_0^1 {{v_n}\over r} \Psi r dr \to \int_0^1 h \Psi r dr~~~~, ~~~\forall \Psi\in L^2 (rdr).
\end{equation*}
In particular, taking $\Psi \in {\cal D}(0,1)$ one sees that
$$ {v_n} \to \tilde h =h r ~ \text{in} ~ {\cal D}^{\prime}(0,1)$$
and thus, by the continuity of the derivative in ${\cal D}^{\prime}$
$$({v_n})_r \to \tilde h_r = g ~~\text{in}~~ {\cal D}^{\prime}(0,1)~.$$
Thus, we have $ \tilde h\in V$. For any $k\ge 2$ one has also, thanks to \eqref{m3}, that
$ {v_n}$ is bounded in $\displaystyle H^{1}\left({1\over k},1\right)$. Thus, by induction, one can find a subsequence
$\{n_{k}\}$ extracted from $\{n_{k-1}\}$ such that
$$ {v_{n_{k}} }\to \tilde h ~ \text{in} ~ L^{2}\left({1\over k},1\right)~ \text{and a.e.}$$
Then clearly
$$ v_{n_{k}} \to \tilde h ~~ ~ \text{a.e. on} ~ (0,1).$$
By the Lebesgue dominated convergence theorem one has then that
$$ r \sin 2 {v_{n_{k}} } \to r \sin 2 \tilde h ~~ ~ \text{in} ~ L^{2}(0,1) $$
$$ \sin {v_{n_{k}} } \to \sin \tilde h ~~ ~ \text{a.e.~ on} ~ (0,1)~. $$
Then, since $ x\mapsto x^2$ is convex, by the Fatou lemma and the weak lower semicontinuity of the $L^2(r dr)$-norm one has
\begin{equation}\label{weak-eps}
\begin{array}{cl@{\hspace{0.5ex}}c@{\hspace{1.0ex}}l}
{\displaystyle{ I={\underline{\lim}} ~E({v_{n_k} }) =
\pi\, {\underline{\lim}} \int_0^1 {\left\{ {({v_{n_k} })_r}^2 + \left({{\sin {v_{n_k} }}\over {r}}\right)^2 \right\} r dr}-
\pi \int_0^1{{\mu\over 2}({\sin (2 {v_{n_k} })})^2 r dr}\ge}}\\ \\
\displaystyle{\ge \pi ~ {\underline{\lim}} \int_0^1 {{({v_{n_k} })_r}^2 r dr}+ \pi ~{\underline{\lim}} \int_0^1 {\left({{\sin {v_{n_k} }}\over {r}}\right)^2 r dr}-
{\pi\mu\over 2} \int_0^1{({\sin (2 \tilde h)})^2 r dr}\ge}\\ \\
{\displaystyle{\ge \pi \int_0^1 {({\tilde h_r})^2 r dr} + \pi \int_0^1 {{\underline{\lim}}{\left({{\sin {v_{n_k}}}\over {r}}\right)^2} r dr} -
{\pi\mu\over 2}\int_0^1{{(\sin (2 \tilde h))^2} r }dr} =}\\ \\
\displaystyle{\hphantom{1\over1}=E({\tilde h}) \ge I}~.
\end{array}
\end{equation}
This shows that ${\tilde h}$ is the minimiser that we are looking for.
\end{proof} \hfill $\Box$
\begin{lemma}\label{Euler-eq}{
The Euler equation of the minimising problem is given by
\begin{equation} \label{m1b}
\left \{ \begin {array}{l}\displaystyle{ -h_{rr}
-\frac{h_r}{r}
+\frac{\sin 2h}{2r^2} = \mu \sin 2h \cos 2h } ~~~\text{in}~ (0,1)\\
\\
h(0) =h_r(1)=0
\end{array}\right.
\end{equation}}
\end{lemma}
\noindent\begin{proof}
If $h$ is a minimiser of $E$ on $V$ one has
$$ {d \over{d t}} E(h+t v) \Big\vert_{t=0}=0~~~,~~ \forall v\in V~, $$
since
\begin{equation}
\displaystyle{ E(h+t v)=
\pi \int_0^1 {\left\{ (h+t v)_r^2 +{ {{\sin^2(h+t v)}}\over{{r}^2}}
-{\mu\over 2} {\sin^2 (2 (h+t v))}
\right\} }r dr}~.
\end{equation}
One gets $\forall v$
\begin{equation}\label{m2b}
\begin{array}{cl@{\hspace{0.5ex}}c@{\hspace{1.0ex}}l}
\displaystyle{
\int_0^1 {\left\{ 2 h_r v_r + 2 {{ \sin h\cos h }\over {{r}^2}}v
-2{\mu}\, {\sin (2 h)\cos (2 h)}\, v
\right\} r dr}=0 }\\ \\
\displaystyle{ \Longleftrightarrow
\int_0^1 {\left\{ h_r v_r + {{\sin (2 h)}\over {2{r}^2}} v
-{\mu}\, {\sin (2 h)\cos (2 h)}\, v
\right\} r dr}=0~~~, ~~~\forall v\in V}~.
\end{array}
\end{equation}
Thus, in the distributional sense
\begin{equation}\label{m3b}
\begin{array}{cl@{\hspace{0.5ex}}c@{\hspace{1.0ex}}l}
\displaystyle{- ( r h_r )_r + {{\sin (2 h)}\over {2{r}}}
-{\mu}\,r\, {\sin (2 h)\cos (2 h)} =0} \\ \\
\displaystyle{ \Longrightarrow - r h_{rr} -h_r + {{\sin (2 h)}\over {2{r}}}
-{\mu}\,r\, {\sin (2 h)\cos (2 h)} =0~.}
\end{array}
\end{equation}
Dividing by $r$ we get the first equation of \eqref{m1b}. Integrating by parts in \eqref{m2b}
and using \eqref{m3b} we get
\begin{equation*}
\displaystyle{ \int_0^1 \left\{ ( r h_r v )_r - ( r h_r)_r v+ {{\sin (2 h)}\over {2{r}}}\, v
-{\mu}\, {\sin (2 h)\cos (2 h)}\, r\, v \right\} dr =0~~~, ~~\forall v \in V }
\end{equation*}
i.e.
\begin{equation*}
\displaystyle{ \int_0^1 ( r h_r v )_r \, dr =0~~~, ~~\forall v \in V }
\end{equation*}
which gives
$$ h_r(1)=0~. $$
(in a weak sense). The condition $h(0)=0$ follows from $h \in V$ and Lemma \ref{lemma2}. This completes the proof of the Lemma.
\hfill $\Box$ \end{proof}
\begin{lemma}\label{p2b}
If $h\neq 0$ is a nonnegative minimiser of $E$ on $V$ then $h>0$ on $(0,1)$.
\end{lemma}
\begin{proof}
Indeed, if $h$ vanishes at $r_0\in(0,1)$ then, since $h$ is smooth and $r_0$ is a minimum
for $h$, one would have
$$ h(r_0)= h_r(r_0)=0$$
then, from the theory of o.d.e.'s (see [1]), $h\equiv 0$.
\hfill $\Box$ \end{proof}
\begin{lemma}\label{4.8}
If $h$ is a positive minimiser of $E$ then $0<h\le {\pi\over 2}$.
\end{lemma}
\begin{proof}
If not, then $h$ constructed as in the figure above (Fig. 1) is a minimiser, but it has a jump in the derivative unless the derivative vanishes there. In the latter case $h= {\pi\over 2}$ solves the o.d.e. on the set $\{h >{\pi\over 2}\}$ and a contradiction follows. Note that the solution of the elliptic equation (\ref{m1b}) is smooth on $(0,1)$.
\end{proof}
\begin{lemma}\label{4.9b}
A minimiser cannot vanish on $(0,1)$ unless it vanishes identically.
\end{lemma}
\begin{proof}
If $h$ is a minimiser, $\vert h \vert $ is also a minimiser. But then $\vert h \vert $ would
have a jump discontinuity in its derivative at any interior zero of $h$, unless $h_r$ vanishes there as well.
This implies, by the theory of o.d.e.'s, that $h=0$.
\end{proof}
\hfill $\Box$
\begin{lemma}\label{4.9}
If $h \in V$ then $\sin (kh) \in V ~~, ~~ \forall k\in{{\rm I}\!{\rm R}}$.
\end{lemma}
\begin{proof}
One has
\begin{equation}
(\sin (kh))_r= k h_r\cos (kh)~~,~~\vert \sin (kh)\vert \le \vert kh \vert.
\end{equation}
Therefore one has
\begin{equation}
\begin{array}{cl@{\hspace{0.5ex}}c@{\hspace{1.0ex}}l}
\displaystyle
\Vert \sin (kh) \Vert^2 = \int_0^1 \left\{ \left((\sin (kh))_r\right)^2 + \left( { \sin (kh) \over r}\right)^2 \right\} r dr\\ \\
\displaystyle{ \le \int_0^1 \left\{ k^2 \cos^2 (kh)\, h_r^2+ k^2 {h^2 \over {r^2}} \right\} r dr \le k^2 \Vert h\Vert ^2 }
\end{array}
\end{equation}
\end{proof}
\bigskip
\noindent It is easy to check that $h\equiv 0$ solves (\ref{bch}),
(\ref{eqh}) and hence it is a stationary point of the functional
(\ref{Eh}). \\
Let $\gamma_0$ be the first eigenvalue of the problem
\begin{equation} \label{eigen}
\left \{ \begin {array}{l}\displaystyle{ -\phi_{rr}
-\frac{\phi_r}{r}
+\frac{\phi}{r^2} = \gamma \phi }\\
\\
\phi(0) =0, \quad \phi_r(1)=0 ~.
\end{array}\right.
\end{equation}
\begin{lemma}\label{gamma-0}
$$\gamma_0>1~~.$$
\end{lemma}
\begin{proof}
Suppose not, i.e. $\gamma_0\le1$.
Let $\phi$ be the corresponding positive (or nonnegative) eigenfunction.
One has
\begin{equation}
\begin {array}{l} \displaystyle{ -\phi_{rr}
-\frac{\phi_r}{r}= \phi (\gamma_0- {1\over {r^2}}) \le 0
~~~\text{since}~r\in (0,1)}\\ \\
\displaystyle{ (r \phi_r)_r \ge 0 \Longrightarrow r \phi_r \nearrow ~~
\Longrightarrow ~~r \phi_r \le 0 ~~\text{since}~~ \phi_r (1)=0~.}
\end{array}
\end{equation}
Thus, the maximum of $\phi$ is achieved at $0$; but, since $\phi(0)=0$, this forces $\phi\le 0$ and we get a contradiction, i.e. $\phi\equiv0$.
\hfill\hfill $\Box$
\end{proof}
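\medskip
\noindent{\bf Remark}
The eigenvalue problem \eqref{eigen} can in fact be solved explicitly. Writing the first equation as $r^2\phi_{rr}+r\phi_r+(\gamma r^2-1)\phi=0$, one recognises the Bessel equation of order one in the variable $s=\sqrt{\gamma}\,r$; the solution regular at the origin and vanishing there is $\phi(r)=J_1(\sqrt{\gamma}\,r)$. The condition $\phi_r(1)=0$ then forces $\sqrt{\gamma}$ to be a zero of $J_1'$, so that
\begin{equation*}
\gamma_0=\big(j'_{1,1}\big)^2\approx (1.8412)^2\approx 3.39>1,
\end{equation*}
where $j'_{1,1}$ denotes the first positive zero of $J_1'$, in agreement with Lemma \ref{gamma-0}.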
\medskip\noindent
We have the following bifurcation lemma.
\begin{lemma}\label{biforcation}
If $\mu \le \gamma_0/2 $ we have $E(h) \ge 0$ and the global minimum
is attained only for $h\equiv 0$. For $\mu > \gamma_0/2 $ the
global minimum is negative.
\end{lemma}
\begin{proof} The first equation of \eqref{eigen} can also be written after a multiplication by $r$ as
\begin{equation*}
\displaystyle{ -(r\phi_r)_{r}+\frac{\phi}{r} = \gamma \phi r}.
\end{equation*}
Multiplying by $\phi$ and integrating over $(0,1)$, we derive, by the definition of $\gamma_0$, that
\begin{equation}\label{pointca}
\int_0^1 \left(\phi_r^2 + \frac{\phi^2}{r^2} \right)rdr \geq \gamma_0 \int_0^1 \phi^2 rdr ~~\forall \phi ~\text{ with }~\phi(0) =0, \quad \phi_r(1)=0.
\end{equation}
We divide the proof into two parts:
\bigskip
\noindent ({\it i} ) \quad $\mu \le \gamma_0/2$ \\
In this case we have (using \eqref{pointca} with $ \phi = \sin h$)
\begin{equation*}
\begin{aligned}
E(h) &=\pi \int_0^1
\left[(\cos h)^2 h_r^2+\left(\frac{\sin h}{r}\right)^2 - 2\mu (\sin h )^2(\cos h)^2 + (1-(\cos h)^2) h_r^2 \right] rdr \\
&\geq \pi \int_0^1
\left[\gamma_0 (\sin h )^2 - 2\mu (\sin h )^2(\cos h)^2 + (1-(\cos h)^2) h_r^2 \right] rdr \\
& = \pi \int_0^1
\left[ (\gamma_0 - 2\mu )(\sin h )^2 + (1-(\cos h)^2) (2\mu (\sin h )^2 + h_r^2) \right] rdr \\
&\geq 0 =E(0)
\end{aligned}
\end{equation*}
with equality holding only for $h\equiv0$.
\bigskip
\noindent({\it ii }) \quad $\mu > \gamma_0/2$ \\
Let us denote by $\phi_0$ the first positive normalised eigenfunction of \eqref{eigen}.
One has for $\epsilon >0$
\begin{equation*}
\begin{aligned}
E(\epsilon \phi_0) &\leq \pi \int_0^1
\left[(\epsilon \phi_0)_r^2+\frac{(\epsilon \phi_0)^2}{r^2} - \frac{\mu}{2} (\sin (2\epsilon \phi_0))^2\right] rdr \\
&= \pi \int_0^1
\left[\gamma_0(\epsilon \phi_0)^2 - \frac{\mu}{2} (\sin (2\epsilon \phi_0))^2\right] rdr. \\
\end{aligned}
\end{equation*}
Using, with $x= 2\epsilon \phi_0$, the elementary identity
\begin{equation*}
\sin x = x - \int_0^1 (1 - \cos (tx)) xdt
\end{equation*}
the above bound on $E(\epsilon \phi_0)$ can be rewritten as
\begin{equation*}
\begin{aligned}
E(\epsilon \phi_0)
&\le \pi \int_0^1
\left[(\epsilon \phi_0)^2 \{\gamma_0- 2\mu (1-\int_0^1 (1 - \cos (2t\epsilon \phi_0))dt )^2\}\right] rdr \\
&< 0 =E(0)
\end{aligned}
\end{equation*}
for $\epsilon$ small since
\begin{equation*}
\int_0^1 (1 - \cos (2t\epsilon \phi_0))dt \to 0
\end{equation*}
when $\epsilon \to 0$.
\hfill $\Box$ \end{proof}
\bigskip
\noindent
{\bf Alternative proof of the uniqueness statement in ({\it i})
}
\medskip\noindent
Suppose $h\neq 0$ is a minimiser of $E$; then one has
\begin{equation}
E(h)< E(0)
\end{equation}
i.e.
\begin{equation}
\begin{array}{cl@{\hspace{0.5ex}}c@{\hspace{1.0ex}}l}
\displaystyle \int_0^1
\left\{ h_r^2+\left(\frac{\sin h}{r}\right)^2 \right\} rdr < {\mu \over 2} \int_0^1
\left({\sin 2 h}\right)^2 rdr = 2\mu \int_0^1 {\sin h}^2 {\cos h}^2 rdr \\
\\
\displaystyle \Longrightarrow~~ \int_0^1
\left\{ \cos^2 h \; h_r^2+\left(\frac{ \sin h}{r}\right)^2 \right\}rdr <\gamma_0
\int_0^1 \sin^2 h \; rdr \\ \\
\displaystyle \Longrightarrow~~ \gamma_0 >
{ \int_0^1\left\{ \left((\sin h)_r\right)^2 +\left(\frac{\sin h}{r}\right)^2 \right\} rdr \over
\int_0^1 \sin^2 h \; rdr }
\end{array}
\end{equation}
and a contradiction with the definition of $\gamma_0$, since ${\sin h}\in V$ by Lemma \ref{4.9}.
\medskip\noindent
Consider the problem
\begin{equation} \label{one}
\left \{ \begin {array}{l}\displaystyle{ -h_{rr}
-\frac{h_r}{r}
+\frac{\sin 2h}{2r^2} = \mu \sin 2h \cos 2h } ~~~\text{on}~ (0,1)\\
\\
h(0) =h_r(1)=0
\end{array}\right.
\end{equation}
\noindent
\begin{lemma}
If $\mu\le \gamma_0/2$ the only solution of (\ref{one})
such that $ \displaystyle{h\in \left[ -{\pi\over 2}, {\pi\over 2}\right]}$ is $h\equiv0$.
\end{lemma}
\noindent
\begin{proof}
Recall that for any $\phi \in V$ one has by definition of $\gamma_0$
\begin{equation}\label{due}
\displaystyle{\gamma_0 \int_0^1 \phi^2 r dr \le
\int_0^1 \left(\phi_r^2 + {\phi^2\over{r^2}}\right) r dr } ~~.
\end{equation}
Let us write the
equation (\ref{one}) as
\begin{equation}
-({r h_r})_{r} +\frac{\sin 2h}{2 r} = \mu \sin 2h \cos 2h \, r~.
\end{equation}
Multiplying both sides by $\sin 2h$, integrating over $(0,1)$, and integrating by parts (the boundary terms vanish), we obtain
\begin{equation}
\displaystyle{\int_0^1 r \left\{ h_r ({\sin 2h})_{r} +\frac{(\sin 2h)^2}{2 r^2} \right\} dr=
\mu \int_0^1 (\sin 2h)^2 \cos 2h ~ r dr} ~.
\end{equation}
One has
\begin{equation}
\displaystyle{({\sin 2h})_{r} =2 \cos 2h ~ h_r \Longleftrightarrow
h_r= {({\sin 2h})_{r} \over {2 \cos 2h}}}~.
\end{equation}
Thus, the equation above becomes
\begin{equation}
\displaystyle{\int_0^1 r \left\{ ({\sin 2h})^2_{r} {1\over {\cos 2h}} + \frac{(\sin 2h)^2}{ r^2} \right\} dr=
2 \mu \int_0^1 (\sin 2h)^2 \cos 2h r dr} ~.
\end{equation}
Suppose that $h$ is such that $ \displaystyle{h\in \left[ -{\pi\over 2}, {\pi\over 2}\right]}$;
then, since $-1\le \cos 2h\le 1$, one gets
\begin{equation}
\displaystyle{\int_0^1 r \left\{ ({\sin 2h})^2_{r} + \frac{(\sin 2h)^2}{ r^2} \right\} dr<
\gamma_0 \int_0^1 (\sin 2h)^2 r dr} ~
\end{equation}
i.e., $\sin 2h \in V$ satisfies an inequality contradicting (\ref{due}),
unless $h\equiv 0$.
\end{proof} \hfill\hfill $\Box$
\bigskip
\noindent Each minimiser of $E(h)$ solves the problem (\ref{bch}),
(\ref{eqh}). For the solutions of this problem we can give the
following existence result around the bifurcation point.
\begin{lemma} \label{existence}
There exist two positive numbers $\rho_0$ and $\delta_0$ such that,
the problem (\ref{bch}), (\ref{eqh}) does not have non-zero
solutions for $\mu \in (\gamma_0/2 - \delta_0, \gamma_0/2]$ and
$\|h\|_0 \le \rho_0$. The problem has exactly two solutions $h_1$
and $h_2 = - h_1$ in the sphere $\|h\|_0 \le \rho_0$ for $\mu \in
(\gamma_0/2, \gamma_0/2 + \delta_0)$.
\end{lemma}
\begin{proof}
The proof follows from \cite[Theorem 6.12]{K}. Indeed the equation
(\ref{eqh}) can be written in the form
\begin{equation}\label{eqK}
2\mu h = L(h, r) + C(h, r, \mu) + D(h, r, \mu)
\end{equation}
where $L$ is the linear operator
$$L(h,r)=-h_{rr} -\frac{h_r}{r} +\frac{h}{r^2}$$
and $C$, $D$ are given by
$$C(h, r, \mu)=-\frac{2}{3}\frac{h^3}{r^2}+\frac{16}{3} \mu {h^3},$$
$$D(h, r, \mu)=-\left(\frac{h}{r^2} -\frac{\sin2h}{2r^2}\right) + \frac{2}{3}\frac{h^3}{r^2}
+\frac{\mu }{2}( 4h - \sin 4h) - \frac{16}{3} \mu {h^3}.
$$
\noindent It is easy to check that
\begin{equation} \label{omo}
C(th,r,\mu) = t^3 C(h,r,\mu),\quad (-\infty < t < \infty)
\end{equation}
and
\begin{equation}\label{D}
\|D(h,r,\mu)\|_0 = o (\| h\|^3).\end{equation} Moreover we
have
\begin{equation}\label{CF}
\displaystyle{((C(\phi^0,r,\mu),\phi^0))_0 =
\int_0^1 \left[-\frac{2}{3}\frac{(\phi^0)^4}{r}+\frac{16}{3} \mu
{(\phi^0)^4} r\right] dr}
> 0,\quad {\rm for}\,\, \mu\ge \frac{\gamma_0}{8}.
\end{equation}
Indeed from $((L(\phi^0,r)-\gamma_0 \phi^0, (\phi^0)^3))_0 = 0$ it
follows that
\begin{equation}
\displaystyle{\int_0^1 \left[{-\frac{d}{dr}{{(\phi^0_r r)}}(\phi^0)^3}+ {{(\phi^0)^4}\over{r}}
-\gamma_0 {(\phi^0)^4} r \right] dr =0~.}
\end{equation}
Integrating the latter by parts gives
\begin{equation*}
\displaystyle{-{(\phi^0_r r)}(\phi^0)^3 \big\vert_{0}^{1}
+ \int_0^1 3 {(\phi^0_r)^2} {(\phi^0)^2 } r dr +
\int_0^1\left[{{(\phi^0)^4}\over{r}} -\gamma_0 {(\phi^0)^4} r \right] dr =0}
\end{equation*}
that is
$$ \int_0^1 \frac{(\phi^0)^4}{r} - \gamma_0 (\phi^0)^4r \, dr \le 0,$$
and the inequality (\ref{CF}) can be easily derived.
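Indeed, combining the last inequality with the expression of $((C(\phi^0,r,\mu),\phi^0))_0$ above, one gets
\begin{equation*}
((C(\phi^0,r,\mu),\phi^0))_0
\ge -\frac{2}{3}\,\gamma_0\int_0^1 {(\phi^0)^4}\, r\, dr+\frac{16}{3}\,\mu \int_0^1 {(\phi^0)^4}\, r\, dr
= \frac{2}{3}\,(8\mu-\gamma_0)\int_0^1 {(\phi^0)^4}\, r\, dr \ge 0
\end{equation*}
whenever $\mu\ge \gamma_0/8$; the inequality is in fact strict, since $\int_0^1 3 {(\phi^0_r)^2} {(\phi^0)^2 } r\, dr>0$.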
\noindent The statements (\ref{eqK})--(\ref{CF}), together with the
local Lipschitz condition on the operators $C$ and $D$, ensure (see
\cite{K}) the existence of exactly two branches of non-zero solutions
bifurcating from the point $\gamma_0/2$. Finally, we remark that the
existence of two opposite branches follows from the oddness of the terms in
(\ref{eqK}).
\hfill $\Box$ \end{proof}
\begin{remark}\label{stability}
In order to establish the stability of the solutions to (\ref{bch}),
(\ref{eqh}) around the point $\mu_0=\gamma_0/2$, we perform a
qualitative analysis of the bifurcation equation to the lowest
order (see equation (\ref{loweq}) below). From (\ref{eqK}) setting
$$G(h,r,\mu)= -2\mu h + L(h, r) + C(h, r, \mu) + D(h, r, \mu)= 0 $$
and
$$ 2 \mu = \gamma_0 + \delta, \qquad |\delta| \ll 1,$$
assuming that each element $h \in \mathcal{H}^1(0,1)$ has the unique
representation
$$ h= \beta \phi^0 + P h,\qquad (P h, \phi^0)_0 = 0,\quad \beta \in
\mathbb{R},$$ we have
$$(G(h, r,\mu), \phi^0)_0= - \delta \beta +
(C(\beta \phi^0, r,\mu),\phi^0)_0 + \dots$$ Moreover, from
(\ref{omo}) and (\ref{CF}) we arrive at the simple lowest-order bifurcation
equation, namely
\begin{equation} \label{loweq} - \delta \beta +
\beta^3 \bar{C}=0,\qquad \bar{C} =(C(\phi^0,r,\mu),\phi^0)_0 \ge 0.
\end{equation}
It is easy to check that:\\
for $\delta \le 0$ there is only the trivial solution $\beta=0$, and this
solution is stable (indeed, in this case $-\delta + 3 \beta^2
\bar{C} \ge 0$);\\
for $\delta > 0$ the trivial solution is no longer stable, but two other
stable solutions appear, i.e. $\beta = \pm
\sqrt{\delta/\bar{C}}$.
\end{remark}
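\medskip\noindent
For illustration only, the boundary value problem (\ref{bch}), (\ref{eqh}) can also be integrated numerically. The following minimal Python sketch assumes SciPy's \texttt{solve\_bvp}; the value $\mu=3$ and the initial guess are illustrative choices, not prescribed by the analysis. The singularity at $r=0$ is regularised by working on $[\varepsilon,1]$; with a nontrivial initial guess and $\mu$ above the threshold $\gamma_0/2\approx 1.70$, the solver may converge to the bifurcating branch rather than to $h\equiv0$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

mu = 3.0  # illustrative value above the threshold gamma_0/2 ~ 1.70

def rhs(r, y):
    # y[0] = h, y[1] = h_r; equation (eqh) as a first-order system
    h, hp = y
    return np.vstack([hp,
                      -hp / r + np.sin(2 * h) / (2 * r**2)
                      - mu * np.sin(2 * h) * np.cos(2 * h)])

def bc(ya, yb):
    # h(eps) = 0 approximates h(0) = 0; h_r(1) = 0
    return np.array([ya[0], yb[1]])

eps = 1e-4
r = np.linspace(eps, 1.0, 400)
guess = np.vstack([0.5 * r, 0.5 * np.ones_like(r)])  # nontrivial seed
sol = solve_bvp(rhs, bc, r, guess, tol=1e-8, max_nodes=100000)
print(sol.status, sol.y[0][-1])  # status 0 on success; h at r = 1
\end{verbatim}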
\subsection*{Acknowledgements}
This work was carried out under the financial support of G.N.F.M.--I.N.d.A.M., I.N.F.N., and Universit\`a di Roma
\textsc{La Sapienza}, Rome, Italy.
M. Chipot acknowledges Dip. S.B.A.I., Universit\`a di Roma \textsc{La Sapienza}, for the kind hospitality.
\bibliographystyle{plain}
\nocite{*}
\hfill
\section{Introduction}
The detection of forbidden emission lines in different samples of T Tauri stars has provided access to different physical conditions through their line-of-sight (LoS) velocity information. Line decomposition methods have led to the classification of different velocity components that originate from different physical mechanisms \citep{Hamann_1994, Hartigan_1995, Ercolano_2017, Fang2018, Banzatti2019, Pascucci_2020}. Outflows, jets, magnetospheric accretion, and disk winds influence the angular momentum transport \citep[e.g.,][]{Casse_2000, Casse_2002, Nezami_2012, Stepanovs_2016}.
In most of the disk regions, the magneto-rotational instability (MRI; \cite{Balbus_1991}) is almost entirely suppressed by non-ideal magnetohydrodynamic (MHD) effects \citep{Bai_2013} because of the low level of coupling between the gas and the magnetic field. At the same time, strong magnetic fields give rise to magnetically driven winds \citep{Blandford_1982}, as in the uppermost layers of the disk atmosphere the gas and the magnetic field are coupled again. On the other hand, thermal photoevaporative winds can co-exist with magnetically driven winds \citep{Rodenkirch_2020}. In the regions where thermal disk winds are generated, the surrounding gas must be sufficiently ionized for electrons to be thermally excited; temperatures can exceed 5,000 K, with gas densities between 10$^{5}$ and 10$^{6}$ cm$^{-3}$ \citep{2Simon_2016}. Therefore, studying forbidden emission lines can provide clues about the main processes taking place in the inner disk.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=18cm]{test.pdf}
\caption{Composite image of different protoplanetary disks in four different forbidden emission lines. The first column shows the dust continuum emission at 1.3 mm from ALMA Cycle 4 \citep{Long_2018, Long_2019}. The other columns show the different forbidden emission lines from MUSE, as labeled at the top left of the first row. Dashed yellow rectangles mark the area over which we sum along the spatial axes to extract the spectrum of the outflow/jet.}
\label{fig:supra}
\end{figure*}
\begin{table*}[pht!]
\caption{Disk properties (geometric parameters).}
\label{tab:disk_properties}
\small
\centering
\begin{tabular}{lccccccccr}
\hline \hline
\rule{0pt}{1.0\normalbaselineskip}
Name & $\alpha$(J2000) & $\delta$(J2000) & d$_\mathrm{pc}$ & incl & PA$_{\mathrm{dust}}$ & $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$ & difference & ref.\\
& & & (pc) & ($^{\circ}$) & ($^{\circ}$) & ($^{\circ}$) & ($^{\circ}$) \\
\hline
DL Tau & 04 33 39.07 & +25 20 38.09 &159.9 & -45.0$\pm$0.2 & 52.1$\pm0.4$ & 143.25$\pm$0.61 & 1.5$\pm$0.72 & $^{a}$ \\
CIDA 9 & 05 05 22.86 & +25 31 31.23 &165.6 & 45.6$\pm0.5$ & 102.7$\pm0.7$ & -- &-- &$^{a}$\\
CI Tau & 04 33 52.01 & +22 50 30.09 & 160.3 & 50.0$\pm0.3$ & 11.2$\pm$0.13 & 79.12$\pm$1.36 & 0.32$\pm$1.37 & $^{a}$ \\
DS Tau & 04 47 48.59 & +29 25 11.18 & 158.4 & -65.2$\pm0.3$ & 159.6$\pm0.4$& 70.69$\pm$0.78 & 1.09$\pm$0.88 & $^{a}$ \\
GO Tau & 04 43 03.07 & +25 20 18.70 & 142.4 & 53.9$\pm0.5$ & 110.9$\pm0.24$& -- & -- & $^{a}$\\
IP Tau & 04 24 57.08 & +27 11 56.54 & 129.4 & -45.2$_{-0.8}^{+0.9}$ & 173.0$\pm1.1$ & 81.35$\pm$1.53 & 1.65$\pm$1.88 &$^{a}$ \\
IM Lup & 15 56 09.17 & -34 30 35.67 & 155.8 & -47.5$\pm$0.5 & 143.9$\pm0.6$ & 56.02$\pm$1.90 & 2.12$\pm$1.99 &$^{b}$ \\
GW Lup & 14 46 44.72 & -34 30 35.67 & 155.2 & 38.7$\pm$0.3 & 127.6$\pm0.5$ & -- & -- & $^{b}$\\
\hline
\multicolumn{8}{@{}l@{}}{\scriptsize \textbf{Notes.} The inclinations and PA$_{\mathrm{dust}}$ are obtained from $^{a}$\citet{Long_2019} and $^{b}$\citet{Huang_2018}.} \\
\multicolumn{8}{@{}l@{}}{\scriptsize The distances, d$_\mathrm{pc}$, are from the third data release of \citet{Gaia_2021}.} \\
\multicolumn{8}{@{}l@{}}{\scriptsize $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$ is the average over the lines for which we measure a PA$_{\mathrm{outflow/jet}}$ in Table \ref{tab:jets_properties}.}\\
\multicolumn{8}{@{}l@{}}{\scriptsize The symbol '--' means no detection.} \\
\end{tabular}
\end{table*}
T Tauri sources show complex velocity line profiles. The low-velocity component (LVC) can be classified into two components: the narrow component (NC) and the broad component (BC), both presumably tracing photoevaporative or magnetohydrodynamic winds. \citet{Whelan_2021} reported the first detection of MHD winds in which the forbidden emission lines were spatially resolved using spectroastrometric analysis. On the other hand, a small percentage \citep[$\sim$30$\%$;][]{Nisini_2018} shows a high-velocity component (HVC) associated with jets. Although winds, outflows, and jets are formed by different mechanisms, collimated disk winds could form the base of an outflow-like or jet-like structure. Many surveys \citep[e.g.,][]{Hamann_1994, Hartigan_1995, Hirth_1997, Antoniucci_2011, Rigliaco_2013, Natta_2014, 2Simon_2016, Nisini_2018, Fang2018, Banzatti2019} have shown that the HVC of the [\ion{O}{i}]~$\lambda$6300 excitation line is common and likely probes outflows, jets, or MHD winds \citep{Edwards_1987, Hartigan_1995, Bacciotti_2000, Lavalley_2000, Woitas_2002}. Moreover, the HVC in T Tauri disks has been spatially confirmed to be associated with jets, whose emission reaches a greater extension from the star than that of lines showing only the LVC \citep{Dougados_2000}. Other lines, such as [\ion{N}{ii}]~$\lambda$6583 and [\ion{S}{ii}]~$\lambda$6731, have also been reported to probe outflows and jets \citep{Solf_1993, Dougados_2000, Lavalley_2000, Woitas_2002, Natta_2014, Nisini_2018}, for which the detailed kinematic structure has been determined from the analysis of the intrinsic line profiles. Another important line is H$\mathrm{\alpha}$, which traces the velocity fields of outflows and jets \citep{Edwards_1987, Bacciotti_2000, Woitas_2002}, as well as accretion processes in the accretion flow onto a planet and in the circumplanetary disk \citep[e.g.,][]{Haffert_2019, Hashimoto_2020, Marleau_2022}.
Studying the connection between the inner and outer disk has attracted much attention lately. Beyond the rings and gaps seen in protoplanetary disks, shadows could indicate some degree of misalignment between the inner and outer disk \citep{Marino_2015, stolker_2016, Min_2017, Debes_2017, Benisty_2017, Casassus_2018, Benisty_2018, Pinilla_2019, Muro-Arena_2020}. Resolving the inner parts of the disk is quite challenging. A recent study by \citet{Bohn_2022} showed that it is possible to assess the inner disk using near-infrared and submillimeter observations of T Tauri and Herbig Ae/Be disks. They emphasize the difficulty of measuring a possible misalignment between the inner and outer disks in T Tauri systems, mainly due to the limited angular resolution of VLT/GRAVITY or the specific geometric orientation of the disks.
In this paper, we present the detection of the forbidden emission lines [\ion{O}{i}]~$\lambda\lambda$6300, 6363, [\ion{N}{ii}]~$\lambda\lambda$6548, 6583, H$\mathrm{\alpha}$, and [\ion{S}{ii}]\,$\lambda\lambda$6716, 6730, some of them seen for the first time in these T Tauri sources, obtained with the MUSE/VLT instrument in the narrow-field mode (NFM). These forbidden emission lines originate from the innermost part of the disk in the form of outflows/jets; we estimate their position angle (PA$_\mathrm{outflow/jet}$), which is connected with the inner disk, and compare it with the dust position angle (PA$_\mathrm{dust}$) derived in previous work. In \S\ref{sec:sect_2}, we describe the physical properties of the disks using the Atacama Large Millimeter/submillimeter Array (ALMA) and the MUSE observations. In \S\ref{sect_3}, we present our estimates of the PA$_\mathrm{outflow/jet}$ and analyze the velocity components of the line profiles. In \S\ref{sect_4} and \S\ref{sect:sect_5}, we present our summary and conclusions, respectively.
\section{Disk parameters and observations}
\label{sec:sect_2}
We briefly summarize the observational description of the dust continuum emission of each disk. The physical properties used to compare with the properties of the outflows/jets were originally constrained by previous studies. Lastly, we introduce the MUSE observations.
\subsection{Dust continuum parameters}
The disks presented here have been well studied and are known to have dust substructures detected at millimeter wavelengths. We adopted additional disk properties, such as the inclination and PA$_\mathrm{dust}$, from previous works that performed disk model fitting based on the dust morphology and the origin of the substructures. The distance of each source is obtained from the third data release of Gaia \citep{Gaia_2021}.
The disk properties were obtained and analyzed from different datasets. The disk properties of IM Lup and GW Lup were obtained from the Disk Substructures at High Angular Resolution Project (DSHARP) 1.25 mm continuum observations, which have a high spatial resolution of 0$\farcs$04 \citep{Andrews_2018, Huang_2018}. For the disks in the Taurus star-forming region, we considered the ALMA Cycle 4 observations (ID: 2016.1.01164.S; PI: Herczeg), taken at 1.33 mm with a resolution of $\sim$0$\farcs$12 \citep{Long_2018, Long_2019}.
\subsection{MUSE observations and data reduction}
The observations were carried out between November 2019 and January 2020 with VLT/MUSE in the NFM under program-ID 0104.C-0919 (PI: A. M\"uller). The MUSE optical integral field spectrograph covers the spectral range 4650--9300~\AA~at a resolving power of $\sim$2000 at 4600~\AA~and $\sim$4000 at 9300~\AA~\citep{2010SPIE.7735E..08B}, with a spectral sampling of 0.125 nm per pixel. The adaptive-optics assisted NFM has a field of view (FOV) of 7.4$\times$7.4~$\mathrm{arcsec}^{2}$ with a spatial scale of 0.025~$\mathrm{arcsec}$ per pixel, about ten times finer than that of the wide-field mode. With this spatial resolution, we can recover the spatial extent of the outflow/jet emission along the x and y pixel axes. The VLT adaptive optics system uses four laser guide stars whose wavelengths range from 5781~\AA~to 6048~\AA. This spectral range is removed from the MUSE spectra to avoid the presence of additional lines that occur when the laser interacts with air molecules.
The MUSE-NFM observations were reduced with the standard ESO pipeline v2.8.1 \citep{2012SPIE.8451E..0BW, 2014ASPC..485..451W, 2016ascl.soft10004W}, which includes bias subtraction, spectral extraction, flat-fielding, wavelength calibration, and flux calibration. Although the NFM includes an atmospheric dispersion corrector (ADC), the centroid variations can still be a few spatial pixels along the entire wavelength range. Therefore, we corrected the calibrated pixel tables for each exposure based on the centroid shifts with wavelengths measured on intermediate cube reconstructions. The corrected pixel tables were then used to create the combined cube of all calibrated exposures.
The wavelength calibration was done using a helium-argon arc lamp from the day before each night of observations. Due to temperature variations, the wavelength solution for a given spectrum can be offset, and an empirical wavelength correction using bright skylines should be applied \citep{Weilbacher_2020}. Skylines are, however, faint in the NFM, so this wavelength correction is hard to apply. These offsets can vary from target to target and are on the order of fractions of a pixel, up to a full pixel in extreme cases \citep[5--50 km~s$^{-1}$; see also][]{Xie_2020}. We report eight data cubes, each obtained with 0.5~h of observing time.
\begin{figure*} [!htbp]
\centering
\begin{subfigure}[]{.45\textwidth}
\includegraphics[width=7cm]{DLTau_image_new.pdf}
\end{subfigure}
\begin{subfigure}[]{0.45\textwidth}
\includegraphics[width=7cm]{CITau_image_new.pdf}
\end{subfigure}
\begin{subfigure}[]{.45\textwidth}
\includegraphics[width=7cm]{DSTau_image_new.pdf}
\end{subfigure}
\begin{subfigure}[]{.45\textwidth}
\includegraphics[width=7cm]{IPTau_image_new.pdf}
\end{subfigure}
\begin{subfigure}[]{.45\textwidth}
\includegraphics[width=7cm]{IMLup_image_new.pdf}
\end{subfigure}
\caption{Visualization of the estimated $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$ compared to the PA$_{\mathrm{dust}}$. The top panels show DL Tau (\textit{left}) and CI Tau (\textit{right}), the middle panels show DS Tau (\textit{left}) and IP Tau (\textit{right}), and the bottom panel shows IM Lup. The MUSE outflow/jet images are averaged over the different emission lines used to estimate the $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$. We adopt the geometrical configuration visually demonstrated in Figure 3 of \citet{Pietu_2007} for the disks with positive and negative inclination. CI Tau has a positive disk inclination, and its $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$ is measured from north to west; the rest of the sources have negative disk inclinations, and their $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$ is measured from north to east.}
\label{DLTau_and_CITau_diagram}
\end{figure*}
\subsection{MUSE and ALMA image registration}
When comparing the MUSE images with the ALMA data, we focus on comparing the geometry of the dusty outer disk plane, as measured with ALMA, with the position angle of the outflows/jets, as measured with MUSE. Under the assumption that the disk is circular and the star is at its center, the ALMA observations can be used to recover the inclination and the position angle relative to the fitted disk center. In our sample, the disk geometries were obtained through visibility fitting \citep{Long_2018} or image fitting \citep{Huang_2018}, and we adopted those published values of the inclination and position angle for the outer disk plane. As both works fit the center of the disk under the assumptions mentioned above, the PA$_\mathrm{dust}$ already includes the observational and instrumental uncertainties, which remain fixed in this work.
With MUSE, our assumption in recovering the PA$_\mathrm{outflow/jet}$ is that the star is centered in the image frame. In order to confirm this, we fit a 2D circular Gaussian to the specific channels used to analyze the PA$_\mathrm{outflow/jet}$. We do this on the reduced data cube without PSF subtraction, assuming that the outflow/jet emission is negligible compared to the stellar emission. We found that, for each channel, the center of the Gaussian lies less than 1 pixel (i.e., less than 25 mas) from the center of the image for all sources. Therefore, we measured the PA$_\mathrm{outflow/jet}$ relative to the center of the image, where the star is located.
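The following minimal Python sketch illustrates this centroid check, assuming Astropy's modeling interface; the function name and the handling of the image cutout are illustrative, not the exact code used.
\begin{verbatim}
import numpy as np
from astropy.modeling import models, fitting

def gaussian_centroid(image):
    """Fit a circular 2D Gaussian and return its center in pixels."""
    ny, nx = image.shape
    y, x = np.mgrid[:ny, :nx]
    g_init = models.Gaussian2D(amplitude=image.max(),
                               x_mean=nx / 2, y_mean=ny / 2,
                               x_stddev=3.0, y_stddev=3.0)
    g_init.y_stddev.tied = lambda m: m.x_stddev  # enforce circularity
    g = fitting.LevMarLSQFitter()(g_init, x, y, image)
    return g.x_mean.value, g.y_mean.value
\end{verbatim}
The returned center can then be compared with the nominal image center, channel by channel, to verify the sub-pixel offsets quoted above.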
When comparing ALMA with MUSE, the assumptions are that the outer disk geometry (incl, PA$_\mathrm{dust}$) was measured relative to the same point as the PA$_\mathrm{outflow/jet}$, namely the stellar position, and that the outer disk plane did not change even though the two datasets were taken at different epochs. This comparison of the disk geometry between different instruments is similar to the work done by \citet{Bohn_2022}, who combined K-band data from GRAVITY/VLTI to constrain the geometry of the inner disk by finding the best parametric SED solution to the squared visibilities. These authors used ALMA $^{12}$CO and $^{13}$CO data to constrain the geometry of the outer disk. Their determination of a potential misalignment between the inner and outer disks is narrowed down by comparing the inclinations and position angles defined by the planes of the inner and outer disks, respectively. \citet{Gravity_2020} describe the position angle constrained by ALMA as a well-fixed parameter for describing the geometry of the disk. We include the rotational accuracy of 0.2$^{\circ}$ when propagating the uncertainty for each PA$_\mathrm{outflow/jet}$ value in Table \ref{tab:jets_properties} (see also Appendix \ref{sec:rotation_residual_analysis}).
\section{Results}
\label{sect_3}
We identified seven forbidden emission lines, [\ion{O}{i}]~$\lambda\lambda$6300, 6363, [\ion{N}{ii}]~$\lambda\lambda$6548, 6583, H$\mathrm{\alpha}$, and [\ion{S}{ii}]\,$\lambda\lambda$6716, 6730, which are seen originating from the innermost part of the disk as outflow-like or jet-like structures (Fig.\ref{fig:supra}, Fig.\ref{fig:DLTau_jet_dust}, Figure \ref{spectrum1}, Figure \ref{spectrum2}, and Figure \ref{spectrum3}). Some of these lines have already been reported for some of the sources analyzed here \citep[e.g.,][]{2Simon_2016, Fang2018, Banzatti2019}. We used \citet{Natta_2014}, \citet{Podio_2006}, and the NIST atomic spectra database for the line identification and line uncertainty. The PA$_\mathrm{outflow/jet}$ is defined as the position angle of the outflow/jet, measured in degrees, which lies approximately along the projected rotational axis of the disk (i.e., perpendicular to the disk major axis). In the next section, we detail how the geometrical derivation of the PA$_\mathrm{outflow/jet}$ is conducted. It is very likely that we are not solely seeing jets, but also outflows and perhaps winds, which cannot be revealed due to the low spectral resolution of MUSE. Any description and analysis of winds is beyond the scope of the present work, as it would require assessing the velocity components of a few km s$^{-1}$ that characterize them. We performed Gaussian fits to the emission lines to study their velocity components and, finally, we measured the length of the outflow/jet to estimate the mass loss from the [\ion{O}{i}]~$\lambda$6300 line.
\begin{table*}[htp!]
\caption{Outflow/jet properties.}
\label{tab:jets_properties}
\small
\centering
\begin{tabular}{lccccccccccr}
\noalign{\smallskip} \hline \hline
\noalign{\smallskip} & \multicolumn{4}{c}{[\ion{O}{i}] $\lambda6300.30 ~\pm$~0.010~\AA} & & \multicolumn{4}{c}{[\ion{N}{ii}] $\lambda6548.05 ~\pm$~0.10~\AA} \\
\cline{2-5}
\cline{7-10}
\noalign{\smallskip} Name & FWHM & $v_{\mathrm{outflow/jet}}$ & $F_{\lambda}$ & PA$_{\mathrm{outflow/jet}}$ & & FWHM & $v_{\mathrm{outflow/jet}}$ & $F_{\lambda}$ & PA$_{\mathrm{outflow/jet}}$ \\
& (km~s$^{-1}$) & (km~s$^{-1}$) & (erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$) & ($^{\circ}$) & & (km~s$^{-1}$) & (km~s$^{-1}$) & (erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$) & ($^{\circ}$) \\
\hline
DL Tau & 135.22 & -217.8$\pm$2.0 & 7.96$\times$10$^{-15}$ & 143.41$\pm$0.25 & & 130.89 & -216.3$\pm$2.0 & 1.93$\times$10$^{-15}$ & -- \\
CIDA9 & -- &-- & -- & -- & & -- & -- & -- & -- \\
CI Tau & 160.90 & -209.7$\pm$1.3 & 1.18$\times$10$^{-15}$ & 78.12$\pm$0.61 & & -- & -- & -- & --\\
DS Tau & 143.70 & 161.9$\pm$1.6 & 1.21$\times$10$^{-15}$ & 69.79$\pm$0.68 & & -- & -- & -- & -- \\
GO Tau & -- & -- & -- & -- & & -- & -- & -- & --\\
IP Tau & 204.58 & -83.0$\pm$2.0 & 2.87$\times$10$^{-16}$ & 81.35$\pm$1.53 & & -- & -- & -- & -- \\
IM Lup & 262.19 & -95.4$\pm$1.3 & 4.91$\times$10$^{-16}$ & 59.08$\pm$1.67 & & -- & -- & -- & -- \\
GW Lup & -- & -- & -- & -- & & -- & -- & -- & -- \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[htb]
\label{tab:jets_properties_cont}
\small
\centering
\begin{tabular}{lccccccccccr}
\noalign{\smallskip} \hline \hline
\noalign{\smallskip} & \multicolumn{4}{c}{H$\mathrm{\alpha}$ $\lambda$6562.80 $\pm~10^{-5}$~\AA} & &\multicolumn{4}{c}{[\ion{S}{ii}] $\lambda$6716.44 $\pm$~0.010~\AA} \\
\cline{2-5}
\cline{7-10}
\noalign{\smallskip} Name & FWHM & $v_{\mathrm{outflow/jet}}$ & $F_{\lambda}$ & PA$_{\mathrm{outflow/jet}}$ & & FWHM & $v_{\mathrm{outflow/jet}}$ & $F_{\lambda}$ & PA$_{\mathrm{outflow/jet}}$ \\
& (km~s$^{-1}$) & (km~s$^{-1}$) & (erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$) & ($^{\circ}$) & & (km~s$^{-1}$) & (km~s$^{-1}$) & (erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$) & ($^{\circ}$) \\
\hline
DL Tau & 121.73 & -222.9$\pm$2.0 & 2.28$\times$10$^{-14}$ & 142.79$\pm$0.23 & & 124.87 & -208.0$\pm$2.0 & 7.31$\times$10$^{-15}$ & 143.22$\pm$0.23 \\
CIDA9 & -- & -- & -- & -- & & -- & -- & -- & --\\
CI Tau & 154.65 & -184.8$\pm$1.3 & 5.51$\times$10$^{-15}$ & -- & & 150.17 & -195.6$\pm$1.3 & 3.74$\times$10$^{-16}$ & --\\
DS Tau & -- & -- & -- & -- & & 115.56 & 151.2$\pm$1.6 & 6.48$\times$10$^{-16}$ & -- \\
GO Tau &-- &-- &-- & -- & & -- & -- & -- & --\\
IP Tau & -- & -- & -- & -- & & -- & -- & -- & -- \\
IM Lup & -- & -- & -- & -- & & -- & -- & -- & -- \\
GW Lup & -- & -- & -- & -- & & -- & -- & -- & -- \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[htb]
\small
\centering
\begin{tabular}{lccccccccccr}
\noalign{\smallskip} \hline \hline
\noalign{\smallskip} & \multicolumn{4}{c}{[\ion{O}{i}] $\lambda$6363.78 $\pm$~0.010~\AA} & &\multicolumn{4}{c}{[\ion{N}{ii}] $\lambda$6583.45 $\pm$~0.10~\AA} \\
\cline{2-5}
\cline{7-10}
\noalign{\smallskip} Name & FWHM & $v_{\mathrm{outflow/jet}}$ & $F_{\lambda}$ & PA$_{\mathrm{outflow/jet}}$ & & FWHM & $v_{\mathrm{outflow/jet}}$ & $F_{\lambda}$ & PA$_{\mathrm{outflow/jet}}$ \\
& (km~s$^{-1}$) & (km~s$^{-1}$) & (erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$) & ($^{\circ}$) & & (km~s$^{-1}$) & (km~s$^{-1}$) & (erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$) & ($^{\circ}$) \\
\hline
DL Tau & 135.67 & -219.5$\pm$2.0 & 2.64$\times$10$^{-15}$ & 143.77$\pm$0.31 & & 116.65 & -217.8$\pm$2.0 & 4.93$\times$10$^{-15}$ & 143.51$\pm$0.23 \\
CIDA9 & -- & -- & -- & -- & & -- & -- & -- & --\\
CI Tau & -- & -- & -- & -- & & 158.72 & -225.4$\pm$1.3 & 1.20$\times$10$^{-15}$ & 79.68$\pm$0.92 \\
DS Tau & -- & -- & -- & -- & & 129.32 & 194.8$\pm$1.6 & 2.18$\times$10$^{-15}$ & 71.59$\pm$0.39 \\
GO Tau & -- &-- &-- & -- & & -- & -- & -- & --\\
IP Tau & -- & -- & -- & -- & & -- & -- & -- & -- \\
IM Lup & -- & -- & -- & -- & & 135.32 & -124.1$\pm$1.3 & 3.77$\times$10$^{-16}$ & 52.96$\pm$0.90 \\
GW Lup & -- & -- & -- & -- & & -- & -- & -- & -- \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[htb!]
\small
\centering
\begin{tabular}{lcccccr}
\noalign{\smallskip} \hline \hline
\noalign{\smallskip} & \multicolumn{4}{c}{[\ion{S}{ii}] $\lambda$6730.82 $\pm$~0.010~\AA} \\
\cline{2-5}
\noalign{\smallskip} Name & FWHM & $v_{\mathrm{outflow/jet}}$ & $F_{\lambda}$ & PA$_{\mathrm{outflow/jet}}$\\
& (km~s$^{-1}$) & (km~s$^{-1}$) & (erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$) & ($^{\circ}$)\\
\hline
DL Tau & 122.54 & -236.6$\pm$2.0 & 1.16$\times$10$^{-14}$ & 142.81$\pm$0.23 \\
CIDA9 & -- & -- & -- & -- \\
CI Tau & 152.89 & -189.7$\pm$1.3 & 7.35$\times$10$^{-16}$ & 79.57$\pm$0.79 \\
DS Tau & 130.87 & 167.9$\pm$1.6 & 1.11$\times$10$^{-15}$ & -- \\
GO Tau & -- & -- & -- & -- \\
IP Tau & -- & -- & -- & -- \\
IM Lup & -- & -- & -- & -- \\
GW Lup & -- & -- & -- & -- \\
\hline
\end{tabular}
\caption*{\footnotesize{\textbf{Note:} The outflow/jet velocity is obtained by de-projecting the line centroids from Gaussian fits, as explained in Sect. \ref{sec:outflow_jet_velocities}; the errors reported in this table do not include the uncertainty in the wavelength calibration. The symbol '--' means no detection. All PA$_{\mathrm{outflow/jet}}$ uncertainties include the 0.2$^{\circ}$ MUSE rotational uncertainty.}}
\end{table*}
\begin{figure*} [!htbp]
\centering
\includegraphics[width=12cm]{jet_wiggles_gaussian_fit2.pdf}
\caption{Successive Gaussian centers, shown as navy-colored dots. Wiggles are seen in DL Tau and IP Tau in [\ion{O}{i}]~$\lambda$6300, and in CI Tau, DS Tau, and IM Lup in [\ion{N}{ii}]~$\lambda$6583.}
\label{fig:joydivision}
\end{figure*}
\subsection{Geometrical fitting of the outflows/jets}
\label{geometrical_fitting}
In order to derive the PA$_{\mathrm{outflow/jet}}$ for each spectral line in all disks, we fit the jet intensity peaks with a linear function (after stellar subtraction). We read off the maximum value across the outflow/jet for the forbidden emission lines detected in each source and store the x and y positions corresponding to the jet emission in a new array. These are then converted into polar coordinates, where $r = \sqrt{x^{2}+y^{2}}$ and $\theta = \arctan(y/x)$.
Jet intensity peaks are only considered if they are above 5$\sigma$ of the background; for the other imaging parameters and wavelengths, see Tables \ref{tab:jet_lengths} and \ref{tab:spectral_widths}. The retrieved positions contain the angular information needed to calculate the PA$_{\mathrm{outflow/jet}}$.
In order to fit a line to these retrieved positions of the jet, we employed a fitting method called orthogonal distance regression \citep[ODR;][]{brown1990statistical}. This routine takes the retrieved data and the linear model function and produces the best linear parameters for the jet. The model function follows the ordinary prescription of a straight line, $f(x_{i}; \vec{\beta}) = \beta_{1}x_i + \beta_{0}$, where $x_{i}$ is the position in pixels, and $\beta_{0}$ and $\beta_{1}$ are the unknown parameters to be found by the ODR routine. We assign an initial standard deviation of 1 pixel to each jet peak position. Choosing a value of 1 pixel is a conservative selection for the initial standard error, as we found that, when estimating the PA$_{\mathrm{outflow/jet}}$, the corresponding error of the Gaussian center is smaller than 1 pixel. As the function used for the fit, $f(x_{i}; \vec{\beta})$, is linear both in the variables, $x_{i}$, and in the parameters, $\vec{\beta}$, the total error of the routine is estimated by applying the linear least squares method, that is, by finding the set of parameters for which the sum of the squares of the $n$ orthogonal distances from the $f(x_{i}; \vec{\beta})$ curve to the $n$ data points is minimized. The error associated with the fit thus follows from min $\sum_{i=1}^{n} \left\{[y_{i} - f(x_{i}+\delta_{i};\vec{\beta})]^{2} + \delta_{i}^{2}\right\}$, where $\delta_{i}$ is the error associated with each of the $n$ data points.
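A minimal sketch of this fitting step, assuming SciPy's \texttt{odr} module, is given below; the arrays of jet-peak positions are placeholders, and the conversion of the fitted slope into a position angle is only indicative, since it depends on the adopted on-sky orientation convention.
\begin{verbatim}
import numpy as np
from scipy import odr

def fit_jet_axis(x_peaks, y_peaks):
    """ODR fit of f(x; beta) = beta1 * x + beta0 to the jet peaks."""
    def line(beta, x):
        return beta[1] * x + beta[0]
    # 1-pixel initial standard deviation on both coordinates
    data = odr.RealData(x_peaks, y_peaks, sx=1.0, sy=1.0)
    out = odr.ODR(data, odr.Model(line), beta0=[0.0, 1.0]).run()
    slope = out.beta[1]
    pa = np.degrees(np.arctan(slope))  # indicative slope-to-angle step
    return pa, out.beta, out.sd_beta
\end{verbatim}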
For the iterative routine to estimate the PA$_{\mathrm{outflow/jet}}$ for each emission line, we used as an initial estimate the PA$_{\mathrm{outflow/jet}}$ derived from the locations of the jet intensity peaks in the first step. We then used this first value of the PA$_{\mathrm{outflow/jet}}$ as a seed for a second iteration, in which we fit Gaussians to profiles that are orthogonal to the jet axis defined by the first PA$_{\mathrm{outflow/jet}}$. Depending on the source, the first three to five pixels were not used, to avoid artifacts from the stellar subtraction at the image center. As an example, a comparison between the peak intensity locations and the Gaussian centers is shown for DL Tau in Fig. \ref{fig:linear_jet_DLTau_diagram2}. Those Gaussian centers were then used to calculate the final PA$_{\mathrm{outflow/jet}}$ for each emission line. The final PA$_{\mathrm{outflow/jet}}$ values of the disks for which we detect bright emission lines are shown in Table \ref{tab:jets_properties}. We then took the average, $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$, of the PA$_{\mathrm{outflow/jet}}$ estimates of the different emission lines in Table \ref{tab:jets_properties} and added the corresponding errors in quadrature. This $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$ value for each source was then compared with the PA$_\mathrm{dust}$ (see Table \ref{tab:disk_properties}). We note that the CIDA 9, GO Tau, and GW Lup jets were too compact at the center to allow a proper estimate of any PA$_{\mathrm{outflow/jet}}$. Before comparing the $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$ with the PA$_{\mathrm{dust}}$, it is important to understand from which side of the disk plane the emission line comes. We used the same geometrical convention as in Figure 3 of \citet{Pietu_2007} to indicate the actual disk plane configuration through the inclination of each disk system.
In Fig.\ref{DLTau_and_CITau_diagram}, we portray this convention by superimposing the outflows/jets on the mm dust continuum of the protoplanetary disks. It provides a better visualization of the measured $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$, in comparison with the PA$_\mathrm{dust}$, for DL Tau (\textit{left panel}), the case of negative outer disk inclination, and CI Tau (\textit{right panel}), the case of positive outer disk inclination. DS Tau, IP Tau, and IM Lup have negative outer disk inclinations. The PA$_\mathrm{dust}$ follows the usual definition of the position angle of the major axis of the outer disk, from 0$^{\circ}$ to 180$^{\circ}$, measured from north to east. Since DL Tau has a negative outer disk inclination, the PA$_\mathrm{outflow/jet}$ values are measured counterclockwise from north, which is equivalent to flipping the disk. On the other hand, CI Tau has a positive outer disk inclination, so the PA$_\mathrm{outflow/jet}$ values are measured clockwise from north. Having identified the orientation of the outer disk inclination, we determined the rotation of the protoplanetary disk--outflow/jet system (see Fig.\ref{fig:DLTau_jet_dust}).
We then computed the difference between the $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$, which is related to the PA of the innermost disk, and the PA$_\mathrm{dust}$, as follows:
\begin{equation}
\text{difference} =
\begin{cases}
\lvert \lvert \overline{\mathrm{PA}}_{\mathrm{outflow/jet}}-\text{PA}_{\mathrm{dust}}\lvert-90^{\circ} \lvert, & \text{if incl.<0} \\
\lvert \lvert \overline{\mathrm{PA}}_{\mathrm{outflow/jet}}+\text{PA}_{\mathrm{dust}} \lvert -90^{\circ} \lvert, & \text{if incl.>0}
\end{cases}
\end{equation}
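For clarity, the prescription above can be transcribed directly as follows (a sketch; angles in degrees, with the inclination sign convention of Table \ref{tab:disk_properties}):
\begin{verbatim}
def pa_difference(pa_outflow_jet, pa_dust, incl):
    """Difference between PA_outflow/jet and PA_dust, following
    the prescription above."""
    if incl < 0:
        return abs(abs(pa_outflow_jet - pa_dust) - 90.0)
    return abs(abs(pa_outflow_jet + pa_dust) - 90.0)
\end{verbatim}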
We found the difference between the $\overline{\mathrm{PA}}_\mathrm{outflow/jet}$ and the PA$_\mathrm{dust}$ to be small in general (see Table \ref{tab:disk_properties}). For the sources analyzed here, the estimates of the PA$_\mathrm{outflow/jet}$ and their associated errors lie within the estimate of the PA$_\mathrm{dust}$ and its associated error range. Under the assumption that the outflow (or jet) axis is perpendicular to the inner disk plane, we do not find any misalignment between the inner and outer disk. The spread of the PA$_\mathrm{outflow/jet}$ values in Table \ref{tab:jets_properties} depends on the uncertainties and values coming from the Gaussian fit centers.
The symmetric Gaussian fit model is not well suited to capturing possible asymmetries in outflows/jets produced by low signal-to-noise variations, especially close to the center of the images, where the rms is not constant (see Figure \ref{fig:joydivision}).
Our rms values are estimated in a region free from stellar and outflow/jet emission; however, the rms value close to the image center is not constant due to contamination from the subtraction of the stellar spectrum. Those effects are not systematically accounted for in our PA$_\mathrm{outflow/jet}$ estimates; therefore, our uncertainties should be considered lower limits.
The estimation of the PA$_\mathrm{outflow/jet}$ for IM Lup was more difficult because of its compact extent and low signal-to-noise level.
The intensity variations in the outflow/jet are comparable to the noise level, resulting in a PA$_\mathrm{outflow/jet}$ difference of 6$^{\circ}$ between the [\ion{O}{i}] $\lambda6300$ and [\ion{N}{ii}] $\lambda6583$ lines.
Calculating the difference between PA$_\mathrm{outflow/jet}$ and PA$_\mathrm{dust}$ yields 5.18$^{\circ}\pm1.77^{\circ}$ and 0.94$^{\circ}\pm1.08^{\circ}$ for the [\ion{O}{i}] $\lambda6300$ and [\ion{N}{ii}] $\lambda6583$ lines, respectively.
The $\overline{\mathrm{PA}}_{\mathrm{outflow/jet}}$ and PA$_\mathrm{dust}$ differ by $2.12^{\circ}\pm1.99^{\circ}$; therefore, the difference is within the uncertainty. This is further discussed in \S\ref{sect_4}.
\begin{figure}[htbp]
\centering
\includegraphics[width=7cm]{line_profile_Halpha.pdf}
\caption{H${\alpha}$ line profile identified for DL Tau, probing the high line-of-sight velocity component coming from the strong outflow/jet. The red dashed line represents the maximum line-of-sight velocity.}
\label{line_profile_Halpha}
\end{figure}
\begin{figure*}[htp!]
\centering
\includegraphics[width=12cm]{line_profile_OI6300.pdf}
\caption{[\ion{O}{i}]$\lambda$6300 line profiles identified in five sources from our sample. DL Tau and CI Tau have their lines centered at velocities greater than 100 km s$^{-1}$, which is characteristic of jets. In contrast, DS Tau, IP Tau, and IM Lup show smaller shifts of less than 100 km s$^{-1}$ and broader line widths significantly overlapping with 0 km s$^{-1}$, possibly including LVC emission as a result. DS Tau has the most centered line.}
\label{line_profile_OI6300}
\end{figure*}
Overall, we detected two essential geometrical features. One is a broadening of the respective intensity distribution with distance, indicating that the jet width increases with distance from the star. The other feature is the variation of the location of the intensity maximum, indicating a wiggling of the jet axis. A periodic pattern in these changes may hint at a precessing or orbiting jet \citep{Fendt_1998}. In Fig. \ref{fig:joydivision}, the wiggle patterns are marked by the Gaussian centers, shown as navy-colored dots. We further investigate how the positions of these Gaussian centers vary with distance along the jet axis (see Fig. \ref{linear_jet_DLTau_diagram} and Fig. \ref{fig:linear_jet_DLTau_diagram2}). In order to check whether these successive Gaussian centers show periodicity, we calculate their deviation from the jet axis; a sketch of this check is given below. Fitting a simple sinusoidal function in a periodogram does not yield a significant period, even though there appears to be some quasi-periodic behavior in the Gaussian centers by eye. In \S\ref{jet_precession}, we further discuss how the patterns of the Gaussian centers could possibly hint at jet precession.
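This periodicity check can be sketched as follows, assuming Astropy's \texttt{LombScargle}; the arrays of distances along the jet and of deviations from the axis are placeholders.
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

def check_wiggle_period(distance, deviation):
    """Periodogram of the Gaussian-center deviations from the jet axis."""
    ls = LombScargle(distance, deviation)
    freq, power = ls.autopower()
    best_freq = freq[np.argmax(power)]
    fap = ls.false_alarm_probability(power.max())  # peak significance
    return 1.0 / best_freq, fap
\end{verbatim}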
\begin{figure*}[!htbp]
\centering
\includegraphics[width=12cm]{line_profile_NII6583.pdf}
\caption{[\ion{N}{ii}]$\lambda$6583 line profiles identified in four sources from our sample. As in Fig. \ref{line_profile_OI6300}, DL Tau and CI Tau probe high-velocity components, while DS Tau and IM Lup are more nearly centered at zero.}
\label{line_profile_NII6583}
\end{figure*}
\begin{figure*}[!htbp]
\centering \includegraphics[width=12cm]{line_profile_SII6730.pdf}
\caption{[\ion{S}{ii}]$\lambda$6730 line profiles identified for DL Tau, CI Tau, and DS Tau.}
\label{line_profile_SII6730}
\end{figure*}
\subsection{Outflow/jet velocities}
\label{sec:outflow_jet_velocities}
These forbidden optical emission lines are established tracers of outflows/jets and winds in T Tauri stars. From their line velocities, we can characterize whether they correspond to a high-velocity or a low-velocity component \citep{Edwards_1987, Hartigan_1995}. For each spectral frame in the data cube, we summed pixel fluxes over the region defined by the dashed yellow rectangles in Fig.\ref{fig:supra} to obtain a spectrum of the emission. To measure the emission line centroids to sub-pixel precision, we take the line peak from a Gaussian fit to the data (see Fig. \ref{line_profile_Halpha}, Fig. \ref{line_profile_OI6300}, Fig. \ref{line_profile_NII6583}, and Fig. \ref{line_profile_SII6730}). In Table \ref{tab:jets_properties}, we report the de-projected outflow/jet velocities estimated from the Gaussian fits as follows:
\begin{equation}
v_{\mathrm{outflow/jet}} = \frac{c \left( \lambda_\mathrm{gauss}-\lambda_\mathrm{ref}\right) / \lambda_\mathrm{ref}}{\cos(\mathrm{incl})},
\end{equation}
where $c$ is the speed of light, incl is the inclination of the disk (Table \ref{tab:disk_properties}), $\lambda_\mathrm{gauss}$ is the Gaussian fit centroid, and $\lambda_\mathrm{ref}$ is the rest wavelength in air of each forbidden line.
The errors on the de-projected velocities, $v_{\mathrm{outflow/jet}}$, in Table \ref{tab:jets_properties} are propagated from the Gaussian fit and the disk inclination, but do not include the uncertainty in the wavelength calibration, which is likely between 5 km~s$^{-1}$ and 50 km~s$^{-1}$ \citep[see][]{Xie_2020}. Most of the $v_{\mathrm{outflow/jet}}$ that we could estimate from the emission lines are blue-shifted, and their values correspond mostly (but not exclusively) to high-velocity components. We determine that for each source we only see one side of the outflow/jet, the dust of the protoplanetary disk obscuring the receding part. Unlike the one-sided blue-shifted sources, DS Tau shows red-shifted velocities.
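For concreteness, a minimal sketch of this velocity estimate follows; the centroid wavelength and inclination in the example call are hypothetical illustration values, not values from our fits.
\begin{verbatim}
# Sketch of the de-projected velocity from a Gaussian centroid.
import numpy as np

C_KMS = 299792.458          # speed of light [km/s]

def v_outflow_jet(lambda_gauss, lambda_ref, incl_deg):
    """Line-of-sight Doppler velocity, de-projected by the disk
    inclination (jet assumed perpendicular to the disk plane)."""
    v_los = C_KMS * (lambda_gauss - lambda_ref) / lambda_ref
    return v_los / np.cos(np.radians(incl_deg))

# Example: [O I] 6300.304 A air rest wavelength, hypothetical
# centroid and inclination; negative result means blue-shifted.
print(v_outflow_jet(6296.5, 6300.304, 45.0))
\end{verbatim}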
The detection of the H$\mathrm{\alpha}$ line seems to stand out in the spectra (Fig.\ref{spectrum1}, Fig.\ref{spectrum2}, and Fig.\ref{spectrum3}). However, the H$\mathrm{\alpha}$ emission is spread over an area surrounding the line spread function, resembling the high noise level that H$\mathrm{\alpha}$ introduces through its high photon count. This effect is not seen as prominently in DL Tau (Fig. \ref{line_profile_Halpha}). An instrumental artifact causes this spread, and the reason behind it is unknown. Similar effects have been seen and analyzed by \citet{Xie_2020}, where the H$\alpha$ line-to-continuum ratio varies across the field. These variations are so high that the residuals due to this instrumental issue are stronger than those from photon noise. The noise reduction is taken into account when applying the principal component analysis \citep[PCA;][]{Soummer_2012} method, but the error scale is so high that quantifying the real H$\mathrm{\alpha}$ emission over the area is difficult.
The structured spectral line profiles reported here show the intrinsic outflow/jet velocity peaks, or the line-of-sight velocity, for the lines that we were able to capture in the spectra (Fig. \ref{line_profile_Halpha}, Fig. \ref{line_profile_OI6300}, Fig. \ref{line_profile_NII6583}, and Fig. \ref{line_profile_SII6730}). The velocity values reported here are corrected for the stellar radial velocity reported in Table \ref{tab:disk_properties} from \citet{Banzatti2019} and \citet{Fang2018}. The [\ion{O}{i}]$\lambda$6300, H$\alpha$, [\ion{N}{ii}]$\lambda$6583, and [\ion{S}{ii}]$\lambda$6730 line profiles (Fig. \ref{line_profile_OI6300}, Fig. \ref{line_profile_NII6583}, and Fig. \ref{line_profile_SII6730}) for DL Tau and CI Tau are considerably blue-shifted, with line center velocities much greater than 100 km~s$^{-1}$. The de-projected velocities show that the outflow/jet velocity, $v_{\mathrm{outflow/jet}}$, of DL Tau and CI Tau is greater than or very close to 200 km~s$^{-1}$, tracing strong outflows/jets, as shown in Table \ref{tab:jets_properties}. High-resolution spectra of DL Tau have shown a single low-velocity and a high-velocity component \citep{Banzatti2019}. The high-velocity component dominates the MUSE image. In addition, DL Tau hosts the most extended and collimated outflow/jet, reaching approximately 180 au. For CI Tau, a strong outflow/jet is reported for the first time: its morphology is different from that of DL Tau, as seen in Fig. \ref{DLTau_and_CITau_diagram} (right panel). Taking 200 km~s$^{-1}$ as an approximate velocity of the jets in DL Tau and CI Tau, and distances of 159 pc and 160 pc, respectively, the proper motion of their extended outflows/jets is about 0$\farcs$26 per year.
The emission lines of DS Tau, IP Tau, and IM Lup show rather minor line-of-sight velocity shifts of less than 100 km~s$^{-1}$ (except for DS Tau) and broader line widths considerably overlapping 0 km~s$^{-1}$, which could hide some LVC emission. DS Tau is the only source in this sample where the line emission appears red-shifted rather than blue-shifted. This red-shifted emission is consistent with the high disk inclination (see Table \ref{tab:disk_properties}) observed with ALMA, where the outflow/jet appears only from the back side of the disk (see Fig.\ref{fig:DLTau_jet_dust}). The IP Tau disk is a transition disk as seen in mm continuum emission \citep{Long_2018}. The MUSE data for IP Tau show a blue-shifted component in the [\ion{O}{i}]~$\lambda$6300 line with properties in between the HVC and the LVC, in good agreement with what has been observed in high-resolution spectral analyses \citep{Banzatti2019}. Interestingly, this line has been reported to vary over time \citep{2Simon_2016}. A recent work by \citet{Bohn_2022} found no potential misalignment between the inner and outer disk of IP Tau when analyzing VLT/GRAVITY and ALMA data. IM Lup is known to have a blue-shifted LVC-BC measured from the [\ion{O}{i}]~$\lambda$6300 line \citep{Fang2018}.
\subsection{Are emission lines resolved?}
We have classified the emission lines we report as resolved, marginally resolved, or unresolved (see Table \ref{tab:spectral_widths}), depending on the resolution limit of MUSE at the wavelength of a given emission line (Fig. \ref{fig:muse_spectral_resolution}). Resolving the line width implies that the emission line width is larger than the line-broadening of MUSE. The [\ion{S}{ii}]~$\lambda$6716 line, for example, is marginally equal to the line-broadening FWHM of MUSE (115 km~s$^{-1}$) in DL Tau (Fig.\ref{fig:DLTau_widths}); but in CI Tau it is resolved, given that the measured width of approximately 150 km~s$^{-1}$ (see Fig.\ref{fig:CITau_widths}) is greater than the instrumental resolution. Marginally resolved and unresolved lines, in particular, should be considered upper limits rather than precise estimates of the emission line widths \citep[e.g.,][]{Eriksson_2020}. The H$\mathrm{\alpha}$ line is mostly unresolved in almost all sources, except in CI Tau, because its measured width is less than the line-broadening FWHM of MUSE at H$\alpha$ (119 km~s$^{-1}$). The spectral capability of MUSE has made it possible to resolve the velocity components of the forbidden emission lines, offering a good venue for exploring the momentum transport as the disk loses mass; such an estimate depends on the line profile and the resolution limit of the instrument. A minimal classification sketch is given below.
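The following sketch illustrates the classification logic under stated assumptions: the two resolution-curve points at H$\alpha$ and [\ion{S}{ii}]~$\lambda$6716 are chosen to reproduce the FWHM values quoted above, the remaining grid points and the $\pm$10\% margins around the instrumental FWHM are hypothetical choices, not values tabulated in the MUSE User Manual.
\begin{verbatim}
# Sketch: classify a measured line width against the MUSE
# line-broadening FWHM interpolated from the resolution curve.
import numpy as np

C_KMS = 299792.458

# R(lambda) samples: 6563 A and 6716 A reproduce the 119 and
# 115 km/s FWHM quoted in the text; the end points are guesses.
grid_wave = np.array([6300.0, 6563.0, 6716.0, 7000.0])
grid_R = np.array([2370.0, 2520.0, 2610.0, 2770.0])

def classify(width_kms, wave):
    R = np.interp(wave, grid_wave, grid_R)
    fwhm_inst = C_KMS / R              # instrumental FWHM [km/s]
    if width_kms > 1.1 * fwhm_inst:    # 10% margins: assumption
        return "resolved"
    if width_kms >= 0.9 * fwhm_inst:
        return "marginally resolved"
    return "unresolved (upper limit)"

print(classify(150.0, 6716.0))   # CI Tau case -> resolved
print(classify(115.0, 6716.0))   # DL Tau case -> marginal
\end{verbatim}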
\subsection{Mass-loss rate}
\label{mass_loss_rate}
The mass-loss rate estimate combines the velocity and the length of the outflow/jet with the luminosity of the bright [\ion{O}{i}]$\lambda$6300 line, which traces its densest bulk. The [\ion{O}{i}]$\lambda$6300 line is optically thin and is used to trace the total mass in the outflow/jet \citep{Hartigan_1994, Giannini_2015, Nisini_2018, Fang2018}. The mass-loss rate is:
\begin{equation}
\label{eq:mass_loss}
\begin{split}
\dot{M}_{\mathrm{loss}} = C(T,n_{e}) \left( \frac{V_{\perp}}{100~\mathrm{km~s^{-1}}} \right)
\left( \frac{l_{\perp}}{100~\mathrm{au}} \right)^{-1} \\
\left( \frac{L_{\mathrm{6300}}}{L_{\odot}} \right)~ M_{\odot}~\mathrm{yr}^{-1}
\end{split}
,\end{equation}
where $V_{\perp}$ is the outflow/jet velocity de-projected from the LoS (see Table \ref{tab:jets_properties}). The length of the outflow/jet is $l_{\perp}=$d$_\mathrm{pc}~\theta,$ where d$_\mathrm{pc}$ is the distance of the source given in Table \ref{tab:disk_properties}, and $\theta$ is the length of the outflow/jet measured in arcsec (see Tables \ref{tab:jet_lengths} and \ref{tab:mass_loss}). The length of the outflow/jet is measured until the emission flux drops below 10$^{-17}$~erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$. $C(T,n_{e})$ is a coefficient of the [\ion{O}{i}]~$\lambda$6300 line transition from energy state 2$\,\to\,$1, which depends on the gas temperature via thermally excited collisions with electrons. At $T=10,000$ K, we used $C(T,n_{e})$ = 9.0$\times10^{-5}$ for $n_{e}$ = 5.0$\times10^{4}$~cm$^{-3}$, both values provided by \citet{Fang2018}. The resulting mass-loss rates of the disk-outflow/jet sources are listed in Table \ref{tab:mass_loss}. These mass-loss rates range from 1.2$\times10^{-8}$ to 4.6$\times10^{-7}$~$M_{\odot}~\mathrm{yr}^{-1}$, in agreement with values reported by \citet{Hartigan_1994, Hartigan_1995, Nisini_2018, Fang2018}. An advantage of the MUSE data is that the spatial extension of the outflow/jet can be measured. We use the distances of the sources specified in Table \ref{tab:disk_properties} to convert from angular length (arcsec) to physical length (au). We adopted the same scaling distance as in \citet{Fang2018}, 100 au, as a normalization factor for the extension of the outflow/jet. As noted, the mass-loss rate is proportional to the line luminosity of [\ion{O}{i}]~$\lambda$6300, which assumes an electron number density much lower than the critical density ($n_{\mathrm{H}}\sim$10$^{6}$~cm$^{-3}$) and an atomic abundance similar to the standard interstellar medium ($\sim$10$^{-4}$).
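A minimal numerical sketch of Eq. \ref{eq:mass_loss} follows; the velocity in the example call is only the approximate $\sim$200 km~s$^{-1}$ jet speed quoted in the text, so the printed value is merely order-of-magnitude consistent with Table \ref{tab:mass_loss}.
\begin{verbatim}
# Sketch of the mass-loss rate of Eq. (mass_loss), with C(T, n_e)
# from Fang et al. (2018) as quoted in the text.
def mass_loss_rate(L6300_Lsun, v_perp_kms, l_perp_au, C=9.0e-5):
    """Mdot in Msun/yr; velocity normalized to 100 km/s and jet
    length to 100 au, as in the text."""
    return C * (v_perp_kms / 100.0) / (l_perp_au / 100.0) * L6300_Lsun

# DL Tau: log L6300 = -2.20 and l_perp = 180.5 au from the table;
# 200 km/s is the approximate jet speed -> ~6e-7 Msun/yr, of the
# same order as the tabulated 4.6e-7 Msun/yr.
print(mass_loss_rate(10**-2.20, 200.0, 180.5))
\end{verbatim}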
\begin{table}[pht!]
\caption{Mass-loss rates for the disk-outflow/jet systems.}
\label{tab:mass_loss}
\small
\centering
\begin{tabular}{lcccc}
\hline \hline
Name & log $L_{\mathrm{6300}}$ & $\theta$ & $l_{\perp}$ & $\dot{M}_{\mathrm{loss}}$ \\
& ($L_{\odot}$) & (arcsec) & (au) & ($M_{\odot}~\mathrm{yr}^{-1}$) \\
\hline
DL Tau & -2.20 & 1.1 & 180.5 & 4.6$\times10^{-7}$ \\
CI Tau & -2.99 & 0.7 & 112.2 & 1.1$\times10^{-7}$\\
DS Tau & -3.02 & 0.9 & 142.6 & 6.5$\times10^{-8}$ \\
IP Tau & -3.82 & 0.5 & 64.7 & 1.2$\times10^{-8}$\\
IM Lup & -3.43 & 0.4 & 62.3 & 3.4$\times10^{-8}$ \\
\hline
\end{tabular}
\end{table}
\noindent
\subsection{Outflow/jet width in DL Tau}
\label{line_width}
Figure \ref{jet_width} shows the width of six emission lines of the outflow/jet in DL Tau. The intensity profiles are compiled by stacking the Gaussian fits of the intensity slices across the outflow/jet as a function of the distance from the outflow/jet axis. The outflow/jet widths in Fig. \ref{jet_width} represent the average over the extension, $\sim$0$\farcs$5, of the outflow/jet in DL Tau. We converted the width of the outflow/jet from pixels to angular distance using the MUSE spatial sampling of 0$\farcs$025 per pixel: [\ion{O}{i}]~$\lambda$6300 = 0$\farcs$16, [\ion{O}{i}]~$\lambda$6363 = 0$\farcs$12, [\ion{N}{ii}]~$\lambda$6583 = 0$\farcs$11, [\ion{S}{ii}]~$\lambda$6716 = 0$\farcs$31, [\ion{S}{ii}]~$\lambda$6730 = 0$\farcs$21, and H$\alpha$ = 0$\farcs$17 (a conversion sketch is given below). Each emission line width in au is shown in the legend box. The only emission line that is not included is the [\ion{N}{ii}]~$\lambda$6548 line, because it is very faint.
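The pixel-to-physical conversion is a small-angle computation; the sketch below makes it explicit (the au value printed for the [\ion{O}{i}]~$\lambda$6300 example follows from the quoted 0$\farcs$16 width and 159 pc distance, not from our legend values).
\begin{verbatim}
# Sketch: FWHM in pixels -> angular width -> physical width.
PIX_SCALE_ARCSEC = 0.025     # MUSE spatial sampling per pixel

def width_au(width_pix, distance_pc):
    width_arcsec = width_pix * PIX_SCALE_ARCSEC
    # small-angle relation: 1" at 1 pc subtends 1 au
    return width_arcsec, width_arcsec * distance_pc

# [O I] 6300 width of 0.16" at the DL Tau distance of 159 pc:
print(width_au(0.16 / PIX_SCALE_ARCSEC, 159.0))  # (0.16", ~25 au)
\end{verbatim}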
Each line profile traces different layers of the outflow/jet. The [\ion{O}{i}]~$\lambda$6363 and [\ion{N}{ii}]~$\lambda$6583 lines share very similar narrow widths and, compared to the other lines, both seem to trace the deepest layer of the outflow/jet. These two lines also have similar FWHM (see Table \ref{tab:jets_properties}), encouraging further studies of whether the [\ion{O}{i}]~$\lambda$6363 and [\ion{N}{ii}]~$\lambda$6583 emitting areas share the same physical properties. Interestingly, we see similar widths for H$\alpha$ and [\ion{O}{i}]~$\lambda$6300. On the other hand, we found [\ion{S}{ii}]~$\lambda$6716 to be the broadest line, followed by the [\ion{S}{ii}]~$\lambda$6730 line.
The brightest and most frequently detected lines are [\ion{O}{i}]~$\lambda$6300 and [\ion{N}{ii}]~$\lambda$6583, as a result of collisional excitation by electrons in shock fronts. The [\ion{S}{ii}]~$\lambda$6716/[\ion{S}{ii}]~$\lambda$6730 and [\ion{N}{ii}]~$\lambda$6583/[\ion{O}{i}]~$\lambda$6300 line ratios can help to estimate the physical conditions prevailing in these scenarios. Empirical fits \citep{Profaux_2014, Ellerbroeck_2014} and shock model analyses \citep{Hartigan_1994} suggest that these two line ratios arise from regions with electron densities, $n_{e}$, ranging from 10$^{3}$ to $10^{4}$~cm$^{-3}$, ionization fractions ranging from 0.2 to 10, shock velocities ranging from 60 to 80~km~s$^{-1}$, and magnetic field strengths ranging from 10 to $10^{4}~\mu$G. Studies of other line ratios also provide useful information on the physical conditions. Previous analyses of the [\ion{O}{i}] $\lambda$5577.3/$\lambda$6300.3 line ratio in the LVC emitting region have determined that these lines are emitted in rather dense gas ($n_{e}\geq10^{7}$ cm$^{-3}$), where the gas density relative to hydrogen, $n_{H}$, is $\geq10^{10}$ cm$^{-3}$. At temperatures between 5000 and 10$^{4}$ K, these estimates are based on models where the excitation is due to collisions with electrons in regions dominated by neutral hydrogen \citep{Natta_2014, 2Simon_2016, Fang2018}. \citet{Lavalley_2000} mention that, in the [\ion{N}{ii}]~$\lambda$6583/[\ion{O}{i}]~$\lambda$6300 line ratio diagnostic, the ratio increases with the ionization fraction in regions where $n_{e}\geq n_{H}$. \citet{Lavalley_2000} also stated that the [\ion{S}{ii}]~$\lambda$6716/[\ion{S}{ii}]~$\lambda$6730 line ratio is a well-known decreasing function of the electron density up to $n_{e}\geq n_{cr} \sim 10^{4}$ cm$^{-3}$, the critical density for collisions with electrons of [\ion{S}{ii}]. We encourage further analyses relating the ratios of these forbidden emission lines to the gas temperature, gas density, electron density, and ionization fraction characteristic of each region of the emitting area.
\begin{figure}[!htbp]
\centering
\includegraphics[width=9cm]{spectral_resolution2.pdf}
\caption{Spectral resolution, R=$\lambda/\Delta \lambda$, versus wavelength of the seven forbidden emission lines used in the analysis here. The blue curve shows the resolution curve of MUSE at increments of 500~\AA. Each R value for each line is linearly interpolated from the MUSE resolution curve. The values were obtained from the MUSE User Manual (version 11.4, Fig.18).}
\label{fig:muse_spectral_resolution}
\end{figure}
\section{Discussion}
\label{sect_4}
The forbidden emission lines analyzed here originate from the center of the protoplanetary disks. Their shape indicates the presence of outflow/jet systems tracing the ongoing processes of mass accretion onto the star and mass removal from the inner disk. A full analysis of the mass accretion from the lines is beyond the scope of the current work. However, we are motivated to continue this analysis and compare the kinematics with the relative flux between HVC and LVC, using the high-resolution spectra from \citet{Banzatti2019}.
Among the eight T Tauri systems, the averaged position angles of the outflow/jet, $\overline{\mathrm{PA}}_\mathrm{outflow/jet}$, for DL Tau, DS Tau, CI Tau, IP Tau, and IM Lup (see Table \ref{tab:disk_properties}) show no tendency of misalignment between the inner disk and the outer disk. Outflows/jets with low signal-to-noise levels and compact extensions close to the image center are challenging for the Gaussian fit method. Due to these effects, the orthogonal profiles of the outflow/jet can take non-Gaussian shapes which, combined with the variable rms in the image center, produce a systematic underestimation of the PA$_{\mathrm{outflow/jet}}$ uncertainties. For IM Lup, determining a PA$_{\mathrm{outflow/jet}}$ from the [\ion{O}{i}]~$\lambda$6300 and [\ion{N}{ii}]~$\lambda$6583 lines was challenging due to the systematic issues mentioned above, resulting in a $\Delta$PA$_{\mathrm{outflow/jet}}=6^{\circ}$ between the two emission lines. However, the PA$_{\mathrm{outflow/jet}}$ values from [\ion{O}{i}]~$\lambda$6300 and from [\ion{N}{ii}]~$\lambda$6583 trace the same physical outflow/jet axis. The PA values averaged over both emission lines for IM Lup agree within the uncertainties; therefore, no misalignment is detected between the inner and outer disk. Nevertheless, we encourage follow-up observations with higher signal-to-noise to improve the PA$_{\mathrm{outflow/jet}}$ estimates for IM Lup. If there were a misalignment between the inner and outer disk, it would be an order of magnitude smaller than the values reported for other sources: $72^{\circ}$ for HD 100453 \citep{Benisty_2017}, $30^{\circ}$ for HD 143006 \citep{Benisty_2018}, $30^{\circ}$ for DoAr 44 \citep{Casassus_2018}, and $70^{\circ}$ for HD 142527 \citep{Marino_2015}. One piece of observational evidence supporting the absence of misalignment in IM Lup is the lack of shadows in scattered light \citep{Avenhaus_2018}. For IP Tau, our result of no misalignment agrees with what was found by \citet{Bohn_2022} when performing a parametric model fit to the squared visibilities. The estimation of PA$_{\mathrm{outflow/jet}}$ for different emission lines is a reliable way to determine the orientation of the inner disk and compare it to PA$_{\mathrm{dust}}$. The two most frequently detected lines in our sample are [\ion{O}{i}]~$\lambda$6300 and [\ion{N}{ii}]~$\lambda$6583 (see Table \ref{tab:jets_properties}). An interesting future question is how the emission lines observed in the inner parts of the disk are aligned with the stellar rotation axis; spectro-polarimetric observations could help provide an answer \citep[e.g.,][]{Donati_2019}. Our understanding of the nature of the emission lines and of their dependence on stellar properties is limited, as it is unclear what the strength of the magnetic field is in the inner disk of T Tauri sources.
\begin{figure} [htp!]
\centering
\includegraphics[width=9cm]{jet_width_DLTau.pdf}
\caption{Line-width profile of the emission lines analyzed in DL Tau. Each line profile is the result of a compilation of the Gaussian fits to intensity slices across the outflow/jet and their maximum intensity peaks are centered at the jet axis. The width of the outflow/jet, in au, is shown in the legend box.}
\label{jet_width}
\end{figure}
The line profiles help characterize the velocity components that describe the outflow/jet emission. We confirm strong blue-shifted outflows/jets in DL Tau and CI Tau, with $v_{\mathrm{outflow/jet}}$ greater than or approximately equal to 200~km~s$^{-1}$ from their emission lines. DS Tau also shows an HVC, but because of its high inclination (i=65$^{\circ}$), close to an edge-on orientation, an LVC can be embedded in the HVC \citep[see also][]{Banzatti2019}. The line profiles of IP Tau and IM Lup show weak HVC emission and also encompass LVC contributions, as their profiles approach 0 km~s$^{-1}$. Although we are limited by the resolution of MUSE in analyzing the LVC, high-resolution studies have confirmed that DL Tau, DS Tau, and IM Lup show the presence of disk winds in the [\ion{O}{i}]~$\lambda$6300 line \citep{Banzatti2019}.
Furthermore, the emission line profiles of the maximum intensity peaks for DL Tau probe different layers in the outflow/jet, suggesting that the temperature, gas density, electron density, and ionization fraction differ depending on the proximity to the jet center. The H$\mathrm{\alpha}$ emission is seen throughout the disk surface in all disks except DL Tau. Although we attribute this to a systematic issue (i.e., imperfections in the PSF at H$\alpha$), we do not discard the possibility of a highly ionized atmosphere above the surface of the disk.
Despite the low signal-to-noise level in IP Tau, we found some emission in both components: an extremely weak signal in the red-shifted component and in the blue-shifted one, together with the characteristic wide inner gap in the dust continuum emission. We stress that the H$\mathrm{\alpha}$ emission should be interpreted with caution before drawing the definitive conclusion that these are real velocity shifts.
The mass-loss rates calculated for the outflows/jets are on the order of 10$^{-8}$--10$^{-7}$~$M_{\odot}~\mathrm{yr}^{-1}$ (see Table \ref{tab:mass_loss}), a result that has also been found in previous works \citep[e.g.,][]{Hartigan_1995, Nisini_2018,Fang2018}. In the outer parts, where the emission of the outflow/jet is weaker, the length could be better constrained with longer integration times, which would lead to higher sensitivity. As a result, deeper observations could potentially detect longer outflow/jet lengths. The mass-loss rate formula also assumes that the peak of the emission line occurs at the shock front, where the velocity of the shock is used instead of the intrinsic velocity of the outflow/jet. \citet{Hartigan_1994} computed mass-loss rates both in the postshock region and at the shock front and stressed that these values should not differ greatly from those based on electron densities and ionization fractions. The [\ion{O}{i}]~$\lambda$6300 line is seen in most sources, probing gas slightly closer to the shock front than [\ion{S}{ii}]~$\lambda$6730, whose critical density is about 100 times lower \citep{Fang2018}. Determining the true angular momentum transport due to the outflow mass loss is very difficult, as it requires knowledge of the full 3D velocity vector and the mass content in the outflow. The next step will be to improve the modeling of the velocity, density, and temperature structure of the outflows to generate synthetic line observations and compare them with actual observations. With the help of Fig. \ref{jet_width}, further analyses should compare the environments from which these lines are emitted. Physical parameters such as the ionization fraction, electron density, gas temperature, and rotational velocity are key to modeling the production of winds in disks.
\subsection{Jet wiggles as a potential sign of precession}
\label{jet_precession}
The outflows/jets show different morphologies. When plotting the Gaussian centers in Fig. \ref{fig:joydivision}, it seems that in jets that are less collimated, the amplitude of the wiggles is higher and asymmetric. As the wiggles do not show any clear periodic pattern, they are not associated with any inner disk misalignment. Instead, they are characteristic of the jet launching and evolution mechanism. Investigating whether there are photometric variations in the inner disks of these sources could help us better understand whether a body might be orbiting close to the central star \citep{Herbst_2001, Cody_2018}.
Fully 3D simulations of a MHD jet launching in young binaries, considering tidal forces \citep{2015ApJ...814..113S,2018ApJ...861...11S}, have found that the inner disks will be warped and that the jet axis, which remains perpendicular to the disk, is subsequently re-aligned as a consequence of the disk precession.
The derived angle of the jet precession cone is small, about $8\degr$, but it could be much smaller -- perhaps approaching the differences between the position angles identified here for T Tauri systems. However, their simulation was run for only one binary orbit and thus could not follow a whole precession period. Similar simulations, run in hydrodynamics but with forced precession of the jet nozzle, were applied, for example, to the precessing jet source SS\,433 \citep{2014A&A...561A..30M}. These simulations show that the expected sinusoidal pattern of jet propagation varies along the jet, with larger amplitude and wavelength at larger distances, similar to our data indicating a jet cone opening up with distance. Furthermore, more complex model fitting may be necessary to derive a conclusive statement about precession.
As we do not know the magnetic field structure and strength, we cannot make a definitive statement about the viability of kink modes in these jets. As an alternative to a precession of the jet axis, the kink instability could cause the wiggling jet structure. Depending on the jet magnetization, this instability limits the expansion of the jet \citep{2008A&A...492..621M}.
\section{Conclusion}
\label{sect:sect_5}
We analyzed spatially resolved emission lines, [\ion{O}{i}]~$\lambda\lambda$6300, 6363, [\ion{N}{ii}]~$\lambda\lambda$6548, 6583, H$\mathrm{\alpha}$, and [\ion{S}{ii}]\,$\lambda\lambda$6716, 6730, in five T Tauri stars hosting outflows/jets: DL Tau, CI Tau, DS Tau, IP Tau, and IM Lup. We conducted an analysis to estimate the PA$_{\mathrm{outflow/jet}}$ of the emission lines coming from the innermost region of the disk. The average of the position angles for the different emission lines was compared with the PA$_{\mathrm{dust}}$ of the outer disk obtained in a previous work. We also performed a simple kinematic analysis to describe the velocity components of the line profiles and the line widths of the outflow/jet in DL Tau. Finally, we calculated the mass-loss rate of the outflow/jet from the [\ion{O}{i}]~$\lambda$6300 line, based on physical parameters derived in this work. Our main conclusions are summarized as follows:
\begin{enumerate}
\item The PA$_{\mathrm{outflow/jet}}$ values are in good agreement with the previously determined PA$_{\mathrm{dust}}$, with differences of about 1$^{\circ}$, except for IM Lup, for which the difference is 2.1$^{\circ}$. Therefore, we do not find any evidence for a potential misalignment of the inner disk with respect to the outer disk.
\item The DL Tau and CI Tau emission lines are strongly blue-shifted, showing velocity profiles greater than 200 km~s$^{-1}$ associated with strong outflows/jets. The IP Tau, DS Tau, and IM Lup emission lines are less shifted, probing low-velocity components more associated with outflows and winds. The velocity components exhibit different outflow/jet and wind morphologies in these systems.
\item The line widths of the emission lines in DL Tau probe different layers in the outflow/jet, with the [\ion{O}{i}]~$\lambda6363$ and [\ion{N}{ii}]~$\lambda6583$ lines probing the deepest layer and the [\ion{S}{ii}]~$\lambda6716$ and [\ion{S}{ii}]~$\lambda6730$ lines probing the widest layer. The [\ion{O}{i}]~$\lambda6363$ and [\ion{N}{ii}]~$\lambda6583$ lines have very similar widths, as do the H$\alpha$ and [\ion{O}{i}]~$\lambda6300$ lines, hinting at a certain correlation in the ionization fraction and electron density functions, as mentioned in \citet{Lavalley_2000}.
\item Our estimated mass-loss rates, which include the measurement of the length of the outflow/jet, range from 1.2$\times10^{-8}$ to 4.6$\times10^{-7}$~$M_{\odot}~\mathrm{yr}^{-1}$. These values are comparable to those of previous works for other sources and instruments.
\end{enumerate}
\begin{acknowledgements}
We would like to thank the referee for providing useful comments that improved the quality and understanding of this work. This work is supported by the \emph{European Research Council (ERC)} project under the European Union's Horizon 2020 research and innovation programme number 757957. Thanks to Paola Pinilla for providing the dust continuum data. Also, thanks to Matthias Samland and Sebastiaan Haffert for useful discussions about the MUSE instrument and data analysis. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2016.1.01164.S, ADS/JAO.ALMA\#2016.1.00484.L, ADS/JAO.ALMA\#2015.1.00888.S, ADS/JAO.ALMA\#2017.A.00006.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. N.K. acknowledges support provided by the Alexander von Humboldt Foundation in the framework of the Sofja Kovalevskaja Award endowed by the Federal Ministry of Education and Research. Th.H. acknowledges support from the European Research Council under the Horizon 2020 Framework Program via the ERC Advanced Grant Origins 832428. G-DM acknowledges the support of the DFG priority program SPP 1992 ``Exploring the Diversity of Extrasolar Planets'' (KU~2849/7-1 and MA~9185/1-1). G-DM also acknowledges support from the Swiss National Science Foundation under grant BSSGI0$\_$155816 ``PlanetsInTime''. Parts of this work have been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation. SK acknowledges funding from UKRI in the form of a Future Leaders Fellowship (grant no. MR/T022868/1). We would like to thank the matplotlib team \citep{Hunter_2007} for the high-quality tool used to visualize the data, and \citet{Piqueras_2017} for the mpdaf software used to analyze the MUSE data.
\end{acknowledgements}
Nowadays, real-time systems are becoming more and more complex and are often
critical. Generally, these systems consist of several tasks that are timely
dependent, interacting and sharing one or more resources (e.g processors,
memory). Consequently, the correctness proofs of such systems are demanding
much theory regarding their increasing complexity. We need, for instance, to
consider formal models requiring the specification of time preemption;
concept where execution of a task may be stopped for a while and later
resumed at the same point. This notion of suspension implies to extend the
semantics of timed clocks in order to handle such behaviors. For this
effect, the concept of \textit{stopwatch} has been introduced while many
models have been defined, as for instance, hybrid automata ($LHA$) \cite%
{Alur}, stopwatch automata ($SWA$) \cite{Cassez}, Network of Stopwatch Automta (NSA) \cite{Glon2018}, and timed automata with priorities \cite{Pimk2021}. Time Petri nets ($TPN$)
have also been considered in several works including
\textit{Preemptive}-$TPN $ \cite{Bucci} \cite{Nigro2012} \cite{Abd2006}, \textit{Stopwatch-} $TPN$ \cite{Bert-grid},
\textit{Inhibitor-}$TPN$ \cite{IHTPN}, \textit{Scheduling-}$TPN$
\cite{LIme2003} and of unfolding safe parametric stopwatch TPN (PSwPNs)\cite{jard2013}. For example, in \cite{IHTPN} the authors defined the
\textit{ITPN} (\textit{Inhibitor arc Time Petri Nets) }model, wherein the
progression and the suspension of time is driven by using standard and
inhibitor arcs.
However, whatever the model we consider, the time analysis of the system is
basically the same, as it involves the investigation of a part of, or the whole
set of, its reachable states, which determines its state space. As the state
space is generally infinite due to the dense time semantics, we therefore need
to compute finite abstractions of it that preserve the properties of interest.
In these abstractions, states are grouped together in order to obtain a finite
number of groups. These groups of states are, for instance, regions and zones
for timed automata, or state classes \cite{BerDiaz} for time Petri nets. The
states pertaining to each group can be described by a system of linear
inequalities, noted $D$, whose set of solutions determines the state space of
the group. If the model does not use any stopwatch, then $D$ is of a particular
form, called \textit{DBM} \textit{(Difference Bound Matrix)} \cite{Dill}.
However, when using stopwatches, the system $D$ becomes more complex and no
longer fits into a \textit{DBM}. In actual fact, $D$ takes a general polyhedral
form whose \textit{canonical} form \cite{Avis} is given as a conjunction of two
subsystems $D=\overrightarrow{D}\wedge \widehat{D},$ where
$\overrightarrow{D}$ is a \textit{DBM} system and $\widehat{D}$ is a polyhedral
system that cannot be encoded with \textit{DBMs}.
The major shortcoming of manipulating polyhedra is the performance loss in
terms of computation speed and memory usage. Indeed, the complexity of solving
a general polyhedral system is exponential in the worst case, while it is
polynomial for a \textit{DBM} system. Furthermore, reachability is proved to be
undecidable for both $\mathit{SWA}$ and $\mathit{LHA}$ \cite{Cassez}
\cite{Alur} \cite{Henz}, as well as for $TPN$ extended with stopwatches
\cite{Bert-grid} \cite{magnin2009}. As a consequence, the finiteness of the
exact state class graph construction cannot be guaranteed even when the net is
bounded.
In order to speed up the graph computation, an idea is to leave out the
subsystem $\widehat{D}$ and to keep only the system $\overrightarrow{D}$, thus
overapproximating the space of $D$ by the \textit{DBM} containing it; see
\cite{Bucci}\cite{IHTPN}\cite{abdelli} for details. The obvious consequence of
the overapproximation is that we add states to the computed group that are not
actually reachable. Worse, this could prevent the graph computation from
terminating, by making the number of computed markings unbounded. Conversely,
this can also make the computation of the approximated graph terminate, by
cutting off the polyhedral inequalities that prevent convergence.
Furthermore, in order to strike a compromise between both techniques, a hybrid
approach has been proposed by \textit{Roux et al.} \cite{ROUX Magnat}. The
latter puts forward a sufficient condition that determines the cases where the
subsystem $\widehat{D}$ becomes redundant in $D$. Hence, the combination of
both \textit{DBM} and polyhedral representations makes it possible to build the
exact state class graph faster and with lower memory usage compared to the
polyhedra based approach \cite{LIme2003}. More recently, \textit{Berthomieu et
al.} have proposed an overapproximation method based on a quantization of the
polyhedral system $D$ \cite{Bert-grid}. The latter approach yields the exact
graph in almost all cases, faster than the hybrid approach \cite{ROUX Magnat}.
Nevertheless, this technique is more costly in terms of computation time and
memory usage compared to the \textit{DBM} overapproximation, although it yields
much more precise graphs.
Different algorithms \cite{abdelli}\cite{IHTPN}\cite{Bucci} have been defined
in the literature to compute the \textit{DBM} overapproximation of a class. All
these approaches are assumed, theoretically, to compute the tightest
\textit{DBM} approximation of $D$. However, we have shown in \cite{abdelli}
that by avoiding the computation of the minimal form of the \textit{DBM}
systems, our algorithm succeeds in computing straightforwardly the reachable
systems in their normal form. We thereby avoid the computation and the
manipulation of the intermediary polyhedra. Moreover, the effort needed for the
normalization and the minimization of the resulting \textit{DBM} system is
removed. This has greatly improved the implementation and the computation of
the \textit{DBM} overapproximated graph.
Although the cost of computing the \textit{DBM} overapproximation is low
compared to the exact construction, it remains that in certain cases the
approximation is too coarse to restore the properties of interest, especially
quantitative properties \cite{Abdelli un}. In actual fact, the bigger the
approximated graphs, the more the approximation loses precision and therefore
includes false behaviors that may skew the time analysis of the system. Many of
these false behaviors are generated in the \textit{DBM} overapproximation
because the computation of a \textit{DBM} class is performed recursively only
from its direct predecessor class. We think that some time information standing
in earlier classes of the firing sequence could be used to tighten the
approximation of the class to compute. In actual fact, the \textit{DBM}
overapproximations defined in \cite{abdelli}\cite{IHTPN}\cite{Bucci} are
assumed to be the tightest with reference to the polyhedral system $D$ computed
in the context of the approximated graph. The latter may not be equal to the
polyhedral system resulting from firing the same sequence in the exact graph.
As polyhedral constraints are removed systematically each time they appear in
earlier classes of the firing sequence, the resulting $DBM$ overapproximation
loses precision. Therefore, the \textit{DBM} overapproximation could be
tightened further if we could restore some of the time information encoded by
the polyhedral constraints removed in earlier classes of the firing sequence.
We explore in this paper a novel approach to compute a more precise
\textit{DBM} overapproximation of the state space of real-time preemptive
systems modeled with the $\mathit{ITPN}$ model. To this effect, we extend the
expression of a class with the time distance system that encodes the
quantitative properties of firing subsequences. The time distance system has
already been considered in the computation of the state space of several timed
Petri net extensions, such as \cite{Bouch} \cite{TSPN}. This system records
relevant time information that is exploited to tighten still further the
\textit{DBM} overapproximation of a class. Although the cost of computing the
latter is slightly higher than with classical \textit{DBM} overapproximation
techniques \cite{abdelli}\cite{IHTPN}\cite{Bucci}, the global effort needed to
compute the final \textit{DBM} system remains polynomial. Consequently, the
resulting approximated graphs are very compact, sometimes even equal to the
exact ones, while improving their computation times by far. Moreover, the
obtained graphs are more suitable for restoring quantitative properties of the
model than other constructions. To advocate the benefits of this graph
approximation, we report experimental results comparing our graph constructions
with other fellow approaches.
The remainder of this paper is organized as follows: In Section 2, we present
the syntax and the formal semantics of the $ITPN$ model. In Section 3, we lay
down and discuss through an example the algorithms that build the exact graph
and the \textit{DBM} overapproximation of an $ITPN$. In Section 4, we introduce
formally our overapproximation and show how the approximated graph is built. In
Section 5, we report the experimental results of the implementation of our
algorithms and compare them with those of other graph constructions.
\section{Time Petri Net with Inhibitor Arcs}
\textit{Time Petri nets with inhibitor arcs} ($ITPN$) \cite{IHTPN} extend
time Petri nets \cite{Merlin} with \textit{stopwatch inhibitor arcs}. Formally,
an $ITPN$ is defined as follows:
\begin{definition}
{\small {An $\mathit{ITPN}$\textit{\ }is given by the tuple $%
(P,T,B,F,M^{0},I,IH)$ where: $P$ and $T$ are respectively two nonempty sets
of places and transitions; $B$ is the backward incidence function \footnote{%
{\small $%
\mathbb{N}
$ denotes the set of nonnegative integers. In the graphical representation, we
represent only arcs of non-null valuation, and those valued 1 are implicit.}}
: $B:P\times T\longrightarrow
\mathbb{N}
=\{0,1,2,..\};$ $F$ is the forward incidence function $F:P\times
T\longrightarrow
\mathbb{N}
$ $;$ $M^{0}$ is the initial marking mapping $M^{0}:P\longrightarrow
\mathbb{N}
$ ; $I$ is the delay mapping $I:T\longrightarrow
\mathbb{Q}
^{+}\times
\mathbb{Q}
^{+}\cup \left\{ \infty \right\} ,$\ where $%
\mathbb{Q}
^{+}$ is the set of non negative rational numbers. We write $%
I(t)=[tmin(t),tmax(t)]$\ such that $0\leq tmin(t)\leq tmax(t)$ $;$ $%
IH:P\times T\longrightarrow
\mathbb{N}
$ \ is the inhibitor arc function; there is an inhibitor arc connecting the
place $p$ to the transition $t,$ if $IH(p,t)\neq 0.$ }}
\end{definition}
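For illustration, a minimal encoding of this definition is sketched below (a non-authoritative Python sketch; the net fragment only records the arcs, intervals, and marking of \textit{Fig 1} that are explicitly discussed in the text, and the input arc of $t_{1}$ is an assumption).
\begin{verbatim}
# Minimal sketch of an ITPN structure following Definition 1.
# B, F, IH are sparse dicts keyed by (place, transition);
# missing keys stand for a null valuation.
from dataclasses import dataclass, field

@dataclass
class ITPN:
    places: set
    transitions: set
    B: dict                 # backward incidence (p, t) -> int
    F: dict                 # forward incidence  (p, t) -> int
    M0: dict                # initial marking     p -> int
    I: dict                 # static intervals    t -> (tmin, tmax)
    IH: dict = field(default_factory=dict)  # inhibitor arcs

# Partial fragment of the net of Fig. 1 (only what the text states;
# B(p1, t1) = 1 is an assumption made for the example to run):
net = ITPN(
    places={"p1", "p2", "p3", "p4", "p5", "p6", "p7"},
    transitions={"t1", "t3", "t4"},
    B={("p1", "t1"): 1, ("p3", "t3"): 1, ("p4", "t4"): 1},
    F={("p2", "t4"): 1, ("p7", "t4"): 1},
    M0={"p1": 1, "p3": 1, "p4": 1},
    I={"t1": (3, 3), "t3": (2, 4), "t4": (0, 2)},
    IH={("p7", "t3"): 1},
)
\end{verbatim}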
\begin{figure}[h]
\centering\includegraphics[width=7 cm]{SFM1.eps}
\caption{An $ITPN$ model }
\label{Fig1}
\end{figure}
For instance, let us consider the $ITPN$ model shown in \textit{Fig 1}.
Therein, the \textit{inhibitor arc} is the arc ended by a circle that connects
the place $p_{7}$ to the transition $t_{3}$. Initially, the place $p_{3}$ is
marked but the place $p_{7}$ is not; hence $t_{3}$ is enabled but not
inhibited. Therefore, the clock of $t_{3}$ is progressing, as is the case for
$t_{4}$, which is also enabled for the initial marking. However, the firing of
the transition $t_{4}$ consumes the token in the place $p_{4}$ and produces one
token in $p_{2}$ and another in $p_{7}$. Therefore, the inhibitor arc becomes
activated and the clock of $t_{3}$ is suspended ($t_{3}$ is inhibited). This
suspension lasts as long as $p_{7}$ remains marked. The formal semantics of the
$ITPN$ model is introduced hereafter.
Let $RT:=(P,T,B,F,M^{0},I,IH)$ be an \textit{ITPN. }
\begin{description}
\item[-] We call a \textit{marking} the mapping, noted $M,$ which associates
with each place a number of tokens: $M:P\rightarrow
\mathbb{N}
.$
\item[-] A transition $t$ is said to be \textit{enabled} for the marking $M,$
if $\forall p\in P,B(p,t)\leq M(p)$; the number of tokens in each input
place of $t$ is greater than or equal to the valuation of the arc connecting this
place to the transition $t$. Thereafter, we denote by $Te(M)$ the set of
transitions \textit{enabled} for the marking $M$.
\item[-] A transition $t$ is said to be \textit{inhibited }for a marking $M,$
if it is \textit{enabled} and if there exists an inhibitor arc connected to $%
t,$ such that the marking satisfies its valuation ($t\in Te(M))\wedge
\exists p\in P,0<IH(p,t)\leq M(p)$. We denote by $Ti(M)$ the set of
transitions that are \textit{inhibited} for the marking $M$.
\item[-] A transition $t$ is said to be \textit{activated }for a marking $M,$
if it is enabled and not inhibited, ($t\in Te(M))\wedge $ $\ (t\notin Ti(M))$%
; we denote by $Ta(M)$ the set of transitions that are \textit{activated}
for the marking $M$.
\item[-] Let $M$ be a marking ; two transitions $t_{i}$ and $t_{j}$ enabled
for $M$ are said to be \textit{conflicting} for $M$, if $\exists p\in
P,\quad B(p,t_{i})+B(p,t_{j})>M(p).$
\item[-] We note hereafter by $Conf(M)$ the relation built on $Te(M)^{2}$
such that $(t_{1},t_{2})\in Conf(M),$ iff $t_{1}$ and $t_{2}$ are in
conflict for the marking $M$.
\end{description}
For instance, let us consider again the \textit{$ITPN$} of \textit{Fig 1}.
Its initial marking is equal to $M^{0}:\left\{ p_{1},p_{3},p_{4}\right\}
\rightarrow 1;\left\{ p_{2},p_{5},p_{6},p_{7}\right\} \rightarrow 0.$ The
sets of enabled, inhibited, and activated transitions for $M^{0}$\ are
respectively $Te(M^{0})=\left\{ t_{1},t_{3,}t_{4}\right\} ,$\ $%
Ti(M^{0})=\varnothing ,$\ and $Ta(M^{0})=Te(M^{0}).\medskip $
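These sets can be computed directly from the definitions above; the following sketch (reusing the hypothetical \texttt{ITPN} structure sketched earlier) reproduces the example values for $M^{0}$.
\begin{verbatim}
# Sketch of the enabled/inhibited/activated sets (Definition 2).
def Te(net, M):
    return {t for t in net.transitions
            if all(net.B.get((p, t), 0) <= M.get(p, 0)
                   for p in net.places)}

def Ti(net, M):
    return {t for t in Te(net, M)
            if any(0 < net.IH.get((p, t), 0) <= M.get(p, 0)
                   for p in net.places)}

def Ta(net, M):
    return Te(net, M) - Ti(net, M)

# With the fragment of Fig. 1: Te = {t1, t3, t4}, Ti = {}, Ta = Te.
print(Te(net, net.M0), Ti(net, net.M0), Ta(net, net.M0))
\end{verbatim}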
\begin{remark}
We assume in the sequel a monoserver semantics, which means that no
transition can be enabled more than once for any marking.
\end{remark}
We define the semantics of an $ITPN$ as follows:
\begin{definition}
{\small {\ The semantics of an $ITPN$ is defined as a LTS (labeled
transition system), $ST=(\Gamma ,e^{0},\rightarrow ),$ such that: }}
\begin{itemize}
\item {\small $\Gamma $ is the set of reachable states: Each state, noted $%
e, $ pertaining to $\Gamma $ is a pair $(M,V)$ where $M$\ is a marking and $%
V $\ is a valuation function that associates with each enabled transition $t$
of $Te(M)$ a time interval that gives the range of relative times within
which $t $ can be fired. Formally we have : $\forall t$\ $\in Te(M),\quad
V(t):=[x(t),y(t)]$ }
\item {\small $e^{0}=(M^{0},V^{0})$ is the initial state, such that: $%
\forall t\in Te(M^{0}),\quad V^{0}(t):=I(t):=[tmin(t),tmax(t)].$ }
\item {\small $\rightarrow \in \Gamma \times (T\times
\mathbb{Q}
^{+})\times \Gamma $ \ is a relation, such that $((M,V),(t_{f},\underline{%
t_{f}}),(M^{\uparrow },V^{\uparrow }))\in \rightarrow ,$ iff: }
\begin{description}
\item[(i)] {\small $t_{f}\in Ta(M).$ }
\item[(ii)] {\small $x(t_{f})\leq \underline{t_{f}}\leq \underset{\forall
t\in Ta(M)}{MIN}\left\{ y(t)\right\} .$ }
\end{description}
{\small and we have: }
{\small $\forall p\in P,$ $M^{\uparrow
}(p):=M(p)-B(p,t_{f})+F(p,t_{f})\smallskip .$ }
{\small $\forall t\in Te(M^{\uparrow })$ }
{\small if $t$ $\notin New(M^{\uparrow })$: }
{\small $\quad
\begin{tabular}{ll}
$\lbrack x^{\uparrow }(t),\ y^{\uparrow }(t)]:=[MAX(0,\ x(t)-\underline{t_{f}%
}),$ $\ y(t)-\underline{t_{f}}]$ & $t\in Ta(M)$ \\
$\lbrack x^{\uparrow }(t),\ y^{\uparrow }(t)]:=[x(t),$ $\ y(t)]$ & $t\in
Ti(M)$%
\end{tabular}%
\medskip $ }
{\small if $t$ $\in New(M^{\uparrow })$\quad }
{\small $\quad \lbrack x^{\uparrow }(t),\ y^{\uparrow }(t)]:=I(t)=[tmin(t),\
tmax(t)]$ \smallskip }
\begin{itemize}
\item {\small where $New(M^{\uparrow })$ denotes the set of transitions
\textit{newly enabled} for the marking $M^{\uparrow }.$ These transitions
are those enabled for $M^{\uparrow }$\ and not for $M$, or those enabled for
$M^{\uparrow }$\ and $M$ but are conflicting with $t_{f}$ \ for the marking $%
M$. \ Otherwise, an enabled transition which does not belong to $%
New(M^{\uparrow })$ is said to be \textit{persistent}. }
\end{itemize}
\end{itemize}
\end{definition}
If $t$ is a transition enabled for the state $e$, we note \underline{$t$}
the clock associated with $t$ that takes its values in $%
\mathbb{Q}
^{+}.$ \underline{$t$} measures the residual time of the transition $t$
relative to the instant at which the state $e$ is reached. The time
progresses only for activated transitions, whereas it is suspended for
inhibited transitions. Therefore, a transition $t_{f}$ can be fired at
relative time $\underline{t_{f}}$ from a reachable state $e,$ if $(i)$ $%
t_{f} $ is \textit{activated} for the marking $M$, and if $(ii)$ the time
can progress within the firing interval of $t_{f}$ without overtaking those
of other activated transitions. After firing $t_{f}$ the reachable state,
noted $e^{\uparrow },$ is obtained:
\begin{itemize}
\item by consuming a number of tokens in each input place $p$ of $t_{f}$
(given by the value $B(p,t_{f})$), and by producing a number of tokens in
each output place $p$ of $t_{f}$ (given by the value $F(p,t_{f})$);
\item by shifting the interval of a persistent activated transition with the
value of the firing time of $t_{f}$. However, the intervals of persistent
inhibited transitions remain unchanged. Finally, a newly enabled transition
is assigned its static firing interval.
\end{itemize}
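To make the firing rule concrete, the following sketch (under the same hypothetical encoding as above) implements the marking and interval updates of Definition 3; the monoserver assumption of Remark 1 is taken for granted.
\begin{verbatim}
# Sketch of the firing rule of Definition 3: fire tf at relative
# time theta from state (M, V), where V maps t -> [x(t), y(t)].
def newly_enabled(net, M, M1, tf):
    def conflicts(t):
        return any(net.B.get((p, t), 0) + net.B.get((p, tf), 0)
                   > M.get(p, 0) for p in net.places)
    return {t for t in Te(net, M1)
            if t not in Te(net, M) or t == tf or conflicts(t)}

def fire(net, M, V, tf, theta):
    assert tf in Ta(net, M)                           # condition (i)
    assert V[tf][0] <= theta <= min(V[t][1]           # condition (ii)
                                    for t in Ta(net, M))
    M1 = {p: M.get(p, 0) - net.B.get((p, tf), 0)
             + net.F.get((p, tf), 0) for p in net.places}
    new = newly_enabled(net, M, M1, tf)
    V1 = {}
    for t in Te(net, M1):
        if t in new:                   # newly enabled: static interval
            V1[t] = list(net.I[t])
        elif t in Ta(net, M):          # persistent activated: shift
            V1[t] = [max(0.0, V[t][0] - theta), V[t][1] - theta]
        else:                          # persistent inhibited: frozen
            V1[t] = list(V[t])
    return M1, V1
\end{verbatim}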
Similarly as for a $TPN,$ the behavior of an $ITPN$\ can be defined as a
sequence of pairs $(t_{f}^{i},\underline{t_{f}^{i}})$, where $t_{f}^{i}$ is
a transition of the net and $\underline{t_{f}^{i}}$ $\in
\mathbb{Q}
^{+}$. Therefore, the sequence $\mathit{S}^{\mathit{\ast }}=((t_{f}^{1},%
\underline{t_{f}^{1}}),(t_{f}^{2},\underline{t_{f}^{2}}),..,(t_{f}^{n},%
\underline{t_{f}^{n}}))$\ denotes that $t_{f}^{1}$\ is firable after $%
\underline{t_{f}^{1}}$\ time units, then $t_{f}^{2}$\ is fired after $%
\underline{t_{f}^{2}}$\ time units and so on, such that $t_{f}^{n}$ is fired
after the absolute time $\sum_{i=1}^{n}\underline{t_{f}^{i}}.$ Moreover, we
often express the behavior of the net as an \textit{untimed sequence, }%
denoted by\textit{\ }$\mathit{S}$\textit{,} obtained from a timed sequence $%
\mathit{S}^{\mathit{\ast }}$ by removing the firing times: If $\mathit{S}^{%
\mathit{\ast }}=((t_{f}^{1},\underline{t_{f}^{1}}),(t_{f}^{2},\underline{%
t_{f}^{2}}),..,(t_{f}^{n},\underline{t_{f}^{n}})),$ then $\mathit{S}%
=(t_{f}^{1},t_{f}^{2},..,t_{f}^{n}).$ As the set of time values is assumed
to be dense, the model $ST$ is infinite. In order to analyze this model, we
need to compute an abstraction of it that saves the most properties of
interest. The construction of a \textit{symbolic graph} preserves the
untimed sequences of $ST,$ and makes it possible to compute a finite graph
in almost all cases. We show hereafter how to compute the state class graph
of the $ITPN$ that preserves chiefly the linear properties of the model.
\section{$ITPN$ state space construction}
As for a $TPN$ model \cite{Merlin}, the state graph $ST$ of an $ITPN$ can be
contracted by gathering in a same class all the states reachable after
firing the same untimed sequence. This approach (known as the state class
graph method \cite{BerDiaz}), expresses each class as a pair $(M,D)$ where $%
M $ is the common marking and $D$ is a system of inequalities that
encodes the state space of the class. Each variable of such a system is
associated with an enabled transition and measures its residual time$.$ When
dealing with an $ITPN$, the inequalities of the system $D$ may take a
polyhedral form \cite{LIme2003}. More formally, a class of states of an $%
ITPN $\ is defined as follows:
\begin{definition}
{\small {\ Let $ST=(\Gamma ,e^{0},\rightarrow )$ be the LTS associated with
an $ITPN$. A class of states of an $ITPN$, denoted by $E,$\ is the set of
all the states pertaining to $\Gamma $ that are reachable after firing the
same untimed sequence $S=$ $(t_{f}^{1},..,t_{f}^{n})$ from the initial state
$e^{0}$. A class $E$\ is defined by $(M,D),$ where $M$ is the marking
reachable after firing $S$, and $D$ is the firing space encoded as a set of
inequalities. \newline
For $Te(M)=\{t_{1},..,t_{s}\},$ we have : \medskip $\ D=\widehat{D}\wedge $}}%
$\overrightarrow{{\small {D}}}${\small {\ }}
$\overrightarrow{{\small D}}${\small $:=\left\{
\begin{tabular}{l}
{\Huge $\wedge $}$_{i\neq j}$ $\quad (\underline{t_{j}}-\underline{t_{i}}%
~\leq ~d_{ij})$ \\
{\Huge $\wedge $}$_{i\leq s}\quad (d_{i\bullet }\leq ~\underline{t_{i}}~\leq
d_{\bullet i})$%
\end{tabular}%
\right. ~~\smallskip $ }
{\small \qquad with ($t_{j},t_{i})\in Te(M)^{2}~$ $d_{ij}\in
\mathbb{Q}
\cup \left\{ \infty \right\} ,~d_{\bullet i}\in
\mathbb{Q}
^{+}\medskip \cup \left\{ \infty \right\} ,d_{i\bullet }\in
\mathbb{Q}
^{+}$ }
{\small $\widehat{D}:=${\Huge $\wedge $ }$_{k=1..p}\quad (\alpha _{1k}%
\underline{t_{1}}+..+\alpha _{sk}\underline{t_{s}}~\leq ~d_{k})~\ $ }
{\small \qquad with$\ d_{k}\in
\mathbb{Q}
\cup \left\{ \infty \right\} ,$ $(\alpha _{1k},..,\alpha _{sk})\in
\mathbb{Z}
^{s}$ \ and\footnote{{\small $\mathbb{Z}$ denotes the set of integers.\ }} }
{\small \qquad \qquad \qquad $\forall k,\exists (i,j),(\alpha _{ik},\alpha
_{jk})\notin \left\{ (0,0),(1,-1),(1,0),(-1,0)\right\} $ }
\end{definition}
We denote by the element $\left\{ \bullet \right\} $\ the instant at which
the class $E$ is reached. Therefore, the value of the clock \underline{$%
t_{i} $} expresses the time relative to the instant $\bullet ,$ at which the
transition $t_{i}$ can be fired.\ Thus, for each valuation $\psi $
satisfying the system $D,$ there corresponds a unique state $e=(M,V)$ reachable
in $ST$ after firing the sequence $S$.
In the case of a $\mathit{TPN}$, the system $D$ is reduced to the subsystem
$\overrightarrow{D}.$ The inequalities of the latter have a particular form,
called $DBM$ (\textit{Difference Bound Matrix}) \cite{Dill}. The coefficients
$d_{i\bullet },d_{\bullet i}$ and $d_{ij}$ are, respectively, the minimum
residual time to fire the transition $t_{i},$ the maximum residual time to fire
the transition $t_{i},$ and the maximal firing distance of the transition
$t_{j}$ relative to $t_{i}.$ The $DBM$ form makes it possible to apply an
efficient algorithm to compute a class, whose overall complexity is
$O(m^{3})$, where $m$ is the number of enabled transitions. However, for $TPN$
augmented with stopwatches, the state space of a class cannot be encoded only
with $DBMs$. Actually, inequalities of general form (also called
\textit{polyhedra}) are needed to encode this space. The manipulation of these
constraints, given by the subsystem $\widehat{D},$ induces a higher complexity
that can be exponential in the worst case.
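The $O(m^{3})$ algorithm mentioned above is essentially a shortest-path closure of the constraint graph; a minimal sketch of this tightening step is given below (the matrix convention, with index 0 playing the role of the reference instant $\bullet$, is an assumption made for illustration).
\begin{verbatim}
# Sketch: Floyd-Warshall closure of a DBM, the O(m^3) step above.
# dbm[i][j] bounds t_j - t_i; index 0 stands for the instant '.',
# so dbm[0][i] is the max residual time of t_i and dbm[i][0] = -min.
INF = float("inf")

def canonical(dbm):
    n = len(dbm)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dbm[i][k] + dbm[k][j] < dbm[i][j]:
                    dbm[i][j] = dbm[i][k] + dbm[k][j]
    # a negative diagonal entry reveals an empty class
    return all(dbm[i][i] >= 0 for i in range(n))

# Initial class of Fig. 1: t1 in [3,3], t3 in [2,4], t4 in [0,2]
# (indices: 0 = '.', 1 = t1, 2 = t3, 3 = t4).
d0 = [[0,   3,   4,   2],
      [-3,  0,   INF, INF],
      [-2,  INF, 0,   INF],
      [0,   INF, INF, 0]]
print(canonical(d0))   # True: the class is consistent
\end{verbatim}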
The exact state class graph, noted $GR,$ of an $ITPN$\ is computed by
enumerating all the classes reachable from the initial class $E^{0}$ until
it remains no more class to explore. Formally, the exact state class graph
of an $ITPN$ can be defined as follows \cite{Bert-grid}:
\begin{definition}
{\small {The exact state class graph of an $ITPN$, denoted by $GR$, is the
tuple $(CE,E^{0},\longmapsto )$\ where: \newline
- $CE$\ is the set of classes reachable in $GR;$\ \newline
- $E^{0}=(M^{0},D^{0})$\ is the initial class such that: $D^{0}=\left\{
\begin{tabular}{ll}
$\forall t_{i}\in Te(M^{0}),$ & $tmin(t_{i})\leq \underline{t_{i}}\leq
tmax(t_{i})$%
\end{tabular}%
\right. $; \newline
- $\longmapsto $\ is the transition relation between classes defined on $%
CE\times T\times CE,$\ such that \newline
$((M,D),t_{f},(M^{\uparrow },D^{\uparrow }))\in \longmapsto ,$ iff: }}
\begin{description}
\item[a)] {\small $t_{f}$ is activated and the system $D$ augmented with the
firing constraints of $t_{f}$\ that we write $D_{a}=$ $D\wedge (\forall t\in
Ta(M),\quad \underline{t_{^{f}}}\leq \underline{t})$ holds. }
\item[b)] {\small $\forall p\in P,$ $M^{\uparrow
}(p):=M(p)-B(p,t_{f})+F(p,t_{f})\smallskip .$ }
\item[c)] {\small The system $D^{\uparrow }$ is computed from $D,$ as
follows: }
\begin{enumerate}
\item {\small In the system $D_{a}$, replace each variable \underline{$t$}
related to a persistent transition activated for $M$ by: $\underline{t}:=%
\underline{t_{f}}+\underline{t^{\prime }},$ thus denoting the time
progression. On the other hand, replace each variable \underline{$t$}
related to a persistent transition inhibited for $M$ by: $\underline{t}:=%
\underline{t^{\prime }},$ thus denoting the time inhibition. }
\item {\small Eliminate then by substitution the variable \underline{$t_{f}$}
as well as all the variables relative to transitions disabled by the firing
of $t_{f};$ }
\item {\small Add to the system thus computed, the time constraints relative
to each newly enabled transition for $M^{\uparrow }$:
\begin{tabular}{ll}
$\forall t_{i}\in New(M^{\uparrow }),$ & $tmin(t_{i})\leq \underline{t_{i}}$$%
\leq tmax(t_{i})$%
\end{tabular}
}
\end{enumerate}
\end{description}
\end{definition}
The last definition shows how the exact state class graph of an $ITPN$ is
built. Given a class $E=(M,D)$ and a transition $t_{f}$ activated for $%
M$, the computation of a class $E^{\uparrow }=(M^{\uparrow },D^{\uparrow })$%
\ \ reachable from $E$ by firing $t_{f}$ consists in computing the reachable
marking $M^{\uparrow }$ and the system $D^{\uparrow }$\ that encodes the
firing space of $E^{\uparrow }.$ The class $E$ can fire the activated
transition $t_{f},$ if there exists a valuation that satisfies $D$ (a state
of $E$), such that $t_{f}$\ can be fired before all the other activated
transitions. The firing of $t_{f}$ produces a new class $E^{\uparrow
}=(M^{\uparrow },D^{\uparrow });$ the latter gathers all the states
reachable from those of $E$ that satisfy the firing condition of $Definition$
$2.$ The system $D^{\uparrow }$ that encodes the space of $E^{\uparrow }$\
is computed from the system $D$\ augmented with the firing constraints of $%
t_{f}$. The substitution of variables relative to activated transitions
allows the time origin to be shifted to the instant at which the new class $%
E^{\uparrow }$ is reached. Then, an equivalent system is computed wherein
the variables relative to transitions that have been disabled following the
firing of $t_{f}$ are removed. Finally, the constraints of transitions newly
enabled are added.
The complexity of the firing test and of step 2 of the previous algorithm
depends on the form of the system $D$. If $D$\ includes polyhedral
constraints, then the complexity of the algorithm is exponential, whereas it
is polynomial otherwise.\ It should be noticed that the system $D^{0}$
related to the initial class is always in $DBM$ form, and that polyhedral
constraints are generated in the systems of reachable classes only when both
inhibited and activated transitions stand persistently enabled in a firing
sequence \cite{Bucci} \cite{ROUX Magnat}.
Knowing how to compute the successors of a class, the state class graph
computation is basically a depth-first or breadth-first graph generation.
The state class graph is then given as the quotient of $GR$ by a suitable
equivalence relation. This equivalence relation may be equality: two classes
$(M,D)$ and $(M,D^{\prime }),$ given in their minimal form, are equal if
$D=D^{\prime }$; or it may be inclusion: if $\left\rceil D\right\lceil $\
denotes the set of solutions of the system $D$, then we have $\left\rceil
D\right\lceil \subseteq \left\rceil D^{\prime }\right\lceil .$ It should be
noticed that equality mainly preserves the untimed language of the model,
whereas inclusion preserves the set of reachable markings.
The algorithm given in \textit{Definition 4} can be applied to a $TPN$ with
the specificity that the system $D$ is always encoded in $DBM$. Moreover, it
is proved that the number of equivalent $DBM$ systems computed in the graph
is always finite \cite{BerDiaz}. This property is important since it implies
that the graph is necessarily finite if the number of reachable markings is
bounded. Unfortunately, this last property is no longer guaranteed in the
presence of stopwatches. In actual fact, the number of reachable polyhedral
systems may be infinite too, thus preventing the termination of the graph
construction even when the net is bounded. To tackle these issues, the $%
\mathit{DBM}$ overapproximation technique has been proposed as an
alternative solution to analyze preemptive real time systems \cite{abdelli}%
\cite{IHTPN}\cite{Bucci}. This approach consists in cutting off the
inequalities of the subsystem $\widehat{D}$\ when the latter appears in $D$.
It thereby keeps only those of the subsystem $\overrightarrow{D}$ to
represent an overapproximation of the space of $D$. This solution makes it
possible to build a less precise graph than the exact one, but at a lower
cost in terms of computation time and memory usage. Moreover, this
overapproximation ensures that the number of $DBM$ systems to be considered
in the computation is always finite, whereas the number of polyhedral
systems may be infinite. The overapproximated construction may therefore
terminate while the exact one does not.\ To better understand how this
approach works, we apply the state class graph method to the $ITPN$
example of \textit{Fig 1}. In the sequel, we denote by\ $\widetilde{D}$ the
$DBM$ system obtained by $DBM$ overapproximation, which may differ from
$\overrightarrow{D}$, as we will see thereafter; the system
$\overrightarrow{D}$ denotes the tightest $DBM$ system that one can obtain
by $DBM$ overapproximation.
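The normalization invoked in this construction (computing the tightest
bounds of a $DBM$) is classically performed by a Floyd--Warshall
shortest-path closure over the bound matrix. The following Python sketch is a
minimal illustration with invented names; the $DBM$ overapproximation then
amounts to keeping only this closed matrix and discarding the remaining
polyhedral (sum) constraints, as illustrated on the example that follows.
\begin{verbatim}
INF = float("inf")

def normalize(D):
    """Floyd-Warshall closure: D[i][j] bounds x_j - x_i <= D[i][j];
    returns the tightest equivalent DBM (the arrow-D of the text)."""
    C = [row[:] for row in D]
    n = len(C)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if C[i][k] + C[k][j] < C[i][j]:
                    C[i][j] = C[i][k] + C[k][j]
    return C

# Index 0 is the reference point (the bullet); 1, 2, 3 stand for
# t1, t3, t4 with the bounds of D0: 3<=t1<=3, 2<=t3<=4, 0<=t4<=2.
D = [[0,   3,   4,   2],
     [-3,  0,   INF, INF],
     [-2,  INF, 0,   INF],
     [0,   INF, INF, 0]]
C = normalize(D)
assert C[1][3] == -1 and C[3][2] == 4  # closed bounds, cf. Tab.1 later
\end{verbatim}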
Let $E=(M,D)$ be the class reachable in the exact graph after firing the
sequence $(t_{4},t_{1})$ from the initial class $E^{0}=(M^{0},D^{0}).$
{\scriptsize {\textit{E}$^{0}$\textit{=}$\left(
\begin{tabular}{l}
${\normalsize M}^{0}{\normalsize :p}_{1}{\normalsize ,p}_{3}{\normalsize ,p}%
_{4}{\normalsize \rightarrow 1}$ \\
${\normalsize D}^{0}{\normalsize :}\left\{
\begin{tabular}{l}
3 $\leq $ $\underline{t_{1}}\leq $ 3 \\
2 $\leq $ $\underline{t_{3}}\leq $ 4 \\
0 $\leq \underline{t_{4}}\leq $ 2%
\end{tabular}%
\right. $%
\end{tabular}%
\right. $ \ \textit{E=}$\left(
\begin{tabular}{l}
${\normalsize M:p}_{2}{\normalsize ,p}_{3}{\normalsize ,p}_{5}{\normalsize ,p%
}_{7}{\normalsize \rightarrow 1}$ \\
${\normalsize D:}\left\{
\begin{tabular}{l}
$\underline{t_{5}}=0$ \\
$-8\leq \underline{t_{2}}-\underline{t_{7}}\leq -5$ \\
$7\leq \underline{t_{7}}\leq 9$ \\
$0\leq \underline{t_{2}}$ \\
$9\leq \underline{t_{7}}+\underline{t_{3}}\leq 11$%
\end{tabular}%
\right. $%
\end{tabular}%
\right. $ $\ \widetilde{{\normalsize D}}{\normalsize :}\left\{
\begin{tabular}{l}
$\underline{t_{5}}=0$ \\
$-8\leq \underline{t_{2}}-\underline{t_{7}}\leq -5$ \\
$7\leq \underline{t_{7}}\leq 9$ \\
$0\leq \underline{t_{2}}$ \\
$0\leq \underline{t_{3}}\leq 4$%
\end{tabular}%
\right. $ } }
At this stage, polyhedral constraints given by $9\leq \underline{t_{7}}+%
\underline{t_{3}}\leq 11$ appear for the first time in the firing sequence.
This happens because the inhibited transition $t_{3}$ and the activated
transitions $t_{7}$ and $t_{2}$ are persistently enabled in this sequence.
The $\mathit{DBM}$ overapproximation consists in cutting off the polyhedral
constraints $9\leq \underline{t_{7}}+\underline{t_{3}}\leq 11$ after
normalizing all the $DBM$ constraints. We thereby obtain the system $%
\widetilde{D}$ that replaces the system ${\normalsize D}$ in the $DBM$
approximated class $\widetilde{{\normalsize E}}$. However, at this stage,
the removed polyhedral constraints are redundant relative to $\widetilde{D}$
and therefore have no impact on the firing of the activated transitions
$t_{2}$, $t_{5}$ and $t_{7}$. Let us now consider the firing of the
transition $t_{2}$ from both classes ${\normalsize E}$ and $\widetilde{%
{\normalsize E}}$ to reach respectively the classes ${\normalsize E}^{\prime
}$ and $\widetilde{{\normalsize E}^{\prime }}.$
{\scriptsize {\ \textit{E}$^{\prime }$\textit{=}$\left(
\begin{tabular}{l}
${\normalsize M}^{\prime }{\normalsize :p}_{3}{\normalsize ,p}_{5}%
{\normalsize ,p}_{7}{\normalsize \rightarrow 1}$ \\
${\normalsize D}^{\prime }{\normalsize :}\left\{
\begin{tabular}{l}
$\underline{t_{5}}=0$ \\
$7\leq \underline{t_{7}}\leq 8$ \\
$9\leq \underline{t_{7}}+\underline{t_{3}}\leq 11$%
\end{tabular}%
\right. $%
\end{tabular}%
\right. $ $\ \widetilde{\mathit{E}^{\prime }}$\textit{=}$\left(
\begin{tabular}{l}
${\normalsize M}^{\prime }{\normalsize :p}_{3}{\normalsize ,p}_{5}%
{\normalsize ,p}_{7}{\normalsize \rightarrow 1}$ \\
$\widetilde{{\normalsize D}^{\prime }}{\normalsize :}\left\{
\begin{tabular}{l}
$\underline{t_{5}}=0$ \\
$-8\leq \underline{t_{5}}-\underline{t_{7}}\leq -7$ \\
$7\leq \underline{t_{7}}\leq 8$ \\
$0\leq \underline{t_{3}}\leq 4$%
\end{tabular}%
\right. $%
\end{tabular}%
\right. $ } }
As we notice, the polyhedral constraints are still present in $E^{\prime }$
since the transitions $t_{3}$ and $t_{7}$ remain persistently enabled.
However, these constraints are no longer redundant relative to the system $%
\widetilde{{\normalsize D}^{\prime }}$, as we obtain the DBM constraints $%
1\leq \underline{t_{3}}\leq 4$ in $\overrightarrow{{\normalsize D}^{\prime }}
$ after normalization. Therefore, this loss of precision in the
\textit{DBM} overapproximation may have an impact on the firing process
further along the sequence. To highlight this fact, let us consider the
firing of the transition $t_{5}$ from both classes ${\normalsize E}^{\prime
}$ and $\widetilde{{\normalsize E}^{\prime }}$ to reach respectively the
classes ${\normalsize E}"$ and $\widetilde{{\normalsize E}"}.$
{\scriptsize {\textit{E}$"$\textit{=}$\left(
\begin{tabular}{l}
${\normalsize M}"{\normalsize :p}_{3}{\normalsize ,p}_{6}{\normalsize %
\rightarrow 1}$ \\
${\normalsize D}"{\normalsize :}\left\{
\begin{tabular}{l}
$\underline{t_{6}}=0$ \\
$1\leq \underline{t_{3}}\leq 4$%
\end{tabular}%
\right. $%
\end{tabular}%
\right. $ \ $\ \widetilde{\mathit{E}"}$\textit{=}$\left(
\begin{tabular}{l}
${\normalsize M}"{\normalsize :p}_{3}{\normalsize ,p}_{6}{\normalsize %
\rightarrow 1}$ \\
$\widetilde{{\normalsize D}"}{\normalsize :}\left\{
\begin{tabular}{l}
$\underline{t_{6}}=0$ \\
$0\leq \underline{t_{3}}\leq 4$%
\end{tabular}%
\right. $%
\end{tabular}%
\right. $ } }
At this stage, we notice that the systems $D"=\overrightarrow{D"}$ and $%
\widetilde{D"}$ are both in \textit{DBM} form, but the exact system $D"$ is
more precise than the one obtained by overapproximation. As a result, only
the transition $t_{6}$ is firable from ${\normalsize E}"$, but not $t_{3}$,
since ${\normalsize D}"\wedge (t_{3}\leq t_{6})$ is not consistent. However,
due to the constraint relaxation, both transitions are firable from
$\widetilde{\mathit{E}"}.$ Hence we have an additional sequence in the
\textit{DBM} overapproximated graph that is not reachable in the exact graph
$GR$.
In actual fact, the key point is the minimal residual time of $t_{3}$, which
has increased during its inhibition time from 0 to 1. Let us clarify this
point: initially, $t_{3}$ is activated and we have $2\leq \underline{t_{3}},$
and the model fires the transition $t_{4}$ within $[0,2]$. After this firing,
the transition $t_{2}$\ is enabled for the first time, the place $p_{7}$
becomes marked, and $t_{3}$ is inhibited for the first time; we have $0\leq $%
\underline{$t_{3}$} and $2\leq \underline{t_{2}}\leq 5.$\ The transition $%
t_{1}$ is fired afterwards to enable the transition $t_{5}$, and we have
\underline{$t_{5}$}$=0$. So, to be able to fire the persistent transition $%
t_{2} $, we must have \underline{$t_{2}$}$=0$ too. This compels the
relative time to progress by at least $tmin(t_{2})=2$ when firing $t_{1}$,
while the elapsed absolute time must not exceed $tmax(t_{1})=3$. This last
constraint restricts the state space of the class reachable after firing $%
t_{2}$\ to the states\footnote{%
For these states, the transition $t_{3}$ is not yet inhibited.} that
initially fired $t_{4}$ during $[0,1]$. As a result, the minimal residual
time of the inhibited transition $t_{3}$ increases to 1 after the firing of $%
t_{2}$.
The loss of precision in $\widetilde{D"}$ compared to $\overrightarrow{%
D"}$ is due to some polyhedral constraints involved in the normalization of $%
\overrightarrow{D"}$ that are removed in the predecessor classes of $%
\widetilde{\mathit{E}"}$. Therefore, we think that some time information
residing in the upper classes of the firing sequence could be used to fix
the problem and to tighten the approximation still further. This is the
subject of our proposal, addressed in the next section. Before that, we need
to formally introduce the construction of the \textit{DBM} overapproximation
graph.
The $\mathit{DBM}$ overapproximation of a class $E$\ can be computed using
different algorithms \cite{abdelli}\cite{IHTPN}\cite{Bucci}. However, we
have shown in a previous work \cite{abdelli} that by avoiding to compute the
$DBM$ systems systematically in their minimal form, we can define an
algorithm that computes the reachable systems straightforwardly in their
normal form. We thereby avoid the computation and the manipulation of the
intermediary polyhedra. Moreover, the effort needed for the normalization
and the minimization of the resulting \textit{DBM}\ system is removed; this
greatly improves the implementation and the computation of the \textit{DBM}
overapproximated graph. This algorithm encodes the full $DBM$ system
$\widetilde{D}$ as a square matrix where each line and its corresponding
column are indexed by an element of $Te(M)\cup \left\{ \bullet \right\} .$
In concrete terms, we have: $\forall (t_{i},t_{j})\in Te(M)^{2}\wedge
(t_{i}\neq t_{j}),\quad \widetilde{D}[\bullet ,t_{i}]:=$ $d_{\bullet i};$\
$\quad \widetilde{D}[t_{i},\bullet ]:=-d_{i\bullet }$ $;$\newline
$\quad \quad \widetilde{D}[t_{i},t_{j}]:=d_{ij};\quad \widetilde{D}%
[t_{i},t_{i}]:=0$ $;\quad \quad \widetilde{D}[\bullet ,\bullet ]:=0.$
\begin{table}[h]
\caption{The matrix representation of the system $\widetilde{D^{0}}$.}
\centering
\begin{tabular}{|l||l|l|l|l|}
\hline
$\widetilde{D^{0}}$ & $\bullet $ & $t_{1}$ & $t_{3}$ & $t_{4}$ \\
\hline\hline
$\bullet $ & {\scriptsize 0} & {\scriptsize 3} & {\scriptsize 4} &
{\scriptsize 2} \\ \hline
$t_{1}$ & {\scriptsize -3} & {\scriptsize 0} & {\scriptsize 1} &
{\scriptsize -1} \\ \hline
$t_{3}$ & {\scriptsize -2} & {\scriptsize 1} & {\scriptsize 0} &
{\scriptsize 0} \\ \hline
$t_{4}$ & {\scriptsize 0} & {\scriptsize 3} & {\scriptsize 4} & {\scriptsize %
0} \\ \hline
\end{tabular}%
\end{table}
These matrix notations are used to represent the coefficients of the system $%
\widetilde{D}$. For example, the matrix shown in \textit{Tab.1} encodes the
system $\widetilde{D^{0}}=D^{0}$ associated with the initial class of the $%
\mathit{ITPN}$\ of \textit{Fig 1. }
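As a sanity check, the matrix of \textit{Tab.1} can be reproduced directly
from the static bounds of the initial class ($3\leq \underline{t_{1}}\leq 3$,
$2\leq \underline{t_{3}}\leq 4$, $0\leq \underline{t_{4}}\leq 2$) with the
initial-class formulae of the next definition. A short Python sketch, with
illustrative names only:
\begin{verbatim}
tmin = {"t1": 3, "t3": 2, "t4": 0}
tmax = {"t1": 3, "t3": 4, "t4": 2}
DOT = "."   # plays the role of the bullet index

def initial_dbm(tmin, tmax):
    """D[.,t] = tmax(t), D[t,.] = -tmin(t),
    D[t,u] = tmax(u) - tmin(t), zero diagonal."""
    keys = [DOT] + sorted(tmin)
    D = {(x, y): 0 for x in keys for y in keys}
    for t in tmin:
        D[(DOT, t)] = tmax[t]
        D[(t, DOT)] = -tmin[t]
        for u in tmin:
            if u != t:
                D[(t, u)] = tmax[u] - tmin[t]
    return D

D0 = initial_dbm(tmin, tmax)
assert D0[("t1", "t4")] == -1 and D0[("t4", "t3")] == 4  # cf. Tab.1
\end{verbatim}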
The construction of the \textit{DBM} overapproximation graph, noted $%
\widetilde{GR}$, can be computed as follows \cite{abdelli}:
\begin{definition}
{\small {\ The DBM overapproximated graph of an $ITPN$, noted $\widetilde{GR}
$, is the tuple $(\widetilde{CE},\widetilde{E^{0}},\rightsquigarrow ),$\
such that : }}
\begin{itemize}
\item {\small $\widetilde{CE}$\ is the set of DBM\ overapproximated classes
reachable in $\widetilde{GR\text{ }};$\ }
\item {\small $\widetilde{E^{0}}=(M^{0},\widetilde{D^{0}})\in \widetilde{CE}$
is the initial class, such that: }
{\small $\widetilde{D^{0}}:=\left\{
\begin{tabular}{ll}
$\forall t_{i}\in Te(M^{0}),$ & $tmin(t_{i})\leq \underline{t_{i}}\leq
tmax(t_{i})$ \\
$\forall t_{i}\neq t_{j}\in Te(M^{0}),$ & $\underline{t_{j}}-\underline{t_{i}%
}\leq tmax(t_{j})-tmin(t_{i})$%
\end{tabular}%
\right. $ }
\item {\small \ $\rightsquigarrow $ \ is a transition relation between DBM
overapproximated classes defined on $\widetilde{CE}\times T\times \widetilde{%
CE},$\ such that $((M,\widetilde{D}),t_{f},(M^{\uparrow },\widetilde{%
D^{\uparrow }}))\in \rightsquigarrow ,$ iff : }
\begin{itemize}
\item {\small $\left( t_{f}\in Ta(M)\right) $ $\wedge $ $(\widetilde{\beta }%
[t_{f}]\ \geq 0)$ such that:\ $\forall x\in Te(M)\cup \left\{ \bullet
\right\} ,\quad \widetilde{\beta }[x]=\underset{\forall t\in Ta(M)}{MIN}%
\left\{ \widetilde{D}[x,t]\right\} $. }
\item {\small $\forall p\in P,$ $M^{\uparrow
}(p):=M(p)-B(p,t_{f})+F(p,t_{f})\smallskip .$ }
\item {\small The coefficients of the $DBM$\ \ inequalities of the system $%
\widetilde{D^{\uparrow }}$\ are computed from those of $\widetilde{D}$\ by
applying the following algorithm: \bigskip }
{\small $\forall t\in Te(M^{\uparrow })$ }
{\small $\quad \quad \widetilde{D^{^{\uparrow }}}[t,t]:=0;\quad \quad \quad
\widetilde{D^{^{\uparrow }}}[\bullet ,\bullet ]:=0;$ }
{\small \quad If $t$ is persistent }
{\small \quad \quad If $t\in Ti(M)$ $\ (t$ is inhibited for $M$) }
{\small $\quad \quad \quad \widetilde{D^{^{\uparrow }}}[t,\bullet
]:=MIN\left(
\begin{tabular}{l}
$\widetilde{D}[t,\bullet ]$ \\
$\widetilde{D}[t_{f},\bullet ]+\widetilde{\beta }[t]$%
\end{tabular}%
\right. \quad \widetilde{D^{\uparrow }}[\bullet ,t]:=MIN\left(
\begin{tabular}{l}
$\widetilde{D}[\bullet ,t]\medskip $ \\
$\widetilde{D}[t_{f},t]+\widetilde{\beta }[\bullet ]$%
\end{tabular}%
\right. $ \medskip }
{\small \quad \quad If $t\notin Ti(M)$ $\ (t$ is not inhibited for $M$) }
{\small $\quad \quad \quad \widetilde{D^{\uparrow }}[\bullet ,t]:=\widetilde{%
D}[t_{f},t]$ $;\quad \quad \quad \quad \widetilde{D^{^{\uparrow }}}%
[t,\bullet ]:=\medskip \widetilde{\beta }[t].$ }
{\small \quad If $t$ is newly enabled. }
{\small $\quad \quad \widetilde{D^{\uparrow }}[\bullet ,t]:=tmax(t)$ $;\quad
\quad \ \ \widetilde{D^{^{\uparrow }}}[t,\bullet ]:=-tmin(t).\medskip
\bigskip $ }
{\small $\forall (t_{1},t_{2})\in (Te(M^{\uparrow }))^{2}\wedge (t_{1}\neq
t_{2})$ }
{\small \quad If $t_{1}$ or $t_{2}$ are newly enabled. }
{\small $\ \ \quad \widetilde{D^{\uparrow }}[t_{1},t_{2}]:=\widetilde{%
D^{\uparrow }}[\bullet ,t_{2}]+\widetilde{D^{^{\uparrow }}}[t_{1},\bullet ]$%
.\medskip \medskip }
{\small \quad If $t_{1}$\ and $t_{2}$ are persistent. }
{\small $\quad $\quad If $(t_{1},t_{2})\notin (Ti(M))^{2}$\ \ ($t_{1}$\
and $t_{2}$\ are not inhibited for $M$) }
{\small $\quad \quad \quad \widetilde{D^{\uparrow }}[t_{1},t_{2}]:=MIN(%
\widetilde{D}[t_{1},t_{2}],\quad \widetilde{D^{^{\uparrow }}}[\bullet
,t_{2}]+\widetilde{D^{^{\uparrow }}}[t_{1},\bullet ]).\medskip $ }
{\small \quad $\ \ $If \ $(t_{1},t_{2})\in (Ti(M))^{2}$\ \ ($t_{1}$\
and $t_{2}$\ are inhibited for $M$) }
{\small $\quad \quad \quad \widetilde{D^{\uparrow }}[t_{1},t_{2}]:=MIN(%
\widetilde{D}[t_{1},t_{2}],\quad \widetilde{D^{^{\uparrow }}}[\bullet
,t_{2}]+\widetilde{D^{\uparrow }}[t_{1},\bullet ]).\medskip $ }
{\small $\quad $\quad If $\ (t_{1}\in Ti(M))\wedge (t_{2}\notin Ti(M))$\ \ \
\ (Only $t_{1}$\ is inhibited for $M$). }
{\small $\quad \quad \quad \widetilde{D^{\uparrow }}[t_{1},t_{2}]$ $:=MIN(%
\widetilde{D}[t_{1},t_{2}]+\widetilde{D}[t_{f},\bullet ],\quad \widetilde{%
D^{\uparrow }}[\bullet ,t_{2}]+\widetilde{D^{^{\uparrow }}}[t_{1},\bullet
]). $ }
{\small $\quad $\quad If $\ (t_{1}\notin Ti(M))\wedge (t_{2}\in Ti(M))$\ \ \
\ (Only $t_{2}$\ is inhibited for $M$) }
{\small {\ $\quad \quad \quad \widetilde{D^{^{\uparrow }}}[t_{1},t_{2}]$ $:=$%
$MIN(\widetilde{D}[t_{1},t_{2}]+\widetilde{\beta }[\bullet ],\quad
\widetilde{D^{\uparrow }}[\bullet ,t_{2}]+\widetilde{D^{^{\uparrow }}}%
[t_{1},\bullet ]).$ } }
\end{itemize}
\end{itemize}
\end{definition}
If $t$ is an activated transition, then $\widetilde{\beta }[t]$ denotes the
minimal time distance between its firing time and that of any other
activated transition of $\widetilde{E}$. Therefore, an activated transition
$t_{f}$ is \textit{not firable} from $\widetilde{E}$ if $\widetilde{\beta }%
[t_{f}]<0$. Further, $\widetilde{\beta }[\bullet ]$ represents the maximal
dwelling time in the class.
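A minimal Python sketch of this firing test, with illustrative names and the
matrix entries of \textit{Tab.1}: $\widetilde{\beta }$ is a column-wise
minimum over the activated transitions, and an activated $t_{f}$ is firable
iff $\widetilde{\beta }[t_{f}]\geq 0$.
\begin{verbatim}
DOT = "."
D0 = {  # initial-class DBM of the running example (Tab.1)
    (DOT, DOT): 0, (DOT, "t1"): 3, (DOT, "t3"): 4, (DOT, "t4"): 2,
    ("t1", DOT): -3, ("t1", "t1"): 0, ("t1", "t3"): 1, ("t1", "t4"): -1,
    ("t3", DOT): -2, ("t3", "t1"): 1, ("t3", "t3"): 0, ("t3", "t4"): 0,
    ("t4", DOT): 0, ("t4", "t1"): 3, ("t4", "t3"): 4, ("t4", "t4"): 0,
}

def beta(D, keys, activated):
    """beta[x] = MIN over activated t of D[(x, t)] (cf. Definition 5)."""
    return {x: min(D[(x, t)] for t in activated) for x in keys}

def firable(D, keys, activated, t_f):
    """An activated t_f is firable iff beta[t_f] >= 0."""
    return t_f in activated and beta(D, keys, activated)[t_f] >= 0

keys = [DOT, "t1", "t3", "t4"]
acts = ["t1", "t3", "t4"]
# t4 may fire first; t1 may not, since t4's deadline (2) precedes
# t1's earliest firing time (3):
assert firable(D0, keys, acts, "t4")
assert not firable(D0, keys, acts, "t1")
\end{verbatim}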
It is noteworthy that if $\widetilde{E}$\ is an overapproximation of the
exact\ class $E,$\ then all the transitions firable from $E$\ are also
firable from $\widetilde{E}$. However, a transition which is not firable
from $E$ can, on the other hand, be firable\footnote{%
Conversely, if $t_{f}$\ is not firable from $\widetilde{E},$\ then it is not
firable from\ $E$.} from $\widetilde{E}$. Actually, as the class $\widetilde{%
E}$ contains all the states of $E,$\ we can find at least one state $e$ of $%
\widetilde{E}$ unreachable in $E,$\ such that $e$\ can fire $t_{f}.$
\begin{figure}[h]
\centering%
\begin{tabular}{ll}
\includegraphics[width=7 cm]{SFMGR1.ps} &
\includegraphics[width=8
cm]{SFMGR2.eps} \\
\multicolumn{1}{c}{(a)} & \multicolumn{1}{c}{(b)}%
\end{tabular}%
\caption{The exact graph and its DBM overapproximation of the $ITPN$ of
Fig.1 }
\end{figure}
To illustrate both graph constructions, let us consider again the net of
\textit{Fig 1}. The exact state class graph obtained by applying the
algorithm of Definition 4 is shown in \textit{Fig. 2.a}. Its \textit{DBM}
overapproximation, obtained by applying the algorithm given in Definition 5,
is depicted in \textit{Fig. 2.b}. Hence, the exact graph contains 17 classes
and 22 edges, whereas its \textit{DBM} overapproximated graph contains 21
classes and 28 edges. By comparing both graphs\footnote{%
The classes $\widetilde{E^{n}}$ and $E^{n}$ denote the node numbered $n$
in the corresponding graph.}, we notice that the transition $t_{3}$ is
firable from the class $\widetilde{E^{11}}$ in $\widetilde{GR},$ whereas it
is not from $E^{10}$ in $GR$. Moreover, $t_{2}$ is firable from $\widetilde{%
E^{12}}$ whereas it is not from $E^{4}.$ The sequences added in the graph $%
\widetilde{GR}$ due to overapproximation are highlighted in red in \textit{%
Fig 2.b}.
Although the cost of computing the \textit{DBM} overapproximation is low
compared to that of the exact construction, in certain cases the
approximation is too coarse to restore properties of interest, especially
quantitative properties. In actual fact, the more transitions remain
persistently enabled along firing sequences, the more the approximation
loses precision and therefore includes false behaviors that skew the time
analysis of the system. Besides, these false behaviors may generate an
infinity of unreachable markings while the exact construction is indeed
bounded. Hence, this prevents the \textit{DBM} overapproximation from
terminating while the exact construction may converge.
We investigate in the next section a new approach to compute a tighter
\textit{DBM} overapproximation. The idea is to restore, from previous
classes in the firing sequence, time constraints that are used to tighten
the \textit{DBM} overapproximation of a class still further.
\section{Time distance based Approximation of the ITPN State Space}
\begin{tabular}{ccccl}
{\tiny $t_{f}^{1}$} & {\tiny $t_{f}^{2}$} & & {\tiny $t_{f}^{n}$} &
{\scriptsize fired transitions} \\
{\tiny 0$<$--------$>$} & {\tiny 1 $<$ --------$>$2} & {\tiny $\ .\ .\ .$ }
& {\tiny n-1$<$--------$>$n} & {\scriptsize firing points} \\
\multicolumn{1}{l}{{\tiny $M^{0}$}} & \multicolumn{1}{l}{{\tiny $M^{1}\qquad
\ \ \ \ \ \ \ \ M^{2}$}} & {\tiny $\qquad \qquad $} & \multicolumn{1}{l}{%
{\tiny $M^{n-1}$\quad\ \quad\ \quad $M^{n}$\ }} & {\scriptsize reachable
markings} \\
\multicolumn{1}{l}{{\tiny $e^{0}$}} & \multicolumn{1}{l}{{\tiny $e^{1}$}%
\qquad\ \ {\tiny $e^{2}$}} & \multicolumn{1}{l}{} & {\tiny $e^{n-1}$\quad\
\quad\ \quad\ \ $e^{n}$} & {\scriptsize reachable states} \\
\underline{{\tiny $t_{f}^{1}$}} & \underline{{\tiny $t_{f}^{2}$}} & &
\underline{{\tiny $t_{f}^{n}$}} & {\scriptsize firing time distances}%
\end{tabular}%
\bigskip
Let $RT:=(P,T,B,F,M^{0},I,IH)$ be an \textit{Inhibitor arc Time Petri Net. }%
We suppose that a sequence of transitions $S=(t_{f}^{1},..,t_{f}^{n})$ has
been fired in \emph{RT}$.$ The marking and the state reachable at the $%
(j)^{th}$ firing point are denoted $M^{j}$ and $e^{j}$ respectively.%
{\scriptsize {\ } }Therefore, for the firing point $(n)$ we define the
following:
\begin{itemize}
\item The marking reachable at point $(n)$ is denoted by $M^{n}$.
\item The function $Ne^{n}:Te(M^{n})\longrightarrow \left\{ 0,1,..,n\right\}$;
$Ne^{n}(t)$ gives, as shown in \textit{Fig.3}, the number of the firing
point that has enabled the transition $t$ for the last time, provided that
$t$ remains persistently enabled up to the firing point $(n)$. Thereafter,
we denote by $[Ne^{n}]$ the set of transitions' enabling points reported at
the firing point $(n).$

\item The function $Ni^{n}:Te(M^{n})\longrightarrow \left\{
-1,0,1,..,n\right\}$; $Ni^{n}(t)$ gives, as shown in \textit{Fig.3}, the
number of the firing point that has inhibited the transition $t$ for the
last time, provided that $t$ remains persistently enabled up to the firing
point $(n)$. We have $Ni^{n}(t)=-1$ if $t$ has never been inhibited since
its last enabling point. Thereafter, we denote by $[Ni^{n}]$ the set of
transitions' inhibiting points reported at the firing point $(n).$

\item The function $Na^{n}:Te(M^{n})\longrightarrow \left\{
-1,0,1,..,n\right\}$; $Na^{n}(t)$ gives, as shown in \textit{Fig.3}, the
number of the firing point that has activated the transition $t$ for the
last time, provided that $t$ remains persistently enabled up to the firing
point $(n)$. We have $Na^{n}(t)=-1$ if $t$ has never been activated since
its last enabling point. Thereafter, we denote by $[Na^{n}]$ the set of
transitions' activating points reported at the firing point $(n).$

\item We denote thereafter by $Point^{n}$ the set $[Ne^{n}]\cup \lbrack
Ni^{n}]\cup \lbrack Na^{n}]-\{-1\}.$
\end{itemize}
\begin{figure}[tp]
\centering\includegraphics[width=12 cm]{SFEM2.eps}
\caption{Last enabling, inhibiting and activating points of a transition.}
\end{figure}
Let us consider the firing of a sequence of transitions $S=\left(
t_{f}^{1},..,t_{f}^{n}\right) $ in the graph $GR$. The sequence $S$
describes a path in the graph $GR$ going from the node representing the
class $E^{0}$ to the node which represents the class $E^{n}$. We introduce
next the time distance system that encodes the quantitative properties of
some subsequences of $S$.
\begin{definition}
{\small {\ \textbf{\ }\ Let $E^{n}=(M^{n},D^{n})$\ be a class reachable in $%
GR,$ after firing the sequence\ $S=\left( t_{f}^{_{1}},..,t_{f}^{n}\right) .$
For point $(n),$ we define the time distance system, noted $DS^{n},$ as
follows:\newline
$DS^{n}=\left\{
\begin{tabular}{l}
{\Huge $\wedge $}$_{\forall i\in Point^{n}},-DS^{n}[n,i]\leq \underline{%
t_{f}^{_{i+1}}}+..+\underline{t_{f}^{n}}\leq DS^{n}[i,n]$\newline
\\
{\Huge $\wedge $}$_{\forall i\in Point^{n}\cup \{n\}}${\Huge $\wedge $}$%
_{\forall t\in Te(M^{n})},-DS^{n}[t,i]\leq \underline{t_{f}^{_{i+1}}}+..+%
\underline{t_{f}^{n}}+\underline{t}\leq DS^{n}[i,t]$%
\end{tabular}%
\right. $\ } }
\end{definition}
More concretely, if $t$ is an enabled transition for $E^{n}$, then $%
DS^{n}[t,i]$ represents the opposite value of the minimum residual time of $%
t $ computed from the firing point $(i),$ whereas $DS^{n}[i,t]$ denotes its
maximum residual time relatively to the firing point $(i)$. Moreover, $%
DS^{n}[i,n]$ (respectively, $DS^{n}[n,i]),$ denotes the maximum time
distance (respectively, the opposite value of the minimum time distance),
between the firing points $(i)$ and $(n)$. The coefficients of the system $%
DS^{0}$ are defined as follows:\ We have $Point^{n}=\{0\};$\newline
$\forall t\in Te(M^{0}),$ \qquad $DS^{0}[0,0]=0,$ $DS^{0}[0,t]=tmax(t),$ $%
DS^{0}[t,0]=-tmin(t).$ $\ \ \ $
Thereafter, we encode the system $DS^{n}$ as four matrices. For instance,
the coefficients of the system $DS^{0}$ of the $ITPN$ of \textit{Fig.1} are
given in \textit{Tab.2}.
\begin{table}[h]
\caption{ The time distance system at firing point $(0)$}\centering%
\begin{tabular}{l}
\begin{tabular}{|c|c|c|c|}
\hline
$DS^{0}[i,t]$ & $t_{1}$ & $t_{3}$ & $t_{4}$ \\ \hline
$0$ & 3 & 4 & 2 \\ \hline
\end{tabular}%
\medskip\
\begin{tabular}{|c|c|c|c|}
\hline
$DS^{0}[t,i]$ & $t_{1}$ & $t_{3}$ & $t_{4}$ \\ \hline
$0$ & -3 & -2 & 0 \\ \hline
\end{tabular}%
\medskip\
\begin{tabular}{|c|c|}
\hline
$DS^{0}[i,n]$ & $0$ \\ \hline
$0$ & 0 \\ \hline
\end{tabular}
\begin{tabular}{|c|c|}
\hline
$DS^{0}[n,i]$ & $0$ \\ \hline
$0$ & 0 \\ \hline
\end{tabular}%
\end{tabular}%
\end{table}
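Setting up the four matrices of \textit{Tab.2} from the static intervals is
immediate; a small Python sketch with illustrative names:
\begin{verbatim}
tmin = {"t1": 3, "t3": 2, "t4": 0}
tmax = {"t1": 3, "t3": 4, "t4": 2}

def initial_distance_system(tmin, tmax):
    """DS0 as four mappings, as in Tab.2; Point^0 = {0}."""
    ds_it = {(0, t): tmax[t] for t in tmax}   # DS0[i, t]
    ds_ti = {(t, 0): -tmin[t] for t in tmin}  # DS0[t, i]
    ds_in = {(0, 0): 0}                       # DS0[i, n]
    ds_ni = {(0, 0): 0}                       # DS0[n, i]
    return ds_it, ds_ti, ds_in, ds_ni

ds_it, ds_ti, _, _ = initial_distance_system(tmin, tmax)
assert ds_it[(0, "t3")] == 4 and ds_ti[("t3", 0)] == -2  # cf. Tab.2
\end{verbatim}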
The next definition shows how the system $DS^{n}$ can be determined
recursively by solving a general polyhedral system.
\begin{definition}
{\small {\ {\ \textit{Let }$E^{n-1}=(M^{n-1},D^{n-1})$\textit{\ be a class
reachable in }$GR$\textit{\ and let }$\mathit{DS}^{n-1}$\textit{\ be the
time distance system associated with the class }$E^{n-1}$\textit{. Let us
consider }$E^{n}=(M^{n},D^{n})$\textit{\ be the class reachable from }$%
E^{n-1}$\textit{\ after firing the transition }$t_{f}^{n}$\textit{. The time
distance system }$DS^{n}$\textit{\ associated with }$E^{n}$\textit{\ can be
worked out recursively from the systems }$DS^{n-1},D^{n-1}$\textit{\ \ as
follows: } } }}
\begin{enumerate}
\item {\small Compute the function $Ne$ $^{n}$\ as follows: $\forall t\in
Te(M^{n})$}\newline
{\small \ If $t\in New(M^{n})$ then $Ne^{n}(t):=n$ \qquad else $%
Ne^{n}(t):=Ne^{n-1}(t).$ }\newline
{\small Compute the function $Ni$$^{n}$\ as follows: }\newline
{\small If $Ne^{n}(t)=n$ then if $t\in Ti(M$$^{n})$ then $Ni^{n}(t):=n$ else
$Ni^{n}(t):=-1.$\qquad }
\quad {\small else if $t\in Ta(M$ $^{n-1})$ $\wedge t\in Ti(M$$^{n})$ then $%
Ni^{n}(t):=n.$ }
\quad \quad \quad \quad \quad \quad {\small else $Ni^{n}(t):=Ni^{n-1}(t)$. }%
\newline
{\small Compute the function $Na$$^{n}$\ as follows:}\newline
{\small If $Ne^{n}(t)=n$ then if $t\in Ta(M$$^{n})$ then $Na^{n}(t):=n$ else
$Na^{n}(t):=-1.$\qquad }
\quad {\small else if $t\in Ta(M$$^{n})$$\wedge t\in Ti(M$$^{n-1})$ then $%
Na^{n}(t):=n.$ }
\quad \quad \quad \quad \quad \quad {\small else $Na^{n}(t):=Na^{n-1}(t)$. }%
\newline
\item {\small Augment the system $D^{n-1}$\ with the firing constraints of $%
t_{f}^{n}$, written $D_{a}^{n-1}=$ $D^{n-1}\wedge (\forall t\in
Ta(M^{n-1}),\quad \underline{t_{f}^{n}}\leq \underline{t})$ }
\item {\small In the system $D_{a}^{n-1}\wedge DS^{n-1}$\ rename each
variable $\underline{t}$\ related to\ an activated transition which is
persistent for M$^{n}$\ with $\underline{t}^{\prime }+\underline{t_{f}^{n}}$%
. For inhibited transitions, rename the related variable $\underline{t}$\
with $\underline{t}^{\prime }$. }
\item {\small In the resulted system and by intersection of the constraints,
remove the variables related to disabled transitions\ and determine the
constraints of $DS^{n}$. \ \medskip }
\item {\small In the obtained system, add constraints related to newly
enabled transitions, as follows : $\forall t\in New(M^{n}),$ $\forall i\in
Point^{n}\cup \{n\}$\newline
$-DS^{n}[n,i]-tmin(t)\leq \underline{t_{f}^{_{i+1}}}+..+\underline{t_{f}^{n}}%
+\underline{t}\leq DS^{n}[i,n]+tmax(t).$ }
\end{enumerate}
\end{definition}
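Step 1 of this definition is pure bookkeeping and can be sketched directly in
Python (illustrative names; the sets hold the enabled, newly enabled,
activated and inhibited transitions at the relevant points). The values below
match the class $\widetilde{E_{c}^{2}}$ worked out later in the text.
\begin{verbatim}
def update_points(n, enabled_n, new_n, act_prev, act_n,
                  inh_prev, inh_n, Ne_prev, Ni_prev, Na_prev):
    """Recursive update of Ne, Ni, Na at point n (Definition 7, step 1)."""
    Ne, Ni, Na = {}, {}, {}
    for t in enabled_n:
        Ne[t] = n if t in new_n else Ne_prev[t]
        if Ne[t] == n:                    # newly enabled at point n
            Ni[t] = n if t in inh_n else -1
            Na[t] = n if t in act_n else -1
        else:                             # persistent transition
            Ni[t] = n if (t in act_prev and t in inh_n) else Ni_prev[t]
            Na[t] = n if (t in act_n and t in inh_prev) else Na_prev[t]
    return Ne, Ni, Na

# Point 2 of the running example (firing t4 from the initial class):
Ne, Ni, Na = update_points(
    2, {"t1", "t2", "t3", "t7"}, {"t2", "t7"},
    act_prev={"t1", "t3", "t4"}, act_n={"t1", "t2", "t7"},
    inh_prev=set(), inh_n={"t3"},
    Ne_prev={"t1": 0, "t3": 0}, Ni_prev={"t1": -1, "t3": -1},
    Na_prev={"t1": 0, "t3": 0})
assert Ni["t3"] == 2 and Na["t3"] == 0 and Ne["t2"] == 2  # cf. E_c^2
\end{verbatim}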
The computation of the system $DS^{n}$ is very complex as it requires at
each step the manipulation of a global system which may contain polyhedral
constraints. Concretely, if the latter appear in $D^{n-1}$, then the cost of
computing $DS^{n}$\ is exponential in the number of variables; otherwise it
is polynomial. However, in most cases, polyhedral constraints do not affect
the computation of the time distances. Therefore, to alleviate the
computation effort, the idea is to systematically leave out such constraints
during the process (keeping only the $DBM$ system to represent the space of
the class $E^{n-1}$), at the risk, however, of computing in certain cases an
overapproximation of the system $DS^{n}.$ The resulting system obtained by
\textit{DBM} restriction is noted thereafter $\widetilde{DS^{n}}$, and we
have $\widetilde{DS^{0}}=DS^{0}$. The next proposition provides an algorithm
to compute recursively and efficiently the coefficients of the system $%
\widetilde{DS^{n}}$ in the context of the DBM overapproximated graph that we
aim to compute, noted $\widetilde{GRc}$. However, the same algorithm can be
applied in the context of the exact graph $GR$ while restricting the system $%
D$ to $\overrightarrow{D}$ ($\overrightarrow{D}$ being the tightest DBM
overapproximation that one can compute from $D$).
\begin{proposition}
{\small {\textit{Let the graph }}}$\widetilde{{\small {GRc}}}$ {\small be a
DBM overapproximation of the graph }${\small GR}${\small . Let{\textit{\ }}}$%
\widetilde{{\small {E}}_{{\small {c}}}^{{\small {n-1}}}}${\small {$=(M^{n-1},%
\widetilde{D_{c}^{n-1}})$\textit{\ be a class reachable in }}}$\widetilde{%
{\small {GRc}}}${\small {$,$\textit{\ from the initial class\ after firing
the sequence }$S=\left( t_{f}^{_{1}},..,t_{f}^{n-1}\right) $}.{\ \textit{Let
}$\widetilde{\mathit{DS}^{n-1}}$\textit{\ be the DBM overapproximated time
distance system associated with the class }}}$\widetilde{{\small {E}}_{%
{\small {c}}}^{{\small {n-1}}}}${\small {\textit{. Let us consider }}}$%
\widetilde{{\small {E}}_{{\small {c}}}^{{\small {n}}}}${\small {$=(M^{n},%
\widetilde{{D}_{{c}}^{{n}}})$\textit{\ the class reachable from }}}$%
\widetilde{{\small {E}}_{{\small {c}}}^{{\small {n-1}}}}${\small {\textit{\
after firing the transition }$t_{f}^{n}$\textit{. The DBM overapproximated
time distance system }}}$\widetilde{{\small {DS^{n}}}}${\small {\textit{\
associated with }}}$\widetilde{{\small {E}}_{{\small {c}}}^{{\small {n}}}}$%
{\small {\textit{\ can be computed recursively from previous systems in the
sequence }$\mathit{S}$\textit{,\ as follows:} }} {\scriptsize {\ }}
\begin{itemize}
\item {\scriptsize {\small Compute the functions $Ne^{n}$, $Ni^{n}$ and $Na^{n}$%
\ as in Definition 7. } }
\item {\scriptsize {\small The coefficients of the system $\widetilde{DS^{n}}
$\ are computed by using the following formulae: } }
{\scriptsize {\small $\forall i\in Point$}}$^{{\scriptsize {\small n}}}$%
{\scriptsize {\small \ } }
{\scriptsize {\small $\quad \widetilde{DS^{n}}[i,n]:=\lambda ^{n-1}[i];$ } }
{\scriptsize {\small $\quad \widetilde{DS^{n}}[n,i]:=\widetilde{DS^{n-1}}%
[t_{f}^{n},i];$ } }
{\scriptsize {\small $\quad \widetilde{DS^{n}}[n,n]:=0;$ } }
{\scriptsize {\small such that }$\forall i\in Point^{n}\cup \{n\},$ $\lambda
^{n}[i]=\underset{t\in Ta(M^{n})}{MIN}\left\{ \widetilde{DS^{n}}%
[i,t]\right\} ${\small \ } }
{\scriptsize {\small $\forall t\in Te(M^{n}),$\ $\forall i\in Point$}$^{%
{\small n}}$ }
{\scriptsize {\small \ \ If $Ne^{n}(t)=n$\ ($t$\ is newly enabled) } }
{\scriptsize {\small $\quad \quad \quad
\begin{tabular}{ll}
$\widetilde{DS^{n}}[i,t]:=\widetilde{DS^{n}}[i,n]+tmax(t);$ & $\quad
\widetilde{DS^{n}}[t,i]:=\widetilde{DS^{n}}[n,i]-tmin(t);$ \\
$\widetilde{DS^{n}}[n,t]:=tmax(t);$ & $\quad \widetilde{DS^{n}}%
[t,n]:=-tmin(t);$%
\end{tabular}%
$} {\small $\quad \quad $ } {\small $\quad \quad \quad $ }\newline
}
{\scriptsize {\small \ \ If $Ne^{n}(t)\neq n$\ ($t$\ is persistent) } }
{\scriptsize {\small \quad\ \ \ If $t\notin Ti(M^{n-1})$\ $\ (t$ is not
inhibited for $M$$^{n-1}$)}$,${\small \ } }
{\scriptsize \ {\small \quad\ \ \ Let }}${\scriptsize {\small r=Ne}}^{%
{\scriptsize {\small n}}}{\scriptsize {\small (t),s=Ni}}^{{\scriptsize
{\small n-1}}}{\scriptsize {\small (t)}}$ {\small and}{\scriptsize \ }$%
{\scriptsize {\small p=Na}}^{{\scriptsize {\small n}}}{\scriptsize {\small %
(t).}}${\scriptsize \ }
{\scriptsize {\small $\quad \quad \quad \widetilde{DS^{n}}[i,t]:=MIN\left(
\begin{tabular}{l}
$\left\{
\begin{tabular}{ll}
$\widetilde{DS^{s}}[i,t]+\lambda ^{n-1}[s]+\widetilde{DS^{n}}[n,p]$ & if $%
0\leq i\leq s\leq p$ \\
$\widetilde{DS^{i}}[i,t]+\widetilde{DS^{n}}[i,n]+\widetilde{DS^{n}}[n,p]$ &
if $0\leq s\leq i\leq p$%
\end{tabular}%
\right. $ \\
$\widetilde{DS^{n-1}}[i,t]$ \\
$\widetilde{DS^{n}}[i,n]+\widetilde{DS^{n-1}}[n-1,t]+\widetilde{DS^{n-1}}%
[t_{f}^{n},n-1]$%
\end{tabular}%
\right. $}\newline
}
{\scriptsize {\small $\quad \quad \quad \widetilde{DS^{n}}[t,i]:=MIN\left(
\begin{tabular}{l}
$\left\{
\begin{tabular}{ll}
$\widetilde{DS^{s}}[t,i]+\widetilde{DS^{n-1}}[t_{f}^{n},s]+\widetilde{DS^{n}}%
[p,n]$ & if $0\leq i\leq s\leq p$ \\
$\widetilde{DS^{i}}[t,i]+\widetilde{DS^{n}}[n,i]+\widetilde{DS^{n}}[p,n]$ &
if $0\leq s\leq i\leq p$%
\end{tabular}%
\right. $ \\
$\widetilde{DS^{n-1}}[t,i]$$\quad $ \\
$\widetilde{DS^{n}}[n,i]+MIN(0,$ $\widetilde{DS^{n-1}}[t,n-1]+\lambda
^{n-1}[n-1])$%
\end{tabular}%
\right. $} }
{\scriptsize {\small $\quad \quad \quad \widetilde{DS^{n}}[t,n]:=\quad
MIN\left\{
\begin{tabular}{l}
$\widetilde{\beta _{c}^{n-1}}[t]$ \\
$MIN(0,$ $\widetilde{DS^{n}}[t,r]+\widetilde{DS^{n}}[r,n])$%
\end{tabular}%
\right. ;$ } }
{\scriptsize {\small $\quad \quad \quad \widetilde{DS^{n}}[n,t]:=\quad
MIN\left\{
\begin{tabular}{l}
$\widetilde{D_{c}^{n-1}}[t_{f}^{n},t]$ \\
$\widetilde{DS^{n}}[r,t]+\widetilde{DS^{n}}[n,r]$%
\end{tabular}%
\right. ;\smallskip $ } }
{\scriptsize {\small $\quad \quad $If $t\in Ti(M^{n-1})$\ $\ (t$ is
inhibited for $M$$^{n-1}$), } }
{\scriptsize {\small $\quad \quad \quad \quad $Let $s=Ni^{n}(t)$ and $%
r=Ne^{n}(t).$} }
{\scriptsize {\small $\quad \quad $$\quad \widetilde{DS^{n}}[i,t]:=$}}
{\scriptsize {\small $\quad \quad \quad \quad MIN$$\left(
\begin{tabular}{l}
$\left\{
\begin{tabular}{ll}
$\widetilde{DS^{s}}[i,t]+\widetilde{DS^{n}}[s,n]$ & if $i\leq s$ \\
$\widetilde{DS^{i}}[i,t]+\widetilde{DS^{n}}[i,n]$ & if $s\leq i$%
\end{tabular}%
\right. \smallskip $ \\
$\widetilde{DS^{n-1}}[i,t]+\lambda ^{n-1}[n-1]$%
\end{tabular}%
\right. $ } }
{\scriptsize {\small $\quad $$\quad \quad \widetilde{DS^{n}}[t,i]:=$}}
{\scriptsize {\small $\quad \quad \quad \quad MIN$$\left(
\begin{tabular}{l}
$\left\{
\begin{tabular}{ll}
$\widetilde{DS^{s}}[t,i]+\widetilde{DS^{n}}[n,s]$ & if $i\leq s$ \\
$\widetilde{DS^{i}}[t,i]+\widetilde{DS^{n}}[n,i]$ & if $s\leq i$%
\end{tabular}%
\right. \smallskip $ \\
$\widetilde{DS^{n-1}}[t,i]+\widetilde{DS^{n-1}}[t_{f}^{n},n-1]$%
\end{tabular}%
\right. $ \smallskip } }
{\scriptsize {\small $\quad \quad $$\quad \widetilde{DS^{n}}[n,t]:=\quad
MIN\left(
\begin{tabular}{l}
$\widetilde{DS^{n}}[r,t]+$$\widetilde{DS^{n}}[n,r]$ \\
$\widetilde{DS^{n-1}}[n-1,t]$ \\
$\widetilde{D^{n-1}}[t_{f}^{n},t]+$$\lambda ^{n-1}[n-1];$%
\end{tabular}%
\right. $ } }
{\scriptsize {\small $\quad \quad \quad $$\widetilde{DS^{n}}[t,n]:=\quad
MIN\left(
\begin{tabular}{l}
$MIN(0,\widetilde{DS^{n}}[t,r]+\widetilde{DS^{n}}[r,n])$ \\
$\widetilde{DS^{n-1}}[t,n-1]$ \\
$\widetilde{DS^{n-1}}[t_{f}^{n},n-1]+\widetilde{\beta _{c}^{n-1}}[t]$%
\end{tabular}%
\right. \medskip $ } }
{\small such that} {\small $\forall t\in Te(M^{n-1}),\quad \widetilde{\beta
_{c}^{n-1}}[t]=\underset{\forall t^{\prime }\in Ta(M^{n-1})}{MIN}\left\{
\widetilde{D_{c}^{n-1}}[t,t^{\prime }]\right\} .$}
\end{itemize}
\end{proposition}
The previous proposition provides an efficient algorithm to compute an
overapproximation of the system $DS^{n}.$ To this end, the algorithm first
determines the set $Point^{n}${\small $.$} It then calculates the
coefficients $\widetilde{DS^{n}}[i,n]$ and $\widetilde{DS^{n}}[n,i]$ for
each point $i\in Point^{{\small n}}${\small $.$} Finally, for each enabled
transition $t$, the algorithm computes the other coefficients according to
the following cases:
\begin{figure}[b]
\centering%
\begin{tabular}{c}
\includegraphics[width=12 cm]{Tdis1.eps} \\
(a) \\
\includegraphics[width=12.5 cm]{Tdis2.eps} \\
(b)%
\end{tabular}%
\caption{Time distance Computation.}
\end{figure}
\begin{itemize}
\item When dealing with newly enabled transitions, the formulae are
immediate and are the same for inhibited and activated transitions (a sketch
is given after this list).
\item When handling persistent transitions, the algorithm first computes the
distances $\widetilde{DS^{n}}[i,t]$ and $\widetilde{DS^{n}}[t,i]$
for each point $i\in Point^{{\small n}}$. It is noteworthy that these
distances are likely to keep their values along a firing sequence as long as
$t$ is not inhibited in the sequence. However, if $t$ becomes inhibited,
then these distances increase by the time elapsed during its inhibition.
Therefore, if the transition is activated at the point $(n-1)$ and $t$ has
never been inhibited since its last enabling point ($s=-1$), then both
distances are likely to keep their old values, decreasing only in very rare
cases when the state space is restricted (see the last two items of the
\textit{MIN}). However, if the transition $t$ has been inhibited at least
once since its last enabling point ($s\geq 0$), then the duration of its
last inhibition time should be recalculated at each newly reachable point to
better approximate these distances. In actual fact, because of space
restriction, the inhibition times of $t$ may decrease even after $t$ has
been activated. As a result, the interval $[-\widetilde{%
DS^{n}}[t,i],\widetilde{DS^{n}}[i,t]]$ may only narrow along a firing
sequence as long as $t$ remains activated. For this purpose, we need to
restore some time information computed earlier in the sequence at points ($s$%
) and ($i$). For instance, if the point $(i)$ occurs first in the firing
sequence, then the distance $\widetilde{DS^{n}}[i,t]$ is likely to be equal
to the same distance computed at point $(s)$, $\widetilde{DS^{s}}[i,t]$,
augmented with the maximal inhibition time of $t$, namely\footnote{%
Note that we use $\lambda ^{n-1}[s]$ rather than $\widetilde{DS^{n}}[s,n]$
in the formula, because the point $(s)$ may not be defined in $Point^{n}$ if
$t$ is inhibited at point $(n)$.} $\lambda ^{n-1}[s]+\widetilde{DS^{n}}[n,p]$
(see \textit{Fig 4.a}). Otherwise, if the point $(i)$ occurs during the
inhibition time of $t$, then the distance $\widetilde{DS^{n}}[i,t]$ is
likely to be equal to the same distance computed at point $(i)$, $\widetilde{%
DS^{i}}[i,t]$, augmented with the maximal inhibition time of $t$ from point $%
(i)$ to $(p)$: $\widetilde{DS^{n}}[i,n]+\widetilde{DS^{n}}[n,p]$.

In case $t$ is inhibited at the point $(n-1)$, we follow the same approach
as previously to compute the same distances. However, in this case the
adjustment of the approximation is carried out during the inhibition time of
the transition $t$. At each new firing point, the residual time of an
inhibited transition should increase by the dwelling time measured at point
$(n-1)$. Furthermore, as shown in \textit{Fig 4.b}, if $(i)$ occurs before
$(s)$, then this distance should not exceed the residual time of the
transition reported at point $(s)$, augmented with the inhibition time
elapsed from $(s)$ to the current point $(n)$.
\item The algorithm ends the process by calculating the coefficients $%
\widetilde{DS^{n}}[n,t]$ and $\widetilde{DS^{n}}[t,n]$. These coefficients
denote the same distances as, respectively, $\widetilde{D_{c}^{n}}%
[\bullet ,t]$ and $\widetilde{D_{c}^{n}}[t,\bullet ]$, already defined in a
classical $DBM$ system. Therefore, their computation is also worked out by
using the formulae of Definition 5, already established in \cite{abdelli}.
Better still, new formulae are added to tighten their approximation further.
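As announced in the first item above, the newly-enabled case reduces to four
assignments involving only the static interval of $t$ and the distances to
point $(n)$. A minimal Python sketch (illustrative names, $DS$ stored as a
dictionary), checked against the $\widetilde{DS^{3}}$ tables worked out
later in the text, where $t_{5}$ is newly enabled at point $3$ with interval
$[0,0]$:
\begin{verbatim}
def lam(DS, i, activated):
    """lambda^n[i] = MIN over activated t of DS[(i, t)]."""
    return min(DS[(i, t)] for t in activated)

def add_newly_enabled(DS, n, points, t, tmin_t, tmax_t):
    """Newly enabled t at point n (first case of Proposition 1)."""
    for i in points:
        DS[(i, t)] = DS[(i, n)] + tmax_t
        DS[(t, i)] = DS[(n, i)] - tmin_t
    DS[(n, t)] = tmax_t
    DS[(t, n)] = -tmin_t

# Distances between points taken from the DS^3 tables:
DS = {(0, 3): 3, (2, 3): 3, (3, 0): -3, (3, 2): -1}
add_newly_enabled(DS, 3, [0, 2], "t5", 0, 0)
assert DS[(0, "t5")] == 3 and DS[("t5", 2)] == -1

# lambda^3[0] = 3, which becomes DS^5[0,5] after the next firing:
DS.update({(0, "t2"): 7, (0, "t7"): 12})
assert lam(DS, 0, ["t2", "t5", "t7"]) == 3
\end{verbatim}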
We propose thereafter to exploit the time distance system in the computation
of an overapproximation of the state class graph of an $ITPN$. Proposition 1
shows that by overapproximating the computation of the system $DS^{n},$ we
reduce its computation effort to polynomial time. From this system, we are
able to restore some time information that makes it possible to compute a
$DBM$ overapproximation that is tighter than those of other approaches
\cite{abdelli}\cite{Bucci}\cite{IHTPN}. Formally, the time distance based
approximation of the graph $GR$ is built as follows:
\begin{definition}
{\small {\ The time distance based approximation graph of an $ITPN$, denoted
by $\widetilde{GRc}$ is the tuple $(\widetilde{CEc},\widetilde{E_{c}^{0}}%
,\leadsto )$\ such that: }}
\begin{itemize}
\item {\small {\ $\widetilde{CEc}$\ is the set of approximated classes
reachable in $\widetilde{GRc\text{ }};$\ } }
\item {\small {$\widetilde{E_{c}^{0}}=(M^{0},Ne^{0},Ni^{0},Na^{0},\widetilde{%
DS^{0}},\widetilde{D_{c}^{0}})\in \widetilde{CEc}$ is the initial class such
that }$\widetilde{{DS}}${$^{0}=DS^{0}$ and }$\widetilde{{D_{c}^{0}}}${\ is
the system }$\forall t,t^{\prime }\in Te(M^{0}),\underline{t}^{\prime }-%
\underline{t}\leq tmax(t^{\prime })-tmin(t)$ }
\item {\small $\leadsto ${\ is a transition relation between approximated
classes defined on $\widetilde{CEc}\times T\times \widetilde{CEc},$\ such
that $((M^{n-1},Ne^{n-1},Ni^{n-1},Na^{n-1},\widetilde{DS^{n-1}},\widetilde{%
D_{c}^{n-1}}),t_{f}^{n},(M^{n},Ne^{n},Ni^{n},Na^{n},\widetilde{DS^{n}},%
\widetilde{D_{c}^{n}}))$ $\ \in \leadsto ,$ iff: } }
\begin{description}
\item[(i)] {\small $t_{f}^{n}$$\in Ta(M^{n-1}).$ }
\item[(ii)] {\small $\widetilde{\beta _{c}^{n-1}}[t_{f}^{n}]\geq 0$.}
\end{description}
{\small The new class is computed as follows: }
\begin{itemize}
\item {\small $\forall p\in P,$ $%
M^{n}(p):=M^{n-1}(p)-B(p,t_{f}^{n})+F(p,t_{f}^{n})\smallskip .$ }
\item {\small Compute the functions $Ne^{n}$, $Ni^{n}$\ and $Na^{n}$\ as in
Definition 7. }
\item {\small Compute the coefficients of the system $\widetilde{DS^{n}}$\
as in Proposition 1: }
\item {\small The DBM system $\widetilde{D_{c}^{n}}$\ is obtained as follows
: }
{\small $\forall (t,t^{\prime })\in (Te(M^{n}))^{2}\wedge (t\neq t^{\prime
})$ }
{\small \quad If $t$ or $t^{\prime }$ are newly enabled. }
{\small $\ \ \quad \widetilde{D_{c}^{n}}[t,t^{\prime }]:=\widetilde{DS^{n}}%
[n,t^{\prime }]+\widetilde{DS^{n}}[t,n]$.\medskip \medskip }
{\small \quad If $t$\ and $t^{\prime }$\ are persistent. }
{\small $\quad $\quad If ($t,t^{\prime })\notin (Ti(M^{n-1}))^{2}$\ \ ($t$\
and $t^{\prime }$\ are not inhibited for $M^{n-1}$) }
{\small $\quad \quad \quad \widetilde{D_{c}^{n}}[t,t^{\prime }]:=MIN(%
\widetilde{D_{c}^{n-1}}[t,t^{\prime }],\quad \alpha ^{n}[t,t^{\prime
}]).\medskip $\ }
{\small \quad $\ \ $If \ ($t$,$t^{\prime })\in (Ti(M^{n-1}))^{2}$\ \ ($t$\
and $t^{\prime }$\ are inhibited for $M^{n-1}$) }
{\small $\quad \quad \quad \widetilde{D_{c}^{n}}[t,t^{\prime }]:=MIN(%
\widetilde{D_{c}^{n-1}}[t,t^{\prime }],\quad \alpha ^{n}[t,t^{\prime
}]).\medskip $\ }
{\small $\quad $\quad If $\ (t\in Ti(M^{n-1}))\wedge (t^{\prime }\notin
Ti(M^{n-1}))$\ \ \ \ (Only $t$\ is inhibited for $M^{n-1}$). }
{\small $\quad \quad \quad \widetilde{D_{c}^{n}}[t,t^{\prime }]$\ $:=MIN(%
\widetilde{D_{c}^{n-1}}[t,t^{\prime }]+\widetilde{D_{c}^{n-1}}%
[t_{f}^{n},\bullet ],\quad \alpha ^{n}[t,t^{\prime }]).$\ }
{\small $\quad $\quad If $\ (t\notin Ti(M^{n-1}))\wedge (t^{\prime }\in
Ti(M^{n-1}))$\ \ \ \ (Only $t^{\prime }$\ is inhibited for $M^{n-1}$) }
{\small \ $\quad \quad \quad \widetilde{D_{c}^{n}}[t,t^{\prime }]$\ $:=$$MIN(%
\widetilde{D_{c}^{n-1}}[t,t^{\prime }]+\lambda ^{n-1}[n-1],\quad \alpha
^{n}[t,t^{\prime }]).$\medskip }
{\small such that $\alpha ^{n}[t,t^{\prime }]$\ =$\underset{i\in
Point^{n}\cup \{n\}}{MIN}\left( \widetilde{DS^{n}}[i,t^{\prime }]+\widetilde{%
DS^{n}}[t,i]\right) $ }
\end{itemize}
\end{itemize}
\end{definition}
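Compared to Definition 5, the genuinely new ingredient of Definition 8 is
the corrective term $\alpha ^{n}[t,t^{\prime }]$, which scans all reported
points. A minimal Python sketch with illustrative names, checked against the
pair $(t_{5},t_{3})$ of the class $\widetilde{E_{c}^{5}}$ computed later in
the text:
\begin{verbatim}
def alpha(DS, points_and_n, t, tp):
    """alpha^n[t,t'] = MIN over i in Point^n U {n}
    of DS[i, t'] + DS[t, i]."""
    return min(DS[(i, tp)] + DS[(t, i)] for i in points_and_n)

# Whatever the inhibition case, the new bound has the shape
#   D_c^n[t, t'] = MIN(inherited bound, alpha^n[t, t'])
# where the inherited bound is D_c^{n-1}[t, t'], possibly shifted
# depending on which of t, t' was inhibited for M^{n-1}.

# Entries of the DS^5 tables for (t5, t3) over the points {0, 2, 3, 5}:
DS5 = {(0, "t3"): 7, (2, "t3"): 7, (3, "t3"): 4, (5, "t3"): 4,
       ("t5", 0): -3, ("t5", 2): -2, ("t5", 3): 0, ("t5", 5): 0}
assert alpha(DS5, [0, 2, 3, 5], "t5", "t3") == 4  # = D_c^5[t5, t3]
\end{verbatim}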
The previous definition provides an algorithm to compute a \textit{DBM}
overapproximation of an \textit{ITPN}. To avoid redundancy, each computed
\textit{DBM} system of a reachable class, noted $\widetilde{D_{c}^{n}},$ is
reduced to the constraints $\underline{t}^{\prime }-\underline{t}\leq
\widetilde{D_{c}^{n}}[t,t^{\prime }].$ Note that the other constraints, of
type $-\widetilde{D_{c}^{n}}[t,\bullet ]\leq \underline{t}\leq \widetilde{%
D_{c}^{n}}[\bullet ,t]$, are already computed in the system $\widetilde{%
DS^{n}}$ as $-\widetilde{DS^{n}}[t,n]\leq \underline{t}\leq \widetilde{%
DS^{n}}[n,t].$ Compared to the construction of the graph $\widetilde{GR}$
given in Definition 5, the class is extended with the parameters $Ne^{n}$,
$Ni^{n}$, $Na^{n}$ and $\widetilde{DS^{n}}$. The \textit{DBM} system
$\widetilde{D_{c}^{n}}$ computed thereof is used in the firing and class
equivalence tests. It is noteworthy that the same firing condition is used
in both constructions. However, the coefficients of $\widetilde{D_{c}^{n}}$
are better approximated than those of the system $\widetilde{D^{n}}$. First
of all, as given in Proposition 1, the maximal and minimal residual times of
an enabled transition use formulae that are more precise than those provided
in Definition 5. As a result, the \textit{DBM} coefficients $\widetilde{%
D_{c}^{n}}[t,t^{\prime }]$ are more precise too. This makes it possible to
tighten the approximation still further and therefore to avoid the
generation of additional sequences that stand in the graph $\widetilde{GR}$.
The resulting graph $\widetilde{GRc}$\ is therefore more precise than $%
\widetilde{GR}$. However, the cost of computing the system $\widetilde{%
D_{c}^{n}}$ is slightly higher than that of $\widetilde{D^{n}}$, as it also
requires the computation of the system $\widetilde{DS^{n}}$. Nevertheless,
the total cost of computing a class in $\widetilde{GRc}$ remains polynomial
and equal to $O(m^{2}l+m^{2}+ml),$ where $m$ and $l$ denote respectively the
number of enabled transitions\ and the number of reported points. We now
need to prove formally that the construction of $\widetilde{GRc}$ computes
in all cases an overapproximation of the exact graph $GR$ that always
remains tighter than the graph $\widetilde{GR}$.
\begin{theorem}
Let $RT$ be an ITPN and $\widetilde{GR}=(\widetilde{CE},(M^{0},\widetilde{%
D^{0}}),\rightsquigarrow ),$ $\widetilde{GRc}=(\widetilde{CEc}%
,(M^{0},Ne^{0},Ni^{0},Na^{0},\widetilde{DS^{0}},\widetilde{D_{c}^{0}}%
),\leadsto )$\ and $GR=(CE,(M^{0},D^{0}),\longmapsto )$ the graphs built on $%
RT$: $\widetilde{GR}$ is an overapproximation of the graph $\widetilde{GRc}$
and the latter is an overapproximation of the exact graph $GR$.
\end{theorem}
\begin{pf}
{\small The proof is given in Appendix.}
\end{pf}
The previous theorem establishes that the algorithm of Definition 8
computes a more precise graph than those computed using other $DBM$
overapproximation approaches \cite{abdelli}\cite{IHTPN}\cite{Bucci}. As a
result, the size of the graph is reduced, since additional sequences may be
fired when using classical DBM approximations whereas they are fired neither
in $\widetilde{GRc}$ nor in $GR$. To advocate the benefits of the defined
construction, let us consider again the $ITPN$\ of \textit{Fig 1}. As shown
in \textit{Fig.5}, the obtained graph $\widetilde{GRc}$ is much more compact
than $\widetilde{GR}$: it contains $19$ classes and $25$ edges. Moreover,
some of the additional sequences reported in $\widetilde{GR}$ due to
overapproximation are removed in $\widetilde{GRc}.$
\begin{figure}[h]
\centering\includegraphics[width=9 cm]{TdisG.eps}
\caption{Time Distance based approximation graph.}
\end{figure}
For example, let us consider the firing sequence $\widetilde{E_{c}^{0}}%
\overset{t_{4}}{\leadsto }\widetilde{E_{c}^{2}}\overset{t_{1}}{\leadsto }$ $%
\widetilde{E_{c}^{3}}\overset{t_{2}}{\leadsto }$ $\widetilde{E_{c}^{5}}%
\overset{t_{5}}{\leadsto }\widetilde{E_{c}^{6}}$ already discussed on page
10. After firing the transition $t_{4}$ from the initial class, we reach the
class $\widetilde{E_{c}^{2}}$ where $t_{3}$ is inhibited for the first time.
The algorithm first computes the system $\widetilde{DS^{2}}$\
from $\widetilde{DS^{0}}$\ and $\widetilde{D_{c}^{0}},$ then it determines
the system $\widetilde{D_{c}^{2}}$.
{\scriptsize {$\widetilde{E_{c}^{2}}$\textit{=}$\left(
\begin{tabular}{lll}
\multicolumn{3}{l}{%
\begin{tabular}{ll}
$M^{2}:p_{1},p_{2},p_{3},p_{7}\rightarrow 1$ & $Ni^{2}$: $%
\{t_{1},t_{2},t_{7}\}\rightarrow -1;t_{3}\rightarrow 2.$ \\
$Ne^{2}$: $\{t_{1},t_{3}\}\rightarrow 0;\{t_{2},t_{7}\}\rightarrow 2.$ & $%
Na^{2}$: $\{t_{7},t_{2}\}\rightarrow 2;t_{3}\rightarrow 0;t_{1}\rightarrow
0. $%
\end{tabular}%
} \\
\begin{tabular}{|c|c|c|c|c|}
\hline
$\widetilde{D_{c}^{2}}$ & $t_{1}$ & $t_{2}$ & $t_{3}$ & $t_{7}$ \\ \hline
$t_{1}$ & 0 & 4 & 1 & 9 \\ \hline
$t_{2}$ & 1 & 0 & 2 & 8 \\ \hline
$t_{3}$ & 1 & 5 & 0 & 10 \\ \hline
$t_{7}$ & -7 & -5 & -6 & 0 \\ \hline
\end{tabular}
&
\begin{tabular}{l}
\begin{tabular}{|c|c|c|c|c|}
\hline
$\widetilde{DS^{2}}[i,t]$ & $t_{1}$ & $t_{2}$ & $t_{3}$ & $t_{7}$ \\ \hline
{\tiny 0} & {\tiny 3} & {\tiny 7} & {\tiny 4} & {\tiny 12} \\ \hline
n={\tiny 2} & {\tiny 3} & {\tiny 5} & {\tiny 4} & {\tiny 10} \\ \hline
\end{tabular}%
\medskip \\
\begin{tabular}{|c|c|c|c|c|}
\hline
$\widetilde{DS^{2}}[t,i]$ & $t_{1}$ & $t_{2}$ & $t_{3}$ & $t_{7}$ \\ \hline
{\tiny 0} & {\tiny -3} & {\tiny -2} & {\tiny -2} & {\tiny -10} \\ \hline
n={\tiny 2} & {\tiny -1} & {\tiny -2} & {\tiny 0} & {\tiny -10} \\ \hline
\end{tabular}%
\medskip\
\end{tabular}%
\medskip & \
\begin{tabular}{l}
\begin{tabular}{|c|c|}
\hline
$\widetilde{DS^{2}}[i,n]$ & {\tiny 0} \\ \hline
n=2 & {\tiny 2} \\ \hline
\end{tabular}%
\medskip\ \\
\begin{tabular}{|c|c|}
\hline
$\widetilde{DS^{2}}[n,i]$ & {\tiny 0} \\ \hline
n=2 & {\tiny 0} \\ \hline
\end{tabular}%
\medskip\
\end{tabular}%
\end{tabular}%
\right. $ } }
Then firing $t_{1}$ from $\widetilde{E_{c}^{2}}$ yields the class $%
\widetilde{E_{c}^{3}}$. At this stage, the resulting \textit{DBM }system%
\textit{\ }$\widetilde{D_{c}^{3}}$ is equal to that obtained in the graph $%
\widetilde{GR}$ after firing the same sequence.
{\scriptsize {$\widetilde{E_{c}^{3}}$\textit{=}$\left(
\begin{tabular}{lll}
\multicolumn{3}{l}{%
\begin{tabular}{ll}
$M^{3}:p_{2},p_{3},p_{5},p_{7}\rightarrow 1$ & $Ni^{3}$: $%
\{t_{2},t_{5},t_{7}\}\rightarrow -1;t_{3}\rightarrow 2.$ \\
$Ne^{3}$: $\{t_{2},t_{7}\}\rightarrow 2;t_{3}\rightarrow 0;t_{5}\rightarrow
3.$ & $Na^{3}$: $\{t_{7},t_{2}\}\rightarrow 2;t_{3}\rightarrow
0;t_{5}\rightarrow 3.$%
\end{tabular}%
} \\
\begin{tabular}{|c|c|c|c|c|}
\hline
$\widetilde{D_{c}^{3}}$ & $t_{2}$ & $t_{3}$ & $t_{5}$ & $t_{7}$ \\ \hline
$t_{2}$ & 0 & 4 & 0 & 8 \\ \hline
$t_{3}$ & 4 & 0 & 0 & 9 \\ \hline
$t_{5}$ & 4 & 4 & 0 & 9 \\ \hline
$t_{7}$ & -5 & -3 & -7 & 0 \\ \hline
\end{tabular}%
\medskip &
\begin{tabular}{l}
\begin{tabular}{|c|c|c|c|c|}
\hline
$\widetilde{DS^{3}}[i,t]$ & $t_{2}$ & $t_{3}$ & $t_{5}$ & $t_{7}$ \\ \hline
{\tiny 0} & {\tiny 7} & {\tiny 7} & {\tiny 3} & {\tiny 12} \\ \hline
{\tiny 2} & {\tiny 5} & {\tiny 7} & {\tiny 3} & {\tiny 10} \\ \hline
n={\tiny 3} & {\tiny 4} & {\tiny 4} & {\tiny 0} & {\tiny 9} \\ \hline
\end{tabular}%
\medskip \\
\begin{tabular}{|c|c|c|c|c|}
\hline
$\widetilde{DS^{3}}[t,i]$ & $t_{2}$ & $t_{3}$ & $t_{5}$ & $t_{7}$ \\ \hline
{\tiny 0} & {\tiny -3} & {\tiny -3} & {\tiny -3} & {\tiny -10} \\ \hline
{\tiny 2} & {\tiny -2} & {\tiny -1} & {\tiny -1} & {\tiny -10} \\ \hline
n={\tiny 3} & {\tiny 0} & {\tiny 0} & {\tiny 0} & {\tiny -7} \\ \hline
\end{tabular}%
\medskip%
\end{tabular}%
\ & \
\begin{tabular}{l}
\begin{tabular}{|c|c|c|}
\hline
$\widetilde{DS^{3}}[i,n]$ & {\tiny 0} & {\tiny 2} \\ \hline
n=3 & {\tiny 3} & {\tiny 3} \\ \hline
\end{tabular}%
\medskip\ \\
\begin{tabular}{|c|c|c|}
\hline
$\widetilde{DS^{3}}[n,i]$ & {\tiny 0} & {\tiny 2} \\ \hline
n=3 & {\tiny -3} & {\tiny -1} \\ \hline
\end{tabular}%
\medskip\
\end{tabular}%
\end{tabular}%
\right. $ }}
Firing the transition $t_{2}$ from the previous class leads to $\widetilde{%
E_{c}^{5}}$. Here, we notice that the minimal residual time of the
persistent inhibited transition $t_{3}$ relative to the point $(0)$ has
increased to 4, as we have $\widetilde{DS^{5}}[t_{3},0]=-4.$ {\scriptsize {$%
\widetilde{\mathit{E}_{\mathit{c}}^{5}}$\textit{=}$\left(
\begin{tabular}{lll}
\multicolumn{3}{l}{%
\begin{tabular}{ll}
$M^{5}:p_{3},p_{5},p_{7}\rightarrow 1$ & $Ni^{5}$: $\{t_{5},t_{7}\}%
\rightarrow -1;t_{3}\rightarrow 2.$ \\
$Ne^{5}$: $t_{7}\rightarrow 2;t_{3}\rightarrow 0;t_{5}\rightarrow 3.$ & $%
Na^{5}$: $t_{7}\rightarrow 2;t_{3}\rightarrow 0;t_{5}\rightarrow 3.$%
\end{tabular}%
} \\
\begin{tabular}{|c|c|c|c|}
\hline
$\widetilde{D_{c}^{5}}$ & $t_{3}$ & $t_{5}$ & $t_{7}$ \\ \hline
$t_{3}$ & 0 & -1 & 7 \\ \hline
$t_{5}$ & 4 & 0 & 8 \\ \hline
$t_{7}$ & -3 & -7 & 0 \\ \hline
\end{tabular}%
\medskip & \medskip
\begin{tabular}{l}
\begin{tabular}{|c|c|c|c|}
\hline
$\widetilde{DS^{5}}[i,t]$ & $t_{3}$ & $t_{5}$ & $t_{7}$ \\ \hline
{\tiny 0} & {\tiny 7} & {\tiny 3} & {\tiny 11} \\ \hline
{\tiny 2} & {\tiny 7} & {\tiny 3} & {\tiny 10} \\ \hline
3 & 4 & 0 & 8 \\ \hline
n={\tiny 5} & {\tiny 4} & {\tiny 0} & {\tiny 8} \\ \hline
\end{tabular}%
\smallskip \\
\begin{tabular}{|c|c|c|c|}
\hline
$\widetilde{DS^{5}}[t,i]$ & $t_{3}$ & $t_{5}$ & $t_{7}$ \\ \hline
{\tiny 0} & {\tiny -4} & {\tiny -3} & {\tiny -10} \\ \hline
{\tiny 2} & {\tiny -2} & {\tiny -2} & {\tiny -10} \\ \hline
3 & 0 & 0 & -7 \\ \hline
n={\tiny 5} & {\tiny -1} & {\tiny 0} & {\tiny -7} \\ \hline
\end{tabular}%
\medskip\
\end{tabular}
& \
\begin{tabular}{l}
\begin{tabular}{|c|c|c|c|}
\hline
$\widetilde{DS^{5}}[i,n]$ & {\tiny 0} & {\tiny 2} & 3 \\ \hline
n=5 & {\tiny 3} & {\tiny 3} & 0 \\ \hline
\end{tabular}%
\medskip\ \\
\begin{tabular}{|c|c|c|c|}
\hline
$\widetilde{DS^{5}}[n,i]$ & {\tiny 0} & {\tiny 2} & 3 \\ \hline
n=5 & {\tiny -3} & {\tiny -2} & 0 \\ \hline
\end{tabular}%
\medskip\
\end{tabular}%
\end{tabular}%
\right. $}}
The formula given in Proposition 1 suggests computing this distance from
the system $\widetilde{DS^{2}}$, since $2$ is the point that inhibited $%
t_{3} $ for the last time. As $t_{3}$ still remains inhibited in the
sequence, we have, according to Proposition 1, $\widetilde{DS^{5}}%
[t_{3},0]=MIN(\widetilde{DS^{2}}[t_{3},0]+\widetilde{DS^{5}}[5,2],$ $\ $ \ $%
\widetilde{DS^{3}}[t_{3},0]+\widetilde{DS^{3}}[t_{2},3])$; we obtain $%
\widetilde{DS^{5}}[t_{3},0]=MIN(-2-2,$ \ $-3+0)=-4$. Hence we compute the
minimal residual time of $t_{3}$ relative to point $n=5$ and obtain $%
\widetilde{DS^{5}}[t_{3},5]=-1.$
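The same computation can be replayed in a few lines of Python from the table
entries above; the last line applies the $[t,n]$ formula of Proposition 1
through the enabling point $r=0$ (illustrative sketch).
\begin{verbatim}
# DS^2[t3,0] = -2, DS^5[5,2] = -2, DS^3[t3,0] = -3, DS^3[t2,3] = 0:
ds5_t3_0 = min((-2) + (-2), (-3) + 0)
assert ds5_t3_0 == -4
# Residual time of t3 relative to n=5, via DS^5[0,5] = 3:
assert min(0, ds5_t3_0 + 3) == -1     # = DS^5[t3, 5]
\end{verbatim}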
Compared to the construction of the graph $\widetilde{GR}$, this class
is better approximated in $\widetilde{GRc}.$ This prevents the appearance of
the false behaviors that arise in $\widetilde{GR}.$ To highlight this
fact, let us consider the firing of the transition $t_{5}$ from $\widetilde{%
\mathit{E}_{\mathit{c}}^{5}}$, which produces the class $\widetilde{\mathit{E}%
_{\mathit{c}}^{6}}.${\small \ }
{\scriptsize {$\widetilde{\mathit{E}_{\mathit{c}}^{6}}$\textit{=}$\left(
\begin{tabular}{lll}
\multicolumn{3}{l}{%
\begin{tabular}{ll}
$M^{6}:p_{3},p_{6}\rightarrow 1$ & $Ni^{6}$: $t_{6}\rightarrow
-1;t_{3}\rightarrow 2.$ \\
$Ne^{6}$: $t_{3}\rightarrow 0;t_{6}\rightarrow 6.$ & $Na^{6}$:$%
\{t_{6},t_{3}\}\rightarrow 6.$%
\end{tabular}%
} \\
\begin{tabular}{|c|c|c|}
\hline
$\widetilde{D_{c}^{6}}$ & $t_{3}$ & $t_{6}$ \\ \hline
$t_{3}$ & 0 & -1 \\ \hline
$t_{6}$ & 4 & 0 \\ \hline
\end{tabular}%
\medskip & \medskip
\begin{tabular}{ll}
\begin{tabular}{|c|c|c|}
\hline
$\widetilde{DS^{6}}[i,t]$ & $t_{3}$ & $t_{6}$ \\ \hline
{\tiny 0} & {\tiny 7} & {\tiny 3} \\ \hline
2 & 7 & 3 \\ \hline
n={\tiny 6} & {\tiny 4} & {\tiny 0} \\ \hline
\end{tabular}%
\smallskip &
\begin{tabular}{|c|c|c|}
\hline
$\widetilde{DS^{6}}[t,i]$ & $t_{3}$ & $t_{6}$ \\ \hline
{\tiny 0} & {\tiny -4} & {\tiny -3} \\ \hline
2 & -2 & -2 \\ \hline
n={\tiny 6} & {\tiny -1} & {\tiny 0} \\ \hline
\end{tabular}%
\medskip%
\end{tabular}
& \
\begin{tabular}{l}
\begin{tabular}{|c|c|c|}
\hline
$\widetilde{DS^{6}}[i,n]$ & {\tiny 0} & 2 \\ \hline
n=6 & {\tiny 3} & 3 \\ \hline
\end{tabular}%
\medskip\ \\
\begin{tabular}{|c|c|c|}
\hline
$\widetilde{DS^{6}}[n,i]$ & {\tiny 0} & 2 \\ \hline
n=6 & {\tiny -3} & -2 \\ \hline
\end{tabular}%
\medskip\
\end{tabular}%
\end{tabular}%
\right. $} }
As we can see, $\widetilde{\mathit{E}_{\mathit{c}}^{6}}$ exactly
matches the exact class obtained after firing the same
sequence in the graph $GR$. Only the transition $t_{6}$ is firable from $%
\widetilde{\mathit{E}_{\mathit{c}}^{6}}$, whereas both $t_{6}$ and $t_{3}$
are firable from the corresponding class in the graph $\widetilde{GR}$. The
same observation is made when considering the alternative firing sequence $%
\widetilde{E_{c}^{0}}\overset{t_{4}}{\leadsto }\widetilde{E_{c}^{2}}\overset{%
t_{1}}{\leadsto }$ $\widetilde{E_{c}^{3}}\overset{t_{5}}{\leadsto }$ $%
\widetilde{E_{c}^{9}}\overset{t_{2}}{\leadsto }\widetilde{E_{c}^{6}}$. In
this sequence, the transition $t_{3}$ is no longer inhibited when reaching
the class $\widetilde{E_{c}^{9}}$ and we have:
{\scriptsize {$\widetilde{\mathit{E}_{\mathit{c}}^{9}}$\textit{=}$\left(
\begin{tabular}{lll}
\multicolumn{3}{l}{%
\begin{tabular}{ll}
$M^{9}:p_{3},p_{2},p_{6}\rightarrow 1$ & $Ni^{9}$: $t_{6},t_{2}\rightarrow
-1;t_{3}\rightarrow 2.$ \\
$Ne^{9}$: $t_{3}\rightarrow 0;t_{2}\rightarrow 2;t_{6}\rightarrow 6.$ & $%
Na^{9}$:$\{t_{6},t_{3}\}\rightarrow 9;t_{2}\rightarrow 2.$%
\end{tabular}%
} \\
\begin{tabular}{|c|c|c|c|}
\hline
$\widetilde{D_{c}^{9}}$ & $t_{2}$ & $t_{3}$ & $t_{6}$ \\ \hline
$t_{2}$ & {\tiny 0} & {\tiny 4} & {\tiny 0} \\ \hline
$t_{3}$ & {\tiny 4} & {\tiny 0} & {\tiny 0} \\ \hline
$t_{6}$ & {\tiny 4} & {\tiny 4} & {\tiny 0} \\ \hline
\end{tabular}%
\medskip & \medskip
\begin{tabular}{ll}
\begin{tabular}{|c|c|c|c|}
\hline
$\widetilde{DS^{9}}[i,t]$ & $t_{2}$ & $t_{3}$ & $t_{6}$ \\ \hline
{\tiny 0} & {\tiny 7} & {\tiny 7} & {\tiny 3} \\ \hline
{\tiny 2} & {\tiny 5} & {\tiny 7} & {\tiny 3} \\ \hline
n={\tiny 9} & {\tiny 4} & {\tiny 4} & {\tiny 0} \\ \hline
\end{tabular}%
\smallskip &
\begin{tabular}{|c|c|c|c|}
\hline
$\widetilde{DS^{9}}[t,i]$ & $t_{2}$ & $t_{3}$ & $t_{6}$ \\ \hline
{\tiny 0} & {\tiny -3} & {\tiny -3} & {\tiny -3} \\ \hline
2 & {\tiny -2} & {\tiny -1} & {\tiny -1} \\ \hline
n={\tiny 9} & {\tiny 0} & {\tiny 0} & {\tiny 0} \\ \hline
\end{tabular}%
\medskip%
\end{tabular}
&
\begin{tabular}{l}
\begin{tabular}{|c|c|c|}
\hline
$\widetilde{DS^{9}}[i,n]$ & {\tiny 0} & {\tiny 2} \\ \hline
n=9 & {\tiny 3} & {\tiny 3} \\ \hline
\end{tabular}%
\medskip\ \\
\begin{tabular}{|c|c|c|}
\hline
$\widetilde{DS^{9}}[n,i]$ & {\tiny 0} & {\tiny 2} \\ \hline
n=9 & {\tiny -3} & {\tiny -1} \\ \hline
\end{tabular}%
\medskip\
\end{tabular}%
\end{tabular}%
\right. $} }
At this stage, the minimal residual time of the activated transition $t_{3}$
is equal to 0, but it increases to 1 after firing $t_{2}$ to reach the class
$\widetilde{E_{c}^{6}}.$ Indeed, the firing of $t_{2}$ restricts the space
to the states that initially fired $t_{4}$ within [0,1]. The formula
provided in Proposition 1 makes it possible to restore the appropriate time
information needed to exactly approximate the reachable class. Therefore, as the inhibition of $%
t_{3}$ starts and stops earlier in the sequence, the formula suggests
recomputing the inhibition time of $t_{3}$ to better approximate the
calculation of its residual times. Hence, we find that the distance $%
\widetilde{DS^{6}}[t_{3},0]$ has decreased from -3 to -4, and therefore we
obtain $\widetilde{DS^{6}}[t_{3},6]=-1.$
If we now consider the sequence $\widetilde{E_{c}^{0}}\overset{t_{4}}{%
\leadsto }\widetilde{E_{c}^{2}}\overset{t_{1}}{\leadsto }$ $\widetilde{%
E_{c}^{3}}\overset{t_{5}}{\leadsto }$ $\widetilde{E_{c}^{9}}\overset{t_{3}}{%
\leadsto }\widetilde{E_{c}^{10}}$, we notice that $t_{2}$ is fired from $%
\widetilde{E_{c}^{10}}$ whereas it is not firable in the exact graph $GR$ from the
class $E^{4}$. In actual fact, as $t_{3}$ is fired before $t_{2}$, some of
the points connected to $t_{3}$ are no longer stored in the class $%
\widetilde{E_{c}^{10}}.$ Therefore, the time information computed in the
class $\widetilde{E_{c}^{9}}$ that could better approximate the class $%
\widetilde{E_{c}^{10}}$ is removed after firing $t_{3}$. As a result, the
sequence $\widetilde{E_{c}^{10}}\overset{t_{2}}{\leadsto }$ $\widetilde{%
E_{c}^{14}}\overset{t_{6}}{\leadsto }\widetilde{E_{c}^{8}}$ is mistakenly added
to both graphs $\widetilde{GRc}$ and $\widetilde{GR}.$
In other respects, we notice that the amount of data needed to represent
each class of the graph $\widetilde{GRc}$ is two or three times higher than
in the graph $\widetilde{GR}$. However, this additional data may be very
useful for the time analysis of the model, as it makes it possible, for
instance, to determine efficiently the quantitative properties of the model.
Therefore, no further costly computation is needed for such a process,
unlike other graph constructions, which require performing further
calculations to determine such properties. For example, in \cite{Bert-grid}%
\cite{IHTPN} the authors have proposed to extend the original model with an
observer containing additional places and transitions modeling the
quantitative property to determine. This method is quite costly, as it
requires computing the reachability graph of the extended model for each
value of the quantitative property to check. Therefore, many graph
generations may be necessary to determine the exact value of the
quantitative property. In \cite{Bucci}, the authors proposed an interesting
method for quantitative timed analysis. They first compute the $DBM$
overapproximation of the graph. Then, given an untimed transition sequence
from the resulting graph, they can obtain the feasible timings of the
sequence as the solution of a linear programming problem. In particular, if
there is no solution, the sequence has been introduced by the
overapproximation and can be cleaned up; otherwise, the solution set allows
one to determine the quantitative properties of the considered sequence.
However, this method incurs, for each sequence to handle, an exponential
time complexity, as a result of solving a general linear programming problem.
As regards our graph construction, the quantitative properties can be
extracted from the graph $\widetilde{GRc}$ in almost all cases without
further computations \cite{Abdelli un}. In the other cases, we only need to
perform minor calculations. Let us therefore consider a firing sequence $%
S=(t^{i+1},..,t^{n})$; $S$ describes a path in the graph going from the node
representing the class $\widetilde{E^{i}}$ to the node representing the
class $\widetilde{E^{n}}$. The system $\widetilde{DS^{n}}$ provides the
minimal and the maximal time distances from the transitions' enabling,
activating and inhibiting points to the point $(n)$. Hence, to measure the
minimal or the maximal times of the sequence $(t_{f}^{i+1},..,t_{f}^{n})$,
we need to check whether the elements $\widetilde{DS^{n}}[i,n]$ and $%
\widetilde{DS^{n}}[n,i]$ are already computed in the system $\widetilde{%
DS^{n}}$, namely that $i\in Point^{n}$. Otherwise, if $(i)$ does not belong
to the latter, then we need to perform further computations on the final
graph by using Proposition 1. In concrete terms, the idea is to extend the
set of points with the missing point $(i)$, and this for every node $(j)$ of
the path going from the node $(i+1)$ to the node $(n-1)$. Then, we compute
in the systems $\widetilde{DS^{j}}$ only the time distances involving the
missing point $(i)$, since the other distances are already computed when
generating the graph. The process carries on until reaching the node $(n)$;
there the computation of $DS^{n}[n,i]$ and $DS^{n}[i,n]$ is completed, as
sketched below.
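The following sketch renders this restoration procedure in executable form.
It is a minimal illustration of our own, not the actual implementation of
the $ITPN$\textit{\ Analyzer} tool: the dictionary representation of the
systems $\widetilde{DS^{j}}$, the function names, and the plain min-plus
chaining rule are simplifying assumptions (the full Proposition 1 also
takes inhibition points into account), and the per-step entries
$DS[j][(prev,j)]$ are assumed available from the graph generation phase:
\begin{verbatim}
INF = float("inf")

def restore_missing_point(path, DS, i, n):
    """Extend each system DS^j along path = [i, i+1, ..., n] with point (i).

    DS[j] is a dict of already computed entries: DS[j][(a, j)] stores the
    maximal and DS[j][(j, a)] the (negated) minimal elapsed time between
    the points (a) and (j).  Only entries involving (i) are added, by
    chaining through the previous node in shortest-path (min-plus) style.
    """
    DS[i][(i, i)] = 0
    for prev, j in zip(path, path[1:]):
        # upper bound i -> j, chained through the point (prev)
        DS[j][(i, j)] = min(DS[j].get((i, j), INF),
                            DS[prev][(i, prev)] + DS[j][(prev, j)])
        # negated lower bound j -> i, chained symmetrically
        DS[j][(j, i)] = min(DS[j].get((j, i), INF),
                            DS[j][(j, prev)] + DS[prev][(prev, i)])
    return DS[n][(i, n)], DS[n][(n, i)]
\end{verbatim}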
To determine the $BCRT$ and the $WCRT$ (best and worst case response
times) of a task, we should repeat this process for all the related
sequences in the graph. To this end, the graph is first computed in order to
browse for the sequences to handle. Then each sequence is analyzed to
extract its quantitative properties.
In order to demonstrate the efficiency of our graph construction, we give in
the next section some experimental results that compare the performance of
our algorithms with that of other approaches.
\subsection{Experimental results}
We have implemented our algorithms using the $C++$ Builder language on a
Windows XP workstation. The experiments have been performed on a Pentium $V$
with a processor speed of $2.27$ $GHz$ and $2$ $GB$ of RAM. The
different tests have been carried out using four tools: the $ORIS$ tool
\cite{ORIS TOOL}, the $\mathit{TINA}$\textit{\ }tool \cite{tina}, the $\mathit{ROMEO}
$ tool \cite{Romeo}, and our own tool named $ITPN$\textit{\ Analyzer \cite{ITPN}}%
. Then, we compared the obtained graphs while considering three parameters:
the number of classes, the number of edges and the computation times.
Thereafter, we denote by $\mathit{NF}$ (\textit{Not Finished}) the tests
that have spent more than 5 minutes of computation time or that have led to
memory overflows. Moreover, we denote by $\mathit{NA}$ (\textit{Not Available%
}) the cases where no parameter measurement is provided by the tool.
\begin{figure}[tbph]
\centering\includegraphics[width=8 cm]{aneexeb-fig1.eps} \centering
\caption{$TPN$ used in the experiments.}
\end{figure}
\begin{table}[th]
\caption{Results of experiments performed on $TPN$\textit{.}}\centering%
{\tiny {\ {\scriptsize {\
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Examples & Tools & TINA & ROMEO & ORIS & \multicolumn{2}{|c|}{ITPNAnalyzer}
\\ \hline
& & DBM & DBM & DBM & DBM & Tdis \\ \hline
& Classes & 2 & 2 & 2 & 2 & 2 \\ \cline{2-7}\cline{5-7}
Proc 1 & Edges & 2 & 2 & $NA$ & 2 & 2 \\ \cline{2-7}\cline{5-7}
& Times (ms) & 0 & $NA$ & 0 & 0 & 0 \\ \hline
& Classes & 186 & 186 & 188 & 186 & 186 \\ \cline{2-7}\cline{5-7}
Proc 1 2 & Edges & 262 & 262 & $NA$ & 262 & 262 \\ \cline{2-7}\cline{5-7}
& Times (ms) & 0 & $NA$ & 16 & 0 & 0 \\ \hline
& Classes & 958 & 958 & 1.038 & 958 & 958 \\ \cline{2-7}\cline{5-7}
Proc 1 2 3 & Edges & 1506 & 1506 & $NA$ & 1506 & 1506 \\
\cline{2-7}\cline{5-7}
& Times (ms) & 16 & $NA$ & 391 & 5 & 30 \\ \hline
& Classes & 5.219 & 5.219 & 6.029 & 5.219 & 5.219 \\ \cline{2-7}\cline{5-7}
Proc 1 2 3 4 & Edges & 8.580 & 8.580 & $NA$ & 8.580 & 8.580 \\
\cline{2-7}\cline{5-7}
& Times (ms) & 140 & $NA$ & 2.719 & 38 & 80 \\ \hline
& Classes & 42.909 & 42.909 & 52.452 & 42.909 & 42.909 \\
\cline{2-7}\cline{5-7}
Proc 1 2 3 4 5 & Edges & 73.842 & 73.842 & $NA$ & 73.842 & 73.842 \\
\cline{2-7}\cline{5-7}
& Times (ms) & 6.739 & $NA$ & 30.000 & 670 & 1.310 \\ \hline
\end{tabular}%
} }}}
\end{table}
The first tests that have been carried out were intended to verify whether our $%
\mathit{TPN}$\ graph construction produces the same graphs as when using
other tools. To this end, we have considered the combination of the $%
\mathit{TPNs}$\ shown in $\mathit{Fig.6}$. First, we started by testing the
net \textit{Proc1,} then we composed \textit{Proc1} with \textit{Proc2},
and so on. The results of these experiments are reported in $\mathit{Table}$
$\mathit{3}$\textit{.} The latter shows that all graphs are identical except
for ORIS, which extends the expression of a class with the parameter $NEW$%
\footnote{$NEW(t)$ is a boolean that denotes whether the transition $t$ is
newly enabled or not. Therefore, although they are bisimilar, two classes
that have the same marking $M$ and the same firing space $D$ are considered
non-equivalent if the parameter $New$ is not identical in both classes.}.
In other respects, as expected, the \textit{DBM} state class graphs are
computed faster than the time distance based graphs.
The second series of tests that have been performed aimed at comparing the
construction of the graph $\widetilde{GRc}$ with other graph construction
approaches. To this end, we considered the $ITPN$ model shown in \textit{%
Fig.7} while varying the intervals of the transitions \textit{t}$_{2}$ and
\textit{t}$_{3}$. The results of these tests are reported in $\mathit{Table}$
$\mathit{4}$. This $ITPN,$\ presented previously in \cite{Bucci}, describes
three independent tasks that are in conflict for a common resource (e.g.,
\textit{a processor}), and given respectively by the following pairs of
transitions: Task$_{1}$=$(t_{1},t_{4})$, Task$_{2}$=$(t_{2},t_{5})$ and Task$%
_{3}$=$(t_{3},t_{6})$. Task 1 has a higher priority than the two other
tasks, and Task 2 has priority over Task 3. The priorities are
modeled by using inhibitor arcs that connect the place $p_{1}$ to the
transitions $t_{5}$ and $t_{6},$ and\ the place $p_{2}$ to the transition $%
t_{6}.$
\begin{figure}[h]
\centering\includegraphics[width=6 cm]{FSEM-ex.eps}
\caption{An \textit{ITPN}\ example modeling three conflicting tasks.}
\end{figure}
For this purpose, different approaches have been tested: the exact graph
construction defined in \cite{LIme2003} and its \textit{DBM}
overapproximation defined in \cite{IHTPN}, which are both implemented in $%
\mathit{ROMEO}$; the \textit{DBM} overapproximation defined in \cite{abdelli}
and the time distance based approximation defined in this paper, which are
both implemented in $\mathit{ITPN\ Analyzer}$; the \textit{DBM}
overapproximation defined in \cite{Bucci} and implemented in $\mathit{ORIS}$%
; and finally, the K-grid based approximation defined in \cite{Bert-grid}
and implemented in $TINA$. Notice that for the latter construction, we have
considered the finest grid to approximate the polyhedra. In this case, this
approach succeeds in computing the exact graph in almost all cases, but
nevertheless at the highest cost.
\begin{table}[tbph]
\caption{Results of experiments performed on $\mathit{ITPN}$\textit{.}}%
\centering{\tiny {\
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& TOOLS & TINA & \multicolumn{2}{|c|}{ROMEO} & \multicolumn{2}{|c|}{ITPN
Analyzer} & ORIS \\ \cline{2-8}\cline{3-8}
Examples & Methods & K-grid & Exact & DBM & $\widetilde{GR}(DBM)$ & $%
\widetilde{GRc}(TDis)$ & DBM \\ \hline\hline
t$_{2}$\ [100,150] & Classes & 4.489 & 4.489 & 5.431 & 5.378 & 4.483 & 5.538
\\ \cline{2-8}\cline{6-8}
t$_{3}$\ [160,160] & Edges & 6.360 & 6.360 & 7.608 & 7.530 & 6.345 & NA \\
\cline{2-8}\cline{6-8}
& Times(ms) & 1632 & NA & NA & 60 & 80 & 1578 \\ \hline\hline
t$_{2}$\ [80,120] & Classes & 27.901 & NF & \ 47.777 & 39.648 & 27.889 &
40.414 \\ \cline{2-8}\cline{2-8}
t$_{3}$\ [145,145] & Edges & 40.073 & NF & 67.754 & 56.238 & 40.163 & NA \\
\cline{2-8}\cline{2-8}
& Times(ms) & 5.086 & NA & NA & 300 & 530 & 5.360 \\ \hline\hline
t$_{2}$\ [80,120] & Classes & 29.976 & NF & \ 47.888 & 42.247 & 29.964 &
42.733 \\ \cline{2-8}
t$_{3}$\ [165,165] & Edges & 42.844 & NF & 67.546 & 59.635 & 42.832 & NA \\
\cline{2-8}
& Times(ms) & 5.522 & NA & NA & 220 & 580 & 5.188 \\ \hline\hline
t$_{2}$\ [100,150] & Classes & 16.913 & 16.913 & 21.033 & 20.802 & 16.901 &
21.116 \\ \cline{2-8}
t$_{3}$\ [145,145] & Edges & 23.583 & 23.583 & 28.989 & 28.635 & 23.571 & NA
\\ \cline{2-8}
& Times(ms) & 2.870 & NA & NA & 170 & 230 & 5157 \\ \hline\hline
t$_{2}$\ [100,150] & Classes & 320 & 320 & 403 & 394 & 319 & 429 \\
\cline{2-8}\cline{6-8}
t$_{3}$\ [150,150] & Edges & 460 & 460 & 575 & 562 & 459 & NA \\
\cline{2-8}\cline{6-8}
& Times(ms) & 110 & NA & NA & 4 & 10 & 156 \\ \hline\hline
t$_{2}$\ [100,150] & Classes & 4.142 & 4.142 & 5.034 & 4.982 & 4.136 & 5140
\\ \cline{2-8}\cline{6-8}
t$_{3}$\ [140,140] & Edges & 5.889 & 5.889 & 7.095 & 7.014 & 5.883 & NA \\
\cline{2-8}\cline{6-8}
& Times(ms) & 1502 & NA & NA & 20 & 80 & 1.765 \\ \hline\hline
t$_{3}$\ [80,120] & Classes & 28.392 & NF & 47.622 & 40.842 & 28.920 & 41.368
\\ \cline{2-8}\cline{6-8}
t$_{5}$\ [155,155] & Edges & 41.452 & NF & 67.309 & 57.766 & 41.440 & NA \\
\cline{2-8}\cline{6-8}
& Times(ms) & 15703 & NA & NA & 280 & 640 & 9.062 \\ \hline\hline
t$_{2}$\ [80,120] & Classes & 7.018 & 7.018 & 12.379 & 10.004 & 7.012 &
10.400 \\ \cline{2-8}
t$_{3}$\ [140,140] & Edges & 10.242 & 10.242 & 17.829 & 14.406 & 10.236 & NA
\\ \cline{2-8}
& Times(ms) & 2834 & NA & NA & 50 & 140 & 3.516 \\ \hline\hline
t$_{2}$\ [100,150] & Classes & 11.351 & 11.351 & 16.354 & 15.178 & 11.339 &
15.318 \\ \cline{2-8}
t$_{3}$\ [135,135] & Edges & 15.649 & 15.649 & 22.230 & 20.486 & 15.639 & NA
\\ \cline{2-8}
& Times(ms) & 4907 & NA & NA & 90 & 230 & 4765 \\ \hline\hline
t$_{2}$\ [100,150] & Classes & 17.612 & 17.612 & 21.857 & 21.626 & 17.600 &
21.942 \\ \cline{2-8}
t$_{3}$\ [155,155] & Edges & 24.522 & 24.522 & 30.065 & 29.711 & 24.520 & NA
\\ \cline{2-8}
& Times(ms) & 7951 & NA & NA & 130 & 230 & 5594 \\ \hline
\end{tabular}%
} }
\end{table}
As we notice, the graphs computed by the considered \textit{DBM}
overapproximations are not identical. As concerns $ORIS,$ the reason is
given above. However, for \textit{ROMEO}, we have shown in \cite{abdelli} that
the $DBM$ approximation defined in \cite{IHTPN} is not faithfully implemented. In
actual fact, in \textit{ROMEO} the normalization of the \textit{DBM} system
is performed after removing the polyhedral inequalities, whereas it must be
done before, thus yielding a loss of precision in the resulting graphs. It is
noteworthy that among these \textit{DBM} overapproximations, the approach
defined in \cite{abdelli} and implemented in \cite{ITPN} is the one that
computes the tightest graphs with the fastest times. Concerning the time
distance based approximation defined in this paper, the results show that
the obtained graphs are of almost the same size as the exact ones, or even
smaller. Moreover, the times needed for their computation are 10 to 20
times faster than those of \textit{TINA}, although slightly higher compared to
\textit{ROMEO}.
It should be noticed that although $\widetilde{GRc}$ is almost equal to $%
GR$, it still remains an overapproximation of it. In actual fact, many
classes that remain unequal in $GR$ even though they are bisimilar become
equivalent when they are approximated\footnote{%
The polyhedral inequalities that prevent class equality in $GR$, and hence
their equivalence, are removed in $\widetilde{GRc}$.} in $\widetilde{GRc},$
thus compacting its size compared to the graph $GR$.
\begin{table}[tbph]
\caption{WCRT and BCRT estimation of Task 3.}\centering{\tiny {\
\begin{tabular}{|c||c|c|c|c|}
\hline
{\scriptsize Examples} & Methods & $\widetilde{GR}(DBM)$ & $\widetilde{GRc}%
(Tdis)$ & {\scriptsize Exact} \\ \hline\hline
{\scriptsize t}$_{2}${\scriptsize \ [100,150]} & {\scriptsize WCRT} &
{\scriptsize 126} & {\scriptsize 88} & {\scriptsize 88} \\
\cline{2-5}\cline{2-5}
{\scriptsize t}$_{3}${\scriptsize \ [160,160]} & {\scriptsize BCRT} &
{\scriptsize 20} & {\scriptsize 20} & {\scriptsize 20} \\ \hline\hline
{\scriptsize t}$_{2}${\scriptsize \ [100,150]} & {\scriptsize WCRT} &
{\scriptsize 126} & {\scriptsize 88} & {\scriptsize 88} \\
\cline{2-5}\cline{2-5}
{\scriptsize t}$_{3}${\scriptsize \ [150,150]} & {\scriptsize BCRT} &
{\scriptsize 30} & {\scriptsize 30} & {\scriptsize 30} \\ \hline\hline
{\scriptsize t}$_{2}${\scriptsize \ [80,120]} & {\scriptsize WCRT} &
{\scriptsize 198} & {\scriptsize 128} & {\scriptsize 128} \\ \cline{2-5}
{\scriptsize t}$_{3}${\scriptsize \ [130,130]} & {\scriptsize BCRT} &
{\scriptsize 20} & {\scriptsize 20} & {\scriptsize 20} \\ \hline\hline
{\scriptsize t}$_{2}${\scriptsize \ [100,150]} & {\scriptsize WCRT} &
{\scriptsize 126} & {\scriptsize 88} & {\scriptsize 88} \\ \cline{2-5}
{\scriptsize t}$_{3}${\scriptsize \ [135,135]} & {\scriptsize BCRT} &
{\scriptsize 20} & {\scriptsize 20} & {\scriptsize 20} \\ \hline\hline
{\scriptsize t}$_{2}${\scriptsize \ [100,150]} & {\scriptsize WCRT} &
{\scriptsize 126} & {\scriptsize 88} & {\scriptsize 88} \\ \cline{2-5}
{\scriptsize t}$_{3}${\scriptsize \ [155,155]} & {\scriptsize BCRT} &
{\scriptsize 20} & {\scriptsize 20} & {\scriptsize 20} \\ \hline\hline
{\scriptsize t}$_{2}${\scriptsize \ [80,120]} & {\scriptsize WCRT} &
{\scriptsize 208} & {\scriptsize 128} & {\scriptsize 128} \\ \cline{2-5}
{\scriptsize t}$_{3}${\scriptsize \ [140,140]} & {\scriptsize BCRT} &
{\scriptsize 20} & {\scriptsize 20} & {\scriptsize 20} \\ \hline
\end{tabular}
}}
\end{table}
The final tests, the results of which are given in $\mathit{Table}$ $\mathit{5},$
report the obtained \textit{BCRT} and \textit{WCRT} of Task 3, while
assuming different graph constructions. As we can see, the computed values
of the \textit{BCRT} are the same for all the nets, whatever graph
construction we consider. On the other hand, the \textit{WCRT} is
estimated differently depending on the approach we use. When considering the
graph $\widetilde{GR}$, the approximated values are too coarse, as is the
case in tests 3 and 6. However, $\widetilde{GRc}$ preserves the
exact value of the \textit{WCRT}\ for all the tested nets. These results
show how tight this approximation is: the additional sequences that
distort the estimation of the \textit{WCRT} in the graph $\widetilde{%
GR}$ are completely removed in $\widetilde{GRc}$.
\section{Conclusion}
We have proposed in this paper a novel approach to compute an
overapproximation of the state space of real time preemptive systems modeled
using the $ITPN$ model. For this effect, we have defined the time distance
system that encodes the quantitative properties of each class of states
reachable in the exact graph. Then we have provided efficient algorithms to
overapproximate its coefficients and to compute the \textit{DBM}
overapproximation of a class. We proved that this construction is more
precise than other classical \textit{DBM} overapproximation defined in the
literature \cite{abdelli} \cite{IHTPN}\cite{Bucci}, and showed how it is
appropriate to restore the quantitative properties of the model. Simulation
results comparing the performances of our graph construction with other
techniques were reported.
\section{Introduction}
\label{sec:introduction}
Modern hadron collider experiments such as HERA, RHIC and especially
the forthcoming LHC operate at high enough energies to observe new
phenomena associated with high gluon density. The principal
characteristics of high--energy QCD scattering are the following:
firstly, owing to Lorentz contraction, the configurations probed
appear to be \emph{frozen in time} compared to the natural time
scales of the interaction. Secondly, the number density of soft
gluons gets \emph{saturated} at densities of the order of $1/g^2$.
These features, which are usually referred to as the Color Glass
Condensate (CGC), are a consequence of the non-Abelian nature of the
interaction and of the fact that gluons are massless. These features
are therefore unique to QCD.
At sufficiently high energy the dominant interaction between the
projectile and the target can be described by ensembles of
boost--enhanced field configurations, the ``frozen'' modes mentioned
above. This has been extensively explored in the context of the
McLerran--Venugopalan model~\cite{McLerran:1994ni, McLerran:1994ka,
McLerran:1994vd,
Kovchegov:1996ty, Kovchegov:1997pc, Jalilian-Marian:1997xn, Lam:1999wu,
Lam:2000nz, Lam:2001ax}. This description is tailored for
asymmetric situations in which the field of, say, the
target can be argued to be much
stronger than that of the projectile. High--energy scattering of a
virtual photon on a large nucleus is the prototype example of this
situation. Nonetheless, at sufficiently high energy
this generic picture is applicable to nucleus--nucleus scattering as well.
Energy dependence can be incorporated into this picture by taking
into account fluctuations that acquire properties of the previously
frozen modes as one increases the collision energy. The relevant
contributions are characterized by large logarithms $\ln(s)$ in the
total invariant energy $s$ in the collision. At low gluon densities,
or weak fields, the resummation of high--energy logarithms has been
formulated long ago as a \emph{linear evolution equation} for the
gluon distribution function, the BFKL equation~\cite{Bal-Lip, Lipat,
Kuraev:1976ge, Kuraev:1977fs, Lipat-2}. However, at high densities
the resummation of these logarithms leads instead to
\emph{non-linear evolution equations} for gauge field correlators.
These can be formulated as a functional evolution equation known as
the JIMWLK equation~\cite{Jalilian-Marian:1997xn,
Jalilian-Marian:1997jx,
Jalilian-Marian:1997gr, Jalilian-Marian:1997dw, Jalilian-Marian:1998cb,
Kovner:2000pt, Weigert:2000gi, Iancu:2000hn,Ferreiro:2001qy}, or
equivalently, as an infinite coupled hierarchy of evolution
equations for field correlators known as the Balitsky
hierarchy~\cite{Balitsky:1996ub,
Balitsky:1997mk, Balitsky:1998ya}. A truncation of this hierarchy that
retains most of its essential properties is known as the BK
equation~\cite{Balitsky:1996ub, Balitsky:1997mk,
Balitsky:1998ya,Kovchegov:1999yj, Kovchegov:1999ua}. This equation
describes high--gluon--density dynamics in terms of dipole degrees of
freedom. In Refs.~\cite{Kovchegov:1999yj, Kovchegov:1999ua} the BK equation has
been derived from Mueller's dipole
model~\cite{Mueller:1994rr,Mueller:1994jq,Mueller:1995gb,Chen:1995pa},
using nuclear enhancement as a tool to trace the dominant field
configurations. This extends the ideas of the dipole model beyond
the mere onset of saturation effects~\cite{Mueller:1996te}.
The most prominent feature of the solutions of these non-linear
evolution equations is the emergence of an energy--dependent
transverse correlation length $R_s$, or saturation scale $Q_s\sim
1/R_s$, which, asymptotically, encodes all the energy dependence of
the cross section. The saturation scale characterizes the transverse
momentum scales of radiated gluons that contribute to the evolution
at any given energy. Modes much softer than $Q_s$ decouple: the
number densities of soft gluons are saturated, so they remain
constant as the energy increases. Independently of the initial
condition, at sufficiently high energies the saturation scale
increases rapidly with the energy. Therefore, $Q_s$ can be
considered a hard scale: $Q_s\gg \Lambda$.
The possibility to describe saturation by perturbative evolution
equations is a highly non-trivial result, since the gauge field
involved is necessarily strong. The evolution equation is derived
perturbatively by expanding in small fluctuations on a strong
background field. This is justified a posteriori: having found that
soft modes do not contribute to the evolution, the equation is
perturbatively consistent. This kind of infrared stability is a
direct consequence of gluon saturation. It is therefore not shared
by the linear BFKL equation, which is instead afflicted by diffusion
into the infrared. As was beautifully illustrated in
Ref.~\cite{Golec-Biernat:2001if} it is the non-linearity of the
JIMWLK and BK equations that makes them infrared stable.
The presence of the nonlinearities and hence $Q_s$ will also modify the
r\^ole and influence of power corrections compared to the BFKL case
discussed in Ref.~\cite{Anderson:1997bw}.
Despite these strengths, JIMWLK and BK evolution suffer from a
serious shortcoming: they are derived only at leading logarithmic
accuracy, i.e. at fixed coupling. To partially compensate for this,
all recent studies of the evolution have included running--coupling
effects in some more or less ad hoc manner. There are several
reasons why running--coupling effects are essential:
\begin{itemize}
\item{} Running--coupling effects are known~\cite{Ciafaloni:1998gs,
Fadin:1998py, Kovchegov:1998ae, Brodsky:1998kn, Ball:2005mj,
Altarelli:1999vw, Ciafaloni:1999yw, Ciafaloni:2003rd, Thorne:2001nr} to
provide a large part of the next--to--leading--order (NLO) corrections to
the evolution in the low density limit, where the description matches onto
the BFKL equation.
\item{} On the purely phenomenological side, for example in fits to HERA
data~\cite{Iancu:2003ge}, running coupling (or more precisely a dependence
of the coupling on a scale involved in a single emission step, see below) is
essential to slow down evolution by reducing gluon emission from small
objects.
\item{} Conceptually, it is understood that the evolution is dominated by
scales of the order of $Q_s$. The non-linearity of the equation ensures that
dipoles much larger than $1/Q_s$ are inherently suppressed through the
evolution (the saturation mechanism). However, with strictly fixed coupling,
dipoles \emph{much smaller} than $1/Q_s$ still contribute to the evolution.
As soon as the coupling depends on the size of the emitting dipole such
contributions are also suppressed through the
evolution~\cite{Rummukainen:2003ns}.
\end{itemize}
Despite both the practical and conceptual importance of running--coupling
effects in the nonlinear evolution equations, there has been no derivation of
how they enter. All simulations done so far involved ad hoc prescriptions
for the scale of the coupling,
based on nothing more than educated guesswork.
In this paper we approach the problem on a more fundamental level.
We show that the JIMWLK and BK equations can indeed be derived
beyond the fixed coupling level. We find that the equations take a
similar form to the fixed coupling case, while their kernel changes
in a rather drastic way. For example, it does not naturally appear
as a single scale--dependent coupling times the LO
scale--invariant kernel. We explicitly compute running coupling
${\cal O}(\beta_0^{n-1}\alpha_s^n)$ corrections to the kernel to all
orders and resum them by means of Borel summation. We find that the
resummed kernel presents infrared-renormalon ambiguities. These are
indicative of the form and importance of non-perturbative power
corrections.
In order to perform this calculation we develop a dispersive
representation of the dressed gluon propagator \emph{in the
background of Weisz\"acker--Williams fields}. This is a
generalization of the well--known dispersive representation of the
\emph{free} dressed gluon propagator, a technique that has been used
to compute running--coupling corrections and estimate power
corrections in a variety of applications, see
e.g.~\cite{Bigi:1994em,Smith:1994id,Beneke:1994qe,Ball:1995ni,Dokshitzer:1995qm,Grunberg:1998ix,Gardi:1999dq,Gardi:2000yh,Cacciari:2002xb,Gardi:2003iv}.
As in the BFKL case, the non-linear evolution equations are expected
to receive additional sub-leading corrections, which are not related
to the running of the coupling. In this paper we will not attempt to
include such corrections. Some steps in this direction on the level
of the BK equation have been taken in Ref.~\cite{Balitsky:2001mr},
or with an entirely different focus in Ref.~\cite{Gotsman:2004xb},
and can be combined with our treatment where desired.
The structure of the paper is as follows: in
Sec.~\ref{sec:jimwlk-equation} we give a short introduction to the
physics described by the JIMWLK equation, in order to establish the
key ideas and the notation that will be used in the rest of the
paper. For more detailed background we refer the reader to the
original literature~\cite{Jalilian-Marian:1997xn,
Jalilian-Marian:1997jx, Jalilian-Marian:1997gr, Jalilian-Marian:1997dw,
Jalilian-Marian:1998cb, Kovner:2000pt, Weigert:2000gi,
Iancu:2000hn,Ferreiro:2001qy, Balitsky:1996ub, Balitsky:1997mk,
Balitsky:1998ya,Kovchegov:1999yj, Kovchegov:1999ua} or review
articles~\cite{Iancu:2002xk, Iancu:2003xm, Weigert:2005us,
Jalilian-Marian:2005jf}. In
Sec.~\ref{sec:evolution-equations} we briefly review the JIMWLK and
BK equations and recall those details of their derivation that are
needed in what follows. Sec.~\ref{sec:running-coupling-types} is
devoted to a discussion of running--coupling effects in JIMWLK and
BK evolution, contrasting what has been done previously with what we
want to achieve in this paper. This will be important also to
clarify the terminology used in the remainder of the paper.
Sec.~\ref{sec:deriv-jimwlk-running} extends the derivation of the
JIMWLK equation to the running--coupling case. It is divided into
four subsections: Sec.~\ref{sec:runn-coupl-disp} collects the tools
of the conventional dispersive technique for the calculation of
running--coupling corrections in the free field case.
Sec.~\ref{sec:dress-prop-backgr} generalizes these tools for use in
the presence of the Weizs\"acker-Williams background as needed in
the derivation of the JIMWLK equation. Sec.~\ref{sec:disp-runn-pres}
presents a re-derivation of the JIMWLK equation with a running
coupling. Next, in Sec.~\ref{sec:borel-rep-of-resummed-kernel} we
formulate the newly computed corrections to the kernel as an
all--order Borel sum. In Sec.~\ref{sec:perturbative-expansions} we
discuss the convergence of perturbation theory. In
Sec.~\ref{sec:power-corrections} we use the renormalon singularities
to determine the parametric form and the typical magnitude of power
corrections affecting the kernel. Finally, in
Sec.~\ref{sec:BK-numerics} we investigate numerically the effect of
running coupling and power corrections on the BK evolution as a
function of the saturation scale. In Sec.~\ref{sec:conclusions} we
summarize our conclusions.
\section{The physics of the JIMWLK equation}
\label{sec:jimwlk-equation}
The key points can be most easily understood in the context of deep
inelastic scattering (DIS) of leptons on protons or nuclei, where
$q$ and $p$ are the momenta of the virtual photon and the target
respectively. Here, two kinematic variables play a r\^ole: (1) the
deeply--spacelike momentum $q^2=-Q^2<0$ carried by the exchanged
photon. This scale defines the transverse resolution of the probe
and thereby the apparent size of the quarks and gluons encountered;
and (2) Bjorken $x:={Q^2}/(2 p\cdot q)$, which is inversely
proportional to the total energy $s=(p+q)^2$ in the collision:
$x\approx {Q^2}/{s}$. At high energy, the \emph{rapidity} $Y$ is
directly related to Bjorken $x$ via $Y=\ln(1/x)$. The rapidity is
the natural evolution variable since $\ln(1/x)\approx
\ln\left(s/Q^2\right)$ reflects the large hierarchy of scales in the
high--energy limit, which appears with increasing powers in
perturbation theory.
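For orientation, $x=10^{-4}$ at fixed $Q^2$ corresponds to
$Y=\ln(1/x)\simeq 9.2$ units of rapidity.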
At large $Q^2$ with fixed $x$ there are well-established methods to treat such
a system based on the Operator Product Expansion (OPE), a short--distance
expansion in powers of $1/Q^2$. Since $Q^2$ also controls the apparent size of
the particles encountered, the OPE can be viewed as a small density expansion,
despite the fact that particle numbers, driven by large logarithms in $Q^2$,
increase in parallel with increasing resolution. As a consequence the
description at large $Q^2$ can be based entirely on single--particle
properties such as quark and gluon distribution functions: particle
correlations are not important. This restriction is a key ingredient of the
derivation of the Dokshitzer--Gribov--Lipatov--Altarelli--Parisi (DGLAP)
equations that describe the increase of particle numbers with $Q^2$ in this
domain.
Going to small $x$ at fixed $Q^2$, no matter how large, one ends up
in an entirely different domain, that of high densities, even if one
starts out in a dilute situation. As the energy increases BFKL
evolution keeps generating new particles (mostly gluons) which are
all of effective size of ${\cal O}(1/Q^2)$, and so the density keeps
increasing. Eventually the density reaches a level where particle
correlations become
essential~\cite{Mueller:1994rr,Mueller:1994jq,Mueller:1995gb,Chen:1995pa,Mueller:1996te}
and a description in terms of distribution functions alone becomes
untenable. Since the BFKL description is based on gluon
distributions, this is also the point where this evolution equation
ceases to be adequate. Appropriate degrees of freedom and more
general evolution equations are needed to describe the system beyond
this point. The most general of these existing to date are the
JIMWLK equation, or, the completely equivalent Balitsky hierarchies,
with their factorized truncation, the BK equation.
JIMWLK and BK equations are formulated in terms of path--ordered
exponentials, as defined in Eq.~(\ref{eq:U-via-b}) below, with paths
collinear to the projectile direction, which can be interpreted as
quark and gluon constituents of the projectile. The path--ordered
exponentials encode the fact that, owing to the high energy in the
collision, these constituents penetrate the target without being
deflected from their straight--line trajectories. The $ \gamma^* A$
cross section then reads
\begin{equation}
\label{eq:dipole-cross}
\sigma_{\mathrm{DIS}}(Y,Q^2) = \mathrm{Im}\
\begin{minipage}[c]{3cm}
\includegraphics[height=.8cm]{dipole}
\end{minipage}
=\int\!\!d^2 r
\, \vert \psi\vert^2(r^2 Q^2) \hspace{.1cm}
\int d^2b \ \left\langle
\frac{\tr(1-U_{\bm{x}} U^\dagger_{\bm{y}})}{N_c}
\right\rangle(Y)
\end{equation}
where $\bm{r}=\bm{x}-\bm{y}$ corresponds to the transverse size of a
given $q\Bar q$ dipole and $\bm{b}=(\bm{x}+\bm{y})/2$ to the impact
parameter of this dipole relative to the target. $q^2:=-Q^2$ is the
large spacelike momentum carried by the virtual photon. The square
of the $q\Bar q$ component of the photon wave function $\vert
\psi\vert^2(r^2 Q^2)$ describes the probability to find a $q\Bar q$
pair of size $r$ inside the virtual photon and can be calculated in
QED. It consists of a known combination of Bessel functions together
with an integral over longitudinal momentum fractions already
absorbed in the notation. The remaining factor, the expectation
value of $U$-operators is usually called the dipole cross section
$\sigma_{\text{dipole}}(Y,\bm{r})$ of the target in question. All
the properties of this interaction
--- details of the target wave function, gluon exchange between
the target and projectile etc. --- are encoded in this expectation
value. The leading--logarithmic corrections at small $x$ appear in
powers of $\alpha_s \ln(1/x)$. These corrections are resummed by the
JIMWLK equation.
The dipole operator $\Hat N_{\bm{x y}} := {\tr(1-U_{\bm{x}}
U^\dagger_{\bm{y}})}/{N_c}$ itself is naturally bounded between zero
and one. Typically, gluon densities grow towards large $r$, such
that the expectation value of $\Hat N_{\bm{x y}}$ interpolates
between $0$ for infinitesimally small dipoles, and $1$ for very
large ones. This encodes the idea of color transparency at short
distances and saturation at large distances where gluon densities
grow up to ${\cal O}(1/g^2)$. The length scale that characterizes
the transition between the two domains can be interpreted as the
correlation length $R_s$ of $U$-operators, or equivalently gluon
fields. The corresponding momentum scale $Q_s\propto 1/R_s$ is
usually called the saturation scale. Clearly, as more gluons are
generated in JIMWLK evolution towards small $x$, the correlation
length gets small and $Q_s$ increases.
One key feature that emerges from this evolution is that details about the
initial conditions are erased quickly and a universal scaling form of
correlators such as the dipole cross section is reached. From then on, all $x$
or $Y$ dependence is carried by the saturation scale $Q_s(Y)$. Such behavior
has been seen in HERA data (geometric scaling)~\cite{Golec-Biernat:1998js,
Golec-Biernat:1999qd, Stasto:2000er} and has important consequences for RHIC
(disappearance of Cronin enhancement from mid to forward
rapidities)~\cite{Baier:2003hr, Kharzeev:2003wz,
Jalilian-Marian:2003mf,Albacete:2003iq,Arsene:2004ux} and the
LHC experiments where the energies are higher.
It had been noted early on in the context of the BK equation that a treatment
at the strictly leading--logarithmic level is insufficient: running--coupling
effects have a strong influence on the speed of evolution; quantities like the
evolution rate $\lambda(Y):=\partial_{Y} \ln Q_s^2(Y)$ are reduced by more
than $50\%$ if running--coupling effects are introduced (in some heuristic
way). Despite the explicit scale breaking introduced by the appearance of
$\Lambda_{\text{QCD}}$ in the running coupling, scaling of the dipole cross
section with $Q_s$ is retained to very good accuracy. In
Ref.~\cite{Rummukainen:2003ns} it was emphasized that running--coupling
effects are also conceptually important: only with running--coupling effects
included does the phase--space region active in the evolution center around
the physical scale $Q_s$. At strictly leading--logarithmic level --- in the
conformal limit for the evolution kernel --- the evolution involves
short--distance\footnote{As explained above, the infrared is not a problem due
to the presence of the correlation length $R_s$, which acts as an effective
infrared cutoff.} contributions from more than 7 orders of magnitude away
from the physical scale.
Unlike the other evolution equations in QCD, such as DGLAP and BFKL, the
JIMWLK and BK equations have been derived only at leading logarithmic
accuracy. Only partial calculations of two--loop corrections to the BK equation
are available~\cite{Balitsky:2001mr} but they do not include any attempt to
determine the running of the coupling. For our purposes, the existing results
in the low--density limit for BFKL evolution are of limited use:
they offer no hint as to how to extend or extrapolate the
calculation into the high--density domain where non-linearities appear.
A direct calculation in the context of the JIMWLK and BK
equations is therefore necessary.
As announced in the introduction, in this paper we compute
running--coupling corrections to the JIMWLK and BK equations. In
Sec.~\ref{sec:evolution-equations} we briefly review the
fixed--coupling derivation of the JIMWLK equation, by considering
small fluctuations in a strong Weizs\"acker-Williams field that is
encoded in the eikonal factors of Eq.~\eqref{eq:dipole-cross}. This
derivation will then be generalized to the running--coupling case in
Sec.~\ref{sec:deriv-jimwlk-running} using a dispersive
representation of the dressed gluon propagator in such a background
field. This will not only enable us to calculate running--coupling
corrections, but also to explore non-perturbative effects in the
evolution.
\subsection{Evolution equations}
\label{sec:evolution-equations}
\subsubsection*{The JIMWLK equation and the Balitsky hierarchies}
The JIMWLK equation is a functional Fokker-Planck equation for the
statistical weight $Z_Y[U]$ defining the $Y$--dependent averaging
procedure $\langle\ldots\rangle(Y)$ that determines the expectation
value of operators $O[U]$ made of an arbitrary number of
path--ordered exponentials $U$, where
\begin{equation}
\label{eq:U-via-b}
U^{-1}_{\bm{x}} =
P\exp\left\{
ig \int\limits_{-\infty}^{\infty} dz^- \delta(z^-) \beta(\bm{x})
\right\}
\ .
\end{equation}
Such an average was already encountered above in Eq.~\eqref{eq:dipole-cross}
in the case of the dipole operator ${\tr(1-U_{\bm{x}}
U^\dagger_{\bm{y}})}/{N_c}$. To understand why these averages determine
virtually all cross sections at small~$x$, recall the origin of the
path--ordered exponentials (\ref{eq:U-via-b}): they encode the interaction of
fast moving quarks and gluons in the projectile wave function with the target
field. It is the high energy of the collision that allows the description of
this interaction in terms of the eikonal factors $U$, that at leading order
follow perfectly straight, lightlike worldlines. We have chosen a frame where
these trajectories extend along the minus light--cone direction at $x^+=0$. Each
particle is then characterized by the remaining coordinates, namely its
transverse location $\bm{x}$. The leading contribution comes from interaction
with the non-Abelian Weizs\"acker-Williams field of the target,~$A^+$. It
takes the form
\begin{equation}
\label{eq:bgfield}
A^+(x) =b^+(x^-,{\bm x}) +\delta A(x),
\hspace{2cm}
b^+(x^-,{\bm x}) = \delta(x^-)\beta(\bm{x})
\end{equation}
where $b^+$, or more specifically $\beta(\bm{x})$, is the single
leading degree-of-freedom, a strong field, while $\delta A$ is a
small fluctuation in which we will expand. The $\delta(x^-)$
reflects the lack of resolution in longitudinal direction: no
internal details of the field of the target are probed. The
independence on $x^+$ reflects the fact that the target wave
function is frozen during the interaction, an extreme time dilation.
At fixed rapidity all dominant contributions are determined by the
background field $b^+=\delta(x^-)\beta(\bm{x})$. Moreover, they only
enter in a very specific form, via the Wilson lines $U$.
For some generic operator made of these Wilson--line fields, $O[U]$,
the average of Eq.~\eqref{eq:dipole-cross} will be written as
\begin{equation}
\label{eq:corrs}
\left\langle O[U] \right\rangle_Y \,:=\, \int \Hat{D}[U]\, O[U]\, Z_Y[U]
\end{equation}
where $\Hat{D}[U]$ is a functional Haar-measure and $Z_Y[U]$
contains the detailed physics beyond the eikonal approximation
already incorporated by selecting $U$ as the relevant
degrees-of-freedom: $Z_Y[U]$ is the statistical weight for all
possible field configurations.
Most of our knowledge of $Z_Y[U]$ is perturbative: the JIMWLK
equation\footnote{For a first derivation of this equation in its
most compact form see Ref.~\cite{Weigert:2000gi}. The version
presented here is based on Ref.~\cite{Weigert:2005us}.} determines
the $Y$ dependence of this average:
\begin{equation}
\label{eq:JIMWLK}
\partial_Y Z_Y[U]\,=\,
- {\cal H}[U]\,
Z_Y[U],
\end{equation}
where the JIMWLK Hamiltonian ${\cal H}[U]$ is given by
\begin{align}
\label{eq:JIMWLK-Hamiltonian-LO}
{\cal H}[U] =-\frac{\alpha_s(\mu^2)}{2\pi^2}\, {\cal K}_{\bm{x z y}}\, \Big[
U_{\bm z}^{a b}\left(i\Bar\nabla^a_{\bm x}i\nabla^b_{\bm y} +i\nabla^a_{\bm x}
i\Bar\nabla^b_{\bm y}\right) + \left( i\nabla^a_{\bm x} i\nabla^a_{\bm
y}+i\Bar\nabla^a_{\bm x} i\Bar\nabla^a_{\bm y}\right) \Big] \
,
\end{align}
with the LO kernel:
\begin{equation}
\label{eq:JIMWLK-LO-kernel}
{\cal K}_{\bm{x z y}} = \frac{(\bm{x}-\bm{z})\cdot(\bm{z}-\bm{y})}{%
(\bm{x}-\bm{z})^2(\bm{z}-\bm{y})^2} \,=\,
-\frac{ {{\bm r}_1}\cdot {{\bm r}_2}}{\,{r_1}^2 \,{r_2}^2}\ ,
\end{equation}
where $\bm{x}$, $\bm{y}$ and $\bm{z}$ are transverse coordinates.
Here we also introduced a shorthand notation for the vectors
connecting the points in the transverse plane:
\begin{equation}
\label{eq:distshort}
{{\bm r}}=\bm{x}-\bm{y}\ ,\hspace{2cm}
{{\bm r}_1}=\bm{x}-\bm{z}\ ,\hspace{2cm}
{{\bm r}_2}=\bm{y}-\bm{z}\ ,
\end{equation}
and their lengths: $r_1:=|{{\bm r}_1}|$, etc. The notation in
(\ref{eq:JIMWLK-Hamiltonian-LO}) assumes an integration convention
over repeated coordinates appearing as an index in ${\cal K}_{\bm{x
z y}}$ and in the vector field operators $\nabla^a_{\bm x}$. The
Hamiltonian ${\cal H}[U]$~is~second order in left-- and
right--invariant vector fields $\nabla^a_{\bm x}$ and
$\Bar\nabla^a_{\bm x}$, which are Lie derivatives: they act on the
Wilson--line variables $U_{\bm x}$ according to\footnote{See
Ref.~\cite{Weigert:2000gi} for more details.}
\begin{equation}
\label{eq:nabla_acting} i\nabla^a_{\bm{x}} U_{\bm{y}} := -U_{\bm{x}}
t^a \delta^{(2)}_{\bm{x y}}\,; \quad \qquad i\Bar\nabla^a_{\bm{x}}
U_{\bm{y}} := t^a U_{\bm{x}} \delta^{(2)}_{\bm{x y}}.
\end{equation}
The terms in the square brackets on the r.h.s of
Eq.~(\ref{eq:JIMWLK-Hamiltonian-LO}) are grouped according to their
origin in real--emission and virtual corrections, respectively:
real--emission contributions involve an additional Wilson line
$U_{\bm z}^{a b}$ at transverse location $\bm z$.
The full derivation of Eq.~(\ref{eq:JIMWLK}) has been presented
exhaustively in Refs.~\cite{Weigert:2000gi,
Balitsky:1996ub, Iancu:2000hn, Ferreiro:2001qy}. We nevertheless need to
recall here how the JIMWLK Hamiltonian relates to Feynman diagrams, in
order to prepare its re-derivation with running--coupling
corrections. Rapidity dependence of $Z_Y[U]$ at LO is
driven by the lowest--order fluctuations $\delta A$ around the
background $\delta(x^-)\beta({\bm x})$ of Eq.~\eqref{eq:bgfield}. In
this sense, the LO JIMWLK Hamiltonian in
Eq.~\eqref{eq:JIMWLK-Hamiltonian-LO} is {\em constructed} such that
it adds the LO ``exchange'' and ``self-energy''
corrections to, say, an interacting $q\Bar q$ pair, represented by
its Wilson--line bilinear $U_{{\bm
x}}\otimes U_{{\bm y}}^\dagger$:
\begin{align}
\label{eq:JIMWLK-LO-diagram-cont}
\ln(1/x)\,\, {\cal H}[U] \,\, U_{{\bm x}}\otimes U_{{\bm y}}^\dagger
= &
\parbox{2.7cm}{\includegraphics[width=2.7cm]{chiqqb}}
+\parbox{2.7cm}{\includegraphics[width=2.7cm]{sigmaqb-U}}
+\parbox{2.7cm}{\includegraphics[width=2.7cm]{sigmaq-Ud}}
\end{align}
The diagrams shown in~\eqref{eq:JIMWLK-LO-diagram-cont} are Feynman
diagrams where the gluon propagator of the fluctuations
$\langle\delta A \delta A\rangle$ is taken in the background of the
strong target field $\delta(x^-)\beta({\bm x})$.
The correspondence to real and virtual diagrams becomes visible upon
resolving the Feynman diagrams into $x^-$ ordered diagrams of
light--cone perturbation theory. For instance,
\begin{align}
\label{eq:interaction-diagrams}
\parbox{2.7cm}{\includegraphics[width=2.7cm]{chiqqb}} = &
\parbox{2.7cm}{\includegraphics[width=2.7cm]{chiqqb-t1}}
+
\parbox{2.7cm}{\includegraphics[width=2.7cm]{chiqqb-t2}}
+\parbox{2.7cm}{\includegraphics[width=2.7cm]{chiqqb-tup}}
+\parbox{2.7cm}{\includegraphics[width=2.7cm]{chiqqb-tdown}},
\end{align}
where the diagrams on the r.h.s. should be interpreted as diagrams
of light--cone perturbation theory. Light--cone time $x^-$ runs from
bottom right to top left, the two collinear\footnote{Note that these
two lines are actually separated only in the transverse direction.}
Wilson lines in this direction represent the dipole (the projectile)
and the target is shown as a perpendicular line at $x^-=0$, from
bottom left to top right. To be precise, the last two diagrams
in~(\ref{eq:interaction-diagrams}) are a shorthand notation for a
sum of two different $x^-$ orderings each:
\begin{equation}
\label{eq:shorthand-diagrams}
\parbox{2cm}{\includegraphics[width=2cm]{chiqqb-tup}}
= \parbox{2cm}{\includegraphics[width=2cm]{chiqqb-up-to-1}}
+\parbox{2cm}{\includegraphics[width=2cm]{chiqqb-up-to-2}}
\ ;\hspace{.5cm}
\parbox{2cm}{\includegraphics[width=2cm]{chiqqb-tdown}}
=
\parbox{2cm}{\includegraphics[width=2cm]{chiqqb-down-to-1}}
+\parbox{2cm}{\includegraphics[width=2cm]{chiqqb-down-to-2}}.
\end{equation}
The factors of $U$, $U^\dagger$ and $U^{a b}$ representing the
interactions of a projectile quark, antiquark and gluon,
respectively, are indicated as large dots where these Wilson lines
cross the target line. In a derivation of JIMWLK based on
projectile wave functions\footnote{See
Refs.~\cite{Mueller:1994rr,Mueller:1994jq,Mueller:1995gb,Chen:1995pa}
for a derivation of JIMWLK based on projectile wave functions in the
context of the dipole model,
Refs.~\cite{Kovchegov:1999yj,Kovchegov:1999ua} for the BK case, or
Ref.~\cite{Hentschinski:2005er} for a re-derivation of JIMWLK via
amplitudes.}, the contribution of the first two diagrams
of~\eqref{eq:interaction-diagrams} are associated with a situation
where the interacting gluon reaches the final state and in this
sense they correspond to {\em real--emission} diagrams.
Correspondingly, as indicated by the third dot, they contain an
additional Wilson line for this produced, eikonally interacting
gluon, which appears in (\ref{eq:JIMWLK-Hamiltonian-LO}) as $U_{\bm
z}^{a b}$. The remaining diagrams represent purely {\em virtual}
contributions in which the number of Wilson lines does not change.
Similarly, both real--emission and virtual corrections are present
in the self--energy--like diagrams in
(\ref{eq:JIMWLK-LO-diagram-cont}).
Note that in the absence of the target field~(\ref{eq:bgfield}), the
eikonal factors become trivial: $U\to 1$, and then there is strictly
no evolution. In this limit each individual diagram on the
r.h.s.~of~(\ref{eq:JIMWLK-LO-diagram-cont}) vanishes identically
owing to exact real--virtual cancellation:
in~(\ref{eq:interaction-diagrams}) the first two diagrams,
corresponding to real gluon emission, cancel against the last two
diagrams, which represent virtual corrections. Analogous
cancellations occur in this limit in the light--cone perturbation
theory decomposition of the self--energy--like diagrams in
~(\ref{eq:JIMWLK-LO-diagram-cont}).
As a functional equation, Eq.~(\ref{eq:JIMWLK}) is equivalent to an
infinite set of equations for $n$-point correlators of $U$ and
$U^\dagger$ fields in any representation of $SU(N_c)$, called the
Balitsky hierarchies~\cite{Balitsky:1996ub}. In order to obtain the
evolution equation for a correlator of a given composite operator
$O[U]$ made of $U$-fields, one first takes the $Y$ derivative of
(\ref{eq:corrs}) and then uses (\ref{eq:JIMWLK}) to replace the
$\partial_Y Z_Y[U]$ by $- {\cal H}[U]\, Z_Y[U]$, obtaining:
\begin{equation}
\label{eq:corrs_derivative}
\partial_Y \left\langle O[U] \right\rangle (Y) \,=\, - \int \Hat{D}[U]\,
O[U]\, {\cal H}[U]\, Z_Y[U],
\end{equation}
where the Hamiltonian still acts on $Z_Y[U]$. Using the self-adjoint nature
of ${\cal H}[U]$ with respect to the Haar measure (as ensured by the Lie
derivatives), one can then rewrite (\ref{eq:corrs_derivative}) as
\begin{equation}
\label{eq:op-JIMWLK}
\partial_Y \langle O[U] \rangle(Y) =
-\left\langle \big({\cal H}[U] O[U]\big) \right\rangle(Y) \ .
\end{equation}
Here the JIMWLK Hamiltonian acts on $O[U]$. Observing that ${\cal H}[U]$
explicitly contains a $U$-operator, and that the number of $U$-operators
remains invariant when acted upon by the Lie derivatives $\nabla^a_{\bm{x}}$
(see Eq.~(\ref{eq:nabla_acting})) one understands that the evolution equation
for $\left< O[U]\right>$ must involve operators with more $U$ fields than the
original operator $O[U]$. Thus, the nonlinear nature of ${\cal H}[U]$ implies
that the r.h.s. of the equation depends on a new type of correlator of $U$
fields. To determine the rapidity dependence of $\langle O[U] \rangle(Y)$ one
will therefore need also the evolution equation of this new correlator, which
in turn will couple to yet higher composite operators containing more $U$
fields. Continuing the process one ends up with an infinite hierarchy of
equations, defined by the operator $O[U]$ used to start the process. The
derivation of the BK equation shown below provides a simple example for such a
hierarchy.
\subsubsection*{Truncation and the BK equation}
As the simplest correlator with immediate phenomenological relevance
we consider the two--point function of the dipole
operator\footnote{Below we will often use a shorthand notation where
the $Y$ dependence is not explicitly indicated.}:
\begin{equation}
\label{eq:dipole}
N_{Y,\bm{x}\, \bm{y}}:= \langle \Hat N_{\bm{x y}} \rangle(Y) \hspace{2cm}
\Hat N_{\bm{x}\, \bm{y}} := \frac{\tr(1 - U_{\bm{x}}^\dagger
U_{\bm{y}})}{N_c}.
\end{equation}
Using Eq.~(\ref{eq:op-JIMWLK}) in the case of $\Hat N_{\bm{x y}}$
with the explicit expressions corresponding to the diagrams in
Eq.~\eqref{eq:JIMWLK-LO-diagram-cont}, one immediately obtains:
\begin{equation}
\label{eq:pre-BK}
\partial_Y \big\langle \Hat N_{\bm{x y}} \big\rangle(Y)
= \frac{\alpha_s N_c}{2\pi^2}\int\!\! d^2 z
\big(
2{\cal K}_{\bm{x z y}}- {\cal K}_{\bm{x z x}}-{\cal K}_{\bm{y z y}}
\big)
\big\langle
\Hat N_{\bm{x z}}+
\Hat N_{\bm{z y}}- \Hat N_{\bm{x y}}-
\Hat N_{\bm{x z}}\, \Hat N_{\bm{z y}}\big\rangle(Y).
\end{equation}
The JIMWLK kernels ${\cal K}$ that appear in this
linear combination are in one-to-one correspondence with the three
diagrams in~\eqref{eq:JIMWLK-LO-diagram-cont}. They combine into the
very compact form of the BK kernel $\Tilde{\cal K}_{\bm{x z y}}$:
\begin{equation}
\label{eq:BK-kernel}
\Tilde{\cal K}_{\bm{x z y}} :=
2{\cal K}_{\bm{x z y}}- {\cal K}_{\bm{x z x}}-{\cal K}_{\bm{y z y}}
=\frac{(\bm{x}-\bm{y})^2}{%
(\bm{x}-\bm{z})^2(\bm{z}-\bm{y})^2}
\ .
\end{equation}
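As a quick sanity check of Eq.~\eqref{eq:BK-kernel} (a numerical
illustration of our own, not part of the derivation; the random points
carry no physical meaning), one may verify the identity for arbitrary
transverse coordinates:
\begin{verbatim}
# Check 2 K(x,z,y) - K(x,z,x) - K(y,z,y) = (x-y)^2 / ((x-z)^2 (z-y)^2)
import numpy as np

def K(x, z, y):                        # LO kernel, Eq. (eq:JIMWLK-LO-kernel)
    r1, r2 = x - z, z - y
    return np.dot(r1, r2) / (np.dot(r1, r1) * np.dot(r2, r2))

rng = np.random.default_rng(0)
x, z, y = rng.normal(size=(3, 2))      # three random transverse points
lhs = 2 * K(x, z, y) - K(x, z, x) - K(y, z, y)
rhs = np.dot(x - y, x - y) / (np.dot(x - z, x - z) * np.dot(y - z, y - z))
assert np.isclose(lhs, rhs)            # holds for any x, z, y
\end{verbatim}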
Clearly the r.h.s. of Eq.~\eqref{eq:pre-BK} depends on a 3-point
function containing operators with up to four $U^{(\dagger)}$
factors, that in general does not factorize into a product of two
2-point correlators:
\begin{equation}
\label{eq:4-point-nonfac}
\big\langle \Hat N_{\bm{x z}}\, \Hat N_{\bm{z y}}\big\rangle(Y)
= \big\langle \Hat N_{\bm{x z}} \big\rangle(Y) \, \times\,
\big\langle \Hat N_{\bm{z y}}\big\rangle(Y) \,+\,\text{corrections}\ .
\end{equation}
To completely specify the evolution of $\langle \Hat N_{\bm{x y}}
\rangle(Y)$ one therefore needs to know $\big\langle \Hat N_{\bm{x
z}}\, \Hat N_{\bm{z
y}}\big\rangle(Y) $. The latter, in its evolution equation, will couple
to yet higher $n$-point functions and thus one is faced with an
infinite hierarchy of evolution equations, as anticipated above. The
entire hierarchy (as well as others, corresponding to other
composite operators $O[U]$) is encoded in the single functional
equation~(\ref{eq:JIMWLK}). If one drops the corrections
in~\eqref{eq:4-point-nonfac} and factorizes the correlators in the spirit of
a large--$N_c$ approximation, the hierarchy is truncated and reduces
to a single equation. This truncation can be interpreted as an
independent scattering approximation and it leads to the
Balitsky-Kovchegov (BK)
equation~\cite{Kovchegov:1999ua,Balitsky:1997mk}:
\begin{equation}
\label{eq:BK}
\partial_Y N_{Y,\bm{x y}} = \frac{\alpha_s(\mu^2) N_c}{2\pi^2}\int\!\! d^2 z
\ \Tilde {\cal K}_{\bm{x z y}}\
\Big(
N_{Y,\bm{x z}}+ N_{Y,\bm{z y}}-N_{Y,\bm{x y}}
- N_{Y,\bm{x z}}\, N_{Y,\bm{z y}}\Big)
\ .
\end{equation}
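To make the structure of Eq.~(\ref{eq:BK}) concrete, the following Python
sketch performs a single Euler step of fixed--coupling BK evolution for an
impact--parameter--independent amplitude $N(r)$. The grid sizes, the value
$\bar\alpha_s=\alpha_s N_c/\pi=0.2$ and the GBW--like initial condition are
illustrative choices, not parameters taken from the simulations cited below:
\begin{verbatim}
import numpy as np

alpha_bar = 0.2                        # alpha_s N_c/pi, held fixed
r_grid = np.logspace(-4, 2, 60)        # parent dipole sizes r = |x-y|
N = 1.0 - np.exp(-r_grid**2 / 4.0)     # GBW-like initial condition, Q_s = 1

def N_of(r):
    # log-linear interpolation; N(0) = 0 (color transparency), N(inf) = 1
    return np.interp(np.log(r), np.log(r_grid), N, left=0.0, right=1.0)

def bk_rhs(r):
    # integrate over the gluon position z in polar coordinates around x;
    # daughter sizes: r1 = |x-z|, r2 = |z-y|
    rho = np.logspace(-5, 3, 160)
    phi = np.linspace(0.0, 2.0*np.pi, 48, endpoint=False)
    P, F = np.meshgrid(rho, phi, indexing='ij')
    r1 = P
    r2 = np.sqrt(P**2 + r**2 - 2.0*P*r*np.cos(F))
    r2 = np.maximum(r2, rho[0])        # crude regulator of the z -> y spike
    kern = r**2 / (r1**2 * r2**2)      # BK kernel
    integrand = kern * (N_of(r1) + N_of(r2) - N_of(r) - N_of(r1)*N_of(r2))
    d2z = P * np.gradient(rho)[:, None] * (2.0*np.pi/len(phi))
    return alpha_bar/(2.0*np.pi) * np.sum(integrand * d2z)

dY = 0.1                               # one small rapidity step
N = N + dY * np.array([bk_rhs(r) for r in r_grid])
\end{verbatim}
Iterating such steps reproduces the generic trend discussed next: the front
of $N$ moves towards smaller $r$ as $Y$ grows.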
\subsubsection*{Generic features and infrared stability}
Despite the complex nature of the evolution equation, it is possible
to gain insight into some generic features. It can be
proven~\cite{Weigert:2000gi} that the evolution equation possesses
an attractive fixed point at $Y\to\infty$ at which the system has
vanishing correlation length. For the evolution of a physical
correlator, such as $N_{Y,\,\bm{x y}}$, this implies a generic
trend as shown in Fig.~\ref{fig:generic-evol} (a): with
increasing~$Y$ saturation ($N_{Y,\,\bm{x y}}\to 1$) is reached at
ever shorter distances.
\begin{figure}[htbp]
\centering
\begin{minipage}{7.1cm}
\centering
\includegraphics[height=5.1cm]{generic-evo-steps-3d}\\ (a)
\end{minipage}\hfill
\begin{minipage}{7.1cm}
\centering
\includegraphics[height=5.1cm]{generic-evo-act-steps-3d} \\ (b)
\end{minipage}
\caption{\small \em Generic evolution trend for a single--scale
dipole correlator.
(a) shows $N(Y,r)$ as a function of the dipole size $r=\vert
\bm{x}-\bm{y}\vert$ for several values of the saturation scale
$Q_s(Y)$. $Q_s$ increases with $Y$; saturation then sets in at
smaller distances. (b) shows $\partial_Y N$ and thus the activity
in a given evolution step as a function of the same variables.
With increasing $Q_s(Y)$ contributions are centered at ever
shorter distances. }
\label{fig:generic-evol}
\end{figure}
This leads to a further important property of the evolution
equation, namely its infrared stability: the evolution is not
affected by long--wavelength fluctuations beyond the characteristic
correlation length (the inverse of the saturation scale $Q_s$) where
$N_{Y,\,\bm{x y}}= 1$. In this way the saturation scale acts as an
effective infrared cutoff. Fig.~\ref{fig:generic-evol} (b)
demonstrates that the modes that contribute to the evolution have
momenta of order of $Q_s(Y)$. With increasing $Y$, the active
region moves towards the ultraviolet.
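This trend is already visible in a simple saturation model. A minimal
sketch, assuming a GBW--like shape $N=1-\exp(-r^2Q_s^2/4)$ with a toy
saturation scale $Q_s^2(Y)={\rm e}^{\lambda Y}$ (our illustrative
parametrization, not a solution of the evolution equation):
\begin{verbatim}
import numpy as np

lam = 0.3                                   # toy growth rate of Q_s^2(Y)
r = np.logspace(-3, 2, 400)

for Y in (0.0, 5.0, 10.0):
    Qs2 = np.exp(lam * Y)
    N = 1.0 - np.exp(-r**2 * Qs2 / 4.0)
    r_half = r[np.searchsorted(N, 0.5)]     # front position where N = 1/2
    print(Y, 1.0/np.sqrt(Qs2), r_half)      # the front tracks 1/Q_s(Y)
\end{verbatim}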
\subsection{What we mean by running coupling}
\label{sec:running-coupling-types}
Similarly to other evolution equations in QCD~\cite{Braun:2003rp},
the LO JIMWLK kernel is conformally
invariant\footnote{Note that while the LO \emph{kernel}
is conformally invariant, the solution of the equation is not: it is
characterized by a correlation length. This scale originates in the
initial condition, and it is preserved owing to the non-linearity of
the equation.}. As usual, breaking of this symmetry is expected to
appear through radiative corrections at ${\cal O}(\beta_0
\alpha_s^2)$ where $\beta_0$ is the leading coefficient of the
$\beta$ function,
\begin{equation}
\label{beta} \frac{d\alpha_s(\mu^2)/\pi}{d\ln \mu^2}=-\beta_0\,
\left(\alpha_s(\mu^2)/ \pi\right)^2-\beta_1\, \left(\alpha_s(\mu^2)/
\pi\right)^3\,+\,\cdots \,; \qquad\quad
\beta_0=\frac{11}{12}C_A-\frac16 N_f.
\end{equation}
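For orientation, the leading coefficient and the corresponding one--loop
solution, $\alpha_s(\mu^2)=\pi/[\beta_0\ln(\mu^2/\Lambda^2)]$, are easily
tabulated; a minimal sketch with an illustrative value of $\Lambda$ (all
numbers are for orientation only):
\begin{verbatim}
import numpy as np

CA = 3.0

def beta0(nf):
    # leading beta-function coefficient, as defined above
    return 11.0/12.0*CA - nf/6.0

def alpha_s(mu2, nf=3, Lambda2=0.04):
    # one-loop solution; Lambda ~ 200 MeV is an illustrative choice
    return np.pi / (beta0(nf) * np.log(mu2/Lambda2))

print(beta0(3))            # -> 2.25
print(alpha_s(mu2=10.0))   # coupling at mu^2 = 10 GeV^2
\end{verbatim}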
At higher orders one expects corrections ${\cal O}(\beta_0^{n-1}
\alpha_s^n)$, as well as ones associated with subleading
coefficients of the $\beta$ function, e.g. ${\cal
O}(\beta_1\beta_0^{n-3} \alpha_s^n)$. In
Sec.~\ref{sec:deriv-jimwlk-running} we compute these
running--coupling corrections using a dispersive representation of
the dressed gluon propagator in a background field. We will show
that while the general structure of the evolution equation, namely
Eq.~(\ref{eq:JIMWLK}), holds just as at LO, the kernel itself
changes drastically with respect to Eq.~(\ref{eq:JIMWLK-LO-kernel})
--- see Eq.~(\ref{eq:new-JIMWLK}) or (\ref{eq:pert-exp}) below. A
similar generalization holds in the BK case, see Eqs.
(\ref{eq:BK-running-kernel}) and~(\ref{eq:BK-full}). JIMWLK and BK
evolution with a running coupling is therefore qualitatively
different from the fixed--coupling case in that at each step in the
evolution the coupling depends on the details of the evolving
configuration; moreover, different \emph{final states}, that are
characterized by different ``daughter dipole'' sizes, are weighted
differently in the r.h.s. of the evolution equation.
Although all previous derivations of JIMWLK and BK evolution
equations were restricted to the leading logarithmic approximation,
it has been clear for quite a while that running--coupling
corrections will be necessary to get quantitative
results\footnote{In Ref.~\cite{Rummukainen:2003ns} it has been
further emphasized that running--coupling corrections, where the
scale of the coupling depends on the scales involved in a {\em
single} evolution step, are needed to reduce active phase space in
the ultraviolet from about $6$ orders of magnitude to about one.
Despite this dramatic reduction of phase space, other qualitative
features of the solution of JIMWLK or BK equations were found to be
the same: for any set of initial conditions that interpolate between
color transparency at short distance and saturation at large
distance (as shown schematically in Fig.~\ref{fig:generic-evol}) the
system approaches an asymptotic line where the dipole correlator
reaches a near--scaling form.}. By making physically--motivated
scale choices for the renormalization scale of the coupling, several
authors found that the evolution rate $\lambda(Y)$ is reduced by a
factor of two or more compared to
the fixed--coupling case.
It is important to make a clear distinction between the actual
higher--order contributions to the kernel, which we compute in the
following, and ad hoc choices of scale for the coupling, which have
been often referred to as ``running coupling''. Such prescriptions
have been assumed in all numerical simulations of the BK
equation~\cite{Braun:2000wr, Kimber:2001nm,Armesto:2001fa,
Levin:2001et, Lublinsky:2001bc,
Golec-Biernat:2001if,Albacete:2004gw, Rummukainen:2003ns}. Similar
assumptions were used in the analytical estimates of
Ref.~\cite{Mueller:2002zm}, which agree well with numerical
simulations~\cite{Rummukainen:2003ns}. To put the results of the
present paper in context of previous work, we find it useful to
further distinguish between different categories of scale choices
that were made in the literature:
\begin{enumerate}
\item {\bf Fixed or essentially fixed coupling:}
$\alpha_s(\mu^2)$ is treated as a constant or as a function of $Y$.
Within this class, simulation results, starting from the same initial
condition, can be related by re-scaling the evolution variable.
For example, the scenario where the coupling depends on $Q_s(Y)$
can be related to the one where the scale of the coupling is fixed
as some $Q_0$ through the change of variables (see
also~\cite{Iancu:2002tr}):
\[
Y' = \frac{\alpha_s\left(Q_s^2(Y)\right) }{
\alpha_s\left(Q_0^2\right) } \,Y.
\]
Obviously, any ansatz of this sort would fail to capture the
essential physics of running coupling, and would inherit the
ultraviolet phase--space problem discussed in
Ref.~\cite{Rummukainen:2003ns}.
\end{enumerate}
A non-trivial change in the shape of the solutions with respect to
the fixed--coupling case occurs if the scale of the coupling is
determined by the size of the ``dipoles'' involved in the evolution.
In such cases the ultraviolet phase--space problem is generally
removed, since emission from very small objects is suppressed by
small coupling constants. The remaining two cases fall into this
category.
\begin{enumerate}\addtocounter{enumi}{1}
\item {\bf ``Parent--dipole'' running:} where the scale of the
coupling on the r.h.s. of the BK equation~(\ref{eq:BK}) is assumed to
depend on the initial dipole size $r=|\bm x - \bm y|$, namely
\begin{equation}
\label{eq:parent_dipole_BK} \alpha_s\, (\mu^2) \Tilde{\cal K}_{\bm{x
z y}} \,\to\, \alpha_s(c^2/r^2) \,\Tilde{\cal K}_{\bm{x z y}},
\end{equation}
where $c$ is a dimensionless ``scale
factor'' of order one\footnote{It was often taken as $c\approx 4$,
motivated by direct Fourier transform in the double logarithmic
limit.}. This has been the most common ansatz in numerical
simulations of the BK equation.
While this appears to be a natural ansatz in the context of the BK
equation, it cannot be easily reconciled with the JIMWLK equation.
To see this, recall that (\ref{eq:JIMWLK}) is a functional equation.
Therefore, the Hamiltonian in Eq.~(\ref{eq:JIMWLK-Hamiltonian-LO})
cannot depend on any scales characterizing a particular operator
$O[U]$; all three transverse coordinates appearing in
(\ref{eq:JIMWLK-Hamiltonian-LO}) are integrated over internally.
When deriving the BK equation from JIMWLK as in (\ref{eq:pre-BK})
one obtains the BK kernel as a specific combination of the original
JIMWLK kernel:
\begin{align}
\label{eq:BK_kernel_from_JIMWLK}
{\alpha_s} \Tilde{\cal K}_{\bm{x z y}} \to
\ 2 \, {\alpha_s(\mu^2_{\bm{x z y}})} {\cal K}_{\bm{x z y}}
-{\alpha_s(\mu^2_{\bm{x z x}})}\,{\cal K}_{\bm{x z x}} -
{\alpha_s(\mu^2_{\bm{y z y}})}\,{\cal K}_{\bm{y z y}},
\end{align}
where each term corresponds to one of the diagrams in
(\ref{eq:JIMWLK-LO-diagram-cont}). Whatever one assumes about the
functional dependence of $\mu^2_{\bm{x z y}}$ on the coordinates,
since the self--energy--like diagrams are independent of the parent
dipole size $|\bm x - \bm y|$, Eq.~(\ref{eq:BK_kernel_from_JIMWLK})
does not lead to Eq.~(\ref{eq:parent_dipole_BK}).
\end{enumerate}
Even considering the BK case on its own, ``parent--dipole'' running may appear
artificial: there is no reason why the sizes of the dipoles \emph{produced} in
the same step in the evolution, which can of course be quite different from
the parent size, should not play an important r\^ole~\cite{Albacete:2004gw}.
This leads us to the final category:
\begin{enumerate}\addtocounter{enumi}{2}
\item {\bf Final--state--dependent evolution:} where the scale of the
coupling on the r.h.s. of the BK (\ref{eq:BK}) or JIMWLK
(\ref{eq:JIMWLK-Hamiltonian-LO}) equations is assumed to depend on
all three distance scales present in a single evolution step, namely
the ``parent dipole'' $r=|\bm x - \bm y|$ and the two newly
produced ``daughter dipoles'', $r_1=|\bm{x}-\bm{z}|$ and
$r_2=|\bm{y}-\bm{z}|$. Obviously, in this case the coupling affects
the weight of different final states, depending on the transverse
location of the emitted gluon (the coordinate ${\bm z}$) that is
integrated over.
Several such models were proposed, e.g. $\sqrt{\alpha_s({c^2}/{r_1^2
})}\,\sqrt{\alpha_s({c^2}/{r_2^2 })}$ or
$\alpha_s\left(\text{max}\left\{{c^2}/{r_1^2 },{c^2}/{r_2^2
}\right\}\right)$, and were found to be in fair agreement with
``parent--dipole'' running, see e.g.~\cite{Albacete:2004gw}. As
already mentioned, no deep justification of any of these models has
been provided. (The scale choices of items 1--3 are illustrated in a
short numerical sketch following this list.)
\end{enumerate}
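For concreteness, the three categories of scale choices can be summarized
in code. The following sketch evaluates the coupling that would multiply
the kernel in each case; the one--loop form of $\alpha_s$, the scale
factor $c^2$ and the ad hoc infrared freezing (introduced here only to
avoid the Landau pole at large dipole sizes) are our own illustrative
choices:
\begin{verbatim}
import numpy as np

c2 = 16.0                  # scale factor c^2, often taken c ~ 4 (see text)
beta0, Lambda2 = 2.25, 0.04
alpha_fr = 0.7             # ad hoc infrared freezing value

def alpha(mu2):
    # one-loop coupling, frozen in the infrared
    if mu2 <= Lambda2 * np.exp(np.pi/(beta0*alpha_fr)):
        return alpha_fr
    return np.pi / (beta0 * np.log(mu2/Lambda2))

def alpha_case1(Y, Q0_2=1.0):
    # (1) essentially fixed: depends on Y only, e.g. through Q_s(Y)
    return alpha(Q0_2 * np.exp(0.3*Y))      # toy Q_s^2(Y)

def alpha_case2(r):
    # (2) "parent-dipole" running
    return alpha(c2 / r**2)

def alpha_case3(r1, r2):
    # (3) final-state dependent: the two models quoted in the text
    sqrt_model = np.sqrt(alpha(c2/r1**2) * alpha(c2/r2**2))
    max_model = alpha(max(c2/r1**2, c2/r2**2))
    return sqrt_model, max_model

print(alpha_case2(1.0), alpha_case3(0.5, 2.0))
\end{verbatim}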
In the next section we compute running--coupling corrections to the
JIMWLK equation. In Sec.~\ref{sec:BK-numerics} we shall use this result
to derive the corrections for the BK case and study the consequences
numerically. As we will see, these corrections {\em do} depend
on the details of the final state at each evolution step and involve
all three scales. Moreover, in neither of the two equations do these
corrections naturally reduce to a single scale--dependent coupling
times the LO kernel.
\section{Derivation of JIMWLK evolution with running coupling}
\label{sec:deriv-jimwlk-running}
\subsection{Running coupling, Borel transform and the dispersive approach}
\label{sec:runn-coupl-disp}
We are interested in improving the leading--logarithmic result of
the JIMWLK and BK equations by including running--coupling effects,
namely corrections associated with the renormalization--group
equation (\ref{beta}). In general, running--coupling corrections are
important in QCD, for two reasons:
\begin{itemize}
\item{} They usually constitute a large part of the higher--order (notably NLO)
corrections~\cite{BLM,Brodsky:1997vq,Gardi:1999dq,Brodsky:2000cr,Neubert:1994vb,Beneke:1994qe,Ball:1995ni,Dokshitzer:1996qm}.
The main reasons for this are: (1) the average virtuality of gluons
usually differs from, and is typically much lower than, the principal
hard scale, which is often used as the default renormalization point; and
(2) $\beta_0$ is sizeable. These effects are especially important
when the hard scale is low, since then the coupling is large and its
evolution (\ref{beta}) is fast.
\item{} They dictate the large--order asymptotic behavior of the perturbative
series, which is dominated by factorially increasing contributions,
${\cal O}\left(n! \beta_0^{n-1} (\alpha_s/\pi)^n\right)$, the
renormalons~\cite{Lautrup:1977hs,'tHooft:1977am}. In this way the resummation
provides some insight into the non-perturbative side of the problem:
the \emph{ambiguity} in summing the perturbative expansion, which is
expected to cancel in the full theory, indicates the parametric
dependence of non--perturbative corrections on the hard scales
(power--suppressed corrections) and provides some clue on the
potential size of these
corrections~\cite{Mueller:1984vh,Parisi:1978bj,David:1983gz,Zakharov:1992bx};
for a review see Refs.~\cite{Beneke:1998ui,Beneke:2000kc}.
\end{itemize}
Both these aspects are relevant in non-linear evolution. As
demonstrated in Sec.~\ref{sec:running-coupling-types}, the
multi-scale nature of the problem calls for a systematic study.
In this section we wish to briefly recall some basic ideas and
techniques for resummation of running--coupling corrections that
will be generalized and applied to the JIMWLK case in what follows.
To this end, consider some perturbatively calculable (infrared and
collinear safe) quantity, $R(Q^2/\Lambda^2)$, depending, for
simplicity, on a single external scale $Q^2$ and having an expansion
starting at order $\alpha_s(\mu^2)/\pi$ with the LO coefficient
normalized\footnote{Using this normalization $R(Q^2/\Lambda^2)$ can
also be interpreted as an ``effective charge''~\cite{
Grunberg:1980ja}.} to $c_{00}\equiv 1$:
\begin{equation}
\label{eq:R_series}
R(Q^2/\Lambda^2)=\frac{\alpha_s(\mu^2)}{\pi}
+\left[\left(c_{11}+\ln\frac{\mu^2}{Q^2}\right)\beta_0 +
c_{10}\right]
\left(\frac{\alpha_s(\mu^2)}{\pi}\right)^2
+\ldots\ ,
\end{equation}
where $\beta_0$ is defined in (\ref{beta}) and the $c_{10}$ term is
a conformal coefficient, not associated with the running coupling.
We will work in the large--$\beta_0$ limit, where the $c_{10}$ term
is formally subleading. The well--known BLM prescription~\cite{BLM}
absorbs the NLO contribution that is leading in $\beta_0$ into the
LO by a scale choice:
$\mu^2_{\hbox{\tiny BLM}}=Q^2\,\exp\left\{{-c_{11}}\right\}$. In this way also
higher--order corrections often become smaller. This becomes
intuitive upon looking at the momentum integral that is
approximated by $\alpha_s(\mu^2_{\hbox{\tiny BLM}})/\pi$, where $\mu^2_{\hbox{\tiny BLM}}$
acquires the interpretation of the average gluon virtuality. The
all--order resummation of running--coupling effects in the single
dressed gluon approximation,
\begin{equation}
\label{eq:R_series_large_beta_0}
\left.R(Q^2/\Lambda^2)\right\vert_{\rm large \,\,\beta_0}\,=\,
\left(\frac{\alpha_s(Q^2)}{\pi}\right) \left[1+\sum_{n=1}^{\infty}
\,c_{nn}\,\times\,
\,\left(\beta_0\frac{\alpha_s(Q^2)}{\pi}\right)^n\right],
\end{equation}
can be viewed as a generalization of this procedure, where instead
of a single optimal scale choice, the proper (observable--dependent)
weight is given to any specific gluon virtuality. In the context of
the dispersive approach presented below (see Eq. (\ref{R})), this
weight function is called the ``characteristic
function''~\cite{Dokshitzer:1995qm}.
Technically, the resummation of all ${\cal
O}(\beta_0^{n-1}\alpha_s^n)$ higher--order corrections becomes
feasible in QCD, owing to its simple relation with the resummation
of diagrams with an arbitrary number of fermion--loop insertions.
Because $\beta_0$ is linear in $N_f$, one can simply compute the
fermion--loop chain diagrams
and then replace $N_f$ by
the non--Abelian value of $-6\beta_0$, according to (\ref{beta}).
In more formal terms, one begins by considering the large--$N_f$
limit with fixed $N_f\alpha_s$, the leading term in the flavor expansion.
Clearly this limit
itself is not physically interesting, it just provides a tool
to identify running--coupling contributions in the approximation where the
non-Abelian $\beta$ function (\ref{beta}) is one loop,
the so--called large--$\beta_0$ limit.
The calculation of a gluon propagator dressed by fermion--loop
insertions is simplified by the fact that the fermion loop itself is
transverse,
\[
\Pi_{\mu\nu}(k^2) = \left(g_{\mu\nu}-\frac{k_\mu k_\nu}{k^2}\right)
\,k^2\, \Pi(k^2),
\]
and therefore, in any gauge, such insertions affect only the
propagating particle pole $1/k^2$. The
all--order sum builds up a geometric series giving rise to a factor
$1/(1+\Pi(k^2))$. The resummed propagator takes the
form\footnote{Note that the $\xi$ term
in~\eqref{eq:resummed-propstructure-free-cov} is not affected as it
does not describe a physical mode. $\xi$ is the width of a Gaussian
approximation to a functional $\delta$-function that is meant to
implement the covariant gauge $\partial_\mu A^\mu=0$. Only for vanishing
width, $\xi=0$, do all gauge fields obey the gauge condition
strictly. The axial propagator is written for vanishing width, i.e.
the strict gauge condition. That is why the resummed terms multiply
the whole structure.}:
\begin{subequations}
\begin{align}
\label{eq:resummed-propstructure-free-cov}
\text{covariant gauges:} & & \frac{1}{k^2}\frac{1}{1+\Pi(k^2)} &
\left(g_{\mu\nu}-\frac{k_\mu k_\nu}{k^2}\right)+\xi
\frac{1}{k^2}\frac{k_\mu k_\nu}{k^2}
\\
\label{eq:resummed-propstructure-free-ax}
\text{strict axial gauge:} & & \frac{1}{k^2}\frac{1}{1+\Pi(k^2)} &
\left(g_{\mu\nu}-\frac{k_\mu n_\nu+n_\mu k_\nu}{k\cdot n}+ \frac{k_\mu k_\nu
n^2}{(k\cdot n)^2}\right)
\ ,
\end{align}
\end{subequations}
with
\begin{equation}
\label{eq:Pi_ren}
\left.\Pi(k^2)\right\vert_{\rm one-loop}
=\frac{\alpha_s(\mu^2) \beta_0}{\pi}\,\ln\left( -{k^2\,{\rm
e}^{-\frac53}}/{\mu^2}\right),
\end{equation}
where the renormalization of the fermion--loop $\Pi(k^2)$ was
done in the ${\overline {\rm MS}}$
scheme\footnote{We denote the $\overline {\rm MS}$ coupling
$\alpha_s^{\hbox{\tiny ${\overline{\rm MS}}$}}(\mu^2)$ by $\alpha_s(\mu^2)$.}
and where we already made the
replacement: $N_f\,\to\, -6\beta_0$.
The next step would be to insert the dressed propagator into the relevant
Feynman diagrams and perform the momentum integration, $d^4k$. Since the sum
over any number of $\Pi(k^2)$ insertions has already been done, by performing
the $k$-integration one would hope to get directly the \emph{resummed}
physical quantity $R(Q^2/\Lambda^2)$ in (\ref{eq:R_series_large_beta_0}).
Observing that the resummed propagator with (\ref{eq:Pi_ren}) has a Landau
singularity, one realizes that this direct all--order calculation cannot be
done. As we explain below, a regularization of the sum is required even if
the coefficients $c_{nn}$ are all finite\footnote{This is indeed the case if
$R$ is an observable, i.e. it requires no additional renormalization and has
no infrared divergencies.}. The simplest and most familiar way to see this
is to represent the effective running coupling\footnote{It is convenient to absorb the
factor ${\rm e}^{-\frac53}$ from the renormalization of the fermion loop in
(\ref{eq:Pi_ren}) into the definition of the coupling. We follow this
convention and define $\alpha_s^V$ as the coupling in the $V$ scheme, which
is related to the ${\overline {\rm MS}}$ scheme by $\Lambda_V^2={\rm
e}^{\frac53}\Lambda^2$.} \emph{that includes the dressing}, as a Borel sum:
\begin{equation}
\label{eq:alpha_V_Borel}
\frac{\alpha_s^V(-k^2-i0)}{\pi} := \frac{\alpha_s(\mu^2)}{\pi}
\,\frac{1}{1+\Pi(k^2)}\,=\,
\frac{1}{\beta_0}\, \int_0^{\infty}du\, T(u)\, \left(
-{k^2\,{\rm e}^{-\frac53}}/{\Lambda^2}\right)^{-u}
\ ,
\end{equation}
where for one--loop running coupling\footnote{Although we restrict the
calculation to one--loop coupling where $T(u)\equiv 1$, we will keep writing
$T(u)$ in any Borel representation: this allows for a straightforward
generalization to two--loop running coupling
following~\cite{Grunberg:1993hf,Gardi:2002xm}.},
namely upon using (\ref{eq:Pi_ren}), one simply gets
\[
\left.\frac{\alpha_s^V(-k^2-i0)}{\pi}\right\vert_{\rm one-loop}
= \frac{1}{\beta_0}\,\frac{1}{\ln\left(
-{k^2\,{\rm e}^{-\frac53}}/{\Lambda^2}\right)},
\]
and therefore $T(u)\equiv 1$. It is now
possible to proceed with the calculation of the Feynman diagrams where the
only change is that the particle pole is modified into a cut:
\begin{equation}
\label{eq:Borel_replacement}
\frac{1}{-k^2-i0} \,\to\,
\frac{1}{(-k^2-i0)^{1+u}}\ .
\end{equation}
This modification of the propagator is known as Borel or analytic
regularization.
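As a numerical check of Eq.~(\ref{eq:alpha_V_Borel}) with $T(u)\equiv 1$:
for space--like momenta above the Landau pole the Borel integral indeed
reproduces the one--loop coupling. A short sketch (with $\Lambda=1$ and an
illustrative value of $\beta_0$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

beta0 = 2.25
X = 50.0                                    # X = -k^2 e^{-5/3}/Lambda^2 > 1
borel, _ = quad(lambda u: X**(-u), 0.0, np.inf)
print(borel/beta0, 1.0/(beta0*np.log(X)))   # the two values agree
\end{verbatim}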
The all--order resummation of a given quantity $R$ in the large--$\beta_0$
limit can therefore be done using (\ref{eq:alpha_V_Borel}) by first performing
the momentum integration with the modified propagator
(\ref{eq:Borel_replacement}), which directly yields the Borel representation
of the sum in the single dressed gluon approximation, namely:
\begin{equation}
\left.R(Q^2/\Lambda^2)\right\vert_{\rm large \,\,\beta_0}\,
=\,\frac{1}{\beta_0}\,\int\limits_0^{\infty}du \,
T(u) \left(Q^2/\Lambda^2\right)^{-u} B(u).
\label{R_borel_}
\end{equation}
Upon performing the momentum integration for a typical observable in QCD one
would find that $B(u)$ has singularities along the positive real axis, which
is the integration axis. This obviously means that Eq.~(\ref{R_borel_}) is
ill-defined as it stands. The obstructing singularities are called infrared
renormalons, and they merely reflect the fact that the series for $R$ in
(\ref{eq:R_series_large_beta_0}) and thus also in (\ref{eq:R_series}), is non
summable. The problem only becomes manifest if one attempts to sum the series:
the terms of any finite--order expansion --- here the coefficients $c_{nn}$ in
(\ref{eq:R_series_large_beta_0})
--- are well--defined and finite. They can be
obtained by expanding $B(u)$ under the integral,
\begin{equation}
\label{eq:Borel_expand} B(u)\,=\, \sum_{n=0}^{\infty}
\,\frac{c_{nn}}{n!}\, \,u^n.
\end{equation}
The Borel function typically has a finite radius of convergence $u_0$, so
$c_{nn}$ grow as $n!/u_0^n$ at high orders\footnote{This asymptotic behavior
can be modified by additional factors of the form $n^{\gamma/\beta_0}$,
depending on the structure of the Borel singularity. For simplicity we
assume here simple poles.}; in the case of infrared renormalons in QCD, the
singularity is at $u=u_0>0$, so the coefficients all have the same
sign\footnote{In contrast, for ultraviolet renormalons in QCD $u_0<0$, so
there is sign oscillation at high order.}. In terms of the momentum
integration these large perturbative coefficients are associated with
extremely small virtualities $k^2\to 0$, namely soft modes whose dynamics is
non-perturbative. Thus, the non-existence of the sum is a reflection of the
fact that perturbation theory does not fully describe the dynamics, and
the observable is sensitive to some extent to non-perturbative contributions. The
ambiguity can therefore be resolved by a proper separation prescription
between perturbative and non-perturbative corrections, either by an infrared
cutoff, or by other means, for example by modifying the integration contour of
the Borel integral~(\ref{R_borel_}) in a particular way or by taking its
principal value. The same prescription must then be applied to regularize
the non-perturbative contribution.
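The factorial growth described above is easy to make explicit. A minimal
sympy sketch for a toy Borel function with a single infrared renormalon
pole, $B(u)=1/(1-u/u_0)$ (our illustrative choice), recovers
$c_{nn}=n!/u_0^n$ via Eq.~(\ref{eq:Borel_expand}):
\begin{verbatim}
import sympy as sp

u, u0 = sp.symbols('u u0', positive=True)
B = 1/(1 - u/u0)                 # toy Borel function, IR pole at u = u0
Bser = sp.series(B, u, 0, 6).removeO()
for n in range(6):
    cnn = sp.factorial(n) * Bser.coeff(u, n)
    print(n, sp.simplify(cnn))   # -> n!/u0**n: same-sign factorial growth
\end{verbatim}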
An additional, general property is that the singularities in $B(u)$ occur at
integer --- and sometimes half integer --- values of $u$. This corresponds to the
fact that alternative definitions of the sum of the series (that arise e.g. by
modifying the integration contour in (\ref{R_borel_})) differ by integer --- or
half integer --- powers of $\Lambda^2/Q^2$. These ambiguities must be cancelled
by non-perturbative power corrections, and they can therefore serve as a
perturbative probe of such effects. In cases where an operator product
expansion applies, one can get a direct interpretation of the source of each
ambiguity in terms of local operators of higher dimension (or
twist)~\cite{David:1983gz,Mueller:1984vh}. In such cases it is also possible
to trace the cancellation of
ambiguities~\cite{Beneke:1998ui,Gardi:2002bk,Braun:2004bu}. In the absence of
an operator product expansion, the renormalon technique often provides a
unique window into the non-perturbative regime: by identifying the ambiguities
in summing the perturbative series one learns about the parametric dependence
of power corrections on the hard scales and about their potential
size~\cite{Parisi:1978bj, David:1983gz, Zakharov:1992bx, Mueller:1984vh,
Beneke:1998ui, Gardi:2002bk, Braun:2004bu, Bigi:1994em, Smith:1994id,
Beneke:1994qe, Beneke:1995qe, Ball:1995ni, Dokshitzer:1995qm,
Grunberg:1998ix, Gardi:1999dq, Gardi:2000yh, Cacciari:2002xb, Gardi:2003iv,
Gardi:1998qr}.
Computing directly the Borel function $B(u)$ using
(\ref{eq:Borel_replacement}) may sometimes present technical difficulties. One
of the most effective techniques to deal with this problem is the dispersive
technique, which has been used in a variety of applications, see
e.g.~\cite{Bigi:1994em, Smith:1994id, Beneke:1994qe, Beneke:1995qe,
Ball:1995ni, Dokshitzer:1995qm, Grunberg:1998ix, Gardi:1999dq, Gardi:2000yh,
Cacciari:2002xb, Gardi:2003iv, Gardi:1998qr}. In the next sections we shall
generalize this technique to the case of Weizs\"acker-Williams background
fields, in order to use it in the re-derivation of the JIMWLK equation. Let us
therefore review here the basic idea, and collect the necessary formulae.
The dispersive method, see e.g.~\cite{Ball:1995ni, Dokshitzer:1996qm,
Beneke:1995qe}, recasts the dressed gluon propagator, and thus the running
coupling of (\ref{eq:alpha_V_Borel}) as a dispersive integral. As we shall
see, this allows one to compute a generic quantity $R(Q^2/\Lambda^2)$ to all
orders in the large--$\beta_0$ limit, by simply replacing the massless gluon
pole by a massive one, namely:
\begin{equation}
\label{eq:massive_gluon_replacement}
\frac{1}{-k^2-i0} \,\to\,
\frac{1}{m^2-k^2-i0},
\end{equation}
instead of~\eqref{eq:Borel_replacement}.
The dispersive representation takes the form
\begin{equation}
\label{eq:simple-dispersive}
\frac{\alpha_s^V(-k^2-i0)}{\pi} \,=\,
\frac{\alpha_s(\mu^2)}{\pi} \,\frac{1}{1+\Pi(k^2)} =\, \frac{1}{\beta_0}\int\limits_0^\infty dm^2
\frac{\rho_V(m^2)}{m^2-k^2-i0}
\ ,
\end{equation}
where $\rho_V$ is the discontinuity of the coupling on the time-like
axis,
\begin{equation}
\label{eq:discontinuity}
\rho_V(m^2):= -\frac{\beta_0}{\pi}\,{\rm
Im}\left\{\alpha_s^V(-m^2-i0)/\pi\right\} =\frac{1}{\pi}
\frac{\beta_0\alpha_s(\mu^2)}{\pi}\,\frac{ \text{Im}
\left\{\Pi(m^2)\right\}}{\vert
1+\Pi(m^2)\vert^2} \ .
\end{equation}
Alternatively, it is convenient to express
Eq.~(\ref{eq:simple-dispersive}) in terms of the
``effective\footnote{Let us emphasize that we do {\em not} make any
use of the dispersive representation to impose analyticity and
regularize the Landau singularity, as done for example in
Ref.~\cite{Shirkov:1997wi}. We merely use the dispersive method
to compute the Borel function, which serves as a generating function
for the perturbative coefficients according to
(\ref{eq:Borel_expand}) and as a tool to analyze infrared renormalon
ambiguities.} time--like coupling'',
\begin{equation}
\label{eq:alpha-via-rho}
\frac{\alpha_s^V(-k^2-i0)}{\pi}
= \frac{1}{\beta_0}\int\limits_0^\infty dm^2 \frac{\rho_V(m^2)}{m^2-k^2-i0}
= \,
\frac{1}{\beta_0}
\int_0^{\infty}\frac{dm^2}{m^2} A_{{\text{eff}}}^V(m^2)
\left[-m^2\frac{d}{dm^2} \frac{1}{k^2-m^2}\right]
\ ,
\end{equation}
where $A_{{\text{eff}}}^V(m^2)$ is defined by
\begin{equation}
\label{eq:Aeff}
\rho_V(m^2) =: \,
m^2\frac{d}{d m^2}A_{\text{eff}}^V(m^2).
\end{equation}
The explicit expression for $A_{{\text{eff}}}^V(m^2)$
in the case of a one--loop running--coupling (namely
(\ref{eq:simple-dispersive}) with~(\ref{eq:Pi_ren}))
takes the form:
\[
\left.A_{{\text{eff}}}^V(m^2)\right\vert_{\rm one-loop}\,=\,\frac12 -\frac1\pi
\arctan\left(\frac1\pi\ln\frac
{m^2}{\Lambda_V^2}\right).
\]
A two-loop expression for $A_{{\text{eff}}}^V(m^2)$ can be found in
Ref.~\cite{Gardi:1998qr}.
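The defining relation (\ref{eq:Aeff}) can be checked numerically for these
one--loop expressions: with $\Lambda_V=1$, Eq.~(\ref{eq:discontinuity})
with the one--loop coupling gives
$\rho_V(m^2)=-1/[\ln^2(m^2/\Lambda_V^2)+\pi^2]$, which indeed equals
$m^2\,dA^V_{\text{eff}}/dm^2$. A minimal sketch:
\begin{verbatim}
import numpy as np

def A_eff(m2):
    # one-loop A_eff^V with Lambda_V = 1 (the arctan form above)
    return 0.5 - np.arctan(np.log(m2)/np.pi)/np.pi

def rho_V(m2):
    # one-loop discontinuity of the coupling on the time-like axis
    return -1.0 / (np.log(m2)**2 + np.pi**2)

m2 = np.logspace(-3.0, 3.0, 13)
h = 1.0e-6
deriv = (A_eff(m2*(1.0+h)) - A_eff(m2*(1.0-h))) / (2.0*h)  # m^2 dA/dm^2
print(np.allclose(deriv, rho_V(m2)))                        # -> True
\end{verbatim}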
The all--order resummation of $R$ in the
large--$\beta_0$ limit can now be done using
(\ref{eq:alpha-via-rho}) by first performing the momentum
integration with a massive gluon (i.e. with the replacement of
Eq.~(\ref{eq:massive_gluon_replacement})) leaving the dispersive integral
over the mass undone. This leads to\footnote{Note that we ignore
pure
power correction terms ($\Lambda^2/Q^2$) which distinguish this integral
from the Borel sum we eventually compute. These will be totally irrelevant
here. The interested reader is referred
to~\cite{Ball:1995ni,Grunberg:1998ix,Gardi:1999dq}.}
\begin{align}
\label{R}
\left.R(Q^2/\Lambda^2)\right\vert_{\rm large \,\,\beta_0}\,
= & \frac{1}{\beta_0} \int_0^{\infty}\frac{dm^2}{m^2}
A_{{\text{eff}}}^V(m^2) \left[-m^2\frac{d}{dm^2} {\cal F} (m^2/Q^2)\right]
\nonumber \\
= & \frac{1}{\beta_0}\int_0^{\infty}\frac{dm^2}{m^2} \rho_V(m^2) \left[ {\cal
F} (m^2/Q^2)-{\cal F} (0)\right],
\end{align}
where in the $m^2\to 0$ limit ${\cal F} (m^2/Q^2)$ yields the LO coefficient,
${\cal F} (0)=c_{00}=1$. By definition, ${\cal F} (m^2/Q^2)$ depends on the
specific quantity computed and it encodes perturbative as well as
power--correction information (see e.g.~\cite{Ball:1995ni,
Dokshitzer:1996qm}). ${\cal F} (m^2/Q^2)$ is therefore called the
``characteristic function'' of the quantity or process under consideration.
Upon expanding the effective coupling $A_{{\text{eff}}}^V(m^2)$ in (\ref{R}) in powers
of $\ln (m^2/\Lambda^2)$ and integrating term by term (log moments) one
obtains the perturbative coefficients $c_{nn}$. On the other hand, expanding
${\cal F} (m^2/Q^2)$ at small $m^2$ and extracting the non--analytic
terms~\cite{Dokshitzer:1996qm} one obtains information on renormalon
ambiguities and thus on power corrections in $\Lambda^2/Q^2$.
As explained above, the perturbative and power--correction information encoded
in the characteristic function can be conveniently extracted from the Borel
representation (\ref{R_borel_}). The Borel formulation also offers a natural
definition of the sum, using the principal value (PV) prescription. The PV Borel sum
can be related to a Euclidian--cutoff regularization~\cite{Gardi:1999dq},
and it has good analytic properties as a function of the hard
scales~\cite{Cacciari:2002xb,Andersen:2005bj}, e.g. it has no Landau singularities
and it is real--valued.
In order to relate the dispersive and the Borel formulations, (\ref{R})
and~(\ref{R_borel_}), respectively, one first computes the Borel
representation of the discontinuity of the coupling by using
(\ref{eq:alpha_V_Borel}) in (\ref{eq:discontinuity}); a straightforward
calculation yields:
\begin{equation}
\label{eq:discontinuity_Borel}
\rho_V(m^2)= - \int_0^{\infty} du\, T(u)\, \frac{\sin \pi u}{\pi} \,
e^{\frac53 u}\,\left({m^2}/{\Lambda^2}\right)^{-u}.
\end{equation}
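For $m^2>\Lambda_V^2={\rm e}^{5/3}\Lambda^2$, where the $u$ integral
converges, Eq.~(\ref{eq:discontinuity_Borel}) with $T(u)\equiv 1$ can be
checked against the closed one--loop form of $\rho_V$ used above; a short
sketch with $\Lambda=1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m2 = 50.0
a = np.log(m2) - 5.0/3.0                      # = ln(m^2/Lambda_V^2)
borel, _ = quad(lambda u: -np.sin(np.pi*u)/np.pi
                * np.exp(5.0*u/3.0) * m2**(-u), 0.0, np.inf)
print(borel, -1.0/(a**2 + np.pi**2))          # the two values agree
\end{verbatim}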
Next, by substituting (\ref{eq:discontinuity_Borel}) in (\ref{R}) and changing
the order of integration one recovers the Borel representation
(\ref{R_borel_}) with
\begin{eqnarray}
\label{eq:B-F-relation}
B(u) &=&
-e^{\frac53 u}\frac{\sin \pi u}{\pi}\,
\int\limits_0^{\infty}d\zeta \,\zeta^{-1-u}
\,\left[{\cal F} (\zeta)-{\cal F}(0)\right] \,=\
-e^{\frac53 u}\frac{\sin \pi u}{\pi u}\,
\int\limits_0^{\infty}d\zeta \,\zeta^{-u}
\,\frac{d{\cal F} (\zeta)}{d\zeta} .
\end{eqnarray}
In the following we will use this formula to compute the Borel
function $B(u)$ from ${\cal F} (\zeta)$.
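As an illustration, consider the toy characteristic function
${\cal F}(\zeta)=1/(1+\zeta)$ (chosen only for illustration; it is not the
characteristic function of any quantity discussed here). The Mellin
integral in (\ref{eq:B-F-relation}) then gives $B(u)={\rm e}^{\frac53 u}$,
which the following sketch verifies numerically:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def B_of(u):
    # Eq. (B-F-relation) for the toy F(zeta) = 1/(1+zeta), F(0) = 1
    dF = lambda z: -1.0/(1.0 + z)**2
    I1, _ = quad(lambda z: z**(-u)*dF(z), 0.0, 1.0)
    I2, _ = quad(lambda z: z**(-u)*dF(z), 1.0, np.inf)
    return -np.exp(5.0*u/3.0)*np.sin(np.pi*u)/(np.pi*u)*(I1 + I2)

for u in (0.2, 0.5, 0.8):
    print(u, B_of(u), np.exp(5.0*u/3.0))   # last two columns agree
\end{verbatim}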
Finally, it should be emphasized that the resummed result for
$R(Q^2/\Lambda^2)$, computed with a single dressed gluon,
is exactly renormalization--scale invariant, despite using just a
subset of all perturbative corrections.
At power level, we shall define the sum using the principal value
prescription of~(\ref{R_borel_}) and use the
renormalon poles to analyze non-perturbative power corrections.
\subsection{The dressed propagator in the background field}
\label{sec:dress-prop-backgr}
At this point, we have described the method used to derive the
JIMWLK equation at fixed coupling
(Sec.~\ref{sec:evolution-equations}) and the generic tools used to
promote a one--loop calculation to one with a dressed gluon
(Sec.~\ref{sec:runn-coupl-disp}). These need now to be combined.
Since all the diagrams in~\eqref{eq:JIMWLK-LO-diagram-cont} involve
a single gluon, the dispersive technique should be applicable.
The crucial step is to construct a propagator that is both in the
background of Weizs\"acker-Williams field
\emph{and} is dressed by running--coupling corrections.
This is the main goal of this section.
Independently of the particular technique adopted, the calculation of
running--coupling corrections to the kernel involves a subtlety which is
associated with the presence of new production channels, namely the production
of a well separated $q\bar{q}$ pair or a pair of gluons, starting at NLO.
These processes involve \emph{two} additional Wilson lines --- not one ---
on the r.h.s. of the
evolution equation, namely an entirely new contribution to the JIMWLK
Hamiltonian (\ref{eq:JIMWLK-Hamiltonian-LO}).
Obviously, these contributions should not be considered a running--coupling
correction, and we do not aim to compute them here. As explained
in Sec.~\ref{sec:runn-coupl-disp} running--coupling corrections are usually
identified by considering diagrams with fermion--loop insertions, namely considering the
formal large--$N_f$ limit with fixed $N_f\alpha_s$ (the flavor expansion), and
then making the substitution $N_f\to-6\beta_0$. However, in the present case
this criterion is insufficient: similarly to
running--coupling corrections, $q\bar{q}$ pair production appears at LO in the
flavor expansion, as can be seen in Fig.~\ref{fig:NLO-diagrams}.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{bubbletypes-diag}
\caption{\small \em Virtual and real contributions at LO (a), (b) and
NLO (c), (d) and (e). Vertically (as delineated by the background shading)
they are grouped by the Wilson line structure induced by the target
interaction. The latter are indicated on the diagrams via large solid
dots, their analytical form is listed at the bottom. Diagram (e) induces
a new structure via a $q\Bar q$ loop that interacts with the target.}
\label{fig:NLO-diagrams}
\end{figure}
Moreover, the running--coupling corrections and $q\bar{q}$ pair production
are physically distinct only if the transverse separation of the pair is non-vanishing.
Therefore, the identification of running--coupling corrections to the kernel requires a
separation prescription. As we shall see below, the use of the dressed propagator in the
background field automatically implements such a prescription.
A related complication arises in the derivation of the JIMWLK Hamiltonian
with running coupling because of the possibility of interaction between
the background field and a produced $q\bar{q}$ pair.
Dressing the \emph{virtual} diagram, shown in Fig.~\ref{fig:NLO-diagrams}~(a),
poses no problems, since the gluon itself does not interact with the
target\footnote{At this point one might be worried that some of the intermediate
quark--gluon vertices (whose location is to be integrated over) might lie on
different sides of the target line -- this is precluded by causality since
the interaction is restricted to the light-like support of the target field
$b^+=\delta(x^-)\beta({\bm x})$. Even the dressed gluon line cannot
propagate back and forth across this hyperplane. This will be borne out by
the explicit calculation.}. For each LO virtual contribution
in~\eqref{eq:interaction-diagrams}, as well as their self--energy--like
analogues, there is just one type of dressed virtual counterpart; e.g. in
Fig.~\ref{fig:NLO-diagrams}, diagram~(c) corresponds to dressing of
diagram~(a). The eikonal interactions with the target field remain unaffected
by the dressing. The dressed propagator in (c) reduces to the dressed
propagator in the absence of the target field, and the running--coupling
corrections may be directly obtained using the tools of
Sec.~\ref{sec:runn-coupl-disp}.
On the other hand, real--emission diagrams, such as the diagram in
Fig.~\ref{fig:NLO-diagrams}~(b), lead to two physically different
generalizations upon inserting fermion loops; these are shown in diagrams~(d)
and~(e), respectively. Diagram~(d) contains the same eikonal factors as the LO
diagram, while diagram~(e) contains a new channel: the production of a well
separated $q\Bar q$--pair, which interacts with the target field at two
\emph{distinct} points on the transverse plane ${\bm z}_1$ and ${\bm z}_2$. Instead
of the single gluonic eikonal factor $U_{\bm z}$ as in diagrams~(b) and~(d),
in diagram (e) one encounters a factor $2\,\tr(t^a U_{{\bm z}_1} t^b U_{{\bm
z}_2}^\dagger)$. The distinction between the two, however, is lost if the
transverse separation of the $q\Bar q$--pair (or the invariant mass of the
pair) becomes small: in this limit the $q\Bar q$--pair in the final state
becomes indistinguishable from a gluon. This (by necessity) is mirrored in
the eikonal structure associated with the diagrams:
\begin{equation}
\label{eq:eikonal-local-lim}
\lim\limits_{{\bm z}_1,{\bm z}_2\to {\bm z}} 2\tr(t^a U_{{\bm z}_1} t^b
U_{{\bm z}_2}^\dagger) = U_{\bm z}^{a b}
\ .
\end{equation}
Therefore, while in the virtual diagrams running--coupling corrections can be
reconstructed solely from bubble sums as in Fig.~\ref{fig:NLO-diagrams} (c),
in the real--emission diagrams these corrections are associated with both
Fig.~\ref{fig:NLO-diagrams}~(d) and the local limit of~(e). However, real and
virtual contributions are related by the requirement of probability
conservation: in the absence of interactions the dressed virtual terms must
cancel the dressed real contributions exactly. In the following we show that
it is possible to generalize the LO calculation
of~Sec.~\ref{sec:evolution-equations} in a way that manifestly implements all
these requirements. Specifically, the separation prescription between the
running--coupling corrections that we compute and
the $q\bar{q}$ production channel that we neglect, simply amounts to the replacement of
$2\,\tr(t^a U_{{\bm z}_1} t^b U_{{\bm z}_2}^\dagger)$ of diagram (e)
by its local limit of (\ref{eq:eikonal-local-lim}), namely
$U_{\bm z}^{a b}$. As announced, this is automatically realized upon using
the dressed propagator in the background field to compute the diagrams of
Eq.~\eqref{eq:JIMWLK-LO-diagram-cont}.
The main ingredient in the LO calculation in~\cite{Weigert:2000gi} is the
propagator of the fluctuation $\delta A$ in the presence of the background
field $b^+$. The calculation sketched in Sec.~\ref{sec:evolution-equations}
was performed in the $A^-=0$ axial gauge in order to reduce the number of
diagrams that contribute. These diagrams are shown in
Eq.~\eqref{eq:JIMWLK-LO-diagram-cont}. The eikonal nature of the interaction
in these diagrams implies that one only needs the ``$++$'' component of the
fluctuation propagator, $\langle \delta A^+_x \delta A^+_y \rangle^{a b}$.
This remains true for the dressed propagator.
At LO, the gluon propagator can be expressed (see
Eq.~(\ref{eq:prop-int-leading}) below) in terms of an external tensor
structure and the propagator~$i\,G^{a b}_0(x',y')$ of a massless scalar field
in the adjoint color representation, propagating through the target
field~\cite{McLerran:1994vd}. Our generalization will involve its massive
counterpart $i\,G^{a b}_m(x',y')$. Since the structure of the propagator has
been discussed extensively in the
literature~\cite{McLerran:1994vd,Hebecker:1998kv}, we only describe here the
key ingredients that will be needed to apply the dispersive method and
re-derive the JIMWLK equation with running coupling.
\subsubsection*{Massive scalar propagator in the background field}
The starting point is a spectral representation of the massive scalar
propagator
\begin{equation}
\label{eq:G-m-disp-def}
G^{a b}_m(x,y) := \int\frac{d^4k}{(2\pi)^4} \frac{1}{k^2-m^2+i0}
\
[\phi_{k}(x)\phi^\dagger_{k}(y)]^{a b}\,,
\end{equation}
where we have combined a spectral integral over
the virtuality $\int d k^2$, and the sum over all states at fixed virtuality,
$\int {d^2{{k}_{\perp}}\, dk^-}/{ (2 k^-)}$,
into one 4-dimensional integral ${\int d^4k}$.
Here $\phi_{k}^{a c}(x)$ are the solutions of the Klein-Gordon equation at
virtuality $k^2=2k^+k^- -{\bm k}^2$ in the background field~$b^+$ of
Eq.~\eqref{eq:bgfield}:
\begin{equation}
\label{eq:K-G-off-shell}
(-D[b]^2-k^2)^{a b} \phi_{k}^{b c}(x) =0
\ ,
\end{equation}
where $a$ and $b$ are color indices and $c$ is a basis label.
$D_\mu[b]= \partial_\mu -i g b_\mu$ is the (adjoint) covariant
derivative in the background field~\eqref{eq:bgfield}.
Explicitly,
\begin{align}
\label{eq:phi-bg-sol}
\phi^{a b}_k(x) = \frac{1}{(2\pi)^2} \int d^4p \,e^{-i p\cdot x} \,\delta\left(p^2-k^2\right)\,
\delta\left(1-\frac{p^-}{k^-}\right) \ \int d^2z\ e^{-i(\bm{p}-\bm{k})\bm{z}}
[U^{-1}_{x^-,-\infty}(\bm{z})]^{a b} \, ,
\end{align}
reflecting the fact that the interaction with the background field can
transfer \emph{transverse} momentum from the target, while it cannot modify
the conserved $k^-$ component nor the virtuality $k^2$. The color structure
in (\ref{eq:phi-bg-sol}) is carried by the adjoint eikonal factor
$[U^{-1}_{x^-,y^-}(\bm{z})]^{a b}$ defined by
\begin{equation}
\label{eq:U-def-one-side}
[U^{-1}_{x^-,y^-}(\bm{z})]^{a b} :=
\left[ {\sf P}\exp\left\{
ig \int\limits_{y^-}^{x^-} dz^- \delta(z^-) \beta(\bm{z})
\right\}\right]^{a b}
\ .
\end{equation}
A straightforward calculation using~(\ref{eq:phi-bg-sol})
in~(\ref{eq:G-m-disp-def}) yields:
\begin{align}
\label{eq:KGpropexpl}
G^{ab}_m(x,y) = & \int\!\! \frac{ dk^-}{2k^- (2\pi)^3}
\left[\theta(x^--y^-)\theta(k^-) - \theta(y^--x^-)\theta(-k^-)\right]
\int\!\! {d^2p_\perp d^2q_\perp } \nonumber \\ & \hspace{1cm} \times \left[
e^{-i p\cdot x} \int\! \frac{ d^2z_\perp}{(2\pi)^2}\
e^{-i(\bm{p}-\bm{q})\bm{z}} [U^{-1}_{x^-,y^-}(\bm{z})]^{a b} e^{i
q\cdot y} \right] \ ,
\end{align}
where the 4-momenta $p$ and $q$ obey the constraints
\[
{p}^+
= \frac{{\bm{p}}^2+m^2}{ 2k^-}\,;\qquad {q}^+= \frac{{\bm{q}}^2+m^2}{ 2k^-}\,;\qquad
{p}^-={q}^-=k^-.
\]
Eq.~\eqref{eq:KGpropexpl} differs from the free massive propagator by the
Wilson line (\ref{eq:U-def-one-side}) that represents the interaction with the target field.
Since the interaction in (\ref{eq:U-def-one-side}) is localized at $z^-=0$, this Wilson line
reduces to $U_{\bm{z}}$ (defined in (\ref{eq:U-via-b})) if
$x^->0>y^-$, to $U^\dagger_{\bm{z}}$ if $x^-<0<y^-$ and to~$1$ otherwise.
This implies~\cite{McLerran:1994vd,Hebecker:1998kv}
free propagation\footnote{With $U=1$ there is no obstacle to performing the $\bm z$
integration in~\eqref{eq:KGpropexpl}.} when the endpoints $x^-$ and $y^-$
are both on the same side of the $z^-=0$
hyperplane, while if they lie on opposite sides
one encounters a three--step process: free
propagation from the initial point onto the hyperplane, a current interaction
with the background field, and free propagation again from there to the
endpoint. Only in the latter case does the background field induce a change
of the transverse momentum in the propagator. It is important to note that
these features carry over to the fluctuation propagator $\langle \delta A^+_x
\delta A^+_y \rangle^{a b}$ in both the massless and the massive case.
\subsubsection*{Massive gauge--field propagator in the background field}
The relevant, ``++'' component of the fluctuation propagator at LO is
expressed in terms of the scalar--field propagator $i\,G^{a b}_0(x',y')$ as
\begin{equation}
\label{eq:prop-int-leading}
\langle \delta A^+_x \delta A^+_y\rangle^{a b} =
- \left[\frac{1}{{\partial^-}}\right]_{x x'} \left\{
-
D^2[B]_{x'} i\, G^{a b}_0(x',y')
\,+\,
\overrightarrow{\partial^j_{x'}}
i\, G^{a b}_0(x',y')
\overleftarrow{\partial^j_{y'}}
\right\}
\left[\frac{1}{{\partial^-}}\right]_{y' y}
\ .
\end{equation}
Here an integration convention over 4-vector coordinates $x'$ and $y'$ is
implied.
Note that one may simplify the first term in this expression using the defining
equation for $G_0$, namely
\[
D^2[b]_{x'} \, G^{a b}_0(x',y') = \delta^{(4)}(x'-y').
\]
We have chosen not to do this since $G^{a b}_0(x',y')$ represents the
propagating particle pole that will be affected by the dressing,
while the remainder of the expression will not change. To illustrate that
this is indeed the right procedure let us connect back to the free case,
or alternatively restrict ourselves to virtual diagrams such as
Fig.~\ref{fig:NLO-diagrams} (c), by replacing
\begin{align}
\label{eq:to-int-repl}
D^2[b]^{a b} \longrightarrow \Box_{x'}
\hspace{2cm}
i G^{a b}_0(x,y) \longrightarrow
\frac{i}{\Box}(x,y)\ \delta^{a b}
\end{align}
to obtain
\begin{equation}
\label{eq:prop-free-leading}
\left.\langle \delta A^+_x \delta A^+_y\rangle^{a b}\right\vert_{\rm free} =
- \left[\frac{1}{{\partial^-}}\right]_{x x'} \left\{
-
\Box_{x'} \Big[\frac{i}{\Box}\Big]_{x' y'}
\,+\,
\overrightarrow{\partial^j_{x'}}
\Big[\frac{i}{\Box}\Big]_{x' y'}
\overleftarrow{\partial^j_{y'}}
\right\}
\left[\frac{1}{{\partial^-}}\right]_{y' y} \delta^{a b}.
\end{equation}
This coordinate--space expression may be readily Fourier--transformed and
shown to coincide with the ``++'' component of the free axial--gauge
propagator\footnote{For our purposes the gauge vector $n$ projects out the
minus component of a vector: $n\cdot a=a^-$.}
of~\eqref{eq:resummed-propstructure-free-ax}, which is
$({1}/{k^2})\,{2k^+}/{k^-}$. One can in fact relate the two terms
in~\eqref{eq:prop-free-leading} respectively, to the contributions
of longitudinal and transverse polarizations in the conventions of
light--cone perturbation theory. To this end let us define
transverse polarizations by
\[
{\epsilon_T}^{\mu\,\lambda} := \left(\frac{{\bm k}\cdot{\bm
\epsilon}_T^\lambda}{k^-}, {\bm
\epsilon}_T^\lambda,0\right) \qquad \text{based on}\,\,\,{\bm
\epsilon}_T^\lambda:=\left(1,i(-1)^{\lambda+1}\right)/\sqrt{2}
\quad \text{with}\,\, \lambda=1,2
\ .
\]
Here we used the notation $(+,\perp,-)^\mu$ for the components of a
four--vector. We may then recast
Eq.~\eqref{eq:resummed-propstructure-free-ax} (before dressing the propagator),
for our light--cone gauge (where $n^2=0$), as
\begin{align}
\label{eq:transv-long-pol}
\frac1{k^2}\left(g_{\mu\nu}-\frac{n^\mu k^\nu + k^\mu n^\nu}{k \cdot n}\right) =
- \frac{n^\mu n^\nu}{(k\cdot n)^2}\ k^2 \frac1{k^2}
-\sum\limits_{\lambda=1,2} \frac{ {\epsilon_T^*}^{\mu\,\lambda}
{\epsilon_T}^{\nu\,\lambda} }{k^2}.
\end{align}
This uniquely identifies the second term in~\eqref{eq:prop-free-leading}
with transverse polarizations while
the first term is related to longitudinal contributions.
In~\eqref{eq:prop-free-leading} the factors $\frac{i}{\Box}(x,y)$ correspond
to the propagating particle pole $1/k^2$
in~\eqref{eq:resummed-propstructure-free-ax}, which get multiplied by
${1}/({1+\Pi(k^2)})$ upon dressing the gluon. The point of separating the
simple tensor structure of $({1}/{k^2})\,{2k^+}/{k^-}$
by polarizations into two
terms as in~\eqref{eq:prop-int-leading} and~\eqref{eq:prop-free-leading} is that the
separation helps to extract the leading small--$x$ contribution: when used in
the diagrams of Eq.~\eqref{eq:JIMWLK-LO-diagram-cont}, only the second term
contributes a logarithm in $1/x$, while the first term is subleading.
Eq.~\eqref{eq:prop-free-leading} offers a straightforward route to
apply the dispersive method for {\em virtual} diagrams such as
Fig.~\ref{fig:NLO-diagrams}~(c).
As explained above, in this case the dressed propagator does not
interact with the target field, so the standard dispersive method
described in Sec.~\ref{sec:runn-coupl-disp} directly applies.
Following Eq.~(\ref{eq:massive_gluon_replacement}) one replaces
$1/k^2$ by $1/(k^2-m^2)$ or, in coordinate
language \[\Big[\frac{i}{\Box}\Big]_{x' y'} \to
\Big[\frac{i}{\Box+m^2}\Big]_{x' y'}.\]
After making this replacement in (\ref{eq:prop-free-leading}) the first term
may be further split according to
\begin{equation}
\label{eq:firsttermsplit}
-
\Box_{x'}\Big[\frac{i}{\Box+m^2}\Big]_{x' y'}
=\,-
\,i\delta^{(4)}_{x' y'}
\,+
\, m^2 \Big[\frac{i}{\Box+m^2}\Big]_{x' y'}
\ ,
\end{equation}
so the massive propagator finally takes the form
\begin{equation}
\label{eq:prop-free-disp}
\left.\langle \delta A^+_x \delta A^+_y\rangle_m^{a b}\right\vert_{\rm free} =
- \left[\frac{1}{{\partial^-}}\right]_{x x'} \left\{
\,-
\,i\delta^{(4)}_{x' y'}
\,+
\, m^2 \Big[\frac{i}{\Box+m^2}\Big]_{x' y'}
\,+\,
\overrightarrow{\partial^j_{x'}}
\Big[\frac{i}{\Box+m^2}\Big]_{x' y'}
\overleftarrow{\partial^j_{y'}}
\right\}
\left[\frac{1}{{\partial^-}}\right]_{y' y} \ \delta^{a b}.
\end{equation}
As we shall see below, the $\delta$-function term does not generate a
logarithm in $1/x$, while the other two terms do, and will therefore be
relevant for the JIMWLK equation.
The dispersive representation of the dressing was derived in the free case and
it therefore directly applies only to~\eqref{eq:prop-free-disp} and thus to
the resummation of running--coupling corrections in the virtual diagrams in
the JIMWLK Hamiltonian. Nevertheless, a similar procedure can be applied to
real--emission diagrams, where it provides a natural \emph{definition} of the
running--coupling contribution, whose separation from the $q\bar{q}$-pair
production channel is a priori ambiguous. To this end we construct the massive
propagator in the background field by reversing the
replacements~\eqref{eq:to-int-repl} in~\eqref{eq:prop-free-disp}:
\begin{equation}
\label{eq:prop-int-disp}
\langle \delta A^+_x \delta A^+_y\rangle_m^{a b} =
- \left[\frac{1}{{\partial^-}}\right]_{x x'} \left\{
\,-
\,i\delta^{(4)}_{x' y'}
\,+
\,
m^2\ i\,G^{a b}_m(x',y')
+
\overrightarrow{\partial^j_{x'}}
i\, G^{a b}_m(x',y')
\overleftarrow{\partial^j_{y'}}
\right\}
\left[\frac{1}{{\partial^-}}\right]_{y' y}.
\end{equation}
By using this propagator in \emph{both}
real and virtual diagrams, the real--virtual cancellation mechanism is in
place, just as at LO. Thus, probability conservation is guaranteed. It is by
this fundamental principle that the dressing of the \emph{virtual} diagrams by
fermion--loop insertions, which is uniquely determined by
(\ref{eq:prop-free-disp}) using the dispersive method, essentially dictates
the structure of running--coupling corrections in the JIMWLK kernel as a
whole.
\subsubsection*{Generalization of the dispersive approach}
To incorporate running--coupling corrections in the derivation of
the JIMWLK equation, we
will modify the fluctuation propagator in analogy with Eq.~(\ref{R}),
namely
\begin{align}
\label{eq:deltaA-disp-replacement}
\frac{\alpha_s}{\pi} \langle \delta A^+_x \delta A^+_y\rangle^{a b} \to
\frac{1}{\beta_0}\int\limits_0^\infty \frac{dm^2}{m^2} \rho_V(m^2)
\Big[\langle \delta A^+_x \delta A^+_y\rangle^{a b}_m -\langle \delta A^+_x
\delta A^+_y\rangle^{a b}_0\Big] \ ,
\end{align}
where $\langle \delta A^+_x \delta A^+_y\rangle_m^{a b}$ is given in (\ref{eq:prop-int-disp}).
Note that this modification can be traced back to a substitution in the scalar propagator
\begin{eqnarray}
\label{eq:G-running-1}
\frac{\alpha_s}{\pi} G^{a b}_0(x,y)
= \frac{\alpha_s}{\pi}\int\frac{d^4k}{(2\pi)^4} \frac{\,\,[\phi_{k}(x)\phi^\dagger_{k}(y)]^{a b}}{k^2+i0}
\,
& \to &
\int\frac{d^4k}{(2\pi)^4}
\frac{\alpha_s^V(-k^2-i0)}{\pi}\,
\frac{\,\,[\phi_{k}(x)\phi^\dagger_{k}(y)]^{a b}}{k^2+i0}
\\
&=&
\frac{1}{\beta_0}\int\limits_0^\infty \frac{dm^2}{m^2} \rho_V(m^2)
\Big[G^{a b}_m(x,y) -G^{a b}_0(x,y)\Big]\nonumber
\ ,
\end{eqnarray}
where in the final expression we used the dispersive representation of the
running coupling~\eqref{eq:alpha-via-rho}. Restoring the tensor structure then
yields~\eqref{eq:deltaA-disp-replacement}.
As explained above, our procedure, which uses the
propagator~(\ref{eq:prop-int-disp}) for both virtual and
real--emission diagrams, guarantees probability conservation. Its
diagrammatic interpretation at the level of fermion loops can be
read off Fig.~\ref{fig:NLO-diagrams}: in the absence of interaction,
(the limit $U\to 1$), the sum of diagrams (d) and (e) reduces to
diagram (c) and their cancellation is complete; as the interaction
is turned on they become distinct and evolution takes place. At the
same time a new channel, the production of a $q\bar{q}$ pair in
diagram (e), opens up. Our procedure uses the leading--$N_f$
corrections in the virtual diagrams to identify (define)
running--coupling corrections. This amounts to a specific
separation of the leading--$N_f$ corrections in the real--emission
diagrams into ones that are part of the running coupling and ones
that constitute the new production channel of diagram (e). By virtue
of Eq. (\ref{eq:eikonal-local-lim}) the local limit of diagram (e)
is included in the running--coupling contribution. As a
consequence, real--virtual cancellation holds {\em separately} for
the running--coupling corrections, and the remainder, namely
the new $q\Bar q$ production channel can be computed separately: it is
a well--defined NLO contribution to the kernel that is not associated with
the running coupling. This is confirmed by an
explicit calculation of the diagrams using light--cone perturbation
theory~\cite{HW-YK:2006}.
Finally, let us briefly comment on the separate terms in
Eq.~\eqref{eq:prop-int-disp}. We begin by observing that, as in the
LO case, the first term in~\eqref{eq:prop-int-disp}, $-
\,i\delta^{(4)}_{x' y'}$, is suppressed at large $p^-$ and will
therefore not contribute to the evolution equations. The two terms
containing $G_m$, however, will generate factors of $p^-$
in the numerator, and will therefore contribute to
logarithmically--enhanced terms at small $x$ through $\int
{dp^-}/{p^-}=\ln(1/x)$. Next, we note that the two surviving terms
are very different in nature: the last term is already present in
the LO fixed--coupling calculation, while the middle term does not
contribute there, as it vanishes in the $m\to 0$ limit. It starts
contributing at NLO. This is in keeping with the different origin
of the terms: the transverse partial derivatives in the last term
reflect the Weizs\"acker-Williams field structure of the LO emission
kernel~\eqref{Kdef} that is entirely driven by transverse
polarizations. The middle term, in contrast, arises entirely from
longitudinal polarizations that are absent at leading order. As such
it has an entirely new dependence on the transverse coordinates not
present at LO. The corresponding diagrammatic calculation
in~\cite{HW-YK:2006} confirms this structure and the association of
the terms with transverse and longitudinal polarizations; the latter
enter there (starting at NLO) via instantaneous contributions.
\subsection{Application to JIMWLK}
\label{sec:disp-runn-pres}
The dispersive method introduces running--coupling corrections by performing a
direct replacement of the gluon propagator in the {\em leading--order
calculation} according to~\eqref{eq:deltaA-disp-replacement}. Therefore,
the generalization of the JIMWLK equation to running coupling essentially
consists of recalculating the LO diagrams
of~\eqref{eq:JIMWLK-LO-diagram-cont} with the massive propagator of
(\ref{eq:prop-int-disp}) instead of that of (\ref{eq:prop-int-leading}). In
the following we first present an explicit calculation of one of the diagrams,
and then generalize and derive the running--coupling corrections to the
Hamiltonian ${\cal H}[U]$ in (\ref{eq:JIMWLK-Hamiltonian-LO}).
Let us first recapitulate the ingredients: in
(\ref{eq:JIMWLK-LO-diagram-cont}) we consider the calculation of the leading
logarithmic correction to the propagation of a fast moving dipole, a $q\Bar
q$-pair. As explained in Sec.~\ref{sec:jimwlk-equation}, this object can be
represented as a product of two Wilson lines along the classical trajectories
of the quarks. We take the trajectories to lie along the minus light--cone
direction at $x^+=0$. The pair is then characterized by the transverse
locations $\bm{x}$ and $\bm{y}$ of the quark and antiquark, respectively.
The first diagram\footnote{This quantity was called $\Bar\chi^{q \Bar
q}_{\bm{x y}}$ in~\cite{Weigert:2000gi}.}
in~\eqref{eq:JIMWLK-LO-diagram-cont} contains all the information we need: the
other two can be deduced from it. It describes the leading $\ln(1/x)$
contribution to the exchange of a gluon between the two quarks in the presence
of Weizs\"acker-Williams background field. The result
(see~\cite{Balitsky:1996ub,Weigert:2000gi}) involves the fluctuation
propagator and the Wilson lines that represent the external quark and
antiquark:
\begin{align}
\label{eq:example-standard}
\parbox{2cm}{\includegraphics[height=2.0cm]{chiqqb}}
= \, & g^2 \int d x^-\, d y^- \big\langle \delta A^+_x \delta
A^+_y\big\rangle_0^{a b} \notag \\[-.6cm] & \hspace{.5cm}\times \Big(
\theta(-x^-) U_{\bm{x}} t^a +\theta(x^-) t^a U_{\bm{x}} \Big) \otimes \Big(
\theta(y^-) U^\dagger_{\bm{y}} t^b + \theta(-y^-) t^b U^\dagger_{\bm{y}}
\Big),
\notag
\intertext{which, after some algebra involving the fluctuation
propagator~\eqref{eq:prop-int-leading} (see below) may be recast as} = &
-\frac{\alpha_s}{\pi^2} \ln(1/x) \int\!\! d^2\bm{z}\, {\cal K}_{\bm{x z y} }
\Big( [U]^{a b}_{\bm{z}}\ t^a U_{\bm{x}} \otimes t^b
U^\dagger_{\bm{y}} + [U^\dagger]^{a b}_{\bm{z}}\ U_{\bm{x}} t^a
\otimes U^\dagger_{\bm{y}} t^b \notag \\ & \hspace{4cm} - \delta^{a b} \big[
U_{\bm{x}} t^a \otimes t^b U^\dagger_{\bm{y}} +t^a U_{\bm{x}}\otimes
U^\dagger_{\bm{y}} t^b \big] \Big).
\end{align}
The four terms on the r.h.s.\ are in one--to--one correspondence with
the four diagrams in Eq.~\eqref{eq:interaction-diagrams}, and
${\cal K}$ is a purely two--dimensional scale--invariant kernel, given by
\begin{align}
\label{Kdef}
{\cal K}_{\bm{x z y}} = -\partial_x^j \partial_y^j \int
\frac{d^2p\,d^2q}{(2\pi)^2} \frac{{\rm
e}^{i\bm{p}(\bm{x}-\bm{z})}}{\bm{p}^2}\, \frac{{\rm
e}^{i\bm{q}(\bm{z}-\bm{y})}}{\bm{q}^2} = \frac{(\bm{x}-\bm{z})\cdot
(\bm{z}-\bm{y})}{%
(\bm{x}-\bm{z})^2 (\bm{z}-\bm{y})^2}
= -\frac{ {{\bm r}_1}\cdot {{\bm r}_2}}{\,{r_1}^2 \,{r_2}^2}
= \frac12
\frac{ r^2-(r_1^2+r_2^2)}{\,{r_1}^2 \,{r_2}^2}
\ ,
\end{align}
where we used the notation for ``parent--'' and ``daughter--dipoles''
introduced in~\eqref{eq:distshort}. The last formula shows explicitly that
this leading order kernel changes sign at a circle whose diameter is defined
by the ``parent.'' To arrive at the final expression in
(\ref{eq:example-standard}) with the kernel of (\ref{Kdef}) we have performed
the $x^-$ and $y^-$ integrals, using~\eqref{eq:prop-int-leading} with the
explicit formula for $G^{ab}_0(x,y)$, given by \eqref{eq:KGpropexpl} with
$m=0$. These integrations give rise to factors $2p^-/\bm{p}^2$ and
$2q^-/\bm{q}^2$, respectively. The minus momentum components in the numerator
compensate the explicit $\partial^-$ in the denominator
in~\eqref{eq:prop-int-leading}, while the additional $\int_0^\infty
{dk^-}/{k^-}$ in $G^{ab}_0(x,y)$ leads to the factor $\ln(1/x)$ upon imposing
cutoffs in~$x$. Note that the first term in the curly brackets
in~\eqref{eq:prop-int-leading} reduces to $- \,i\delta^{(4)}_{x' y'}$ and
does not contribute to the equation, as it is suppressed at large $p^-$.
Eqs. (\ref{eq:example-standard}) and (\ref{Kdef}) summarize the result for
one of the diagrams in~\eqref{eq:JIMWLK-LO-diagram-cont}
used in deriving the JIMWLK equation~\cite{Weigert:2000gi}.
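As an aside, the equality of the momentum-- and coordinate--space forms in
(\ref{Kdef}), and the sign change of the kernel across the circle, are trivial
to verify numerically. A minimal Python sketch (purely illustrative, with our
own naming; the conventions ${\bm r}_1={\bm x}-{\bm z}$,
${\bm r}_2={\bm y}-{\bm z}$ are those implied by the last equalities
in~(\ref{Kdef})):
\begin{verbatim}
import numpy as np

def K_LO(x, z, y):
    # leading-order kernel (x-z).(z-y) / ((x-z)^2 (z-y)^2)
    a, b = x - z, z - y
    return np.dot(a, b) / (np.dot(a, a) * np.dot(b, b))

x, y = np.array([0.0, 0.0]), np.array([1.0, 0.0])   # parent: r^2 = 1
for z in [np.array([0.5, 0.1]),    # r1^2 + r2^2 < r^2: K_LO > 0
          np.array([0.5, 0.9])]:   # r1^2 + r2^2 > r^2: K_LO < 0
    r1, r2 = x - z, y - z
    K_alt = 0.5*(1.0 - r1 @ r1 - r2 @ r2)/((r1 @ r1)*(r2 @ r2))
    assert np.isclose(K_LO(x, z, y), K_alt)
    print(z, K_LO(x, z, y))
\end{verbatim}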
As announced we will now use this diagram to
demonstrate how the JIMWLK equation can be promoted to running coupling by means of the
dispersive method. To this end we use the substitution~\eqref{eq:deltaA-disp-replacement}
obtaining
\begin{align}
\label{eq:dispersive-replacement-direct}
\frac{\alpha_s}{\pi}\begin{minipage}[m]{2.0cm}
\includegraphics[height=2.0cm]{chiqqb}
\end{minipage}
\to \frac{1}{\beta_0}\int_0^{\infty}\frac{dm^2}{m^2} \rho_V(m^2) \left(
\begin{minipage}[m]{2.0cm}
\includegraphics[height=2.0cm]{chiqqb}
\end{minipage}(m) -\begin{minipage}[m]{2.0cm}
\includegraphics[height=2.0cm]{chiqqb}
\end{minipage}(m=0) \right).
\end{align}
Recall that $\rho_V(m^2)$ is a universal object, the discontinuity of
the coupling on the time--like axis, defined in (\ref{eq:discontinuity}).
To compute running--coupling corrections using (\ref{eq:dispersive-replacement-direct})
we need to repeat the calculation of the diagram, but now with the
massive propagator of (\ref{eq:prop-int-disp}) instead of the massless
one~\eqref{eq:prop-int-leading}.
Doing this we find that Eq. (\ref{eq:example-standard}) is unmodified while
${\cal K}_{\bm{x z y}}$ is replaced by its massive counterpart:
\begin{align}
\label{eq:Kmassive} {\cal K}_{\bm{x z y}} \to {\cal K}_{\bm{x z y}}^m =
-\left(\partial_x^j \partial_y^j + m^2\right)
\int
\frac{d^2p\,d^2q}{(2\pi)^2}
\frac{{\rm e}^{i\bm{p}(\bm{x}-\bm{z})}}{\bm{p}^2+m^2}\, \frac{{\rm
e}^{i\bm{q}(\bm{z}-\bm{y})}}{\bm{q}^2+m^2},
\end{align}
where the two terms $\partial_x^j \partial_y^j$ and $m^2$ originate in the
last and the middle terms in Eq. (\ref{eq:prop-int-disp}), respectively. As in
the LO case, the $- \,i\delta^{(4)}_{x' y'}$ term in Eq.
(\ref{eq:prop-int-disp}) does not contribute in the small--$x$ limit.
Let us now compute the integrals in (\ref{eq:Kmassive}) explicitly.
Denoting the respective \emph{lengths} of the ``daughter dipoles'' by
$ r_i :=\vert{{\bm r}}_{i} \vert$ ($i=1,2$) we obtain:
\begin{align}
\label{eq:Kmassive_result}
\begin{split}
{\cal K}_{\bm{x z y}}^m\,= &\,
{\cal K}_{\bm{x z y}}\, r_1 m \,K_1(r_1 m) \,\,r_2 m \,K_1(r_2 m)\,
-
\,
m^2\, K_{0}(r_1\, m) K_{0}(r_2\, m)
\end{split}
\end{align}
where
${\cal K}_{\bm{x z y}}$ is the LO kernel of~(\ref{Kdef})
and $K_{0}(x)$ and $K_{1}(x)=-dK_0(x)/dx$ are $K$-Bessel functions. These functions
depend only on the {\rm lengths} of the vectors ${{\bm r}}_i$, not on the
angles --- a direct consequence of the angular integration in~(\ref{eq:Kmassive}).
Angular dependence appears here \emph{only} through the derivative $\partial_x^j \partial_y^j$,
giving rise to the factor ${\cal K}_{\bm{x z y}}$ in the first term in (\ref{eq:Kmassive_result}), just as at LO.
The next observation is that upon applying the dispersive method to the self--energy
diagrams in Eq.~\eqref{eq:JIMWLK-LO-diagram-cont}, in analogy with
(\ref{eq:dispersive-replacement-direct}), one again recovers the LO structure, with the massive kernel
of (\ref{eq:Kmassive}).
Thus, \emph{all} the diagrams share the same massive kernel ${\cal K}_{\bm{x z y}}^m$
of Eq.~\eqref{eq:Kmassive_result}. Since also the dispersive integral of
Eq.~\eqref{eq:dispersive-replacement-direct}
is common to all three diagrams, running--coupling corrections to the JIMWLK Hamiltonian
${\cal H}[U]$ of (\ref{eq:JIMWLK-Hamiltonian-LO}) appear through the replacement:
\begin{align}
\label{eq:K-replacement-direct}
\frac{\alpha_s}{\pi}{\cal K}_{\bm{x z y}}\,\to\,
{\cal K}_{\bm{x z y}} \frac{1}{\beta_0}\int_0^{\infty}\frac{dm^2}{m^2}
\rho_V(m^2) \Big[{\cal F} ({\bm r}_1 m,{\bm r}_2 m)-1\Big] \ ,
\end{align}
where we defined the ``characteristic function'' ${\cal F} ({\bm r}_1 m ,{\bm r}_2 m)$,
following the general discussion in Sec.~\ref{sec:runn-coupl-disp},
by extracting the LO kernel:
\begin{align}
\begin{split}
\label{Fdef}
{\cal K}_{\bm{x z y}}^m\,=:& \,{\cal K}_{\bm{x z y}}\ {\cal F} ({\bm r}_1 m ,{\bm r}_2 m)=
{\cal K}_{\bm{x z y}}\Big[ {\cal F}^T({\bm r}_1 m ,{\bm r}_2 m)\,+\,{\cal F}^L({\bm r}_1 m ,{\bm r}_2 m)\Big],\\
&{\cal F}^T ({\bm r}_1 m ,{\bm r}_2 m) \,= \,r_1 m \,K_1(r_1 m) \,\,r_2 m \,K_1(r_2 m)\, ,\\
&{\cal F}^L ({\bm r}_1 m ,{\bm r}_2 m) \,=\,
m^2\,\frac{\,{r_1}^2 \,{r_2}^2}{ {{\bm r}_1}\cdot {{\bm r}_2}} \, K_{0}(r_1\, m) K_{0}(r_2\, m)\,,
\end{split}
\end{align}
and used the fact that $\lim\limits_{m\to 0} {\cal F} ({\bm r}_1
m,{\bm r}_2 m)=1$.
Here ${\cal F}^T$ and ${\cal F}^L$ correspond to the first and second terms in
(\ref{eq:Kmassive_result}), which in turn originate in the $\partial_x^j \partial_y^j$
and $m^2$ terms in (\ref{eq:Kmassive}), respectively.
As mentioned in Sec.~\ref{sec:dress-prop-backgr} (see Eq.~(\ref{eq:prop-int-disp}) there)
these terms are associated with transverse~($T$)
and longitudinal~($L$) gluons, respectively, which explains the notation in (\ref{Fdef}).
As we see, these two contributions to ${\cal K}_{\bm{x z y}}^m$
are rather different in nature: the first can be naturally
written as a product of the scale--invariant kernel
${\cal K}_{\bm{xzy}}$ times a function of the lengths of the
``daughter dipoles'', while the second brings in an entirely new structure that does not share the
angular dependence of the LO kernel.
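The characteristic functions are also straightforward to evaluate
numerically. A minimal sketch (an illustration only, using SciPy's modified
Bessel functions and taking ${\bm r}_1\parallel{\bm r}_2$, so that
${\bm r}_1\cdot{\bm r}_2=r_1 r_2$):
\begin{verbatim}
import numpy as np
from scipy.special import k0, k1       # Bessel K_0, K_1

def F_T(r1, r2, m):
    return r1*m*k1(r1*m) * r2*m*k1(r2*m)

def F_L(r1, r2, m):                    # parallel dipoles: r1.r2 = r1*r2
    return m**2 * r1*r2 * k0(r1*m) * k0(r2*m)

r1, r2 = 0.3, 0.7
for m in [1e-6, 1e-2, 1.0, 10.0]:
    print(m, F_T(r1, r2, m) + F_L(r1, r2, m))
\end{verbatim}
One sees explicitly that ${\cal F}={\cal F}^T+{\cal F}^L\to1$ as $m\to0$
(since $xK_1(x)\to1$ and $m^2K_0K_0\to0$), so that the subtraction in the
dispersive integral~\eqref{eq:K-replacement-direct} is finite, while at
large $m$ the kernel is exponentially suppressed.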
We also introduce a notation for the ``effective charge''
\begin{equation}
\label{eq:Rdef}
R({\bm r}_1\Lambda,{\bm r}_2\Lambda) :=
\frac{1}{\beta_0}\int_0^{\infty}\frac{dm^2}{m^2} \rho_V(m^2)
\Big[{\cal F} ({\bm r}_1 m,{\bm r}_2 m)-1\Big] \ ,
\end{equation}
which plays the r\^ole of a scale--dependent ${\alpha_s}/{\pi}$
in all our calculations: the substitution of
\begin{equation}
\label{eq:new-JIMWLK}
\frac{\alpha_s}{\pi}{\cal K}_{\bm{x z y}}\to {\cal M}_{\bm{x z y}} :=
R({\bm r}_1 \Lambda,{\bm r}_2\Lambda)\ {\cal K}_{\bm{x z y}}
\end{equation}
in~\eqref{eq:JIMWLK-Hamiltonian-LO} promotes all the fixed coupling diagrams
entering the JIMWLK equation to running coupling. The running coupling
corrected JIMWLK Hamiltonian then reads
\begin{align}
\label{eq:JIMWLK-Hamiltonian-NLO}
{\cal H}[U] =-\frac{1}{2\pi}\, {\cal M}_{\bm{x z y}}\, \Big[
U_{\bm z}^{a b}\left(i\Bar\nabla^a_{\bm x}i\nabla^b_{\bm y} +i\nabla^a_{\bm x}
i\Bar\nabla^b_{\bm y}\right) + \left( i\nabla^a_{\bm x} i\nabla^a_{\bm
y}+i\Bar\nabla^a_{\bm x} i\Bar\nabla^a_{\bm y}\right) \Big]
\ .
\end{align}
All other ingredients in the JIMWLK equation~\eqref{eq:JIMWLK} and the
definition of correlator averages~\eqref{eq:corrs} remain unchanged.
Eq.~\eqref{eq:new-JIMWLK}, and with it~\eqref{eq:JIMWLK-Hamiltonian-NLO}, is
our key result that fully implements running coupling in the JIMWLK equation.
\subsection{Borel representation of the resummed kernel}
\label{sec:borel-rep-of-resummed-kernel}
To systematically define the perturbative sum and analyze power
corrections in the JIMWLK kernel ${\cal M}_{\bm{x z y}}$, it is convenient to
write a Borel representation of the ``effective charge'' $R$,
in the form of Eq.~\eqref{R_borel_}:
\begin{align}
\label{eq:K-replacement}
{\cal M}_{\bm{x z y}}={\cal K}_{\bm{x z y}} \, R({\bm r}_1 \Lambda,{\bm r}_2 \Lambda)
= \frac{1}{\beta_0} \int_0^{\infty}du & \ T(u)
\left(\frac{\mu^2}{\Lambda^2}\right)^{-u} {\cal K}_{\bm{x z y}}\,B(u,{\bm r}_1\mu,{\bm r}_2\mu),
\end{align}
where the Borel function is related to the characteristic function ${\cal F}$
(see Eq.~\eqref{eq:B-F-relation}) by
\begin{align}
\label{eq:Bdef}
B(u,{\bm r}_1\mu,{\bm r}_2\mu)= &\, -{\rm e}^{\frac53 u} \frac{\sin \pi u}{\pi}\,
\int_0^{\infty}\frac{dm^2}{m^2} \bigg(\frac{m^2}{\mu^2}\bigg)^{-u}
\Big[{\cal F} ({\bm r}_1 m,{\bm r}_2 m)-1\Big] \ .
\end{align}
Owing to the different nature of the transverse and longitudinal contributions
to ${\cal F}$, it is convenient to deal with them separately also on the level
of the Borel function. To this end we write
\begin{equation}
B(u,{\bm r}_1\mu,{\bm r}_2\mu) \,= \, B^T(u,{\bm r}_1\mu,{\bm r}_2\mu) +
B^L(u,{\bm r}_1\mu,{\bm r}_2\mu), \label{B_TL}
\end{equation}
corresponding to the two terms in (\ref{Fdef}).
Below we present explicit expressions for these functions in both momentum and
coordinate space. Using (\ref{eq:Kmassive}) and (\ref{eq:Bdef}) we obtain
the following expressions as Fourier transformations from momentum space:
\begin{subequations}
\label{eq:mom-space-Borels}
\begin{align}
\label{eq:mom-space-trans-res}
{\cal K}_{\bm{x z y}}\, B^T(u,{\bm r}_1\mu,{\bm r}_2\mu) = &\, -e^{\frac53 u}
\frac{\sin\pi u}{\pi} \int \frac{d^2p\,d^2q}{(2\pi)^2}\ e^{i\bm{p}\cdot{\bm r}_1}\ e^{- i\bm{q}\cdot{\bm r}_2} \notag \\
&\times \int\limits_0^\infty \frac{dm^2}{m^2}
\left(\frac{m^2}{\mu^2}\right)^{-u} (- 1)
\left[ \frac{{\bm
p}\cdot{\bm q}}{({\bm p}^2+m^2)({\bm q}^2+m^2)} -\frac{{\bm
p}\cdot{\bm q}}{{\bm p}^2\, {\bm q}^2} \right] \notag \\ = & \,
-
e^{\frac53 u} \int \frac{d^2p\,d^2q}{(2\pi)^2}\
e^{i\bm{p}\cdot{\bm r}_1}\ e^{- i\bm{q}\cdot{\bm r}_2} \ \frac{{\bm
p}\cdot{\bm q}}{{\bm p}^2\, {\bm q}^2}\ \frac{{\bm
q}^2\left(\frac{{\bm p}^2}{\mu^2}\right)^{-u} -{\bm
p}^2\left(\frac{{\bm q}^2}{\mu^2}\right)^{-u}}{{\bm q}^2-{\bm p}^2}
\ ,
\displaybreak[0]\\
{\cal K}_{\bm{x z y}}\, B^L(u,{\bm r}_1\mu,{\bm r}_2\mu) = &\, -e^{\frac53 u}
\frac{\sin\pi u}{\pi} \int \frac{d^2p\,d^2q}{(2\pi)^2} \
e^{i\bm{p}\cdot{\bm r}_1}\ e^{- i\bm{q}\cdot{\bm r}_2} \notag \\ &\times
\int\limits_0^\infty \frac{dm^2}{m^2} \left(\frac{m^2}{\mu^2}\right)^{-u}
\frac{- m^2}{({\bm p}^2+m^2)({\bm q}^2+m^2)}
\label{eq:mom-space-inst-res}
\notag \displaybreak[0]\\ = & -
e^{\frac53 u} \int
\frac{d^2p\,d^2q}{(2\pi)^2}\ e^{i\bm{p}\cdot{\bm r}_1}\ e^{- i\bm{q}\cdot{\bm r}_2} \ \frac{ \left(\frac{{\bm p}^2}{\mu^2}\right)^{-u}
- \left(\frac{{\bm q}^2}{\mu^2}\right)^{-u} }{ {\bm p}^2-{\bm q}^2}
\ .
\end{align}
\end{subequations}
These expressions assume a perhaps more familiar form if we restrict ourselves
to one--loop running, where, with $T(u)=1$, expanding under the
Borel integral amounts to replacing the powers of the momenta ${\bm k}={\bm
p}, {\bm q}$, i.e. $\left({{\bm k}^2}/{\mu^2}\right)^{-u}$ by the
corresponding geometric series according to
\begin{equation}
\label{eq:borel-to-geom}
\left(\frac{{\bm k}^2}{\mu^2}\right)^{-u} \to
\frac{\alpha_s(\mu^2)}{\pi}
\frac1{1+\frac{\beta_0\alpha_s(\mu^2)}{\pi}
\ln\left({{\bm k}^2}/{\mu^2}\right)}
\ .
\end{equation}
In fact, using~\eqref{eq:K-replacement} and~\eqref{eq:borel-to-geom},
the expressions in~\eqref{eq:mom-space-Borels} can be directly mapped
onto the expressions derived diagrammatically in~\cite{HW-YK:2006}.
We note that in the purely virtual case of
Fig.~\ref{fig:NLO-diagrams}~(c) the result simplifies in the expected
manner: there the ${\bm z}$ integral may be performed and, in the absence
of interaction with the target, it sets ${\bm q}={\bm p}$.
Then the {\em sum} of the integrands of transverse and longitudinal
contributions~\eqref{eq:mom-space-Borels} yields
\begin{equation}
\label{eq:borel-virtual-simplification}
\underbrace{\frac1{{\bm p}^2} \left(\frac{{\bm p}^2}{\mu^2}\right)^{-u} (1+u)}_{{\rm from}
\,\, {\cal K}\, B^T}
\
\underbrace{-\frac1{{\bm p}^2} \left(\frac{{\bm p}^2}{\mu^2}\right)^{-u} u}_{{\rm from}
\,\, {\cal K}\, B^L}
=
\frac1{{\bm p}^2} \left(\frac{{\bm p}^2}{\mu^2}\right)^{-u}.
\end{equation}
Via~\eqref{eq:borel-to-geom}, this corresponds to $(1/{\bm
p^2})\,{\alpha_s({\bm p}^2\,e^{-\frac53})}/{\pi}$, which is the expected
contribution of a gluon with transverse momentum squared ${\bm p}^2$
that does not interact with the background field.
Note that there is an important cancellation at NLO (and beyond) between the
separate contributions of transverse and longitudinal gluons.
On the level of the dispersive integrals used to define $B^T$ and $B^L$ this
cancellation is even more obvious: for ${\cal K}^m_{\bm{x z y}}$ (c.f.
Eq.~\eqref{eq:Kmassive}) it takes the form
\begin{align}
\label{eq:mom-sum-trans-long-virt}
\frac{{\bm p}\cdot{\bm q}+m^2}{({\bm p}^2+m^2)({\bm q}^2+m^2)}
\xrightarrow{{\bm q}\,\to\,{\bm p}} \frac{1}{{\bm p}^2+m^2},
\end{align}
which has exactly the same interpretation.
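Both limits are elementary; they can be verified, e.g., with a short sympy
check (a sketch only, with the common prefactor $-e^{\frac53 u}$ stripped,
$\mu^2=1$, and ${\bm p}\cdot{\bm q}\to{\bm p}^2$ at coincidence):
\begin{verbatim}
import sympy as sp

u, p2, q2 = sp.symbols('u p2 q2', positive=True)

# q2 -> p2 limits of the integrands in (eq:mom-space-Borels)
T = sp.limit((q2*p2**(-u) - p2*q2**(-u))/(q2 - p2), q2, p2)/p2
L = sp.limit((p2**(-u) - q2**(-u))/(p2 - q2), q2, p2)

print(sp.simplify(T - (1 + u)*p2**(-u)/p2))   # 0: the (1+u) term
print(sp.simplify(L + u*p2**(-u)/p2))         # 0: the  -u  term
print(sp.simplify(T + L - p2**(-u)/p2))       # 0: their sum
\end{verbatim}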
Direct Fourier integration of the perturbative sum obtained through
(\ref{eq:borel-to-geom}) is not well defined owing to the Landau pole. In this
respect these all--order expressions are largely symbolic. Correspondingly,
the Borel integrals of both~\eqref{eq:mom-space-trans-res}
and~\eqref{eq:mom-space-inst-res} converge for $u\to\infty$ only when the
momenta are larger than the QCD scale $\Lambda$. Thus, when we exchange the
order of integrations and perform the Borel integration {\em after} the
Fourier transforms, the Borel integral will no longer be unambiguous for any
value of the coordinates. As we shall see explicitly below, the
coordinate--space Borel function exhibits poles along the integration path, at
positive integer values of $u$. This reflects ambiguities in summing the
series, which are indicative of power corrections (see the general discussion
in Sec.~\ref{sec:runn-coupl-disp}). Knowing the location of the poles and the
parametric dependence of their residues on the hard scales involved, one can
estimate the effect of non-perturbative power corrections. It is the
advantage of the Borel method to expose these power corrections in such a
clear and concise way.
With this in mind we now perform the Fourier transformation of the Borel
functions. The all--order expressions lead to infinite sums that can be
recast in terms of hypergeometric functions. The calculation is done using
integral representations of the Bessel functions. The integrals over the
dispersive mass intertwine the parameter integrations so that it becomes
necessary to use Mellin-Barnes representations to decouple them. The
Mellin-Barnes integrals are done last and lead to infinite sums of residues.
Details of the rather involved algebra are given separately for $B^T$ and
$B^L$ in Appendix~\ref{sec:FT-of-BT} and~\ref{sec:FT-of-BL}, respectively.
To express the results in a compact manner we define
\begin{equation}
r_> :=\max\left\{r_1,r_2 \right\} \,;\qquad
r_< :=\min\left\{r_1,r_2 \right\} \,;\qquad
\xi:= \frac{{r}_<}{{r}_>}.
\end{equation}
We present two versions for each of the results for $B^T$ and $B^L$, as a
series in $\xi^2$ and in $1-\xi^2$, respectively. This is important for
numerical implementations near $\xi=0$ and $\xi=1$, respectively, and will be
used in Sec.~\ref{Sec:numerical} below. In addition, the first form
conveniently displays the singularity structure and the functional form of the
residues, while the second form is convenient for performing an expansion in
$u$, in order to extract the perturbative coefficients.
The coordinate--space result for $B^T$ is:
\begin{subequations}
\label{eq:BT}
\begin{align}
\label{eq:BTa}
{\cal K}_{\bm{x z y}} \, B^T(u,{\bm r}_1\mu,{\bm r}_2\mu)
= & \frac{{\bm r}_1\cdot {\bm r}_2}{r_1^2r_2^2}
\left(\frac{4\,e^{-\frac53 }}{r_>^2
\mu^2}\right)^{-u}\frac{\sin(\pi u)}{\pi}\,
\bigg\{\Gamma(1-u)\Gamma(-u)\, \nonumber\\ &\vspace*{-30pt}
+\sum_{k=0}^{\infty} (\xi^2)^{1+k}\,
\frac{\Gamma(2-u+k)\Gamma(1-u+k)}{\Gamma(1+k)\Gamma(2+k)}\,
\,\notag\\&\vspace*{-30pt} \times
\Big(\Psi(2-u+k)+\Psi(1-u+k)-\Psi(2+k)-\Psi(1+k)+\ln \xi^2\Big)\bigg\}
\ ,
\intertext{which can be recast in terms of a hypergeometric series
$\sideset{_2}{_1}\F$ using 15.3.11 of~\cite{AbramSteg}}
\begin{split}
\label{eq:BTb}
= & -\frac{{\bm r}_1\cdot {\bm r}_2}{r_1^2r_2^2}
\left(\frac{4\,e^{-\frac53 }}{r_>^2 \mu^2}\right)^{-u}
\frac{\sin(\pi u)}{\pi}\,
\frac{u(1-u)\Gamma^2(-u)\Gamma^2(1-u)}{\Gamma(2-2u)}\\
&\hspace*{50pt}\times \qFp{2}{1}{1-u,-u}{2-2u}{1-\xi^2}.
\end{split}
\end{align}
\end{subequations}
A useful alternative expression to (\ref{eq:BTb}) can be obtained using the
identity:
\begin{equation}
\label{eq:_hyper_identity}
\qFp{2}{1}{1-u,-u}{2-2u}{1-\xi^2}
\,=\,
\xi^2\,\bigg[1\,-\,\frac{1-\xi^2}{1-u}\frac{d}{d\xi^2}\,
\bigg]\,
\ \qFp{2}{1}{1-u,1-u}{2-2u}{1-\xi^2}.
\end{equation}
The result for $B^L$ is:
\begin{subequations}
\label{eq:BL}
\begin{align}
\label{eq:BLa} {\cal K}_{\bm{x z y}} \, B^L(u,{\bm r}_1\mu,{\bm r}_2\mu) = &
-
\frac{\sin(\pi u)}{\pi} \frac1{{\bm r}_>^2} \left(\frac{4\,e^{-\frac53}}{
{\bm r}_>^2 \mu^2 }\right)^{-u} \sum\limits_{k=0}^\infty
\left(\frac{\Gamma(k+1-u)}{\Gamma(k+1)}\right)^2 \left(\xi^2\right)^k
\notag \\ & \hspace*{50pt}
\times \Big( \ln(\xi^2) -2\Psi(k+1)+2\Psi(k+1-u)
\Big)
\ ,
\intertext{which again can be expressed in terms of a hypergeometric
function, this time using 15.3.10 of~\cite{AbramSteg},}
\label{eq:BLb}
= & + \frac{\sin(\pi u)}{\pi} \frac{1}{{\bm r}_>^2} \left(
\frac{4\,e^{-\frac53}}{{\bm r}_>^2\mu^2} \right)^{-u}
\frac{\Gamma^4(1-u)}{\Gamma(2-2u)}\ \
\qFp{2}{1}{1-u,1-u}{2-2u}{1-\xi^2}
\ .
\end{align}
\end{subequations}
The result is fully symmetric in ${\bm r}_1$ and ${\bm r}_2$, in particular all
coefficients in an expansion in powers of $u$ (or $\alpha_s$) have
this property.
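Away from the endpoints $\xi\to0,1$, where the respective series forms above
are preferable, the hypergeometric forms \eqref{eq:BTb} and \eqref{eq:BLb}
are easily evaluated with arbitrary--precision libraries. A minimal mpmath
sketch (illustrative only; $\mu=1$ and one configuration fixed by hand):
\begin{verbatim}
from mpmath import mp, mpf, hyp2f1, gamma, sin, pi, exp, log

mp.dps = 30
r1, r2, mu = mpf('0.3'), mpf('0.7'), mpf(1)
rgt = max(r1, r2); xi2 = (min(r1, r2)/rgt)**2
A = 4*exp(-mpf(5)/3)/(rgt**2*mu**2)

def BT(u):    # B^T alone, read off eq. (BTb)
    return (A**(-u)*sin(pi*u)/pi * u*(1 - u)*gamma(-u)**2
            * gamma(1 - u)**2/gamma(2 - 2*u)
            * hyp2f1(1 - u, -u, 2 - 2*u, 1 - xi2))

def KBL(u):   # the combination K_xzy B^L of eq. (BLb)
    return (A**(-u)*sin(pi*u)/pi / rgt**2
            * gamma(1 - u)**4/gamma(2 - 2*u)
            * hyp2f1(1 - u, 1 - u, 2 - 2*u, 1 - xi2))

print(BT(mpf('1e-8')))    # -> 1: B^T(0) = 1, the LO limit
print(KBL(mpf('1e-8')))   # -> O(u): B^L starts at NLO
\end{verbatim}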
Our expressions for $B^T$ and $B^L$ can be put to use in several
ways. The first two pertain to obtaining information on the
perturbation theory:
\begin{itemize}
\item \emph{A generating function} for the perturbative
expansion in powers of $\alpha_s(\mu^2)$ (at a fixed scale). To this end one
expands the results for the Borel function around $u=0$ and
then integrates over $u$ in (\ref{eq:K-replacement}) term by term. Assuming one-loop coupling
($T(u)=1$) this amounts to the replacement:
\begin{equation}
\label{eq:u_replacement}
u^n \longrightarrow n!\left(\beta_0\,\frac{\alpha_s(\mu^2)}{\pi}\right)^{n+1}.
\end{equation}
\item Knowing the analytic structure of the Borel function, which has a leading
renormalon singularity at $u=1$, it is clear that the expansion: (1)
does not converge, and (2) it is not Borel summable; \emph{A
definition of the sum} in (\ref{eq:K-replacement}) is required.
Starting from the expansion, a natural prescription is to truncate
the sum at the minimal term. A more systematic regularization, which
has been found useful in several applications in QCD, see
e.g.~\cite{Cacciari:2002xb,Andersen:2005bj,Gardi:1999dq}, is the
Principal Value (PV) of the Borel integral in
(\ref{eq:K-replacement}).
\end{itemize}
In the next section (Sec.~\ref{sec:perturbative-expansions}) we will
perform the perturbative expansion of the JIMWLK kernel, study
numerically its apparent convergence in the first few orders, and
compare it to the PV Borel sum. We also examine the approximation of
the sum by scale setting. These issues are revisited in
Sec.~\ref{sec:BK-numerics} where we examine the effect of the newly
computed higher--order corrections on the evolution in the case of
the BK equation.
The third application goes beyond perturbation theory; we can
extract information on the power corrections from the integrand
in~\eqref{eq:K-replacement}:
\begin{itemize}
\item The renormalon poles at positive integer $u$ in $B^T$ and $B^L$ can be
used to infer the parametric form and significance of non-perturbative
power corrections that are expected to affect the evolution kernel.
\end{itemize}
This is discussed in Sec.~\ref{sec:power-corrections} for a single
evolution step and re-visited in Sec.~\ref{sec:BK-numerics} when
solving the evolution equations numerically.
\section{Perturbation theory}
\label{sec:perturbative-expansions}
Having computed the Borel function $B(u)$ entering
Eq.~(\ref{eq:K-replacement}), we have essentially determined the
expansion coefficients for $R({\bm r}_1\Lambda,{\bm r}_2\Lambda)$, and thus
for the JIMWLK evolution kernel, to all orders in the
large--$\beta_0$ limit. To get the coefficients one expands the
expressions for $B^T(u,{\bm r}_1\mu,{\bm r}_2\mu)$ and
$B^L(u,{\bm r}_1\mu,{\bm r}_2\mu)$ in
Sec.~\ref{sec:borel-rep-of-resummed-kernel} under the integral in
(\ref{eq:K-replacement}),
\begin{equation}
B^T(u,{\bm r}_1\mu,{\bm r}_2\mu)= 1+\sum\limits_{n=1}^\infty
b^T_n({\bm r}_1\mu,{\bm r}_2\mu)u^n; \qquad
B^L(u,{\bm r}_1\mu,{\bm r}_2\mu)=\sum\limits_{n=1}^\infty
b^L_n({\bm r}_1\mu,{\bm r}_2\mu)u^n
\end{equation}
and integrates over the Borel variable term by term\footnote{Since
we are using one--loop running coupling, this amounts to making the
replacement of Eq.~(\ref{eq:u_replacement}).}, to obtain:
\begin{align}
\label{eq:pert-exp}
{\cal M}_{\bm{x z y}} = R({\bm r}_1\Lambda,{\bm r}_2\Lambda){\cal K}_{\bm{x z y}} =
&\, {\cal K}_{\bm{x z y}}\,\frac{\alpha_s(\mu^2)}{\pi}\,
\left(1 +\sum\limits_{n=1}^\infty n!
\left(\beta_0\frac{\alpha_s(\mu^2)}{\pi}\right)^n
b_n({\bm r}_1\mu,{\bm r}_2\mu)\right) \ ,
\intertext{with } b_n({\bm r}_1\mu,{\bm r}_2\mu) := &\,
b^T_n({\bm r}_1\mu,{\bm r}_2\mu)+ b^L_n({\bm r}_1\mu,{\bm r}_2\mu) \ .
\end{align}
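The integration behind \eqref{eq:u_replacement} and~\eqref{eq:pert-exp} is
just the Euler integral: at one loop
$(\mu^2/\Lambda^2)^{-u}={\rm e}^{-u/a}$ with
$a=\beta_0\alpha_s(\mu^2)/\pi$, and each power $u^n$ integrates to
$n!\,a^{n+1}$. A one--line symbolic check (a sketch, not needed for any of
the results):
\begin{verbatim}
import sympy as sp

u, a = sp.symbols('u a', positive=True)
n = sp.symbols('n', integer=True, nonnegative=True)

# int_0^oo du u^n exp(-u/a) = gamma(n+1) a^(n+1) = n! a^(n+1)
print(sp.simplify(sp.integrate(u**n*sp.exp(-u/a), (u, 0, sp.oo))))
\end{verbatim}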
The purpose of this section is to study the effect of these
perturbative corrections to the JIMWLK evolution kernel.
Explicit expressions for the expansion coefficients will be
presented in Eqs. (\ref{eq:BTexpansion}) and (\ref{eq:BLexpansion})
below. Before looking into the details, let us briefly recall what
one expects on general grounds following the discussion in
Sec.~\ref{sec:runn-coupl-disp}:
\begin{itemize}
\item{} {\bf The all--order sum is renormalization scale invariant:}
Eq.~(\ref{eq:K-replacement}) is renormalization--scale invariant. This
means that also (\ref{eq:pert-exp}) shares this property, if it is formally
considered to all orders. However, the choice of the expansion parameter
would make any finite--order partial sum scale dependent.
\item{} {\bf The series is non-summable owing to infrared renormalons, which
amount to power--suppressed ambiguities:} Going beyond the level of a
finite--order partial sum, in an attempt to compute
$R({\bm r}_1\Lambda,{\bm r}_2\Lambda)$ \emph{to all orders}, one finds infrared
renormalon ambiguities. Let us explain how they arise here: since $B(u)$ in
(\ref{eq:K-replacement}) has a finite radius of convergence ($u=1$), $b_n$
in (\ref{eq:pert-exp}) are characterized at high orders by geometrical
progression with no sign oscillation. With the explicit $n!$ growth in
(\ref{eq:pert-exp}), it is obvious that the series would not converge, but
instead reach a minimal term, and then start diverging. An optimal
perturbative approximation may be defined by truncation at the minimal term.
This, however, is inconvenient since the truncation order may (and in our
case does) depend on all the scales in the problem (in our case $r^2$,
$r_1^2$ and $r_2^2$). A more systematic definition of the sum can be made
using the Borel sum~(\ref{eq:K-replacement}). This integral does not exist as
it stands, since the poles appear at real positive values of $u$, on
the integration path. One can {\em define} the sum by shifting the contour
above or below the real axis, or by choosing the Principal Value (PV)
prescription. The differences between these choices are
\emph{power--suppressed} in the hard scales: they scale as integer powers of
$\Lambda^2r^2$, $\Lambda^2r_1^2$ and $\Lambda^2r_2^2$ (up to logarithms,
see Sec.~\ref{sec:power-corrections}). The size of this ambiguity is
obviously controlled by the residues of $B(u)$ in (\ref{eq:K-replacement}) and
can therefore be explicitly computed. It is also similar to the magnitude of
the minimal term in the series. Since these ambiguities are expected to
cancel against non-perturbative power corrections, they provide important
information on such corrections to the kernel, that are otherwise unknown.
We will return to discuss these corrections in detail in
Sec.~\ref{sec:power-corrections}.
\end{itemize}
Our default choice for defining the all--order sum of the series is
the PV prescription. Having made this choice, we will examine the
convergence of the perturbative expansion in the first few orders,
and demonstrate (Fig.~\ref{fig:conv-pert-theory}) the dependence of
this (apparent) convergence on the region of phase space as well as
on the choice of the expansion parameter.
Technically, the implementation of the PV regularization in the
present case is complicated by the fact that Eqs.~\eqref{eq:BT}
and~(\ref{eq:BL}) contain terms with double poles\footnote{The
current example is by no means unique in this
regard. The same occurs also in the absence of large target fields, e.g. for
the well--studied example of the Adler function $D(Q^2)=4\pi^2
d\Pi(Q^2)/dQ^2$,
see~\cite{Broadhurst:1992si,Lovett-Turner:1995ti,Beneke:1998ui}.} in
$u$.
We cope with this using the standard technique of isolating the double
pole contributions and using integration by parts. The explicit
expressions used in defining $R$ are given in
Appendix~\ref{sec:PV-def-pert-sum}; they are in turn based on the
explicit expression for the first-- and second--order poles
extracted in Appendices~\ref{sec:poles-BT} and~\ref{sec:poles-BL}.
The numerical result for the PV Borel sum is shown in
Fig.~\ref{fig:conv-pert-theory}. To appreciate the qualities of this
prescription as means of defining the all--order sum, it is useful
to compare it with increasing--order partial sums. The expansion of
the Borel functions can be conveniently obtained starting with
expressions (\ref{eq:BLb}) for $B^L$ and (\ref{eq:BTb}) with
(\ref{eq:_hyper_identity}) for $B^T$, which involve
$\qFp{2}{1}{1-u,1-u}{2-2u}{1-\xi^2}$, a special case of the type~E
function expanded in~\cite{Kalmykov:2006pu} (see Eq. (4.29) there).
It is convenient to define the
dimensionless variable
\begin{equation}
\label{eq:Omega} \Omega:= \frac{4\,e^{-\frac53-2\gamma_E}}{r_>^2
\mu^2}
\ ,
\end{equation}
which appears as the argument of the logarithms in the coefficients.
The expansion for $B^T$ takes the form
\begin{subequations}
\label{eq:BTexpansion}
\begin{align}
{\cal K}_{\bm{x z y}}\, & B^T(u,{\bm r}_1\mu,{\bm r}_2\mu) = \,
\frac12\frac{r^2-(r_1^2+r_2^2)}{r_1^2r_2^2}\left[
1+\sum\limits_{n=1}^\infty
b^T_n({\bm r}_1\mu,{\bm r}_2\mu)u^n \right]
\ ,
\intertext{where}
\label{eq:BTexpansion-b}
b^T_1({\bm r}_1\mu,{\bm r}_2\mu) = &\,
-\ln (\Omega)
-\frac{\xi^2 \ln(\xi^2)}{1-\xi^2} \ ,
\displaybreak[0]\\
\label{eq:BTexpansion-c}
b^T_2({\bm r}_1\mu,{\bm r}_2\mu) = &\, \frac{1}{2}\,\Big(b^T_1({\bm r}_1\mu,{\bm r}_2\mu)\Big)^2
-\frac{\pi^2}{6}-\frac12\frac{\Big(\xi^2 \ln(\xi^2)\Big)^2}{(1-\xi^2)^2}
+\frac{1+\xi^2}{1-\xi^2} \text{Li}_2(1-\xi^2)
\ ,
\displaybreak[0]\\
\label{eq:BTexpansion-d}
b^T_3({\bm r}_1\mu,{\bm r}_2\mu) = &\,\frac{1}{1-\xi^2}
\Bigg\{-{\displaystyle \frac {1}{6}}(1-\xi ^{2} )\,
\ln^{3}(\Omega) - {\displaystyle \frac {1}{2}} \,\ln^{2}(\Omega )\,\xi
^{2}
\,\ln(\xi ^{2}) \notag \displaybreak[0]\\
& + \left[(\xi ^{2} + 1)\,\left(\ln(1 - \xi ^{2})\, \ln(\xi ^{2}) +
{\rm Li}_2(\xi ^{2}) \right)-
{\displaystyle \frac {\xi ^{2}\,\pi ^{2}}{3}} \right]\,\ln(\Omega )
\notag \\
& + 2\,\xi ^{2}\,\ln^{2}(\xi ^{2})\,\ln(1
- \xi ^{2}) + 3\,\xi ^{2}\,\ln(\xi ^{2})\,
{\rm Li}_2(\xi ^{2}) + (2\,\xi ^{2} + 2)\,{\rm Li}_3(1 - \xi ^{2})
\notag\\
& - 2\,\xi ^{2}\,{\rm Li}_3(\xi ^{2}) + \left( - {\displaystyle
\frac {4}{3}} + {\displaystyle \frac {10\,\xi ^{2 }}{3}}
\right)\,\zeta_3 - 4\,\xi ^{2}(1-\xi ^{2})\,\left.
\frac{dS_{2,2}(z)}{dz} \right\vert_{z=1 - \xi ^{2}}\Bigg\}
\ .
\end{align}
\end{subequations}
${dS_{2,2}(z)}/{dz}$ in (\ref{eq:BTexpansion-d}) is the first
occurrence of a Nielsen polylogarithm,
\begin{equation}
S_{a,b}(z):= \frac{(-1)^{a+b-1}}{(a-1)!b!}
\int_0^1\frac{d\xi}{\xi}\ln^{a-1}(\xi)\ln^b(1-\xi z)
\ .
\end{equation}
At higher orders in the expansion one
encounters~\cite{Kalmykov:2006pu} higher Nielsen polylogarithms, as
well as other harmonic polylogarithms~\cite{Remiddi:1999ew}.
The longitudinal part $B^L$ starts at order $u$, corresponding to
${\cal O}(\beta_0 \alpha_s^2)$. The expansion takes the form:
\begin{subequations}
\label{eq:BLexpansion}
\begin{align}
{\cal K}_{\bm{x z y}}\, B^L(u,{\bm r}_1\mu,{\bm r}_2\mu) = &\,
\sum\limits_{n=1}^\infty {\cal K}_{\bm{x z y}}b^L_n({\bm r}_1\mu,{\bm r}_2\mu)u^n \ , \intertext{where,
from~\eqref{eq:BLb}, we find} {\cal K}_{\bm{x z y}}\,
b^L_1({\bm r}_1\mu,{\bm r}_2\mu) =
&\,- \frac{1}{{\bm r}_>^2}\frac{\ln(\xi^2)}{1-\xi^2}\label{eq:BLexpansion1}
\ ,
\\
{\cal K}_{\bm{x z y}}\, b^L_2({\bm r}_1\mu,{\bm r}_2\mu) = &\, + \frac{1}{{\bm r}_>^2}
\frac{\ln(\xi^2)\ln(\Omega) +2\text{Li}_2(1-\xi^2)}{1-\xi^2}
\ ,
\\
\begin{split}
{\cal K}_{\bm{x z y}}\, b^L_3({\bm r}_1\mu,{\bm r}_2\mu) = &\, -\frac{1}{{\bm
r}_>^2}\, \frac{1}{1-\xi^2}\Bigg[\ln(\xi ^{2})\,{\rm Li}_2(\xi ^{2})
- 2\,{\rm Li}_3(\xi ^{2}) +
2\,\zeta_3 - 4\,{\rm Li}_3(1 - \xi ^{2}) \\
& + {\displaystyle \frac {1}{2}} \,\ln(\xi ^{2})\, \ln^{2}(\Omega )
+ 2\,\ln(\Omega )\,{\rm Li}_2(1 - \xi ^{2})\Bigg].
\end{split}
\end{align}
\end{subequations}
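These coefficients can be cross--checked against central differences of the
Borel functions, continuing the mpmath sketch of
Sec.~\ref{sec:borel-rep-of-resummed-kernel} (reusing \texttt{BT},
\texttt{KBL}, \texttt{rgt}, \texttt{xi2} and \texttt{mu} defined there):
\begin{verbatim}
from mpmath import euler

h = mpf('1e-10')
Omega = 4*exp(-mpf(5)/3 - 2*euler)/(rgt**2*mu**2)

b1T  = -log(Omega) - xi2*log(xi2)/(1 - xi2)   # eq. (BTexpansion-b)
Kb1L = -log(xi2)/((1 - xi2)*rgt**2)           # eq. (BLexpansion1)

print((BT(h) - BT(-h))/(2*h), b1T)            # agree to high accuracy
print((KBL(h) - KBL(-h))/(2*h), Kb1L)
\end{verbatim}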
It is interesting to note that there is a qualitative difference
between the leading behavior of the transverse and the longitudinal
contributions. The sign of the LO transverse contribution to
the kernel ${\cal M}_{\bm{x z y}}$ in (\ref{eq:pert-exp})
can be directly read off Eq.~(\ref{Kdef}); it is
\begin{align}
\label{eq:circle}
\begin{split}
\begin{array}{lll}
{\cal M}_{\bm{x z y}}^T &> 0 \quad \text{inside the circle:}\,\,&r_1^2+r_2^2<r^2\\
{\cal M}_{\bm{x z y}}^T &< 0 \quad \text{outside the circle:}\,\,&r_1^2+r_2^2>r^2,
\end{array}
\end{split}
\end{align}
while the NLO longitudinal contribution to ${\cal M}_{\bm{x z y}}$,
in (\ref{eq:BLexpansion1}), is \emph{always} positive.
With the explicit expressions of~(\ref{eq:BTexpansion}) and
(\ref{eq:BLexpansion}) we can now study the convergence of the
perturbative series in~\eqref{eq:pert-exp} and compare the
finite--order results to the PV definition of the all--order sum.
Of course, at this point we must\footnote{To avoid the ultraviolet
problems of a conventional
fixed--coupling calculation one in fact needs to choose $\mu$ as \emph{some}
function of $r_1$, $r_2$ and $r$ instead of a single homogeneous scale
independent of the configuration encountered~\cite{Rummukainen:2003ns}.}
make a choice of scale $\mu$.
Since ${\cal M}_{\bm{x z y}} = R({\bm r}_1\Lambda,{\bm r}_2\Lambda)\,{\cal
K}_{\bm{x z
y}}$ is renormalization invariant (i.e. $\mu$--independent) one may choose
{\em any} function of $r_1$, $r_2$ and $r$ as long as the convergence of the
perturbative series is good enough. As we shall see, this choice makes a
significant difference for the apparent convergence at the first few orders.
A priori, a natural choice (see
Sec.~\ref{sec:running-coupling-types}) may be the ``parent dipole''
size: $\mu^2=c^2/r^2$. A posteriori, knowing the coefficients in
(\ref{eq:BTexpansion}) and (\ref{eq:BLexpansion}),
one can optimize the scale to reduce the
size of subleading corrections. One possibility is to introduce BLM scale
setting~\cite{BLM} by requiring that the entire NLO coefficient
$b_1({\bm r}_1\mu,{\bm r}_2\mu)$ would vanish identically\footnote{To express
the ${\cal K}_{\bm{x z y}}$ in the transverse part in terms of
dipole lengths only we substitute: $-{{\bm r}_1\cdot
{\bm r}_2}=(r^2-r_1^2-r_2^2)/2$, as in (\ref{Kdef}) above.}:
\begin{eqnarray}
\label{eq:mu_BLM} &&{\cal K}_{\bm{x z y}}b_1^T({\bm r}_1\mu,{\bm r}_2\mu)+
{\cal K}_{\bm{x z y}}
b_1^L({\bm r}_1\mu,{\bm r}_2\mu)=0\nonumber \\
&&\qquad \Longrightarrow \qquad \mu^2_{\hbox{\tiny BLM}} =
\frac{4}{r_>^2}\,\exp\left\{
-\frac{5}{3}-2\gamma_E+\frac{\xi^2\,\ln(\xi^2)}{1-\xi^2}\,
\left(1+\frac{2r_>^2}{r^2-r_>^2-r_<^2}\right)\right\}.
\end{eqnarray}
In our case, however, this approach is too simplistic: since the
leading--order kernel vanishes on the circle $r_1^2+r_2^2=r^2$, as implied
by (\ref{eq:circle}), while the NLO contribution associated with $B^L$ does
not, (\ref{eq:mu_BLM}) develops an artificial singularity \emph{within the
perturbative region}. Consequently, with this particular choice of scale,
higher--order terms would not be bounded on the circle, while no special
difficulty would arise there otherwise. While better possibilities for
optimizing the scale exist --- for example, setting the BLM scale such that
the NLO \emph{transverse} contribution would vanish $b_1^T({\bm r}_1\mu,{\bm r}_2\mu)=0$
--- we will not take this avenue here. Let us just note in passing that the
multi--scale nature of the problem, which is reflected in the dependence on
the parent and daughter dipoles --- or in momentum space by the separate Borel
powers of the transverse momentum before and after the interaction with the
target (Eq.~(\ref{eq:mom-space-Borels})) --- renders any scale--setting
procedure in terms of a \emph{single} coupling unnatural. Further progress in
this direction will be reported in Ref.~\cite{HW-YK:2006}. Here we consider a
couple of simple examples for the scale:
\begin{equation}
\mu^2 = \frac{4e^{-5/3-2\gamma_E}}{r^2}
\mbox{~~~~ and ~~~~}
\mu^2 = \frac{8e^{-5/3-2\gamma_E}}{r_1^2+r_2^2}.
\label{eq:mu-setting}
\end{equation}
The choice of the constant $4e^{-5/3-2\gamma_E} \approx 0.24$ is
motivated by Eq.~(\ref{eq:mu_BLM}).
\begin{figure}[t!]
\centering
\begin{minipage}[t]{7.1cm}
\centering
\includegraphics[width=7cm]{comp_R_pert4}\\ (a)
\end{minipage}
\hfill
\begin{minipage}[t]{7.1cm}
\centering
\includegraphics[width=7cm]{comp_R_pert5}\\ (b)
\end{minipage}
\caption{\small \em $R({\bm r}_1\Lambda,{\bm r}_2\Lambda)$ as
defined with a PV regularization of the Borel sum (\ref{eq:K-replacement}),
and the convergence of perturbation theory at
the first few orders, according to~\eqref{eq:pert-exp},
with $\mu^2=4\,\exp{\left(-\frac53-2\gamma_E\right)}/r^2$ (in (a)) and
$\mu^2=8\, \exp{\left(-\frac53-2\gamma_E\right)}/(r_1^2+r_2^2)$ (in (b)).
$R$ is shown as a function of $r_1$, with $r\Lambda = 10^{-4}$,
${\bm r} \parallel {\bm r}_1 \parallel {\bm r}_2$ and $r_2=r_1+r$.
}
\label{fig:conv-pert-theory}
\end{figure}
In Fig.~\ref{fig:conv-pert-theory} we show a numerical comparison of the
order--by--order expansion with the PV Borel sum.
As expected, at short distance scales perturbation theory is
well--behaved\footnote{If we allow the constant in
(\ref{eq:mu-setting}) to vary significantly compared to the choice indicated, the
convergence in both Figs.~\ref{fig:conv-pert-theory}~(a) and~(b) becomes
appreciably worse, an obvious consequence of the logarithmic terms in the
coefficients, see (\ref{eq:BTexpansion}) and~(\ref{eq:BLexpansion}).}:
the first few orders in~\eqref{eq:K-replacement} gradually approach
the PV Borel sum. For larger distances, the corrections become large,
and the minimal term in the series
is reached sooner (see Fig.~\ref{fig:conv-pert-theory}~(b)).
Eventually, for $r_1\Lambda$ of order of a few times $10^{-1}$, the series
diverges right from the start. Note that even in that region the PV Borel sum
is uniquely defined, but as we shall see in the next section, power corrections
are large as well.
It is interesting to further observe that there is a significant difference in
the apparent convergence in the first few orders between
Figs.~\ref{fig:conv-pert-theory}~(a) and~(b): the ``parent dipole'' scale
setting in Fig.~\ref{fig:conv-pert-theory}~(a) has significant corrections
even at short distances, while with the scale choice of
Fig.~\ref{fig:conv-pert-theory}~(b) the first few orders provide a better
approximation. It should be emphasized though that
Fig.~\ref{fig:conv-pert-theory} shows a particular situation where $r_1,r_2\gg
r$. If the dipole sizes are of comparable magnitude, there is no
significant difference between these two choices of scale\footnote{%
For clarity, let us comment that we have chosen to show the function $R$
with ${\bm r}_1$ parallel to ${\bm r}_2$. This avoids the apparent divergence
$({\bm r}_1\cdot{\bm r}_2)^{-1}$ in the definition of $R^L$; this divergence is
exactly canceled in the BK kernel when $R$ is multiplied by
${\cal K}_{\bm{x z y}}$.}. The fact that ``parent dipole'' running fails to approximate
the kernel for $r_1,r_2\gg r$, makes a difference for evolution when $R_s\gg
r_1,r_2\gg r$. We observe that parent dipole running in this region
generically underestimates the perturbative sum. Accordingly, in
Sec.~\ref{sec:simulation-results}, we will see that evolution is slower with
the ``parent dipole'' ansatz than with the Borel sum.
In Fig.~\ref{fig:compare-all} we compare the PV Borel sum with
two ad hoc running--coupling formulae, the ``parent dipole'' running and
the ``square--root daughter--dipole'' running:
\begin{equation}
R_{\rm parent} = \alpha_s^{\hbox{\tiny ${\overline{\rm MS}}$}}\left({c^2}/{r^2}\right)/\pi
=\frac{1}{\beta_0\ln\left(c^2/(r^2\Lambda^2)\right)};
\qquad \quad
R_{\rm sqrt} =
\Big[\alpha_s^{\hbox{\tiny ${\overline{\rm MS}}$}}\left({c^2}/{r_1^2}\right)\,
\alpha_s^{\hbox{\tiny ${\overline{\rm MS}}$}}\left({c^2}/{r_2^2}\right)\Big]^{1/2}/\pi.
\label{eq:parent-sqrt}
\end{equation}
In these expressions we fix the scale
$c^2=4\,e^{-\frac53-2\gamma_E}$, as motivated by Eq.~\eqref{eq:mu-setting}.
This ensures that these couplings would match the PV
Borel sum reasonably quickly once both $r_1, r_2 \ll 1/\Lambda$. This is shown on
the right panel of Fig.~\ref{fig:compare-all}. We note that this value of
$c^2$ is much smaller than the value $c^2=4$ arising from a
Fourier transform in the double logarithmic limit.
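For reference, the prescriptions in \eqref{eq:parent-sqrt} are trivial to
code up; a minimal sketch (one--loop coupling, with \texttt{beta0} left as an
input in the normalization
$\alpha_s/\pi=1/(\beta_0\ln(\mu^2/\Lambda^2))$ used throughout):
\begin{verbatim}
import numpy as np

c2 = 4*np.exp(-5.0/3 - 2*np.euler_gamma)   # ~ 0.24, cf. (eq:mu-setting)

def R_parent(r, beta0, Lam=1.0):
    # alpha_s(c^2/r^2)/pi; valid only while c2/(r*Lam)^2 > 1
    return 1.0/(beta0*np.log(c2/(r*Lam)**2))

def R_sqrt(r1, r2, beta0, Lam=1.0):
    return np.sqrt(R_parent(r1, beta0, Lam)*R_parent(r2, beta0, Lam))
\end{verbatim}
Both expressions blow up as the relevant dipole size approaches
$\sqrt{c^2}/\Lambda\approx0.5/\Lambda$, in line with the divergences visible
in Fig.~\ref{fig:compare-all}.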
\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{comp_Rall2}\hfill
\includegraphics[width=7cm]{comp_Rall3}
\caption{\small \em Comparison of ad hoc implementations for effective
couplings $R$ calculated with the principal value (PV) prescription, the
``square root'' prescription
$[\alpha_s(c^2/r_1^2)\,\alpha_s(c^2/r_2^2)]^{1/2}/\pi$, and the parent dipole
running $\alpha_s(c^2/r^2)/\pi$. On the left panel the size of the parent
dipole is fixed, $r\Lambda = 0.1$ (and $r_2 = r_1+r$); on the right
panel the ratio of the dipole sizes is fixed, $r_1 = 2r$, $r_2 = 3r$.}
\label{fig:compare-all}
\end{figure}
From Fig.~\ref{fig:compare-all} it is clear that different definitions of
running coupling can be made to match reasonably well at small enough dipole
sizes, at least in parts of phase space, if the scales are chosen
appropriately. On the other hand, at larger dipole
sizes, already at $0.1$--$0.3 \times 1/\Lambda$, the couplings diverge strongly.
This is related, of course, to the smallness of the effective scale at which the
coupling is evaluated,
$\mu^2_{\rm eff} \sim 4\,e^{-\frac53-2\gamma_E}/r^2$.
As we shall see below, the effect of this divergence on the evolution rate
(in the BK approximation) is somewhat less important than for $R$. This is to be
expected, as the evolution rate is an inclusive observable, and as such has
reduced sensitivity to the phase--space region where the effective coupling
is large.
\section{Non-perturbative power corrections}
\label{sec:power-corrections}
After discussing the perturbative features in some detail, let us
now turn to estimating the non-perturbative corrections to the JIMWLK
kernel ${\cal M}_{\bm{x z y}}$. The renormalon poles in $R$ allow us
to determine the type of power corrections, and their parametric
form, based on the singularities of $B^T$ and $B^L$. The original
expressions in (\ref{eq:BT}) and (\ref{eq:BL}) have simple and
double poles at integer values of $u$. As explained in the
Appendices~\ref{sec:poles-BT} and~\ref{sec:poles-BL}, we perform
integration by parts over the double pole terms to arrive at a
representation of the Borel integral where the integrand is composed
of simple poles only; in this way the double poles have been
converted into simple ones, each accompanied by a
logarithmic--enhancement factor, $\ln(\Omega)$ where $\Omega$ is
given in Eq.~(\ref{eq:Omega}). In this representation, the explicit
expressions for the renormalon poles are the following:
\begin{subequations}
\label{eq:B-single-pole}
\begin{align}
\label{eq:BT-single-pole}
{\cal K}_{\bm{x z y}}B^T(u,{\bm r}_1\mu,{\bm r}_2\mu)
= &
\,{\cal K}_{\bm{x z y}}\,
\left\{
\left(\frac{4\,e^{-\frac53}}{r_>^2
\mu^2}\right)^{-1}\frac{-2+(1-\xi^2)}{u-1}
\right. \notag \displaybreak[0]\\ &
+\sum\limits_{m=2}^\infty \sum\limits_{n=2(m-1)}^\infty
\frac{(-1)^m}{u-m}\frac{(1-\xi^2)^n}{\Gamma(n+1)}
\left(\frac{4\,e^{-\frac53}}{r_>^2 \mu^2}\right)^{-m}
\frac{\Gamma(1-m+n)}{\Gamma(n-2(m-1))\Gamma(m-1)} \notag \\ & +
\sum\limits_{m=2}^\infty \sum\limits_{n=2}^{m-1}
\frac{(1-\xi^2)^n}{\Gamma(n+1)} \frac{ 2(-1)^{m-n}}{(u-m)} \notag \\ &
\left.
\times \frac{d}{d m}\left[ \left(\frac{4\,e^{-\frac53}}{r_>^2
\mu^2}\right)^{-m} \frac{\Gamma(2m-(n+1))}{
\Gamma(m)\Gamma(m-1)\Gamma(m-n)\Gamma(m+1-n)} \right]\right\}
\notag \\ & \, +\text{regular contributions}
\end{align}
and
\begin{align}
\label{BL-single-pole-part}
{\cal K}_{\bm{x z y}} & B^L(u,{\bm r}_1\mu,{\bm r}_2\mu)
= \,-\, \sum\limits_{m=1}^\infty
\sum_{n=m}^\infty \frac{-(-1)^m}{u-m}\frac{1}{r_>^2} \left(
\frac{4\,e^{-\frac53}}{{\bm r}_>^2\mu^2} \right)^{-m}
\frac{(1-\xi^2)^n}{\Gamma(n+1)}
\frac{(\Gamma(n+1-m))^2}{(\Gamma(m))^2\Gamma(n+2-2m)} \notag \\ & -
\sum\limits_{m=1}^\infty \sum\limits_{n=0}^{m-1} \frac{2 (-1)^{m-n}}{u-m}
\frac{(1-\xi^2)^n}{\Gamma(n+1)} \frac{1}{r_>^2}
\frac{d}{d m}\left[\left(
\frac{4\,e^{-\frac53}}{{\bm r}_>^2\mu^2} \right)^{-m}
\frac{\Gamma(2m-(n+1))}{(\Gamma(m))^2(\Gamma(m-n))^2} \right]
\notag \\ & \, +\text{regular contributions}
\ .
\end{align}
\end{subequations}
\begin{figure}[htb]
\centering
\parbox{.45\textwidth}{\includegraphics[width=.45\textwidth]{R_pv_tl_res_pi_2}}
\hfill
\parbox{.45\textwidth}{\includegraphics[width=.45\textwidth]{R_pv_tl_res_pi_3}}
\caption{\small \em $R_{\rm PV}$ and the expected range of power
corrections for fixed $r\Lambda=0.1$ (left panel), and fixed
ratio $r_1=2r$, $r_2=3r$ (right panel) with $r_1$ parallel to $r_2$.
The central solid line
corresponds to the principal value result. The shaded regions estimate the
relevance of power corrections by adding and subtracting $\pi
|\text{residue}|(u=m)$. The inner band takes the first residue at $u=1$,
and the outer hashed band is the sum of absolute values of
contributions from all residues. The dashed line shows the
result when the $u$-integral has been cut at
$u_{\rm max}=0.75$,
before the first pole is encountered.}
\label{fig:R-non-pert-uncert}
\end{figure}
To determine the ambiguity associated with each renormalon we should
first sum transverse and longitudinal contributions, and then
isolate the residue at fixed $u=m$.
Power corrections are expected to follow this ambiguity structure. Let us
therefore introduce a non-perturbative parameter $C_m$, of order 1, relating
each power correction to the corresponding renormalon residue. The underlying
assumption is that genuine non-perturbative effects would be of the same order
as the ambiguities. The parametric dependence of the corrections on the hard
scales then follows directly from the residues in Eq.~\eqref{eq:B-single-pole}:
they are written as powers of $r_>^2 \Lambda^2$ times some function of
$\xi^2$, with additional logarithmic terms in $\Omega$. The dependence on soft
scales is subsumed in the coefficients $C_m$. Accordingly, we write the full
kernel ${\cal M}_{\bm{x z y}}$ as a sum of perturbative contributions ${\cal
M}_{\bm{x z y}}^{\text{PV}}$ and power corrections $\delta{\cal
M}^{(m)}_{\bm{x z y}}$:
\begin{align}
\label{eq:calM+power}
{\cal M}_{\bm{x z y}} =
{\cal M}_{\bm{x z y}}^{\text{PV}} + \sum\limits_{m=1}^\infty
\delta{\cal M}^{(m)}_{\bm{x z y}}
\ .
\end{align}
As an example we explicitly present the power correction $\delta{\cal
M}^{(1)}_{\bm{x z y}}$ corresponding to the leading $u=1$ renormalon:
\begin{align}
\label{eq:leading-residue}
\delta{\cal M}^{(1)}_{\bm{x z y}}
=C_1\frac{\pi}{ \beta_0}
\left\{{\cal K}_{\bm{x z y}}
(1+\xi^2) -
\frac{1}{r_<^2} \left[\ln\left(\xi^2\right)
-2\ln\left(\frac{4\,e^{-\frac53-2\gamma_E}}{{\bm r}_>^2\Lambda^2}
\right)\right] \right\}
\frac{1}{4}\,r_>^2\Lambda^2\,e^{\frac53}
\ .
\end{align}
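In practice $\delta{\cal M}^{(1)}$ is a simple closed--form expression; a
direct transcription (a sketch, with $C_1$ an order--one input and \texttt{K}
the LO kernel value for the given configuration):
\begin{verbatim}
import numpy as np

def dM1(K, r_lt, r_gt, Lam, beta0, C1=1.0):
    # leading u=1 power correction, eq. (leading-residue)
    xi2  = (r_lt/r_gt)**2
    logs = np.log(xi2) - 2*np.log(4*np.exp(-5.0/3 - 2*np.euler_gamma)
                                  /(r_gt*Lam)**2)
    return (C1*np.pi/beta0*(K*(1 + xi2) - logs/r_lt**2)
            * 0.25*(r_gt*Lam)**2*np.exp(5.0/3))
\end{verbatim}
The overall factor $r_>^2\Lambda^2$ makes explicit that the correction dies
away quadratically (up to logarithms) at short distances.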
Fig.~\ref{fig:R-non-pert-uncert} shows $R$ and the expected size of power
corrections as functions of $r_1$, for fixed parent dipole $r =
0.1/\Lambda$, and for fixed ratio of dipole sizes, $r_1=2r$ and $r_2=3r$ (in both
cases $r_2 = r_1+r$ and ${\bm r}_2$ is parallel to ${\bm r}_1$). The power corrections are assumed
here to be of the order of the ambiguity in choosing an integration contour in
the Borel plane, $\pi$ times the absolute
value of the residues at $u=m=1,2,3\ldots$. The relative importance of
renormalon poles can be seen from the width of the error band in
Fig.~\ref{fig:R-non-pert-uncert}. The contribution from the pole at $u=1$,
Eq.~\eqref{eq:leading-residue}, is shown separately. It is evident that this
first pole strongly dominates, except very close to $1/\Lambda$, as expected
from the hierarchy of powers in the analytical expressions.
The key feature of the power corrections in
Fig.~\ref{fig:R-non-pert-uncert} is the fact that they quickly die
away at small scales\footnote{Note that on the left panel of
Fig.~\ref{fig:R-non-pert-uncert} $r_2$ approaches $r =
0.1/\Lambda$ as $r_1\rightarrow 0$.}. This feature is very robust
and it does {\em not} depend on the detailed prescriptions used to
estimate the size of the power corrections. Just to give an
example, one might estimate the uncertainty by cutting off the $u$
integral at some value $u_{\text{max}}<1$, before one encounters the
first renormalon pole. This is shown in
Fig.~\ref{fig:R-non-pert-uncert} for $u_{\rm max} = 0.75$ with
dashed lines. Clearly this prescription leads to an alternative
estimate for the perturbative sum for $R$ which is roughly within
the error band of the full PV integral.
From Fig.~\ref{fig:R-non-pert-uncert} it becomes strikingly clear
that the poles limit the precision of our knowledge of the
(non-perturbative) evolution kernel at large distances. We will see
in Sec.~\ref{sec:BK-numerics}, when we discuss evolution in the
context of the BK equation, that for $Q_s$ near $1$--$2$~GeV, where
large dipoles contribute significantly to the evolution, power
corrections {\em must} be taken into account if one wants to
quantitatively determine the evolution. If we wish to apply JIMWLK
or BK evolution starting from $Q_s$--values in this range, both the
evolution rate and the generic functional form of the initial
condition for the dipole cross section $N_{Y_0,\bm{x y}}$ receive
sizeable non-perturbative contributions.
\section{Evolution in the BK approximation}
\label{sec:BK-numerics}
Our discussion so far focused on the \emph{kernel} of the JIMWLK
equation. We computed and resummed running--coupling corrections to
the kernel and extracted an estimate for non-perturbative power
corrections. All these affect only the ``effective charge'' in front
of the leading--order kernel, although in a way that strongly
depends on the size of the evolving dipole, and that varies
significantly over the transverse phase space of the newly produced dipoles.
In order to explore
the consequences of these results on the \emph{evolution}, one
should clearly consider the solution of the equation over a
sufficiently large range in rapidity. Only in this way would it be
possible to examine the sensitivity of observable quantities, such
as the rate of evolution, to the corrections computed.
The purpose of the present section is to perform a first exploration
of this kind, by considering the evolution in the case of the BK
equation. To this end, let us repeat the derivation of the BK
equation, as described in Sec.~\ref{sec:evolution-equations},
starting with the JIMWLK equation with the Hamiltonian of
Eq.~(\ref{eq:JIMWLK-Hamiltonian-NLO}), which includes
running--coupling corrections. A simple way to see the way the
corrections enter is to first write the leading--order BK equation
with the kernel separated according
Eq.~\eqref{eq:JIMWLK-LO-diagram-cont} into exchange and
self--energy--like diagrams:
\begin{align}
\label{eq:BK-sep}
\partial_Y N_{Y,\bm{x y}} = \frac{ N_c}{2\pi}\int\!\! d^2 z &
\left[2\frac{\alpha_s(\mu^2)}{\pi}{\cal K}_{\bm{x z y}}
-\frac{\alpha_s(\mu^2)}{\pi} {\cal K}_{\bm{x z x}}
-\frac{\alpha_s(\mu^2)}{\pi}{\cal K}_{\bm{y z y}} \right]
\\ & \notag
\hspace*{40pt}\times\,\Big(
N_{Y,\bm{x z}}+ N_{Y,\bm{z y}}-N_{Y,\bm{x y}}
- N_{Y,\bm{x z}}\, N_{Y,\bm{z y}}\Big)\ .
\end{align}
The result obtained using (\ref{eq:JIMWLK-Hamiltonian-NLO}) by
repeating the steps of Sec.~\ref{sec:evolution-equations} simply
amounts to the substitution~\eqref{eq:new-JIMWLK}, separately for each
of the three terms in (\ref{eq:BK-sep}):
\begin{align}
\label{eq:BK-running-kernel}
\frac{\alpha_s}{\pi} \Tilde{\cal K}_{\bm{x z y}} &\,\to \Tilde{\cal
M}_{\bm{x z y}}= \ 2 \, R({\bm r}_1\Lambda,{\bm r}_2\Lambda)\, {\cal K}_{\bm{x z y}}
-R({\bm r}_1\Lambda,{\bm r}_1\Lambda)\,{\cal K}_{\bm{x z x}} -
R({\bm r}_2\Lambda,{\bm r}_2\Lambda)\,{\cal K}_{\bm{y z y}}
\end{align}
where the vectors ${\bm r}_i$ are defined in (\ref{eq:distshort}).
Our final result for the resummed BK equation is then written as
\begin{equation}
\label{eq:BK-full}
\partial_Y N_{Y,\bm{x y}} = \frac{ N_c}{2\pi}\int\!\! d^2 z
\
\Tilde{\cal M}_{\bm{x z y}}\,\,
\Big(
N_{Y,\bm{x z}}+ N_{Y,\bm{z y}}-N_{Y,\bm{x y}}
- N_{Y,\bm{x z}}\, N_{Y,\bm{z y}}\Big)
\ .
\end{equation}
\subsection{Numerical implementation}
\label{Sec:numerical}
The numerical solution of the BK evolution equation~\eqref{eq:BK-full}
requires care. In the following we briefly describe the choices we made
in our implementation.
Firstly, we consider only translationally--invariant and
spherically--symmetric solutions; we set $\bm{x} \rightarrow 0$ and
$N_{\bm{xy}} \rightarrow N_{|\bm{y}|}$. Despite this, the
$\bm{z}$-integral in
Eq.~\eqref{eq:BK-full} renders the evolution equation in essence
two-dimensional. The simplest way to perform this integral is to discretize the
two-dimensional space using a finite regular square lattice. This was used
in~\cite{Rummukainen:2003ns} in order to compare BK with JIMWLK evolution.
This approach, however, restricts the ratio of ultraviolet and infrared
cutoffs (the ratio of the lattice size to the lattice spacing) to, at most,
$\sim 10^4$, and hence strongly limits the $Y$ range over which the shrinking
correlation length $R_s(Y)$ can be resolved on such a lattice. A solution to
this problem is to use discretization with higher resolution power at small
distances, as was done in~\cite{Albacete:2004gw}.
To achieve this we discretize $N_r$ on an even logarithmic scale,
$r_n = r_0 \exp(n\Delta)$, using $\sim 250$ points with
$r_{\text{min}} = e^{-22}/\Lambda$ and $r_{\text{max}} = 1/\Lambda$. The
two-dimensional integral in Eq.~(\ref{eq:BK-full}) is evaluated using
nested Simpson integrations in the
($\log |\bm z|$, $\arg\bm z$) coordinates.
While the values of $|\bm{z}|$ and $|\bm{y}|$ are
restricted to discrete $r_n$-values, $|\bm{z-y}|$ is not. Thus, it
is necessary to interpolate the value of $N_{|\bm{z-y}|}$ in
Eq.~\eqref{eq:BK-full} using the known points $N_{r_n}$. We have
checked the stability of the simulation by changing the number of
discretization points, and verified that this leads to a
negligible change in the results.
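A minimal sketch of this discretization (illustrative only; the linear
interpolation in $\log r$ shown here is one simple choice, the text does not
fix the interpolation order):
\begin{verbatim}
import numpy as np

n_pts = 250
r_min, r_max = np.exp(-22.0), 1.0      # in units of 1/Lambda
Delta = np.log(r_max/r_min)/(n_pts - 1)
r_grid = r_min*np.exp(Delta*np.arange(n_pts))   # r_n = r_0 exp(n Delta)

def N_of(N, rr):
    # interpolate N at |z - y|, linear in log r
    return np.interp(np.log(np.clip(rr, r_min, r_max)),
                     np.log(r_grid), N)
\end{verbatim}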
In order to evaluate the $R$-functions in the kernel $\Tilde{\cal M}$ of
(\ref{eq:BK-running-kernel}),
we need to integrate over the Borel function $B(u)$. However, as
described in Sec.~\ref{sec:perturbative-expansions}, one cannot
directly use the expressions in Eqs.~(\ref{eq:BTa}) to~(\ref{eq:BLb})
when integrating over the positive real axis,
because of the presence of single and double infrared
renormalon poles there. As
described in App.~\ref{sec:PV-def-pert-sum},
the double poles were converted analytically
into simple ones by means of integration by parts. The integration over
the simple poles was performed by a numerical implementation of the
Principal Value prescription. This implementation is based on different
analytic formulae, depending on the value of $\xi=r_< / r_>$:
for $\xi^2 < 0.8$ we use
\eqref{eq:RT-single-pole-version} and
\eqref{eq:RL-single-pole-version} for $R^T$ and $R^L$, switching
over to \eqref{eq:RT-single-pole-version-2} and
\eqref{eq:RL-single-pole-version-2} when $\xi^2 > 0.8$. The
integrals are completely stable when the threshold value of $\xi^2$
is changed within the interval $(0.01,0.99)$, and our default value of $0.8$
is chosen to optimize the evaluation speed. The numerical evaluation of the
kernel is further stabilized by pulling the integrands of the three
$R$-functions appearing in \eqref{eq:BK-running-kernel} under a
single integral.
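For a single simple pole at $u=m$ the numerical Principal Value amounts to
the usual subtraction; schematically (a sketch of the idea only, the actual
implementation uses the analytic pole sums of
Appendix~\ref{sec:PV-def-pert-sum}):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def pv_simple_pole(g, m, u_max):
    # PV int_0^u_max du g(u)/(u - m), with g regular at u = m:
    # integrate the subtracted, regular integrand and add back
    # PV int_0^u_max du/(u - m) = log((u_max - m)/m)
    reg = lambda u: (g(u) - g(m))/(u - m) if u != m else 0.0
    val, _ = quad(reg, 0.0, u_max, points=[m])
    return val + g(m)*np.log((u_max - m)/m)
\end{verbatim}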
If the parent dipole is much smaller than the daughter dipoles, $r
\ll r_1 \sim r_2$, the kernel $\tilde{\cal M}$ contains large terms
that cancel each other almost completely, leading to numerical
difficulties. This can be dealt with by recognizing that the large
contributions arise in fact from the leading terms in the series expressions
of the Borel functions, Eqs.~\eqref{eq:RT-single-pole-version-2} and
\eqref{eq:RL-single-pole-version-2} appearing in \eqref{eq:BK-full}
(in this parameter region $1-\xi^2 \approx 0$). Combining the first
terms analytically stabilizes the evaluation of the series.
The evolution in rapidity $Y$ is performed with the second-order
Runge-Kutta method, with a ``time'' step of $\delta Y = 0.05 \ldots 0.1$.
Again, we observed no significant difference in the results upon varying the
step size.
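Schematically, a single evolution step reads (a sketch; \texttt{rhs} is
assumed to evaluate the right--hand side of \eqref{eq:BK-full} on the
$r$-grid):
\begin{verbatim}
def rk2_step(N, dY, rhs):
    # second-order (midpoint) Runge-Kutta step for dN/dY = rhs(N)
    k1 = rhs(N)
    k2 = rhs(N + 0.5*dY*k1)
    return N + dY*k2
\end{verbatim}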
\subsection{Simulation results}
\label{sec:simulation-results}
The key feature of JIMWLK and BK evolution is scaling of the solutions with
$Q_s(Y)$ after the initial--state effects have died out.
This scaling is exact in the fixed--coupling limit and has been
shown to be retained to a very good approximation in simulations with ad hoc
implementations of running coupling. Also our all--order resummation result
shows this feature, as is demonstrated in Fig.~\ref{fig:N-scaling}.
The initial state in our simulations is $N_{\rm ini}(r) = 1-\exp[-r^2/r_0^2]$,
where $r_0$ sets the initial scale. The evolution tends to flatten $N(Y,r)$
until the scaling shape has been reached. However, the shape remains
considerably steeper than with fixed coupling, see
Fig.~\ref{fig:N-scaling}~(b). From this plot we also see that the
scaling shape is quite similar for the running coupling
corrections computed in this paper and the ad hoc prescriptions shown. Thus,
the asymptotic shape of $N(Y,r)$ is largely insensitive to the functional dependence
of the scale of the coupling on the phase space.
\begin{figure}[tb]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{N_tau_evo_T_L.eps}
\\ (a)
\end{minipage}
\hfill
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{N_tau_mod.eps}
\\ (b)
\end{minipage}
\caption{\small \em (a): The evolution of the function $N(r)$
as a function of $Y
= \ln 1/x$, shown in intervals of $\Delta Y=4$. After initial
settling down the function evolves towards smaller $r$ while
approximately preserving its shape. The main effect of the
running coupling is to slow down the evolution at small $r$.
(b): The shape of $N(Y,r)$ using the computed running--coupling
corrections (PV) as well as some running--coupling models,
taken at $Y$ where the saturation scale
{\hbox{$R_s(Y) =
0.005/\Lambda$}},
where we define $R_s$
through $N(Y,R_s)=1/2$. Different
running--coupling forms display similar shapes for $N(r)$, albeit slightly
steeper for the parent--dipole running; much shallower than the
initial state $N_{\rm ini} = 1-e^{-r^2/r_0^2}$,
but much steeper than the fixed coupling shape shown for
comparison.}
\label{fig:N-scaling}
\end{figure}
\begin{figure}[tb]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{tau_rs_all_mod2.eps}
\\ (a)
\end{minipage}
\hfill
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{rs_lambda_all_mod2.eps}
\\ (b)
\end{minipage}
\caption{\small \em (a): the evolution of the saturation scale
$R_s(Y)$ with the computed running--coupling corrections (PV), in comparison
with a few different running--coupling models. The evolution
starts at initial \hbox{$R_s(Y=Y_0) \approx 0.1/\Lambda$}. The
horizontal dotted line shows the value of $R_s$ where $N(r)$
was shown in Fig.~\ref{fig:N-scaling}.
(b): the evolution speed $\lambda(Y)$ plotted against the saturation
scale $R_s(Y)$. The evolution starts at large $R_s$ values; the
rapid increase at $R_s\Lambda \sim 0.1$ is an initial--state
effect. Several choices of scale are presented for the
``parent dipole'', demonstrating the sensitivity to the scale and
the fact that upon tuning the scale, this model too can be made
to agree with the resummed result for $\lambda(Y)$ at small $R_s$.
}
\label{fig:R-Y-all}
\end{figure}
In Fig.~\ref{fig:R-Y-all} (a) we show the evolution of the
saturation scale $R_s \Lambda$ as a function of $Y$; here the
full PV Borel sum result is compared with
the parent--dipole running and the ``sqrt'' ansatz of~\eqref{eq:parent-sqrt}.
The correlation length $R_s$ characterizes the scale where $N(Y,r)$ is
changing and saturation sets in; here we define $R_s(Y)$ through the condition $N(Y,R_s(Y)) =
1/2$. The initial $R_s(Y=Y_0)$ is $ \approx 0.1/\Lambda$. We see that
$R_s(Y)$ rapidly evolves towards smaller values.
We note that a different definition of $R_s(Y)$ would naturally give
somewhat different curves. However, when the system has evolved
sufficiently long so that it reaches the scaling form, all reasonable
definitions should yield $R_s(Y)$ values that differ only by a
constant factor. Above all, the evolution speed
\begin{equation}
\label{eq:ideal-lambda}
\lambda(Y) = \frac{1}{Q_s^2(Y)}\frac{\partial Q_s^2(Y)}{\partial Y}
= - \frac{1}{R_s^2(Y)}\frac{\partial R_s^2(Y)}{\partial Y}\,; \qquad\qquad Q_s
:= 1/R_s
\end{equation}
should be the same.
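In practice we extract $R_s(Y)$ from the stored solution by interpolation
and evaluate the derivative in Eq.~\eqref{eq:ideal-lambda} by finite
differences; a minimal sketch (assuming, as holds for our solutions, that
$N$ increases monotonically with $r$):
\begin{verbatim}
import numpy as np

def saturation_radius(r, N, C=0.5):
    # R_s with N(R_s) = C, by linear interpolation in ln r
    # (assumes N increases monotonically with r).
    i = np.searchsorted(N, C)
    w = (C - N[i-1])/(N[i] - N[i-1])
    return np.exp((1.0 - w)*np.log(r[i-1]) + w*np.log(r[i]))

def evolution_speed(Y, Rs):
    # lambda(Y) = -d ln Rs^2 / dY, by central finite differences.
    return -np.gradient(np.log(Rs**2), Y)
\end{verbatim}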
As with the ad hoc prescriptions, the most interesting consequence of running
coupling on JIMWLK and BK evolution is the fact that it drastically slows down
the evolution compared to the purely fixed--coupling case
by restricting the active phase
space to within one order of magnitude of the characteristic scale
$Q_s(Y)$~\cite{Rummukainen:2003ns}. This qualitative feature
does not depend on
the details of the implementation of the running--coupling effects.
The precise value of the evolution speed, as expressed by the evolution rate
$\lambda(Y)$, on the other hand, does depend on these details. For fixed
coupling, scaling with $Q_s$ is exact, and $\lambda$ becomes a constant
proportional to $\alpha_s(\mu^2)$~\cite{Iancu:2002tr}. With running coupling,
scaling is approximate, and $\lambda$ becomes a function of $Y$ that will
receive both perturbative and non-perturbative corrections via
$R({\bm r}_1\Lambda,{\bm r}_2\Lambda)$, which we already examined numerically in
Fig.~\ref{fig:R-non-pert-uncert}.
While using Eq.~\eqref{eq:ideal-lambda} with $N(Y,R_s(Y)) = C$
(where $C=1/2$ was used above) is certainly possible,
during the initial stages of
the evolution it is sensitive to the particular value of the
constant $C$ chosen. A more robust definition is the one used
in~\cite{Rummukainen:2003ns}, using a $1/r^2$ moment of the
evolution equation as an operational definition of $\lambda$:
\begin{equation}
\label{eq:running-lambda-integral-def}
\lambda(Y) = \frac{N_c}{2\pi} \int
\frac{d^2r}{r^2} \int\,d^2z\ {\cal M}_{\bm{x z y}}\
\big( N(Y,r_1^2)
+ N(Y,r_2^2) - N(Y,r^2)
- N(Y,r_1^2) N(Y,r_2^2)
\big)
\end{equation}
We will follow this convention here.
Our results for $\lambda(Y)$ are shown in Fig.~\ref{fig:R-Y-all} (b). Here
we plot $\lambda$ against $R_s(Y)$; in this way the result is independent of
the initial value for $Y$.
The initial--state effects are visible as a very rapid increase of
$\lambda$ near the initial (large) $R_s$.
As soon as these effects die out, the evolution approaches an
initial--state--independent curve (see Fig.~\ref{fig:lambda-initial}).
On this curve $N(r)$ has reached the scaling shape shown
in Fig.~\ref{fig:N-scaling}, and $\lambda$ decreases slowly as $R_s$
decreases at large $Y$. This is entirely driven by the logarithmic
decrease of the coupling at short distances.
In this plot one can also observe the sensitivity of the evolution rate
to running--coupling corrections. We show the resummed result computed by the
PV prescription (full line) in comparison with LO evolution where the scale is
set in a variety of ways. LO parent--dipole running with $\mu^2=4/r^2$ strongly
under--estimates the evolution rate \emph{even} at very small correlation lengths
$R_s$. This clearly demonstrates the significance of higher--order
running--coupling corrections, which are included in the PV Borel sum.
Interestingly, as seen in Fig.~\ref{fig:R-Y-all} (b),
one can approximate the PV Borel sum result
(at sufficiently small $R_s$) with a variety of different functional forms,
\emph{provided} that an appropriate choice of scale is made.
Thus, despite the large differences in the
``effective charge'' between the resummed result and the
models considered --- which are particularly significant for large dipoles ---
the differences in~$\lambda$, at sufficiently
small $R_s$, reduce to an overall phase--space independent
multiplicative factor in the scale of the coupling.
With the specific scales given in Eq.~\eqref{eq:mu-setting} the ``square root''
ansatz approximates the PV Borel sum well, whereas parent--dipole running
slightly underestimates it. In the latter case, choosing
$\mu^2=2{\rm e}^{\frac53-2\gamma_E}/r^2$ yields a very good approximation to
$\lambda$ for $R_s\Lambda\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 10^{-2}$, but somewhat over--estimates it
at larger $R_s$. The possibility to approximate $\lambda$ in
(\ref{eq:running-lambda-integral-def}) well \emph{as a function of
the correlation length} (provided it is sufficiently small)
in terms of a phase--space independent coupling, such as the parent--dipole one,
is probably related to the scaling property of $N(Y,r)$, namely the fact that
its shape, which determines the weight given to different final states in
(\ref{eq:running-lambda-integral-def}), is invariant.
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{rs_lam_pv_init.eps}
\caption{\small \em The evolution speed
$\lambda(R_s)$ for different initial conditions $N_{\rm ini}(r) =
1-\exp(-r^2/r^2_0)$, using the PV Borel sum.
As the initial--state effects vanish, $\lambda(R_s)$ approaches
a universal evolution trajectory.}
\label{fig:lambda-initial}
\end{figure}
In Fig.~\ref{fig:lambda-initial} we show the evolution rate
$\lambda(Y)$ as a function of $R_s$ using different initial conditions.
The trajectories approach a universal curve, which is independent
of the initial condition.
In Fig.~\ref{fig:lambda-nonpert-uncertainties}~(a) we show the
evolution rate as a function of $R_s$ using the perturbatively expanded kernel,
Eq.~\eqref{eq:pert-exp} at different truncation orders. Here the
scale of the coupling is chosen to be
$\mu^2={8e^{-5/3-2\gamma_E}}/({r_1^2+r_2^2})$ as in
(\ref{eq:mu-setting}).
We note that in contrast with the Borel sum,
the perturbative expansion suffers from a Landau pole when the
daughter dipoles become too large. These Landau singularities
obstruct the phase--space integration, so the large $r_i$ region
must be cut out. To avoid any visible effect by this cut we
performed the simulation of
Fig.~\ref{fig:lambda-nonpert-uncertainties}~(a) with relatively
small initial\footnote{To study the fixed--order perturbative result
at larger $R_s$ with this or a similar phase--space dependent scale
one would have to consider the effect of the cut or
introduce another regularization of the Landau pole.} $R_s$. In the
small $R_s$ regime shown, one observes good convergence towards the
PV Borel sum. At sufficiently small $R_s$ only the leading--order
result differs significantly from the PV one.
This is of course specific to the (almost optimal) choice of scale
made here. Much larger corrections appear for a generic scale choice
as we have already seen in Fig.~\ref{fig:R-Y-all}.
Fig.~\ref{fig:lambda-nonpert-uncertainties}~(b) shows the effect of the
estimated power corrections on evolution.
The relative importance of the
power corrections decreases with $Q_s(Y)$: the active phase space
follows $Q_s(Y)$ towards the ultraviolet.
With increasing $Q_s(Y)$ the uncertainty induced by power
corrections dies away very rapidly. Despite this
general trend, it is apparent that a quantitative determination of the
evolution speed for current experiments, where $Q_s=1-2$ GeV,
requires taking power corrections into account.
To test the predictive power of the approach, it is important to
compare several distinct observables, and address
the universality of the relevant power corrections.
\begin{figure}[tb]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{rs_lam_pv_pt_comp.eps}
\\ (a)
\end{minipage}
\hfill
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{rs_lambda_pv.eps}
\\ (b)
\end{minipage}
\caption{\small \em (a):~Evolution using perturbatively
expanded kernel, Eq.~\eqref{eq:pert-exp} with
$\mu^2={8e^{-5/3-2\gamma_E}}/({r_1^2+r_2^2})$.
(b):~Estimating the non-perturbative influence on $\lambda(R_s)$
by adding and subtracting $\pi\times (\mbox{residue at $u=1$})$ to
the PV evolution kernel. The effect of the residue dies out
quickly as $R_s$ becomes smaller. }
\label{fig:lambda-nonpert-uncertainties}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We have presented a first calculation of running--coupling
corrections to both JIMWLK and BK equations. We have shown that both
these equations, which were originally derived at leading
logarithmic accuracy with strictly fixed coupling, can indeed be
promoted to the running--coupling case: the general structure of the
equations remains the same, while the kernel receives corrections.
Running--coupling corrections are singled out from other radiative
corrections in the following way: with or without running coupling,
the r.h.s of the evolution equation of any correlation involves
\emph{just
one} more Wilson line (the produced gluon) than the l.h.s (the evolving
object); other radiative corrections entering at the NLO involve up
to \emph{two} more Wilson lines. The number of additional Wilson
lines grows further at higher orders. It remains for future work to
fully generalize the non-linear small--$x$ evolution equations to
NLO. Some work in this direction, which is complementary to ours,
has already been done~\cite{Lublinsky:2001bc, Balitsky:2001mr,
Gotsman:2004xb}. A full NLO generalization exists of course for the linear
case of the BFKL equation, and our expectation is that it exists in
the non-linear case as well. Since the BFKL equation is known in
full at NLO, one can make a useful comparison. This goes beyond the
scope of the present paper. One important fact, however, that we do
learn from the BFKL limit, is that running--coupling corrections
constitute a significant part of the total NLO correction. This, we
expect holds also in the JIMWLK and BK cases.
The significance of the running coupling in the context of JIMWLK and BK
evolution has been acknowledged long ago. For one thing, in fixed--coupling
evolution the active phase space extends way into the ultraviolet,
while practically any dependence of the coupling on the scales
present at a single evolution step reduces the active phase
space to within one order of magnitude around the correlation length
$R_s$~\cite{Rummukainen:2003ns}. For this reason {\em all}
simulations of JIMWLK or BK evolution have been performed with running
coupling, implemented using some educated guess as to its scale dependence.
The simulations presented in this paper are the first where the scale
dependence is computed from QCD perturbation theory. While
\emph{qualitative} features of the effects of running coupling on the evolution
--- e.g. the decrease of the evolution rate $\lambda(Y)$ with
decreasing correlation length $R_s(Y)$ at large $Y$ --- are similar
to what has been observed before, the scale factor itself has been
determined here for the first time. As shown in
Fig.~\ref{fig:R-Y-all}, using the correct scale is crucial in
obtaining a quantitative estimate of the evolution rate. With this
predictive power, we can hope that non-linear evolution equations
with running coupling would become directly applicable to small--$x$
phenomenology at LHC energies.
Technically, running--coupling corrections are computed, as usual,
by focusing on a specific set of diagrams where a single gluon is
dressed by fermion--loop insertions, making use of the linearity of
$\beta_0$ in the number of flavors $N_f$. To perform this
calculation we utilized in this paper the dispersive approach, where
the all--order sum of vacuum--polarization insertions is traded for
a dispersive integral in the ``gluon mass''. While this technique
has been used extensively in the past, using it to compute the
diagrams entering the JIMWLK Hamiltonian is non-trivial for two
reasons: first, the large--$N_f$ limit by itself is not sufficient
to disentangle running--coupling effects from the new production
channel of a $q\bar{q}$ pair; second, the presence of the strong
Weizs\"acker-Williams background field may interfere with the
``dressing'' by interacting with the fermions. Guided by the
correspondence between real and virtual corrections owing to
conservation of probability, we could nevertheless disentangle
running--coupling corrections from other contributions and
generalize the dispersive technique to the case of the background
field. To this end we derived a dispersive representation of the
dressed gluon propagator in the background field. This formal
development may well have other applications.
By computing the JIMWLK Hamiltonian using the dispersive technique
we could go beyond the NLO ${\cal
O}(\beta_0\alpha_s^2)$ running--coupling corrections to the kernel,
and resum ${\cal O}(\beta_0^{n-1}\alpha_s^n)$ corrections \emph{to all
orders} in perturbation theory. Besides the obvious advantage of obtaining
an exactly renormalization--scale invariant kernel, the all--order
calculation offers a unique window into the non-perturbative side of
the problem. The fact that infrared--finite evolution equations can be
established in the high--energy limit, does not mean of course that
the dynamics governing the evolution is purely perturbative;
non-perturbative dynamics affects the evolution through
power--suppressed corrections. By performing an all--order resummation
one can get access to this infrared sensitivity by looking at the
ambiguity in separating perturbative and non-perturbative
contributions; these are the infrared renormalons. Using Borel
summation we identified explicitly the ambiguities in defining the
perturbative sum, and in this way established an estimate for the
parametric dependence of the non-perturbative effects on the hard
scales involved and the potential magnitude of these effects.
We find that both perturbative and non-perturbative corrections
modify the evolution kernel in a non-trivial way. In particular,
these corrections depend on all the different scales present: the
``parent dipole'' $r$ as well as the two ``daughter dipoles'',
$r_1$ and $r_2$. In this way different final states are weighted
differently. An interesting feature at NLO and beyond is the
appearance of two classes of contributions: one that is proportional
to the LO kernel, which is associated uniquely with transversely
polarized gluons, and one that is not, which is associated with
longitudinal polarizations. Both perturbative and non-perturbative
corrections to the kernel propagate through the evolution and
affect the rate at which the saturation scale $Q_s(Y)$ flows
towards the ultraviolet with increasing energies. Conversely, the
saturation scale itself determines the active phase space, leading
to the decoupling of soft modes at large $Y$.
We studied here the effect of the newly computed corrections on two
levels: first by looking at the ``effective charge'' controlling the
contribution to the r.h.s. of the evolution equation, and then by
solving the BK equation numerically and studying the effects on the
evolution itself. We found, on both levels, that the
running--coupling corrections are significant.
Our simulations have shown that non-perturbative corrections are
strongly suppressed at high energies, reflecting the fact that the
increase of $Q_s(Y)$ with energy moves the active phase space along
with it towards shorter distances. At presently accessible energies,
where the saturation scale is estimated to be $Q_s(Y)\sim
1-2$ GeV, power corrections are definitely relevant. Going
from high to low energies, one may view the breakdown of the
perturbative evolution as the onset of the Soft Pomeron. If we
accept this premise, our calculation opens a new way to think about
the Soft Pomeron: as the energy is lowered the number of
power--suppressed terms that need to be included in the evolution
kernel increases. A possible determination of the power terms from
data can thus be attempted at intermediate energies, before the
power expansion breaks down. A detailed examination of this idea
must address the distinction between the initial condition to the
evolution and the corrections to the kernel itself.
In conclusion, we made an important step in extending the framework
of non-linear evolution equations at small $x$ to include
running--coupling and power corrections. Nevertheless, this endeavor
is by no means complete: small--$x$ evolution at high densities
presents many challenges, some of which we touched upon in this
paper. This includes for example the generalization of the
non-linear evolution equations to the NLO; the detailed comparison
with the NLO BFKL kernel; understanding the relation with DGLAP
evolution and higher--twist corrections to the twist expansion;
gaining better understanding of the initial condition for the
evolution; the development of (experimentally accessible!)
observables of different degree of inclusiveness, which are
sensitive to the dynamics underlying the evolution; and the
determination of non-perturbative corrections affecting the
small--$x$~evolution.
\section*{Acknowledgments}
E.G. wishes to thank Francesco Hautmann for illuminating
discussions. H.W. is very grateful to Ian Balitsky for sharing his insights on
computing running--coupling corrections to small-$x$ evolution. H.W.
also acknowledges extensive discussions with Yuri Kovchegov that
lead to an independent investigation of this topic in terms of an
explicit diagrammatic calculation presented in~\cite{HW-YK:2006}.
The work of H.W. is supported in part by the U.S. Department of
Energy under Grant No. DE-FG02-05ER41377. JK and KR were partly
supported by the Academy of Finland contract 104382.
\subsection{Reverberation: from close-talk to far-field}
\label{sec:rir}
The far-field speech can be obtained by convolving the close-talk speech with room-impulse response filters:
\vspace{-0.5em}
\begin{equation}
\label{eq:reverb}
x_R(t) = x_S(t)\ast r(t)
\end{equation}
where $r$ is the room impulse response, $x_S$ is the source close-talk signal and $x_R$ is the reverberated signal, respectively. The image-source method, as presented originally in \cite{Allen1979_ImageMF}, can be used to precisely simulate the RIRs for small rooms with known measurements. However, the computation grows exponentially with the order of reflections, which prevents practical usage.
In this paper we applied a proprietary library of RIR filters \cite{Raju2018_Augm} to mimic far-field audio. The RIR filter library was collected through real devices set up in real rooms, resembling household living rooms, by playing audio signals such as chirps at different locations. By varying the locations of the devices or the sound sources, the RIR library approximates the effects of playing the audio at random locations in the room.
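As an illustration, a minimal Python sketch of the convolution above; the
decaying-noise RIR here is only a placeholder, whereas in our pipeline the
filter is drawn at random from the measured RIR library:
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def reverberate(x_s, rir):
    # Far-field signal: close-talk x_s convolved with an RIR.
    x_r = fftconvolve(x_s, rir)[:len(x_s)]
    # Keep the overall level comparable to the dry signal.
    return x_r*np.sqrt(np.mean(x_s**2)/(np.mean(x_r**2) + 1e-12))

# Placeholder RIR: direct path plus ~300 ms of decaying noise at 16 kHz.
fs = 16000
t = np.arange(int(0.3*fs))/fs
rir = np.random.randn(t.size)*np.exp(-t/0.05)
rir[0] = 1.0
\end{verbatim}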
\subsection{Corruption: improving the noise robustness}
\label{sec:corruption}
Since our close-talk data is from clean acoustic environment, the reverberated data is also acoustically clean, hence it does not reflect well the real noisy conditions.
We employ two internally collected noise data libraries, household noise and music/movie audio, to improve the performance in both general household and media-playing conditions. The simulated noisy far-field data can be obtained as
\vspace{-0.5em}
\begin{equation}
\label{eq:corrup}
x_A(t) = x(t) + \alpha\cdot n(t) + \beta\cdot m(t)
\end{equation}
where $x\in x_R\cup x_S$ is the clean original CTM or reverberated signal, $x_A$ denotes the corrupted signal, $n$ and $m$ are noise or music interferences, respectively. $\alpha$ and $\beta$ are the scaling factors to control the ratio of the two noise sources and they are coupled with the target SNR, which is randomly drawn from a normal distribution with mean 10.0 dB and standard-deviation 3.0 dB.
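A minimal sketch of the corruption step above; the 50/50 split between the
two noise sources (\texttt{music\_frac}) is illustrative, and the two
interferences are assumed uncorrelated so that their powers add:
\begin{verbatim}
import numpy as np

def corrupt(x, n, m, snr_mean_db=10.0, snr_std_db=3.0, music_frac=0.5):
    # x + alpha*n + beta*m, with alpha, beta chosen so that the total
    # interference power matches a target SNR ~ Normal(10 dB, 3 dB).
    # Assumes n and m are uncorrelated, so their powers add.
    snr_db = np.random.normal(snr_mean_db, snr_std_db)
    p_int = np.mean(x**2)/10.0**(snr_db/10.0)
    alpha = np.sqrt((1.0 - music_frac)*p_int/(np.mean(n**2) + 1e-12))
    beta = np.sqrt(music_frac*p_int/(np.mean(m**2) + 1e-12))
    return x + alpha*n + beta*m
\end{verbatim}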
\subsection{Mixed-condition training}
We use a combination of the above data augmentation methods to create a mixed-condition training set of clean, clean+reverb, clean+noise, clean+reverb+noise speech.
Any labels, such as the DNN targets, of the augmented data are the same as the labels of the original data. We pool the mixed-condition data together and train a single DNN model, which is also called a mixed-condition model or multi-style model \cite{Huang2014_MCT}.
\section{Introduction}
\label{sec:intro}
\input{intro}
\vspace{-0.5em}
\section{System overview}
\label{sec:system}
\input{system}
\vspace{-0.5em}
\section{Multi-condition training}
\label{sec:augm}
\input{augm}
\vspace{-0.5em}
\section{Semi-supervised learning}
\label{sec:ssl}
\input{ssl}
\vspace{-0.5em}
\section{Experiments}
\label{sec:expr}
\input{experiments}
\vspace{-0.5em}
\section{Conclusion}
\label{sec:concl}
\input{conclusion}
\bibliographystyle{IEEEbib}
\subsection{Model architecture}
The WW spotting system used in this paper is composed of a compressed feed-forward deep neural network (DNN), a posterior smoothing module and a peak detection module, as shown in Fig. \ref{fig:mtdnn}. The input audio signal to the WW spotter is resampled at 16 kHz and a 20-bin log filterbank energy (LFBE) feature is computed every 10 msec with a 25 msec observation window. The LFBE features from a context window of 31 frames (20 on the left and 10 on the right) are stacked into a 620-dimensional vector and fed into the DNN.
The DNN is composed of three non-linear fully-connected layers with 400 nodes each, and before each non-linear layer a linear bottleneck layer with 87 nodes is inserted by decomposing the weight matrix into two smaller matrices, similar to \cite{Tucker2016_SVD,Sun2017_TDNN,Sainath2013_SVD}. The output layer is a two-dimensional softmax, which can be easily extended to multiple WWs. For decoding, the posteriors corresponding to the WW are first smoothed by a windowed moving average, where the window size is approximately the same as the average duration of the WW; then a thresholding and peak-detection algorithm is applied to infer both the location and duration of a WW instance.
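A minimal sketch of this decoding stage (the window length and threshold
below are illustrative, not the production values):
\begin{verbatim}
import numpy as np

def detect(posteriors, win=76, threshold=0.5):
    # Moving-average smoothing of the frame-wise WW posterior (win ~ the
    # average WW duration, e.g. ~760 ms at a 10 ms frame shift), followed
    # by thresholding; each maximal run above threshold yields one
    # detection with a peak location and a duration (both in frames).
    smoothed = np.convolve(posteriors, np.ones(win)/win, mode='same')
    above = np.r_[False, smoothed > threshold, False]
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2], edges[1::2]
    return [(s + int(np.argmax(smoothed[s:e])), e - s)
            for s, e in zip(starts, ends)]
\end{verbatim}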
\vspace{-0.5em}
\begin{figure}[htp]
\centering
\includegraphics[width=0.48\textwidth]{figures/Spotter.png}
\caption{The WW spotter.}
\label{fig:mtdnn}
\end{figure}
\vspace{-1em}
\vspace{-0.5em}
\subsection{Data sets}
We choose ``Amazon" as our new wake word for experiments. A total of 9487 utterances beginning with the chosen WW were recorded from a close-talk microphone in quiet room, denoted as "CTM10K" dataset. The average duration of the utterances is 3.86 secs and the total duration of the dataset is 10.18 hrs. For comparison, a baseline production-grade WW model was trained using 375 hours of transcribed speech data collected from far-field devices in real use cases.
The evaluation dataset is composed of more than 300 hours of audio collected from far-field, mid-field and near-field devices in real use cases under various acoustic conditions. The dataset is sufficiently large to show strong statistical significance.
\begin{figure*}[htp]
\centering
\subfloat[][]{
\includegraphics[width=0.67\columnwidth]{figures/DET_MCT}}
\subfloat[][]{
\includegraphics[width=0.67\columnwidth]{figures/DET_SSL}}
\subfloat[][]{
\includegraphics[width=0.67\columnwidth]{figures/DET_Altogether}}
\centering
\caption{ DET curves with (a) multi-condition training sets of different sizes; (b) untranscribed and partial transcribed data; (c) multi-condition training and semi-supervised learning combined.}
\label{fig:results}
\end{figure*}
\vspace{-0.5em}
\subsection{Experimental setup}
We use the GPU-based distributed trainer described in \cite{Strom2015_Nemo} for DNN training. The models are trained in a transfer-learning paradigm where the weights of the DNN are initialized from an ASR acoustic model of the same architecture and size trained with ASR senone targets. During training a multi-task learning framework similar to \cite{Panchi2016_Multitask, Sun2017_TDNN,Guo2018_Highway, Wu2018_Monophone, Raju2018_Augm} is used, where the loss is a weighted sum of the WW loss and the ASR loss. The ASR branch regularizes the DNN training and is especially helpful when the training dataset is small.
The evaluation metrics in this paper are false reject rate (FRR), which is 1 minus recall, and false alarm rate (FAR), which is a normalized number of false accepts. Absolute values of FAR in this paper have been anonymized for confidentiality reasons. The range of used FAR corresponds to the range normally used for production keyword spotting models. We report the two metrics in a detection-error-tradeoff (DET) curve as we tune the decision thresholds for each model. The lower the DET curve, the better the performance.
\vspace{-0.5em}
\subsection{Results of multi-condition training}
Our data augmentation strategy builds training sets from four conditions, denoted (CTM, CTM+R, CTM+N and CTM+RN), where CTM, R, N, RN denote the original close-talk microphone data, reverberation, noise corruption, and reverberation followed by corruption, respectively.
We sweep the size of the training set by increasing the portion of clean CTM data to as much as 10\% by repeated up-sampling,
and mixing with equal amounts of the three augmented datasets. The augmentation settings used in our experiments are detailed in Table \ref{tab:augmentation}.
\vspace{-0.5em}
\begin{table}[htp]
\centering
\caption{Details of the multi-condition training sets.}
\label{tab:augmentation}
\begin{tabular}{ccccc}
\hline
size & CTM & CTM+R&CTM+N&CTM+RN\\
\hline
50K&10K&14K&14K&14K\\
200K&20K&60K&60K&60K\\
350K&35K&105K&105K&105K\\
500K&50K&150K&150K&150K\\
\hline
\end{tabular}
\end{table}
\vspace{-0.5em}
Fig. \ref{fig:results} (a) shows the DET curves of the augmented training sets (prefixed with ``A'') of size 50K, 200K, 350K and 500K respectively, compared to the two baselines: ``Baseline-CTM10K'' (blue) and ``Baseline-Prod'' (orange). The DET curves show that multi-condition training boosts the performance significantly.
In our experiments 20x augmentation (A200K) achieves the maximum gain, which suggests that excessive augmentation may hurt the performance.
There remains a gap between the best multi-condition DET curve (A200K) and the Baseline-Prod DET curve, indicating that some mismatch exists between the simulated data and the real world.
\vspace{-0.5em}
\subsection{Results of semi-supervised learning}
We use the semi-supervised learning (SSL) strategy described in section \ref{sec:ssl} to construct two datasets (prefixed with ``U'') of 500K and 1000K total utterances from the untranscribed in-house production speech corpus by employing a production-grade ASR model similar to \cite{Maas2017_Endpointing}.
Both datasets contain equal portions of utterances containing the WW and the confusable words, as described in section \ref{sec:confusable}.
Fig. \ref{fig:results} (b) shows the DET curves of the WW model trained with those datasets. The untranscribed datasets also boost the WW detection performance significantly, and U1000K achieves more gain than U500K.
Note that a gap still exists between the DET curves of U1000K and Baseline-Prod, which suggests that untranscribed data alone is not enough to achieve production performance. We further transcribed 10\% of the 500K utterances to study the effect of human input and created the dataset called ``U450K+T50K'', where ``T'' denotes transcribed. The model trained with this dataset outperforms the models trained with untranscribed-only data or augmented-only data, and its DET curve is very close to that of Baseline-Prod.
\subsection{Results of MCT and SSL combined}
Having observed that the multi-condition training (MCT) and semi-supervised learning (SSL) approaches work better in different regions, we hypothesize that they supplement each other and achieve better performance when combined. Therefore we take the best augmented dataset (A200K) and add it to the untranscribed or the partially-transcribed dataset, denoted as A200K+U500K and A200K+U450K+T50K, respectively; the resulting DET curves are shown in Fig. \ref{fig:results} (c).
We can observe that the A200K+U500K DET curve becomes much closer to the Baseline-Prod DET curve.
Moreover, with 50K utterances transcribed, the A200K+U450K+T50K DET curve is further improved and performs slightly better than the Baseline-Prod model. This observation indicates that each approach (multi-condition training, semi-supervised learning, transcription) has its own strengths and limitations, and that they can supplement each other to achieve better-than-production-quality performance.
\subsection{Relation to prior work}
\subsection{Lexicon filter}
\label{sec:confusable}
In machine learning, emphasizing hard examples during training commonly leads to better model discrimination and generalization.
For a particular WW, a hard example
is ideally a combination of syllables that is pronounced similarly to the WW. However, accurately calculating the similarity of the pronunciations of words in two sentences is computationally intractable. Here we propose a mechanism called a ``lexicon filter'', which uses the Levenshtein distance \cite{Navarro2001} between the phoneme representation of each word, as provided by the lexicon of the ASR model, and that of the target WW as an approximation of pronunciation similarity, in order to define the set of confusable words for the WW. Since the word dictionary in an ASR model is usually very large, and the majority of the words rarely appear in a speech corpus, constraining the search to the most frequent words saves a large amount of computation. This set is used to filter the ASR transcripts for utterances containing the confusable words. Utterances containing such words with a confidence score higher than a threshold $\theta_N$ are added to the training set as negative examples. Without loss of generality we use the popular wake word ``Alexa'' to explain the concept of the lexicon filter in Fig. \ref{fig:alexa}.
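A minimal sketch of the lexicon filter (the lexicon entries below are made
up for illustration; the real phoneme inventory, entries and word
frequencies come from the ASR model):
\begin{verbatim}
def levenshtein(a, b):
    # Edit distance between two phoneme sequences (lists of symbols).
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        cur = [i]
        for j, pb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j-1] + 1,             # insertion
                           prev[j-1] + (pa != pb)))  # substitution
        prev = cur
    return prev[-1]

# Illustrative (made-up) entries in an ARPAbet-like notation.
ww = "AH L EH K S AH".split()
lexicon = {"alexa":    "AH L EH K S AH".split(),
           "alexis":   "AH L EH K S IH S".split(),
           "election": "IH L EH K SH AH N".split()}
confusable = {w for w, ph in lexicon.items()
              if 1 <= levenshtein(ph, ww) <= 2}
\end{verbatim}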
\begin{figure}[tp]
\centering
\includegraphics[width=0.42\textwidth]{figures/lexicon_alexa.png}
\caption{Lexicon filter for ``Alexa".}
\label{fig:alexa}
\end{figure}
\vspace{-0.5em}
\subsection{Loss function}
The SSL loss for DNN training is
\vspace{-0.5em}
\begin{equation}
\begin{aligned}
L_{SSL} =\sum_{\substack{x \in \mathbf{u} \\ \mathbf{u}\in P\cup N}}
\left(\mathbbm{1} (\mathbf{u}\in P) y\log \frac{1}{q(x)}\right.
+\left.(1-y)\log\frac{1}{1-q(x)}\right)
\end{aligned}
\end{equation}
where $P$ denotes the set of WW (positive) examples ($d = 0$) and $N$ denotes the set of confusable (negative) examples ($d \geq 1$), $x\in \mathbf{u}$ denotes the context-dependent feature frame in an utterance $\mathbf{u}$, $q(x)$ denotes the DNN output posterior probability for $x$, and $y \in \mathbf{y}$ is the corresponding framewise DNN target obtained by forced alignment using the ASR transcripts, i.e.
\vspace{-0.5em}
\begin{equation}
\mathbf{y} = fa(\mathbf{u}, ASR(\mathbf{u}))
\end{equation}
where $\mathbf{u}$ denotes an utterance in the untranscribed speech corpus and $fa$ denotes the forced alignment operator \cite{Jelinek76_ASR}.
The thresholds $\theta_P$ and $\theta_N$ can be tuned to balance the ratio of positive and negative examples in the training set. In our experiments we choose both $\theta_P$ and $\theta_N$ as $0.5$. In addition, for a six-phoneme wake word, we found the lexicon filter threshold on the Levenshtein distance $d = 1, 2$ produces sufficient negative examples for augmenting the training with confusable cases.
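For concreteness, a minimal \texttt{numpy} sketch of the per-utterance
loss above (illustrative only, not the trainer implementation):
\begin{verbatim}
import numpy as np

def ssl_utterance_loss(q, y, in_P):
    # q: frame-wise DNN WW posteriors; y: forced-alignment targets;
    # in_P: True for a WW (positive) utterance. The positive-class term
    # is masked out for confusable (negative) utterances.
    eps = 1e-12
    pos = y*np.log(1.0/(q + eps)) if in_P else 0.0
    neg = (1.0 - y)*np.log(1.0/(1.0 - q + eps))
    return float(np.sum(pos + neg))
\end{verbatim}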
\section{Basics}
Generally quantum averages result from weighting observables with complex amplitudes rather than with positive
probabilities. The technique colloquially referred to as Complex Langevin can in principle be used to replace this by a standard, statistical averaging over a suitably defined stochastic process. The method was proposed a long time ago \cite{P,Kl}, but has recently attracted a new wave of interest, especially in studies of quantum chromodynamics at finite chemical potential \cite{S1,S2}. Still, contrary to the real Langevin approach, there is no general proof of convergence \cite{S3,S4} and the evidence for success is somewhat limited \cite{Ph,Bl}.
In this article the positive probabilities, which replace the complex weights, will be constructed directly (i.e. without any reference to stochastic processes and/or Fokker-Planck equations). Thus the above-mentioned difficulties are avoided, albeit in a few simple test cases.
Building on this, the positive representation for Feynman path integrals can be derived. This is done in Ref.~\cite{Ja} for some typical
quantum mechanical applications.
The essence of the complex Langevin approach is summarized by the following relation
\begin{eqnarray}
\frac{\int f(x) e^{-S(x)} dx}{\int e^{-S(x)} dx } = \frac{\int \int f(x+i y) P(x,y) dx dy }{\int \int P(x,y) dx dy}.
\label{B1}
\end{eqnarray}
For a complex action, $S(x)$, the left hand side (LHS) does not have a statistical meaning; however, the right hand side (RHS) does, since $P(x,y)$ is the
well defined distribution of the long stochastic process associated with $S(x)$. The precise form of the Langevin equation is not relevant here. Suffice it to say that for a real action the Langevin process is real -- the density $P(x)$ is concentrated on the real axis. It satisfies the Fokker-Planck (FP) equation whose solution
converges to $e^{-S(x)}$ for large Langevin time. On the other hand, for complex actions the stochastic trajectory is driven into the complex plane. The density $P(x,y)$ satisfies the FP equation in two variables; however, the asymptotic (in the Langevin time) behaviours of its solutions are not known in general and their relations to the original complex actions are not clear \cite{AY,AFP,HW}.
Nevertheless the complex Langevin approach {\em is known} to work, as has been proven in the gaussian case:
\begin{eqnarray}
S_g(x)=\frac{1}{2}\sigma x^2,\;\;\;\;\sigma=\sigma_R+i \sigma_I, \;\;\;\sigma_R > 0.\label{S}
\end{eqnarray}
The large time asymptotics of the solution of the corresponding FP equation has been given in \cite{AY}
\begin{eqnarray}
P_g(x,y)=\exp{\left(-\sigma_R (x^2+2 r x y + (1+2 r^2) y^2)\right)},\;\;\;\; r=\frac{\sigma_R}{\sigma_I}, \label{P}
\end{eqnarray}
\begin{eqnarray}
\int_{R^2} P_g(x,y)\, dx\, dy=\frac{\pi}{\sigma_R\sqrt{1+r^2}} , \nonumber
\end{eqnarray}
and thoroughly analysed in the literature \cite{DH,HP}.
To see the validity of (\ref{B1}), e.g. for polynomial observables, consider the generating function
\begin{eqnarray}
G_{LHS}(t)= \frac{\int_{-\infty}^{\infty} e^{t x} e^{-S_g(x)} dx}{\int_{-\infty}^{\infty} e^{-S_g(x)} dx } =\exp{\left(\frac{t^2}{2\sigma}\right)}, \label{B2}
\end{eqnarray}
and the average from the RHS of (\ref{B1})
\begin{eqnarray}
G_{RHS}(t)=\frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} dx dy e^{ t (x + i y) } P_g(x,y)}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} dx dy P_g(x,y)}, \label{B5}
\end{eqnarray}
which indeed agrees with (\ref{B2}).
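This equality is also easy to verify numerically, e.g. by sampling
$P_g(x,y)$ as an ordinary bivariate gaussian (a schematic Python check;
the matrix $A$ below is read off from (\ref{P})):
\begin{verbatim}
import numpy as np

sigma = 1.0 + 0.7j                    # sigma_R + i*sigma_I, sigma_R > 0
r = sigma.real/sigma.imag
A = 2*sigma.real*np.array([[1.0, r],
                           [r, 1.0 + 2*r**2]])   # P_g = exp(-X'AX/2)
rng = np.random.default_rng(0)
x, y = rng.multivariate_normal([0, 0], np.linalg.inv(A), 400000).T
t = 0.8
print(np.exp(t**2/(2*sigma)))          # G_LHS(t)
print(np.mean(np.exp(t*(x + 1j*y))))   # Monte Carlo estimate of G_RHS(t)
\end{verbatim}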
Summarising: the complex Langevin approach can in principle be used to perform simulations with ``complex distributions''.
However, in practice, extending
the stochastic process into the complex plane encounters difficulties. Asymptotic solutions of the two-dimensional Fokker-Planck equation
are generally not known and cannot be simply constructed from the complex action. Moreover, the random walk often wanders far
in the imaginary direction and may run away or converge to the wrong answer.
\section{Generalization}
On the other hand, we do not really need to generate the positive two-dimensional distribution with a stochastic process in the complex plane.
The only real problem is to find a positive distribution which satisfies (\ref{B1}). Given $P(x,y)$ one can generate it with other
methods.
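For instance, a generic random-walk Metropolis sampler is all that is
needed (a schematic sketch; for any of the actions constructed below one
simply passes $\log P = -S(x,y)$):
\begin{verbatim}
import numpy as np

def metropolis(logP, n_samples, step=0.5, seed=1):
    # Random-walk Metropolis chain targeting a positive density P(x, y);
    # logP maps a length-2 array (x, y) to log P(x, y).
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    lp = logP(x)
    chain = np.empty((n_samples, 2))
    for i in range(n_samples):
        prop = x + step*rng.standard_normal(2)
        lp_new = logP(prop)
        if np.log(rng.random()) < lp_new - lp:
            x, lp = prop, lp_new
        chain[i] = x
    return chain
\end{verbatim}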
Therefore, we propose to avoid the difficulties of the complex random walk and concentrate instead on constructing $P(x,y)$ directly, using
eq.(\ref{B1}) as a guide. To this end rewrite the RHS of (\ref{B1}) in terms of two, holomorphic and antiholomorphic, variables
\begin{eqnarray}
\;\;\;z=x+iy,\;\;\;\bar{z}=x-iy, \label{xy}
\end{eqnarray}
\begin{eqnarray}
\frac{\int_R \int_R f(x+i y) P(x,y) dx dy }{\int_R \int_R P(x,y) dx dy}=\frac{\int_{\Gamma_z} \int_{\Gamma_{\bar{z}}} f(z) P(z,\bar{z}) dz d\bar{z} }{\int_{\Gamma_z} \int_{\Gamma_{\bar{z}}} P(z,\bar{z}) dz d\bar{z}}. \label{Pz}
\end{eqnarray}
Now, continue analytically the complex density on the LHS of (\ref{B1}) from the real axis into the complex plane
\begin{eqnarray}
\rho(x)=e^{-S(x)} \longrightarrow \rho(z), \nonumber
\end{eqnarray}
rotate the contour of integration on the LHS of (\ref{B1}), $R\rightarrow \Gamma_z$, and then seek to satisfy the relation
\begin{eqnarray}
\frac{\int_{\Gamma_z} f(z) \rho(z) dz}{\int_{\Gamma_z} \rho(z) dz} = \frac{\int_{\Gamma_z} \int_{\Gamma_{\bar{z}}} f(z) P(z,\bar{z}) dz d\bar{z} }{\int_{\Gamma_z} \int_{\Gamma_{\bar{z}}} P(z,\bar{z}) dz d\bar{z}}. \label{B3}
\end{eqnarray}
This will be the case provided
\begin{eqnarray}
\rho(z)=\int_{\Gamma_{\bar{z}}} P(z,\bar{z}) d \bar{z}. \label{Pro}
\end{eqnarray}
That is, we will look for the distribution $P(z,\bar{z})$, which: (1) upon integration over $\bar{z}$ reproduces the analytic continuation $\rho(z)$,
and (2) is positive and normalizable when expressed in terms of real and imaginary parts $x$ and $y$. Given that, we will have found
the positive representation for the LHS of (\ref{B3})
\begin{eqnarray}
\frac{\int_{\Gamma_z} f(z) \rho(z) dz}{\int_{\Gamma_z} \rho(z) dz} = \frac{\int_{R^2} f(x+iy) P(x,y) dx dy }{\int_{R^2} P(x,y) dx dy}.\nonumber
\end{eqnarray}
The integral on the RHS is over the whole $(x,y)$ plane (at least in the cases considered here), while the contours $\Gamma_z$ and $\Gamma_{\bar{z}}$
have to be within domains determined by the parameters of both distributions. For a range of parameters
the domain for $\Gamma_z$ contains the real axis, and then Eq.~(\ref{B1}) can be established.
It is shown below that this program can in fact be carried through quantitatively, at least in a few physically interesting cases, already providing
some novel results.
\section{Generalized gaussian model}
A positive distribution more general than (\ref{P}) can be derived if we start from a generic quadratic action for (\ref{Pz}) in two complex variables $z$ and $\bar{z}$
\begin{eqnarray}
S(z,\bar{z})&=& a^* z^2 + 2 b z \bar{z} + a \bar{z}^2, \nonumber
\end{eqnarray}
with an arbitrary complex $a=\alpha+i\beta$ and real $b=b^*$.
In terms of real and imaginary parts (\ref{xy})
\begin{eqnarray}
S(x,y)= 2(b+\alpha) x^2 + 4\beta x y + 2(b-\alpha) y^2\label{Sb},
\end{eqnarray}
and gives the positive and normalizable (for real $x$ and $y$) distribution
\begin{eqnarray}
P(x,y)= \exp{\left(-S(x,y)\right)}, \label{Pb}
\end{eqnarray}
provided $b > |a|$, since the two eigenvalues of (\ref{Sb}) are
\begin{eqnarray}
\lambda_{\pm}=2 (b\pm |a|). \nonumber
\end{eqnarray}
At the same time the normalization reads
\begin{eqnarray}
\int_{R^2} dx dy P(x,y) = \frac{\pi}{2\sqrt{b^2-|a|^2}}.\label{Pc}
\end{eqnarray}
On the other hand, integrating
\begin{eqnarray}
P(z,\bar{z})=\frac{i}{2}P(x,y)=\frac{i}{2}\exp{\left(-S(z,\bar{z})\right)}, \nonumber
\end{eqnarray}
as in (\ref{Pro}), gives
\begin{eqnarray}
\rho(z)=\int_{\Gamma_{\bar{z}}} P(z,\bar{z}) d \bar{z} =
\frac{1}{2}\sqrt{\frac{\pi}{-a}}\exp{\left( - s z^2\right)},\;\;\;s=\frac{|a|^2-b^2}{a}. \label{ip2}
\end{eqnarray}
which is properly normalized in view of (\ref{Pc}).
The contour $\Gamma_{\bar{z}}$ depends on the phase of the complex parameter $a$ and is chosen such that the integral converges. This choice also determines unambiguously the phase of $-a$.
With $a$ and $b$ parametrized by
\begin{eqnarray}
b=\frac{\sigma_R}{2}(1+r^2) , \;\;\;\; \alpha=-\frac{\sigma_R}{2} r^2 , \;\;\;\; \beta=\frac{\sigma_R}{2} r, \;\;\;\;\sigma_R>0, \nonumber
\end{eqnarray}
equations (\ref{Sb}) and (\ref{ip2}) reproduce the original gaussian model, i.e. (\ref{P}) and (\ref{S}) respectively. However (\ref{Sb}) gives a more general, positive and normalizable probability. In fact the generalized model (\ref{Sb}) realizes the positive representation of the gaussian (\ref{ip2}) for any complex value of the slope $s$, or equivalently any complex $a$ with $b > |a|$.
A complex gaussian, e.g. $e^{- s z^2}, s,z \in C$, is integrable only along a family of contours contained in a wedge specified by a phase
of $s$. However its moments can be analytically continued to any complex $s$. The point of (\ref{Sb},\ref{Pb}) is that it provides a positive and normalizable integral representation for these continuations at arbitrary complex $s$.
In other words: even though the complex density $\rho$ was derived, and is integrable, only along a particular family of contours for a given $a$,
the positive density $P(x,y)$ exists and is integrable for all $a\in C$.
It is a simple matter to check the equivalence of (\ref{Pb},\ref{Sb}) and (\ref{ip2}), e.g. by calculating the generating function (\ref{B5}) with both representations.
Here we illustrate this only for the second moment. In the matrix notation the action (\ref{Sb}) reads
\begin{eqnarray}
S(x,y)=X^T M X,\;\;\;X^T=(x,y). \nonumber
\end{eqnarray}
Therefore
\begin{eqnarray}
\langle (x+i y)^2 \rangle_P=\frac{1}{2}\left(M^{-1}_{11}-M^{-1}_{22} + 2 i M^{-1}_{12}\right)=
-\frac{1}{2}\frac{\alpha+i\beta}{b^2-|a|^2}, \nonumber
\end{eqnarray}
which indeed is identical to the average over the complex density (\ref{ip2})
\begin{eqnarray}
\langle z^2 \rangle_{\rho} = \frac{1}{2 s}. \nonumber
\end{eqnarray}
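This identity is trivial to confirm numerically (a schematic check for an
arbitrary admissible choice of $a$ and $b$):
\begin{verbatim}
import numpy as np

a, b = -0.4 + 0.9j, 1.2            # any complex a with b > |a|
M = np.array([[2*(b + a.real), 2*a.imag],
              [2*a.imag, 2*(b - a.real)]])
Mi = np.linalg.inv(M)
s = (abs(a)**2 - b**2)/a
print(0.5*(Mi[0, 0] - Mi[1, 1] + 2j*Mi[0, 1]))  # <(x+iy)^2> over P(x,y)
print(1/(2*s))                                  # <z^2> over rho(z)
\end{verbatim}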
To conclude this Section we discuss two interesting special cases.
For real and negative $s$, the complex density blows up along the real axis. On the other hand, the distribution $P(x,y)$ is positive and normalizable at $\alpha>0$ and $\beta=0$, producing the correct average over the ``divergent'' distribution $\rho$. This explains a ``striking example'' observed in the literature \cite{AS}, namely that, upon a change of variables, the complex Langevin simulation based on (\ref{P}) actually has the correct fixed point also for negative ${\cal R}e\;\sigma$.
Similarly, the complex density $\rho(z)$ for purely imaginary $s$ is readily represented by the positive distribution $P(x,y)$, which is perfectly well defined at $\alpha=0$ and arbitrary $\beta$, as long as $|\beta|<b$. This opens an exciting possibility of positive representations for Feynman path integrals directly in the Minkowski time. Such a construction is presented in detail in \cite{Ja}.
In both cases the original density (\ref{P}) does not exist.
\section{A quartic model}
Another possible solution is obtained if we start from the action
\begin{eqnarray}
S_4(z,\bar{z})=(a^* z^2+2 b z \bar{z} + a \bar{z}^2)(c^* z^2+2 d z \bar{z} + c \bar{z}^2), \nonumber
\end{eqnarray}
with complex $a$ and $c$ and real $b \gtrless |a|$ and $d \gtrless |c|$. The density $P(x,y)$ is again positive and normalizable on the $x,y$ plane.
To derive $\rho(z)$ introduce an arbitrary shift parameter $e$ and change the variables. This gives
\begin{eqnarray}
S_4(z,\bar{z}=u-e z)=A_0 z^4 + A_1 z^3 u + A_2 z^2 u^2 + A_3 z u^3 + A_4 u^4,\nonumber
\end{eqnarray}
with
\begin{eqnarray}
A_4&=& a c,\nonumber\\
A_3&=&2(a d + b c)- 4 a c e,\nonumber\\
A_2&=&a^* c + c^* a + 4 b d - 6 e ( a d + b c) + 6 a c e ^2,\nonumber\\
A_1&=&2(b c^* + a^* d) - 2 e (a^* c+ a c^*-4 b d) + 6 e^2 (b c + a d)- 4 e^3 a c,\nonumber\\
A_0&=&(a^* - 2 b e + a e^2 )(c^* - 2 d e + c e^2).\nonumber
\end{eqnarray}
Now choose $e$ such that $A_3=0$. The coefficients become
\begin{eqnarray}
A_4&=& a c,\nonumber\\
A_2&=&\frac{1}{2 a c}\left(2 a^2(|c|^2-d^2)+2 c^2(|a|^2 - b^2)-(a d - b c)^2\right),\nonumber\\
A_1&=&\frac{1}{a^2 c^2}\left( a d - b c \right)\left(a^2(|c|^2-d^2)-c^2(|a|^2-b^2) \right),\nonumber \\
A_0&=&\frac{1}{16 a^3 c^3}\left(4 c^2(|a|^2-b^2)+(a d - b c)^2\right)\left(4 a^2(|c|^2-d^2)+(a d - b c)^2\right).\nonumber
\end{eqnarray}
Then $A_1$ can be also eliminated setting
\begin{eqnarray}
c=\frac{d}{b} a, \nonumber
\end{eqnarray}
which essentially reduces $S_4(z,\bar{z})$ to a square. The remaining coefficients simplify to
\begin{eqnarray}
A_4&=&\frac{d}{b}a^2,\nonumber \\
A_2&=&2 \frac{d}{b} \left(|a|^2- b^2\right),\nonumber \\
A_0&=&\frac{d}{b} \left(|a|^2- b^2\right)^2 \frac{1}{a^2}.\nonumber
\end{eqnarray}
The complex density $\rho_4(z)$ can then be obtained in closed form as
\begin{eqnarray}
\rho_4(z)&=&\frac{i}{2}\int_{\Gamma_{\bar{z}}} d \bar{z} e^{- S_4(z,\bar{z})}\nonumber\\&=&\frac{i}{2}\exp{\left(- A_0 z^4\right) } \int_{\Gamma_u} d u \exp{\left(- A_4 u^4 - A_2 z^2 u^2\right)} \label{prop}\\
&=&\frac{i}{2}\left(\frac{b}{2 d a^2}\right)^\frac{1}{4} \exp{\left(-\sigma z^4\right) } \left(\sigma z^4 \right)^\frac{1}{4}
K_{\frac{1}{4}}\left(\sigma z^4\right),\nonumber
\end{eqnarray}
with an arbitrary complex
\begin{eqnarray}
\sigma=\frac{d (b^2-|a|^2)^2}{2 b a^2}.\nonumber
\end{eqnarray}
All contours (here and below) are such that the integrals exist. Basically one can choose straight lines with slopes determined by
the phase of $a$.
It is a simple exercise to show that normalization of both densities is the same:
\begin{eqnarray}
\int_{\Gamma_z} \rho_4(z) dz= \frac{i}{2}\left(\frac{b}{2 d a^2}\right)^\frac{1}{4}\int_{\Gamma_z} \exp{\left(-\sigma z^4\right) } \left(\sigma z^4 \right)^\frac{1}{4} K_{\frac{1}{4}}\left(\sigma z^4\right)=\nonumber\\
\frac{\pi^{\frac{3}{2}}}{4}\sqrt{\frac{b}{d(b^2-|a|^2)}}=\nonumber\\
\int_{R^2} dx dy e^{- 4 \frac{d}{b} \left( (b+\alpha)x^2 +2 \beta x y +(b - \alpha)y^2 \right)^2 }=\int_{R^2} dx dy P_4(x,y).\nonumber
\end{eqnarray}
The difference, however, is that while on the LHS the density $\rho$ is in general complex and the contour $\Gamma_z$ has to be adjusted depending on the phase of $\sigma$, the integral on the RHS is always over $R^2$ and the density $P_4(x,y)$ is positive and normalizable for all complex $\sigma$. The same applies to higher moments:
\begin{eqnarray}
\int_{\Gamma_z} z^n \rho_4(z) dz = \int_{R^2} dx dy (x+i y)^n P_4(x,y). \nonumber
\end{eqnarray}
In fact the construction works for a larger range of parameters than in the gaussian case, since the condition
$|a| < b$ can be relaxed.
The density (\ref{prop}) has the simple leading asymptotics
\begin{eqnarray}
\rho_4(z)\sim e^{\left(- 2 \sigma z^4 \right)} ,\;\;\; z \longrightarrow \infty, \nonumber
\end{eqnarray}
and therefore might be of some practical interest (e.g. in optimizing some reweighting algorithms). The main point of this example is, however, that the original idea, namely constructing positive representations by introducing a second variable, seems to be general and points towards the existence of some yet unexplored structures.
Obviously there is a lot of freedom in choosing an initial action. It remains to be seen to what extent this freedom allows one to derive complex densities of wider physical interest.
\section{Summary and outlook}
Simulating complex distributions with the complex Langevin technique still raises some theoretical doubts and practical difficulties.
These problems can be avoided by direct construction of the corresponding positive densities. It turns out that introducing
a second complex variable allows one to carry out this programme in practice, at least in the two cases discussed in this article.
One is the gaussian distribution with a complex inverse dispersion parameter $\sigma$. The corresponding positive density, in the case of positive $ {\cal R}e\; \sigma$, has been known for a long time. Within the present approach this classical result is generalized to arbitrary $\sigma \in C$. In particular the equivalent positive
and normalizable density exists for purely imaginary $\sigma$. Thereby an intriguing possibility of positive representations for Feynman path integrals, directly in the Minkowski time, emerges \cite{Ja}.
The second example deals with
the specific quartic action. Again the equivalent positive density is explicitly constructed and is valid in an even larger range of parameters than in the gaussian case.
Of course plenty of questions require further studies. First, we would like to invert the procedure followed here, namely instead of
deriving the complex density $\rho$ by integrating a positive ansatz for $P$, one really needs to derive the latter from the former. At first sight this may not have a unique answer. While this is not the main problem (any answer would do), the opposite question poses the real challenge: namely, whether, and under which conditions, $P$ exists at all. Also, it remains to be seen whether and how the positivity, normalizability and the integral sum rule (\ref{Pro}) allow one to determine $P$ in practice for more general cases.
Keeping these reservations in mind, a host of further extensions and applications suggests itself. To name a few: generalization to other nonlinear actions, compact integrals, many degrees of freedom, path integrals, field theory etc. We look forward to further studies of these issues.
\vspace*{.5cm}
\noindent{\em Acknowledgements} \ The existence of positive representations for complex gaussian and more general densities has been discussed in \cite{Sa,We}. The projection relation employed in \cite{Sa} bears some similarity to the condition (\ref{Pro}).
The author would like to thank Owe Philipsen for the discussion.
\section{Introduction}
Cooperative scattering by large collections of resonant atoms has
been studied extensively for many years adopting either a
classical or a quantum description. Often a quantum formalism is
more convenient to describe the atom-light interaction. For
instance, superradiant emission can be obtained when many
independent atoms interact resonantly with photons, as studied in
the seminal work by Dicke \cite{Dicke54}. However, many features
can well be described with classical models for the atoms with a
polarizability and for the light field. Many features of Dicke
superradiance can thus be explained using classical theory. As
atoms appear to be excellent systems to study any possible
deviation from classical many body features, it is of general
interest to understand and to monitor cooperative effects, even at
a classical level, in a cloud of atoms. New intriguing effects can
arise, when fluctuations due to the coupling with the vacuum modes
can no longer be described by a classical field approach and the
atoms can also become entangled during the cooperative scattering.
Such cooperative scattering with atomic ensembles can appear in a
number of experimental situations in free space \cite{Javanainen}
or in cavities \cite{Raimond}.
Collective spontaneous emission at the single-excitation level \cite{Dicke54} has also been studied in resonant nuclear scattering of synchrotron radiation \cite{Trammell99, Rohlsberger10} and has recently
received growing interest with the study of single photon
superradiance from $N$ two-level atoms prepared by the absorption
of a single photon \cite{Scully06,Eberly06,Svidzinsky08,Svi10}. It
has been shown that the photon is spontaneously emitted in the
same direction as the incident field with a cooperative decay rate
proportional to $N$ and inversely proportional to the size of the
atomic cloud \cite{Svidzinsky08}. In a series of theoretical
\cite{Courteille10} and experimental \cite{Bienaime10,Bender10}
studies, the authors have recently addressed the question of the
quasi-resonant interaction of light with clouds of cold atoms,
bridging the gap from single atom behavior, with granularity
effects due to the discrete nature of the atomic distribution, to
a `mean-field' regime, where a continuous density distribution is
the relevant description and leads to cooperative effects. In
these works, the authors describe the collective atomic response under
continuous excitation of a low intensity light field using an
effective Hamiltonian model valid in the linear regime. From it, the average radiation pressure
force and the scattered intensity have been derived, revealing the
modification of the single-atom values due to cooperativity \cite{Bienaime11}.
In this paper, we present an extended model based on a master
equation in the single-excitation approximation. Such model
provides a more complete description of the linear regime, useful
to investigate the distinctions between classical and quantum
features, even though a complete description of quantum
correlations in our system requires to take into account a larger
number of excitations \cite{Popescu,Bariani}. In particular, we
show that by a first-order perturbation expansion of the master
equation we recover the previously adopted linear model from
classical optics. We also derive an analytical expression showing
the cooperative suppression of atomic excitation and the related
fluorescence. This effect bears some common features with the
dipole blockade studied in Rydberg systems \cite{Tong, Singer,
Molmer}. In our case the long range $1/r$ dipole-dipole coupling
is at the origin of this important suppression even for dilute
clouds of cold atoms. We note that this suppression occurs in the
linear optics regime and is thus not related to a photon blockade.
\section{The master equation approach}
\subsection{The exact master equation for driven atoms}
The dynamics of a system of atoms driven by an external laser beam
and undergoing cooperative re-emission into vacuum modes (see
Fig. \ref{fig1}) can be described by a master equation approach
\cite{Agarwal,Das}. Let us consider a system of $N$ two-level
atoms with transition frequency $\omega_a$, positions
$\mathbf{r}_j$ and excited-state lifetime $1/\Gamma$. Each atom is
described by the spin half angular momentum algebra, with
$S^{i}_{-}=|g_i\rangle\langle e_i|$, $S^{i}_{+}=|e_i\rangle\langle
g_i|$, $S^{i}_{z}=|e_i\rangle\langle e_i|-|g_i\rangle\langle g_i|$
satisfying the commutation relations
$[S^{i}_{+},S^{j}_{-}]=\delta_{ij}S^{i}_{z}$ and
$[S^{i}_{\pm},S^{j}_{z}]=\mp 2\delta_{ij}S^{i}_{\pm}$.
The
interaction Hamiltonian is
\begin{eqnarray}\label{H}
H&=&\frac{\hbar\Omega_0}{2}\sum_{j=1}^N
\left(S_-^je^{i\Delta_0 t -i \mathbf{k}_0\cdot \mathbf{r}_j}+S_+^je^{-i\Delta_0 t +i \mathbf{k}_0\cdot
\mathbf{r}_j}\right)\nonumber\\
& & +\hbar\sum_{j=1}^N\sum_{\mathbf{k}}g_{\mathbf{k}}
\left(S_-^je^{i\omega_0 t}+S_+^je^{-i\omega_0 t}\right)
\left(a_{\mathbf{k}}e^{-i\omega_kt+i \mathbf{k}\cdot \mathbf{r}_j}+a_{\mathbf{k}}^\dagger
e^{i\omega_kt-i \mathbf{k}\cdot \mathbf{r}_j}\right)
\end{eqnarray}
where $\Omega_0=dE_0/\hbar$ is the Rabi frequency of the classical
incident field with amplitude $E_0$ and wave vector
$\mathbf{k}_0$, $d$ is the dipole matrix element,
$\Delta_0=\omega_0-\omega_a$ is the pump-atom detuning,
$a_{\mathbf{k}}$ is the photon annihilation operator with wave
number $\mathbf{k}$ and frequency $\omega_k=ck$, $g_{\mathbf{k}} =
d[\omega_k/(2\hbar\epsilon_0 V_{ph})]^{1/2}$ and $V_{ph}$ is the
quantization volume of the photon modes.
\begin{figure}[t]
\centerline{{\includegraphics[height=4cm]{fig1.pdf}}}
\caption{(color online) Experimental configuration: a cloud of
two-level atoms (a) is driven by an incident laser detuned by
$\Delta_0$ from the atomic resonance $\omega_a$, with wave vector
$\mathbf{k}_0$ (b).} \label{fig1}
\end{figure}
We have assumed the rotating wave approximation (RWA) in the first
term accounting for the interaction with the external laser, but
not in the second term: there, the coupling between atoms and
vacuum field modes is described in the scalar light approximation,
where near field and polarization effects are neglected, since we
are considering dilute clouds, with $N(\lambda/R)^3\ll 1$ ($R$ being the system size).
It has been shown in \cite{Agarwal} that the spontaneous emission
properties can be conveniently described by a reduced master
equation for the atomic system in the Born-Markov approximation,
given by
\begin{eqnarray}\label{ME}
\frac{d\rho}{dt}&=&-i\frac{\Omega_0}{2}\sum_{i}
\left[e^{i\Delta_0 t-i\mathbf{k}_0\cdot \mathbf{r}_i}S^{i}_{-}+e^{-i\Delta_0 t+i\mathbf{k}_0\cdot
\mathbf{r}_i}S^{i}_{+},\rho\right]\nonumber\\
& & - i\sum_{i}\sum_{j\neq
i}\Delta_{ij}[S^{i}_{+}S^{j}_{-},\rho]\nonumber\\
& & +\frac{1}{2}\sum_{i}\sum_{j}\gamma_{ij}
\left\{
2S^{j}_{-}\rho S^{i}_{+}-S^{i}_{+}S^{j}_{-}\rho-\rho S^{i}_{+}S^{j}_{-}
\right\},
\end{eqnarray}
where
\begin{equation}\label{deltaij}
\Delta_{ij}=-\frac{\Gamma}{2}\frac{\cos(k_0|\mathbf{r}_i-\mathbf{r}_j|)}{k_0|\mathbf{r}_i-\mathbf{r}_j|},
\end{equation}
\begin{equation}\label{gammaij}
\gamma_{ij}=\Gamma\frac{\sin(k_0|\mathbf{r}_i-\mathbf{r}_j|)}{k_0|\mathbf{r}_i-\mathbf{r}_j|},
\end{equation}
and $\Gamma = d^2k^3_0/(2\pi\hbar\epsilon_0)$.
The first term in the master equation (\ref{ME}) describes the
interaction with the external laser. The second term describes
the dipole-dipole interactions and arises from the virtual photon
exchange between pairs of atoms. It becomes especially important
at small interatomic distances; it is responsible for the
collective Lamb shift \cite{Friedberg,Scully09LS} and plays an
important role in subradiant emission \cite{Svi10,Fano,Cummings}.
Finally, the third term of Eq. (\ref{ME}) describes the
cooperative emission. The Markov approximation ignores retardation
effects and takes the long time limit, i.e. $t\gg 1/\omega_0$ and
$t\gg \max\{|\mathbf{r}_{i}-\mathbf{r}_j|/c\}$. Hence, it requires
that the system is not too large, so that a photon crosses the
atomic cloud in a time shorter than the characteristic
cooperative emission time.
The RWA adopted in the present model requires more
subtle arguments. As discussed in \cite{Agarwal}, when applied to
the second term of the Hamiltonian of Eq. (\ref{H}), the RWA consists
in ignoring anti-resonant terms like $a_{\mathbf{k}}^\dagger
S_+^j$ and $a_{\mathbf{k}} S_-^j$ which correspond to
simultaneous creation or annihilation of a photon and atomic
excitation (i.e. virtual transitions). These terms are responsible
for the shift of the ground state, which contributes to the terms
$\Delta_{ij}$ in Eq. (\ref{ME}). However, the master equation
(\ref{ME}) has been obtained in \cite{Agarwal} from the complete
Hamiltonian (\ref{H}) including the anti-resonant term, and making
the RWA only on the master equation, neglecting the rapidly
oscillating terms like $S_+^{j}\rho S_+^{i} e^{2i\omega_0 t}$.
Hence, Eq. (\ref{ME}) obtained by making the RWA on the master
equation rather than on the Hamiltonian does include the shift of
the ground state. These remarks make clear that RWA on the
Hamiltonian is not the same as RWA on the master equation and that
one should make the RWA on the final equations of motion. Since the
counter-rotating terms such as $S_+\rho S_+ e^{2i\omega_0 t}$ are not
important because $\Gamma\ll\omega_{0}$, Eq. (\ref{ME}) provides an
accurate description of the interaction, including both the
contributions of the real and virtual transitions to the frequency
shift.
\subsection{Single-excitation approximation}
The master equation (\ref{ME}) has been used to describe, in the
absence of the driving laser, the superradiant and subradiant
decays from excited atoms. For a system confined in a volume whose
size is much smaller than the radiation wavelength, and neglecting
the dipole-dipole interaction terms $\Delta_{ij}$, Dicke
\cite{Dicke54} introduced the angular momentum states
$|J,M\rangle=\textrm{Sym}\{|e\dots e;g\dots g\rangle\}$, with
$M=-J,\dots,J$. If the system is initially prepared in a symmetric
(superradiant) state (e.g. with $J=N/2$), it remains in the
symmetric subspace during the decay, since there is no coupling
between this state and the antisymmetric (subradiant) states. Note
however that the presence of dipole-dipole interactions may induce a
coupling with subradiant states \cite{Fano}. In our case, the
presence of the driving laser provides the atomic excitation, which
subsequently scatters and/or decays cooperatively. If the driving
field is sufficiently weak or largely detuned from the atomic
resonance, we can assume that the atomic system is weakly excited,
with at most one atom out of $N$ excited. If the atoms are
organized in a symmetric state, the decay rate is $N\Gamma$.
Recently, it has been shown that an extended system of size $R \gg
\lambda$ ($\lambda=2\pi/k_0$ is the light wavelength) containing a
single excited atom among $N$ and prepared in the timed Dicke
state \cite{Scully06, Trammell99, Scully09,Svidzinsky08}
\begin{equation}\label{TD}
|TD\rangle=\frac{1}{\sqrt{N}}\sum_{j=1}^N\exp(i\mathbf{k}_0\cdot
\mathbf{r}_j) \, |g_1,\dots,e_j,\dots,g_N\rangle,
\end{equation}
decays with a rate $\Gamma_N\propto N\Gamma/(k_0 R)^2$. Also, an external field $\Omega_0$
drives the system mainly into a steady state of the form
\cite{Courteille10,Bienaime10}
\begin{equation}\label{DTD}
|\Psi\rangle_N\approx |g_1,\dots,g_N\rangle + \frac{\sqrt{N}\Omega_0e^{-i\Delta_0 t}}{2\Delta_0+i\Gamma(1+b_0/12)} |TD\rangle,
\end{equation}
where $b_0=(3\lambda^2/2\pi)\int dz \, \rho(0,0,z)\propto
N/(k_0R)^2$ is the resonant optical thickness along the
propagation direction $z$ of the driving laser and
$\rho(\mathbf{r})$ is the atomic density. However, a small
fraction of the excitation is also transferred to the
subradiant states, as discussed in \cite{Bienaime12}.
The previous works studying cooperative scattering by weakly
excited atoms were based on an effective Hamiltonian model, which
led to a description equivalent to that of $N$ classical
linear dipoles driven by the external field
\cite{Courteille10,Bienaime11}. Here, we go beyond the linear
optics approximation using a master equation approach, still
restricted to a single excitation. This restriction is naturally
justified for a weak driving field, but may have a broader
validity, for instance for Rydberg atoms where some kind of
blockade mechanism is provided \cite{Gaetan}.
By projecting Eq. (\ref{ME}) on the ground state
$|G\rangle\equiv|g_1,\dots,g_N\rangle$ and on the single-excitation
states $|i\rangle\equiv|g_1,\dots,e_i,\dots,g_N\rangle$, neglecting the
states containing more than one excitation and defining
$\rho_{i,G}=\langle i|\rho|G\rangle\exp(i\Delta_0 t)$,
$\rho_{G,G}=\langle G|\rho|G\rangle$ and $\rho_{i,j}=\langle
i|\rho|j\rangle$, we obtain
\begin{eqnarray}
\frac{d\rho_{G,G}}{dt} &=& -i\frac{\Omega_0}{2}\sum_k
\left(e^{-i\mathbf{k}_0\cdot \mathbf{r}_k}\rho_{k,G}-e^{i\mathbf{k}_0\cdot \mathbf{r}_k}\rho_{G,k}\right)
+\sum_{k,l} \gamma_{kl}\rho_{l,k}\label{rho1},\\
\frac{d\rho_{i,G}}{dt} &=&
\left(i\Delta_0-\frac{\Gamma}{2}\right)\rho_{i,G}-i\frac{\Omega_0}{2}
\left(e^{i\mathbf{k}_0\cdot \mathbf{r}_i}\rho_{G,G}-\sum_k e^{i\mathbf{k}_0\cdot \mathbf{r}_k}\rho_{i,k}\right)
\nonumber \\
& & -\sum_{k\neq i}\left(\frac{\gamma_{ik}}{2}+i\Delta_{ik}\right)\rho_{k,G}
\label{rho2},\\
\frac{d\rho_{i,j}}{dt} &=& -i\frac{\Omega_0}{2}
\left(e^{i\mathbf{k}_0\cdot \mathbf{r}_i}\rho_{G,j}-e^{-i\mathbf{k}_0\cdot \mathbf{r}_j}\rho_{i,G}\right)
-i\sum_{k\neq i}\Delta_{ik}\rho_{k,j}+i\sum_{k\neq
j}\Delta_{kj}\rho_{i,k} \nonumber \\
& & -\frac{1}{2}\sum_{k}\left(
\gamma_{ik}\rho_{k,j}+\gamma_{kj}\rho_{i,k}\right).
\label{rho3}
\end{eqnarray}
We note that these equations still conserve the probability:
$\rho_{G,G}+\sum_i\rho_{i,i}=1$.
\subsection{Perturbative solution}
We now show that the effective Hamiltonian approach can be
obtained by a perturbative expansion of $\rho$ in terms of the
Rabi frequency $\Omega_0$ of the external field. Assuming
$\rho=\rho^{(0)}+\rho^{(1)}+\dots$, if initially the system is in
the ground state, the zero-order term yields $\rho_{G,G}^{(0)}=1$
and $\rho_{i,G}^{(0)}=\rho_{i,j}^{(0)}=0$, whereas at the first
order, Eqs. (\ref{rho1})-(\ref{rho3}) reduce to:
\begin{eqnarray}
\frac{d\rho_{G,G}^{(1)}}{dt} &=& \sum_{k,l}\gamma_{kl}\rho_{l,k}^{(1)}, \label{rho1:1}\\
\frac{d\rho_{i,G}^{(1)}}{dt} &=& \left(i\Delta_0-\frac{\Gamma}{2}\right)\rho_{i,G}^{(1)} -i\frac{\Omega_0}{2}e^{i\mathbf{k}_0\cdot \mathbf{r}_i}
-\sum_{k\neq i}\left(\frac{\gamma_{ik}}{2}+i\Delta_{ik}\right)\rho_{k,G}^{(1)}, \label{rho2:1}\\
\frac{d\rho_{i,j}^{(1)}}{dt} &=& -
i\sum_{k\neq i}\Delta_{ik}\rho_{k,j}^{(1)}+i\sum_{k\neq
j}\Delta_{kj}\rho_{i,k}^{(1)} -\frac{1}{2}\sum_k
\left(
\gamma_{ik}\rho_{k,j}^{(1)}+\rho_{i,k}^{(1)}\gamma_{kj}
\right)\label{rho3:1}.
\end{eqnarray}
Since $\rho_{i,j}(0)=0$, Eqs. (\ref{rho1:1}) and (\ref{rho3:1})
imply that at all times
$\rho_{i,j}^{(1)}(t)=\rho_{G,G}^{(1)}(t)=0$. The remaining Eq.
(\ref{rho2:1}) can be written defining $\beta_i=\rho_{i,G}^{(1)}$
and using Eqs. (\ref{deltaij}) and (\ref{gammaij}), as
\begin{eqnarray}
\frac{d\beta_i}{dt} &=& \left(i\Delta_0-\frac{\Gamma}{2}\right)\beta_i -i\frac{\Omega_0}{2}e^{i\mathbf{k}_0\cdot \mathbf{r}_i}
-\frac{\Gamma}{2}\sum_{k\neq i}\frac{\exp(ik_0|\mathbf{r}_i-\mathbf{r}_k|)}{ik_0|\mathbf{r}_i-\mathbf{r}_k|}\beta_k. \label{beta}
\end{eqnarray}
Furthermore, we observe that, since $\rho_{i,j}=\langle
i|\rho|G\rangle\langle G|\rho|j\rangle+\sum_k\langle
i|\rho|k\rangle\langle k|\rho|j\rangle$ for a pure state, the first non-zero order
for the dipole correlations corresponds to the second order in the field: $\rho_{i,j}^{(2)}=\rho_{i,G}^{(1)}\rho_{G,j}^{(1)}=\beta_i\beta_j^*$.
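As a concrete illustration (an addition for this presentation, not part of the original derivation), the steady state of Eq. (\ref{beta}) amounts to solving an $N\times N$ linear system. The following Python sketch does so with \texttt{numpy}, in units where $\Gamma=1$ and $k_0=1$; all parameter values are illustrative.
\begin{verbatim}
import numpy as np

def steady_state_beta(pos, Delta0, Omega0, Gamma=1.0, k0=1.0):
    """Steady state of Eq. (beta): [(i*Delta0 - Gamma/2) I - (Gamma/2) K] beta
    = i*(Omega0/2) exp(i k0 z).  Units: Gamma = k0 = 1 (our convention)."""
    N = len(pos)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    x = k0 * dist
    with np.errstate(divide='ignore', invalid='ignore'):
        K = np.exp(1j * x) / (1j * x)      # exp(i k0 r)/(i k0 r) kernel
    np.fill_diagonal(K, 0.0)               # the sum runs over k != i
    A = (1j * Delta0 - Gamma / 2) * np.eye(N) - (Gamma / 2) * K
    drive = 1j * (Omega0 / 2) * np.exp(1j * k0 * pos[:, 2])
    return np.linalg.solve(A, drive)

rng = np.random.default_rng(0)
pos = rng.normal(scale=15.0, size=(2000, 3))  # Gaussian cloud, k0*sigma_R = 15
beta = steady_state_beta(pos, Delta0=10.0, Omega0=0.01)
print("excited state population P =", np.sum(np.abs(beta)**2))
\end{verbatim}
The dense $N^2$ kernel construction is adequate for the system sizes considered here; larger clouds would call for iterative solvers.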
\subsection{Effective Hamiltonian}
Eq. (\ref{beta}) can be expressed in the form of a Schr\"odinger
equation
\begin{equation}\label{eqS}
i\hbar \frac{d|\Psi\rangle}{dt}=H_{\mathrm{eff}}|\Psi\rangle,
\end{equation}
where $|\Psi\rangle=\alpha|g\rangle+\sum_{i=1}^N\beta_i|i\rangle$
and the effective Hamiltonian is
\begin{equation}\label{Heff}
H_{\mathrm{eff}}=\frac{\hbar\Omega_0}{2}\sum_i\left(e^{-i\mathbf{k}_0\cdot \mathbf{r}_i}S^{i}_{-}+e^{i\mathbf{k}_0\cdot
\mathbf{r}_i}S^{i}_{+}\right)-\hbar\Delta_0\sum_i
S^{i}_{+}S^{i}_{-}-\frac{\hbar\Gamma}{2}\sum_{i,j}V_{ij}S^{i}_{+}S^{j}_{-},
\end{equation}
where
\begin{equation}\label{Vij}
V_{ij}=(1-\delta_{ij})\frac{\cos(k_0|\mathbf{r}_i-\mathbf{r}_j|)}{k_0|\mathbf{r}_i-\mathbf{r}_j|}+i
\frac{\sin(k_0|\mathbf{r}_i-\mathbf{r}_j|)}{k_0|\mathbf{r}_i-\mathbf{r}_j|}.
\end{equation}
Assuming that at low saturation the system is weakly excited, so
that $\alpha\approx 1$, the projection of Eq. (\ref{eqS}) over the
excited states $|i\rangle$ leads to Eq. (\ref{beta}). It has been
shown that this equation also describes the temporal evolution of
$N$ classical harmonic oscillators driven by the scalar radiation
field $E_0$, as predicted by classical linear optics
\cite{Svi10,BienaimePhD}. It is important to notice that, even if the
collective atomic state $|\Psi\rangle$ is entangled, the knowledge
of only the probability amplitudes $\beta_i$ obtained from the
linear equations (\ref{beta}) is not by itself sufficient to detect
entanglement, due to their classical nature. Conversely,
entanglement could be observable in the solution of the master
equation, even restricted to a single-excitation, since it
contains correlations between atoms, and in particular between the
ground state and the excited states.
Finally, we notice that the exact master equation (\ref{ME}) can
be written in terms of the effective Hamiltonian (\ref{Heff}),
using Eqs. (\ref{eqS}), (\ref{Heff}) and
$\rho=|\Psi\rangle\langle\Psi|$, as
\begin{equation}\label{rhoHeff}
\frac{d\rho}{dt}=\frac{1}{i\hbar}\left(
H_{\mathrm{eff}}\rho-\rho
H^{\dagger}_{\mathrm{eff}}\right)+\sum_i\sum_j\gamma_{ij}S_-^j\rho
S_+^i.
\end{equation}
The last term in Eq. (\ref{rhoHeff}) describes the refilling of the
ground state due to spontaneous decay of the excited states,
and is necessary to keep the trace of the
density operator equal to unity.
\subsection{Timed Dicke state} \label{TDsection}
Let us show how the steady state of Eq. (\ref{DTD}) provides an
approximate solution of the single-excitation master equation.
First, we observe that the external field couples the ground state
$|G\rangle$ to the timed Dicke state $|TD\rangle$ defined in
Eq. (\ref{TD}). The matrix elements involving the TD state are
\begin{eqnarray}
\rho_{TD,G}&=&\langle TD|\rho|G\rangle=\frac{1}{\sqrt{N}}\sum_{j}e^{-i\mathbf{k}_0\cdot
\mathbf{r}_j}\rho_{j,G}, \label{tdrho:1}\\
\rho_{TD,TD}&=&\langle
TD|\rho|TD\rangle=\frac{1}{N}\sum_{j}\sum_{m}
e^{-i\mathbf{k}_0\cdot
(\mathbf{r}_j-\mathbf{r}_m)}\rho_{j,m}. \label{tdrho:2}
\end{eqnarray}
Their temporal evolutions are obtained from
Eqs. (\ref{rho1})-(\ref{rho3}) as
\begin{eqnarray}
\frac{d\rho_{G,G}}{dt} &=& -i\frac{\sqrt{N}\Omega_0}{2}
\left(\rho_{TD,G}-\rho_{G,TD}\right)
+\sum_{k}\sum_{l} \gamma_{kl}\rho_{l,k}, \label{rho1:TD}\\
\frac{d\rho_{TD,G}}{dt} &=&
\left(i\Delta_0-\frac{\Gamma}{2}\right)\rho_{TD,G}-i\frac{\sqrt{N}\Omega_0}{2}
\left(\rho_{G,G}-\rho_{TD,TD}\right) \nonumber \\
& & -\frac{1}{\sqrt{N}}\sum_{i}e^{-i\mathbf{k}_0\cdot \mathbf{r}_i}\sum_{k\neq i}\left(\frac{\gamma_{ik}}{2}+i\Delta_{ik}\right)\rho_{k,G}, \label{rho:TD2}\\
\frac{d\rho_{TD,TD}}{dt} &=&
-i\frac{\sqrt{N}\Omega_0}{2}\left(\rho_{G,TD}-\rho_{TD,G}\right)\nonumber\\
& & +\frac{i}{N}\sum_{i}\sum_{j}e^{-i\mathbf{k}_0\cdot(\mathbf{r}_i-\mathbf{r}_j)}
\left(\sum_{k\neq j}\Delta_{kj}\rho_{i,k}-\sum_{k\neq
i}\Delta_{ik}\rho_{k,j}\right)\nonumber\\
& & - \frac{1}{2N}\sum_{i}\sum_{j}\sum_{k}e^{-i\mathbf{k}_0\cdot(\mathbf{r}_i-\mathbf{r}_j)}\left(
\gamma_{ik}\rho_{k,j}+\gamma_{kj}\rho_{i,k}\right).
\label{rho3:TD}
\end{eqnarray}
These equations show explicitly the coupling induced by the laser
between $|G\rangle$ and $|TD\rangle$. However, the dipole-dipole
interactions and the cooperative decay favor the coupling also to
all the other ``subradiant'' states $|s\rangle$, with
$s=1,\dots,N-1$, completing the single-excitation Hilbert
subspace. We make the assumption that all the matrix elements
involving these states $|s\rangle$ can be neglected, supposing that
their occupation probability is small compared to that of the timed
Dicke state. In practice, we assume
\begin{equation}\label{rhoig}
\rho_{i,G}=\langle i|TD\rangle\langle TD|\rho|G\rangle+\sum_s
\langle i|s\rangle\langle s|\rho|G\rangle\approx \frac{1}{\sqrt{N}}
e^{i\mathbf{k}_0\cdot\mathbf{r}_i}\rho_{TD,G},
\end{equation}
and
\begin{equation}\label{rhoij}
\rho_{i,j}=\langle i|TD\rangle\langle TD|\rho|TD\rangle\langle TD|j\rangle+\dots
\approx \frac{1}{N}
e^{i\mathbf{k}_0\cdot(\mathbf{r}_i-\mathbf{r}_j)}\rho_{TD,TD}.
\end{equation}
By substituting these expressions in
Eqs. (\ref{rho1:TD})-(\ref{rho3:TD}) we obtain
\begin{eqnarray}
\frac{d\rho_{G,G}}{dt} &=& -i\frac{\sqrt{N}\Omega_0}{2}
\left(\rho_{TD,G}-\rho_{G,TD}\right)
+\Gamma_N\rho_{TD,TD}, \label{rho1:TD2}\\
\frac{d\rho_{TD,G}}{dt} &=&
\left[i\Delta_N-\frac{\Gamma_N}{2}\right]\rho_{TD,G}-i\frac{\sqrt{N}\Omega_0}{2}
\left(\rho_{G,G}-\rho_{TD,TD}\right),
\label{rho2:TD2}\\
\frac{d\rho_{TD,TD}}{dt} &=&
-i\frac{\sqrt{N}\Omega_0}{2}\left(\rho_{G,TD}-\rho_{TD,G}\right)-\Gamma_N\rho_{TD,TD},
\label{rho3:TD2}
\end{eqnarray}
where $\Delta_N=\Delta_0-L_N$ and \cite{Bienaime11}
\begin{eqnarray}
\Gamma_N &=& \frac{1}{N}\sum_{i}\sum_{j}\gamma_{ij}e^{-i\mathbf{k}_0\cdot(\mathbf{r}_i-\mathbf{r}_j)}
=N\Gamma\langle |S_N(k_0,\theta,\phi)|^2\rangle_{\theta,\phi} \label{GammaN},\\
L_N &=&-\frac{1}{N}\sum_{i}\sum_{j\neq i}\Delta_{ij}e^{-i\mathbf{k}_0\cdot(\mathbf{r}_i-\mathbf{r}_j)}
=\frac{N\Gamma}{2\pi}\mathrm{P}\int_0^\infty d\kappa\frac{\kappa^3}{\kappa-1}\langle |S_N(k_0\kappa,\theta,\phi)|^2\rangle_{\theta,\phi},
\label{LN}
\end{eqnarray}
are the cooperative decay rate and the collective Lamb shift,
respectively. In Eqs. (\ref{GammaN}) and (\ref{LN}) we introduced
the structure function
\begin{equation}\label{SN}
S_N(\mathbf{k})=\frac{1}{N}\sum_{j=1}^N e^{i(\mathbf{k}-\mathbf{k}_0)\cdot \mathbf{r}_j},
\end{equation}
and the average is taken over the total solid angle of emission of
a photon with wave vector $\mathbf{k}$ at an angle $\theta$ with
$\mathbf{k}_0$, where $|\mathbf{k}|=k_0$. Finally, the integral
over $\kappa$ in Eq. (\ref{LN}) is evaluated as a principal part.
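The cooperative decay rate of Eq. (\ref{GammaN}) can be cross-checked by evaluating the double sum directly on a sampled cloud. The following minimal sketch (our illustration; units $\Gamma=1$, $k_0=1$) compares it with the Gaussian-cloud value $\Gamma_N=\Gamma(1+b_0/12)$ that appears in Eq. (\ref{DTD}); for a single realization of the positions the agreement holds only up to finite-$N$ fluctuations.
\begin{verbatim}
import numpy as np

def gamma_N(pos, Gamma=1.0, k0=1.0):
    """Cooperative decay rate of Eq. (GammaN) by direct double sum."""
    N = len(pos)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    g = Gamma * np.sinc(k0 * dist / np.pi)  # sin(x)/x, equal to Gamma at i = j
    phase = np.exp(-1j * k0 * (pos[:, None, 2] - pos[None, :, 2]))
    return np.sum(g * phase).real / N

rng = np.random.default_rng(1)
N, sigma_R = 2000, 10.0                     # k0*sigma_R = 10, hence b0 = 60
pos = rng.normal(scale=sigma_R, size=(N, 3))
b0 = 3 * N / sigma_R**2
print(gamma_N(pos), 1 + b0 / 12)            # should be close for this cloud
\end{verbatim}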
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[height=5cm]{fig2.pdf}
\end{tabular}
\caption{(color online) Left: Ratio of the excited state population
computed numerically and in the timed Dicke approximation
$P/P_{TD}$. Right: Standard deviation
$\sigma_\beta$. The cloud is Gaussian with $k_0
\sigma_R = 15$. The contour plots show the iso-optical thickness
lines $b(\Delta_0)=b_0/(1+4 \Delta_0^2 / \Gamma^2)$ where $b_0 = 3
N / (k_0 \sigma_R)^2$. This color-coded plot helps visualize the
timed Dicke approximation validity region: $b(\Delta_0) < 1$. }
\label{fig2}
\end{figure}
Eqs. (\ref{rho1:TD2})-(\ref{rho3:TD2}) have the form of
optical Bloch equations for a two-level system with collective
states $|G\rangle$ and $|TD\rangle$, interacting with a collective
Rabi frequency $\sqrt{N}\Omega_0$, detuning $\Delta_N$ and
linewidth $\Gamma_N$. The steady-state solution is given by
\begin{eqnarray}
\rho_{TD,G}^{s} &=& \frac{1}{1+s_c}\left(\frac{\sqrt{N}\Omega_0}{2\Delta_N+i\Gamma_N}\right),\\
\rho_{TD,TD}^{s} &=& \frac{1}{2}\left(\frac{s_c}{1+s_c}\right),
\end{eqnarray}
where
\begin{equation}\label{sc}
s_c=\frac{2N\Omega_0^2}{4\Delta_N^2+\Gamma_N^2},
\end{equation}
is the collective saturation parameter.
We recover the previous linear result of Ref.\cite{Courteille10}
in the limit $s_c\ll 1$, with a steady-state amplitude of the
excited state
\begin{equation}\label{betaTD}
\beta_i\approx
\left(\frac{\Omega_0}{2\Delta_N+i\Gamma_N}\right)e^{i\mathbf{k}_0\cdot\mathbf{r}_i}
=\frac{\beta_{TD}}{\sqrt{N}}e^{i\mathbf{k}_0\cdot\mathbf{r}_i}.
\end{equation}
We have numerically verified that the timed Dicke ansatz of Eqs.
(\ref{rhoig}) and (\ref{rhoij}), or equivalently Eq.
(\ref{betaTD}) in the linear regime, is a good approximation of
the system when $b(\Delta_0) < 1$, where
$b(\Delta_0)=b_0/[1+(2\Delta_0/\Gamma)^2]$ is the optical
thickness. Figure \ref{fig2} shows a contour plot of the ratio
$P/P_{TD}$ ($P=\sum_j|\beta_j|^2$ and
$P_{TD}=|\beta_{TD}|^2$, where $\beta_j$ and $\beta_{TD}$ have been obtained from the numerical solution of Eqs.(\ref{beta}) and from (\ref{betaTD}) respectively) as a function of $N$ and
$\Delta_0/\Gamma$, for a Gaussian spherical distribution with
parameter $k_0\sigma_R=15$. For all values of $b(\Delta_0) < 1$,
the excited state population is well described by the timed Dicke
ansatz and yields a ratio $P/P_{TD}\sim 1$. This result is
confirmed by the analysis of the standard deviation $\sigma_\beta=\sqrt{\langle|\tilde\beta|^2\rangle-|\langle\tilde\beta\rangle|^2}/|\langle\tilde\beta\rangle|$
of $\tilde\beta=\beta
e^{-i\mathbf{k}_0\cdot\mathbf{r}}$ (see the right panel of Fig.~\ref{fig2}, where the contour plot of $\sigma_\beta$ is also shown). $\sigma_\beta$ quantifies the
deviation of the excitation field from the `mean-field'
timed Dicke state: this deviation becomes significant when $b(\Delta_0)$ is larger than unity. Fig.~\ref{fig2} (right) suggests that for large optical thickness the homogeneous approximation assumed in the timed Dicke ansatz (see Eq. (\ref{betaTD})) is no longer valid and a better description is needed.
\subsection{Beyond the timed Dicke state approximation} \label{BTDsection}
To account for the non-uniformity of the excitation within the cloud, the field $\beta$ can be decomposed as a sum of waves that describe its spatial fluctuations. This approach has been considered
in Ref.~\cite{Bachelard11}, where Eq. (\ref{beta}) has been solved
analytically for a continuous distribution with a Gaussian
spherical profile, neglecting the cosine part of the exponential
kernel and hence the associated collective Lamb shift $\Delta_{ij}$.
Although the use of such a truncated kernel is not allowed when
the decay of the excitation is observed \cite{Manassah12}, it may
still provide a reasonable approximation for small density and low
optical thickness $b(\Delta_0)$. The stationary excitation
amplitude obtained in Ref.~\cite{Bachelard11} using a partial wave
expansion is
\begin{equation}\label{betaBTD}
\beta(\mathbf{r})=\Omega_0\sum_{n=0}^\infty\frac{i^n(2n+1)}{2\Delta_0+i\Gamma(1+\lambda_n)}j_n(k_0r)P_n(\cos\theta),
\end{equation}
where $j_n$ are the spherical Bessel functions, $P_n$
the Legendre polynomials, $\theta$ the angle with respect to the
laser wave vector $\mathbf{k}_0$ and
$\lambda_n=N(\pi/2\sigma^2)^{1/2}I_{n+1/2}(\sigma^2)\exp(-\sigma^2/2)$,
where $\sigma=k_0\sigma_R$ and $\sigma_R$ is the rms width of the
Gaussian distribution. Eq. (\ref{betaBTD}) accounts for the
non-homogeneity of the excitation probability density
$|\beta(\mathbf{r})|^2$. In this context, the timed Dicke expression (\ref{betaTD})
appears as a `mean-field' approximation, which can be recovered assuming
$\Gamma(1+\lambda_n)\sim \Gamma_N$ in Eq. (\ref{betaBTD}). Going further, an
exact solution for the continuous-density limit of Eq.
(\ref{beta}) with the full exponential kernel has been derived using
Mie theory, although it is mathematically more demanding
\cite{Bachelard12}. In that case, the excitation of an atom is
calculated properly, including both the incident field and the
phase-shifted radiation from the other atoms.
\section{Observables}
\subsection{Force on center of mass}
Cooperative effects can be investigated by a direct detection of
the scattered photons. However, this measurement can be in general
difficult, since for an extended atomic system the emission is
strongly forward directed and the detector can be saturated by the
incident laser. The scattered radiation field detected at distance
$\mathbf{r}$ and time $t$ is the sum of the single fields
scattered by the $N$ atoms of position $\mathbf{r}_j$,
\begin{equation}\label{ER}
E(\mathbf{r},t)=\frac{d k_0^2}{2i\epsilon_0}\sum_{j=1}^N
\frac{e^{-i\omega_0(t-|\mathbf{r}-\mathbf{r}_j|/c)}}{|\mathbf{r}-\mathbf{r}_j|}
S_-^j(t).
\end{equation}
In the far field limit, $|\mathbf{r}-\mathbf{r}_j|\approx
r-(\mathbf{r}\cdot \mathbf{r}_j)/r$ and
\begin{equation}\label{E4}
E(\mathbf{r},t)\approx\frac{d k_0^2}{2i\epsilon_0 r}e^{-i\omega_0(t-r/c)}\sum_{j=1}^N
e^{-i\mathbf{k}\cdot \mathbf{r}_j}
S_-^j(t),
\end{equation}
where $\mathbf{k}=k_0(\mathbf{r}/r)$ and $\omega_0=ck_0$, so that
the average intensity is
\begin{equation}\label{Isca}
I(\mathbf{r},t)=\epsilon_0 c\langle E^\dagger(\mathbf{r},t)E(\mathbf{r},t)\rangle=
\left(\frac{d^2\omega_0^4}{16\pi^2\epsilon_0 c^3 r^2}\right)\sum_{j}\sum_{m}
e^{-i\mathbf{k}\cdot(\mathbf{r}_j-\mathbf{r}_m)}\rho_{j,m}(t).
\end{equation}
Alternatively, it is easier to detect the cooperative
effects by considering the radiation pressure force exerted on the
atoms. If the atoms are sufficiently cold, it is experimentally
possible to measure the atomic motion after their exposure to
the incident laser beam. The radiation pressure force acting on
the $j$th-atom is $\hat{\mathbf{F}}_j=-\nabla_{\mathbf{r}_j}
H=\hat{\mathbf{F}}_{aj}+\hat{\mathbf{F}}_{ej}$ where
\cite{Courteille10,Bienaime11}
\begin{eqnarray}
{\mathbf{F}}_{aj}&=& i\hbar \mathbf{k}_0\frac{\Omega_0}{2}
\left\{e^{i\Delta_0t-i\mathbf{k}_0\cdot \mathbf{r}_j}S_-^j-e^{-i\Delta_0t+i\mathbf{k}_0\cdot \mathbf{r}_j}S_+^j\right\},\label{Force-abs}\\
{\mathbf{F}}_{ej} &=&{\mathbf{F}}_{ej}^{(\mathrm{self})}
-\frac{\hbar k_0\Gamma}{2}\sum_{m=1}^N
\frac{\hat{\mathbf{r}}_{jm}}{(k_0 r_{jm})^2}
\left\{S_+^j S_-^m(1-ik_0r_{jm})e^{ik_0r_{jm}}+h.c.\right\},
\label{Force-emi}
\end{eqnarray}
where $r_{jm}=|\mathbf{r}_j-\mathbf{r}_m|=|\mathbf{r}_{jm}|$ and
$\hat{\mathbf{r}}_{jm}=\mathbf{r}_{jm}/r_{jm}$.
${\mathbf{F}}_{aj}$ and ${\mathbf{F}}_{ej}$ result from the recoil
received upon absorption of a photon from the pump and from the
emission of a photon into a direction $\mathbf{k}$, respectively.
The emission force $\mathbf{F}_{ej}$ acting on the $j$th-atom has
two contributions: a self-force
${\mathbf{F}}_{ej}^{(\mathrm{self})}=-\hbar\Gamma\sum_{|\mathbf{k}|=k_0}\mathbf{k}S_+^j
S_-^j$ due to its own photon emission, and a contribution
accounting for the coupling between the $j$th-atom and all the
other atoms. Note that the dipole-dipole interactions can occur
via a coupling to common vacuum modes of radiation. The
interference terms in the total scattered field can leave a
fingerprint on the forces acting on the atoms inside the cloud.
This force has a term decreasing as $1/r_{jm}$ and one decreasing
as $1/r_{jm}^2$. Their average values on the single-excitation atomic states are
\begin{eqnarray}
\langle{\mathbf{F}}_{a}^j\rangle&=& i\hbar \mathbf{k}_0\frac{\Omega_0}{2}
\left\{e^{-i\mathbf{k}_0\cdot \mathbf{r}_j}\rho_{j,G}-\textrm{c.c.}\right\}, \label{Force-abs2}\\
\langle{\mathbf{F}}_{e}^j\rangle &=&
-\frac{\hbar k_0\Gamma}{2}\sum_{m=1}^N
\frac{\hat{\mathbf{r}}_{jm}}{(k_0 r_{jm})^2}
\left\{\rho_{j,m}(1-ik_0r_{jm})e^{ik_0r_{jm}}+h.c.\right\}.
\label{Force-emi2}
\end{eqnarray}
Notice that the self-force averages to zero since the emission is
isotropic. The force on the center of mass of the atomic cloud,
$\langle{\mathbf{F}}\rangle=(1/N)\sum_j
\langle{\mathbf{F}}_j\rangle$, is of particular interest. From
Eqs. (\ref{Force-abs2}) and (\ref{Force-emi2}), its component along
the $z$ axis of incidence of the laser is
\begin{eqnarray}
\langle{\mathbf{F}}_{z}\rangle&=& \frac{\hbar k_0}{N}\left\{
\Omega_0\sum_j \textrm{Im}[\exp(i\mathbf{k}_0\cdot
\mathbf{r}_j)\rho_{j,G}^*]-\Gamma\sum_{j,m}\hat z_{jm}j_1(k_0r_{jm})\textrm{Im}(\rho_{j,m})\right\},
\label{Force-CM}
\end{eqnarray}
where $j_1(z)=\sin(z)/z^2-\cos(z)/z$ is the first order spherical
Bessel function and $\hat{z}_{jm}=(z_j-z_m)/r_{jm}$. In the
timed-Dicke limit, using the approximations (\ref{rhoig}), (\ref{rhoij}) and neglecting saturation, Eq. (\ref{Force-CM}) becomes~\cite{Bienaime11}
\begin{eqnarray}
\langle{\mathbf{F}}_{z}\rangle&=& \hbar k_0\Gamma
\frac{\Omega_0^2}{4\Delta_N^2+\Gamma_N^2}N\left\langle (1-\cos\theta)|S_N(\mathbf{k})|^2\right\rangle_{\theta,\phi}.
\label{Force-TD}
\end{eqnarray}
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[height=3.0cm]{fig3.pdf}
\end{tabular}
\caption{(color online) Ratio of the cooperative to independent
radiation pressure force as a function of the atom number $N$ and
detuning $\Delta_0$. The cloud is Gaussian with $k_0 \sigma_R =
10$. The contour plots show the iso-optical thickness lines
$b(\Delta_0)=b_0/(1+4 \Delta_0^2 / \Gamma^2)$ where $b_0 = 3 N /
(k_0 \sigma_R)^2$. The left panel shows the numerical results
computed from the effective Hamiltonian Eq. (\ref{Heff}) and Eq.
(\ref{Force-CM}), the central panel the analytical formula
Eq. (\ref{Force_Analytical}) and the right panel the modal expansion (\ref{ForcePRA}).} \label{fig3}
\end{figure}
The radiation pressure force can be influenced by different
effects. On the one hand, the finite extent of the atomic cloud can
produce strongly forward oriented scattering. Momentum balance
between the incident and scattered photons and the atoms
indicates that for forward emission the net recoil imprinted onto
the atoms vanishes, resulting in a reduction of the radiation force. A
different contribution to the reduction of the radiation force can
be seen in the prefactor of Eq. (\ref{Force-TD}), which would
appear even in the case of isotropic scattering (i.e. when
$\langle \mathbf{F}_e\rangle=0$). The importance of this prefactor
can be understood from the cooperative coupling of several atoms
into the same vacuum mode. The number of available modes for large
spherical clouds can be estimated by $N_m \sim (k_0 R)^2$ (where
$R$ is the cloud's size), resulting in a number of atoms per mode
scaling as $N/N_m \sim N/(k_0 R)^2$. This scaling is conveniently
related to the on-resonant optical thickness of the atomic cloud
$b_0$. For a Gaussian density distribution with root mean square
size $\sigma_R$, $b_0=3N/(k_0 \sigma_R)^2$,
$N\langle|S_N|^2\rangle_{\theta,\phi}=1+b_0/12$ and
$N\langle(1-\cos\theta)|S_N|^2\rangle_{\theta,\phi}=1+b_0/[24(k_0\sigma_R)^2]$
\cite{Courteille10}.
It is convenient to compare the cooperative radiation pressure force to the force acting on a single independent atom,
$F_{\text{ind}} = \hbar k_0 \Gamma \Omega_0^2 / (\Gamma^2 + 4\Delta_0^2)$. The ratio of the cooperative radiation pressure
force for a Gaussian cloud to the single-atom force in the timed Dicke limit can be written as
(neglecting the collective Lamb shift)
\begin{equation}
\frac{F_z}{F_{\text{ind}}} = \frac{4 \Delta_0^2 + \Gamma^2}{4 \Delta_0^2 + \left(1 + \frac{b_0}{12} \right)^2 \Gamma^2}
\left[1 + \frac{b_0}{24 (k_0 \sigma_R)^2 } \right]. \label{Force_Analytical}
\end{equation}
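For a quick numerical reading of Eq. (\ref{Force_Analytical}), the short sketch below (illustrative parameter values, units $\Gamma=1$) evaluates the force ratio for an increasing atom number at fixed cloud size, showing the suppression visible in Fig. \ref{fig3}.
\begin{verbatim}
import numpy as np

def force_ratio_TD(Delta0, b0, k0_sigmaR, Gamma=1.0):
    """Timed Dicke force ratio of Eq. (Force_Analytical)."""
    lorentz = (4 * Delta0**2 + Gamma**2) / \
              (4 * Delta0**2 + (1 + b0 / 12)**2 * Gamma**2)
    return lorentz * (1 + b0 / (24 * k0_sigmaR**2))

k0_sigmaR = 10.0                            # as in Fig. 3
for N in (100, 1000, 10000):
    b0 = 3 * N / k0_sigmaR**2
    print(N, force_ratio_TD(Delta0=1.0, b0=b0, k0_sigmaR=k0_sigmaR))
\end{verbatim}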
One can also use the partial wave expansion to account for the inhomogeneity of the field $\beta$, in which case the force ratio reads~\cite{Bachelard11}:
\begin{equation}
\frac{F_z}{F_{\text{ind}}} =\frac{\Gamma^2+\Delta_0^2}{N}
\sum_{n=0}^\infty
\left(\frac{(2n+1)\lambda_n(1+\lambda_n)}{4\Delta_0^2+\Gamma^2(1+\lambda_n)^2}
-\frac{(2n+2)\lambda_n\lambda_{n+1}[4\Delta_0^2+\Gamma^2(1+\lambda_n)(1+\lambda_{n+1})]}
{[4\Delta_0^2+\Gamma^2(1+\lambda_n)^2][4\Delta_0^2+\Gamma^2(1+\lambda_{n+1})^2]}\right).\label{ForcePRA}
\end{equation}
Fig. \ref{fig3} shows the ratio of the cooperative to independent radiation pressure force as a function of the atom
number $N$ and detuning $\Delta_0$ for a Gaussian cloud with root mean square size $k_0 \sigma_R = 10$.
The left panel shows the numerical results computed from the effective Hamiltonian Eq. (\ref{Heff}) and Eq. (\ref{Force-CM}), the
central one the timed-Dicke formula Eq. (\ref{Force_Analytical}) and the right one the partial wave expansion Eq. (\ref{ForcePRA}). While all methods predict a significant reduction of the force ratio for small detuning and large atom number, the partial wave approach yields better agreement with the numerical results for $\Delta_0 < \Gamma$ and $N>1000$, despite the fact that the present partial wave approach is limited to a sine kernel, not fully accounting for virtual-transition induced phase shifts~\cite{Manassah12}.
\subsection{Dicke subradiance}
\begin{figure}[t]
\centerline{{\includegraphics[height=12cm]{fig4.pdf}}}
\caption{(color online) Energies and decay rates of the modes of the system obtained by computing the eigenvalues of the effective Hamiltonian
$H_{\text{eff}}$ for Gaussian clouds with $N = 2000$ atoms and $k_0 \sigma_R = 10, 30, 100$ ($b_0 = 60, 6.7, 0.6$) given respectively by the blue,
red and green curves. The denser the system is, the larger the energy and decay rate distributions are.
The continuous straight lines show the timed Dicke state decay rates $\Gamma_N$ for the three different system sizes.
For dilute clouds (green line), the timed Dicke emission rate $\Gamma_N$ is centered on the distribution
$P(\Gamma)$ and $\Gamma_N \simeq \Gamma$. When the cloud optical density increases (blue line),
the timed Dicke decay rate tends to the tail of the distribution and $\Gamma_N \simeq \Gamma_{\text{max}}$.} \label{fig4}
\end{figure}
Dicke subradiance is the counterpart of superradiant emission and corresponds to the partial trapping of light due to destructive interference.
In a subradiant state, the atomic dipoles are arranged such that the macroscopic polarization of the cloud is small, reducing the emission rate
of the system. Subradiant emission has been previously observed for two ions \cite{Brewer96} and also for the emission of a cloud of $N$ atoms
in free space into a single radiation mode \cite{Pillet85}. In a recent paper \cite{Bienaime12}, we showed that the system presented above is
ideal for observing, for the first time, long photon storage in metastable subradiant states of $N$ atoms in free space.
In section \ref{TDsection}, we saw that the laser pumps the system from the ground state $|G\rangle$ into the timed Dicke state $|TD\rangle$.
Then the dipole-dipole interaction terms couple the timed Dicke state to the other states of the system. Some of these states
$|\psi_{\text{super}}\rangle$ have short lifetimes $\Gamma_{\text{super}} > \Gamma$ and are thus called \emph{superradiant}. Some other
states $|\psi_{\text{sub}}\rangle$ have long lifetimes $\Gamma_{\text{sub}} < \Gamma$ and are called \emph{subradiant}.
As the effective Hamiltonian is non-Hermitian, its eigenstates are not orthogonal and have common features with autoionizing states or Fano
resonances \cite{Fano}. Fig. \ref{fig4} shows the mode energies and decay rates for three different system sizes.
It shows that the denser the system is, the larger the energy and decay rate distributions are. Fig. \ref{fig4}
also shows the timed Dicke decay rate $\Gamma_N$, to compare it to the decay rate distribution $P(\Gamma)$ of the system modes.
For dilute clouds, the timed Dicke emission rate $\Gamma_N$ is centered on the distribution $P(\Gamma)$ and $\Gamma_N \simeq \Gamma$,
whereas when the cloud optical density increases, the timed Dicke decay rate tends to the tail of the distribution and
$\Gamma_N \simeq \Gamma_{\text{max}}$.
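The spectrum of Fig. \ref{fig4} can be reproduced in outline by diagonalizing the non-Hermitian coupling matrix that underlies Eq. (\ref{beta}). In the following sketch (our illustration; units $\Gamma=1$, $k_0=1$), a mode with eigenvalue $\lambda_n$ has population decay rate $\Gamma_n=-2\,\mathrm{Re}\,\lambda_n$.
\begin{verbatim}
import numpy as np

def mode_decay_rates(pos, Gamma=1.0, k0=1.0):
    """Decay rates of the N single-excitation modes of the undriven
    coupled-dipole matrix of Eq. (beta)."""
    x = k0 * np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    with np.errstate(divide='ignore', invalid='ignore'):
        K = np.exp(1j * x) / (1j * x)
    np.fill_diagonal(K, 1.0)               # single-atom limit on the diagonal
    lam = np.linalg.eigvals(-(Gamma / 2) * K)
    return -2.0 * lam.real                 # population decay rates Gamma_n

rng = np.random.default_rng(2)
pos = rng.normal(scale=10.0, size=(2000, 3))  # N = 2000, k0*sigma_R = 10
rates = mode_decay_rates(pos)
print("min/max decay rate (units of Gamma):", rates.min(), rates.max())
\end{verbatim}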
After driving the system with the laser for a long time, which allows populating subradiant states using the scheme sketched in Fig. \ref{fig5},
we monitor the decay of the system by looking at the excited state population $P(t)$ after switching off the laser. A typical decay curve of
the excited state population, computed from the numerical solution of the effective Hamiltonian approach, is shown in Fig. \ref{fig6}.
\begin{figure}[t]
\centerline{{\includegraphics[height=2.5cm]{fig5.pdf}}}
\caption{(color online) Sketch of the subradiant emission couplings.} \label{fig5}
\end{figure}
This figure shows the excitation probability $P(t)=\sum_j|\beta_j|^2$ as a function of time (black solid line), obtained by integrating
Eq. (\ref{beta}) for $N=2000$ atoms distributed according to a Gaussian distribution with $k_0\sigma_R=10$. The other parameters are $\Omega_0=0.01\Gamma$
and $\Delta_0=10 \, \Gamma$, and the laser is switched off after $t=50 \, \Gamma^{-1}$. The origin of time corresponds to the
moment when the laser is switched off. Under the action of the continuous laser excitation, the atoms reach a quasi-stationary state close to the
timed Dicke state. The small subradiant fraction present in the atomic state after the exposure to the laser can be detected by observing the
excitation decay after the laser has been switched off. The fast initial decay rate of the superradiant state is $\Gamma_N = (1 + b_0/12) \Gamma$,
as expected since the steady state corresponds approximately to the timed Dicke state. After some time, the emission rate falls well below the
single atom emission rate (black
dotted line in Fig. \ref{fig6}). This corresponds to the subradiant emission region. At first, the subradiant decay is not exponential, since
several modes decay simultaneously. At longer times, it ends up as a pure exponential decay, referred to as the subradiant decay rate,
when only one long-lived mode dominates \cite{BienaimePhD,Bienaime12}, as shown in the red part of the decay curve of Fig. \ref{fig7}.
We have checked numerically the very intuitive result that the subradiant decay rate measured on the relaxation curves $P(t)$ corresponds
to the longest lifetime of the effective Hamiltonian modes \cite{BienaimePhD}. This confirms the role of cooperativity in long-lived
excitations in the cloud, as investigated in Ref.~\cite{Akkermanns08}.
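A decay curve of the type shown in Figs. \ref{fig6} and \ref{fig7} can be sketched by propagating the free (undriven) evolution of Eq. (\ref{beta}) through an eigendecomposition; the detuning only contributes a global phase and is dropped. This is again an illustration (units $\Gamma=1$, $k_0=1$), with a smaller $N$ to keep the dense diagonalization cheap.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N, sigma_R = 500, 10.0
pos = rng.normal(scale=sigma_R, size=(N, 3))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
with np.errstate(divide='ignore', invalid='ignore'):
    K = np.exp(1j * dist) / (1j * dist)
np.fill_diagonal(K, 1.0)
A = -0.5 * K                                # undriven part of Eq. (beta)
lam, V = np.linalg.eig(A)
c = np.linalg.solve(V, np.exp(1j * pos[:, 2]))  # timed-Dicke-like initial state
for t in (0.0, 1.0, 10.0, 100.0):           # times in units of 1/Gamma
    beta_t = V @ (np.exp(lam * t) * c)
    print(t, np.sum(np.abs(beta_t)**2) / N) # P(t)/P(0): fast, then slow decay
\end{verbatim}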
\begin{figure}[t]
\centerline{{\includegraphics[height=5cm]{fig6.pdf}}}
\caption{(color online) Excited state population $P=\sum_j |\beta_j|^2$ decay after switching off the laser (the initial state corresponds
to the steady state). The black dashed curve shows the single atom decay (without cooperative effects). At first, the fast decay corresponds
to superradiance with a rate $\Gamma_N = (1 + b_0/12) \Gamma$. After some time, part of the light remains trapped in the cloud, which
corresponds to subradiant emission. Parameters for the simulation: $N = 2000$, $k_0 \sigma_R = 10$, $\Omega_0 = 0.01 \, \Gamma$,
$\Delta_0 = 10 \, \Gamma$ (the laser was on during $50 \, \Gamma^{-1}$ beforehand to let the system reach the steady state).
The superradiant emission diagram (blue curve) is strongly forward directed. The emission diagram of the subradiant states (red curve)
is isotropic. The green curve shows the subradiant emission diagram averaged over eight realizations of disorder.}
\label{fig6}
\end{figure}
Using Eq. (\ref{Isca}) from the previous section, we can study the emission diagram of the system.
Fig. \ref{fig6} shows the emission diagram as a function of time during the decay of the cloud.
At first, the emission diagram of the timed Dicke state is clearly forward directed, a phenomenon reminiscent of Mie scattering.
At longer times, subradiant modes show isotropic emission diagrams: they do not possess the symmetry of the laser excitation since
they are not directly coupled to it. This property can be exploited in the experimental detection of subradiance.
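In the factorized limit $\rho_{j,m}=\beta_j\beta_m^*$ of the linear regime, the angular sum in Eq. (\ref{Isca}) reduces to $|\sum_j e^{-i\mathbf{k}\cdot\mathbf{r}_j}\beta_j|^2$, which makes the forward lobe easy to exhibit numerically. A sketch (our illustration; units $k_0=1$, the overall dimensional prefactor is dropped and timed Dicke phases are assumed for $\beta_j$):
\begin{verbatim}
import numpy as np

def emission_diagram(pos, beta, thetas, k0=1.0):
    """Relative far-field intensity vs polar angle, azimuthally averaged."""
    phis = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
    out = []
    for th in thetas:
        k = k0 * np.stack([np.sin(th) * np.cos(phis),
                           np.sin(th) * np.sin(phis),
                           np.cos(th) * np.ones_like(phis)], axis=-1)
        amp = np.exp(-1j * k @ pos.T) @ beta   # sum_j e^{-i k.r_j} beta_j
        out.append(np.mean(np.abs(amp)**2))
    return np.array(out)

rng = np.random.default_rng(4)
pos = rng.normal(scale=10.0, size=(1000, 3))
beta_td = np.exp(1j * pos[:, 2]) / np.sqrt(len(pos))  # timed Dicke phases
I = emission_diagram(pos, beta_td, np.linspace(0.0, np.pi, 90))
print("forward/backward intensity ratio:", I[0] / I[-1])
\end{verbatim}
For the timed Dicke phases the forward value scales as $N$, while the backward emission stays of order one, in line with the strongly forward directed superradiant diagram described above.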
In Ref.~\cite{Bienaime12}, we proposed to use inhomogeneous broadening schemes such as the cloud optical thickness, the cloud
temperature, or the driving laser intensity as possible control parameters for subradiance. However, other parameters, such as a far
detuned speckle field, magnetic fields, or near field couplings, can also be used. By control of subradiance, we mean two different things:
controlling the population of the subradiant states as well as their decay rates. Exploiting these inhomogeneous broadening schemes allows
one to control and tune the dipole-dipole couplings, which are the genuine interaction leading to cooperative effects (superradiance, subradiance,
cooperative Lamb shift). This would allow for the first observation of subradiant emission from a cloud of atoms in free space.
\begin{figure}[t]
\centerline{{\includegraphics[height=5cm]{fig7.pdf}}}
\caption{(color online) Excited state population as a function of time for the same parameters as Fig. \ref{fig6} ($N = 2000$,
$k_0 \sigma_R = 10$, $\Omega_0 = 0.01 \, \Gamma$, $\Delta_0 = 10 \, \Gamma$). The laser is switched off at $t=0$.
The curve, together with Fig. \ref{fig6}, shows that the subradiant decay is purely exponential only after a certain
amount of time (red part of the curve), when the mode with the longest lifetime dominates. We call this final decay
rate the subradiant decay rate. In this example, $\Gamma_{\text{sub}} = 8.5 \times 10^{-3} \, \Gamma$.}
\label{fig7}
\end{figure}
\subsection{Dipole-dipole induced suppression of excitation}
In the perturbative limit, where each dipole is driven by
the external field plus a small perturbation by the field
scattered by all other dipoles, we can obtain an analytical solution
for the excited state population and the angle-resolved
scattered field.
From Eq. (\ref{Isca}) we obtain the steady-state scattered
intensity in the direction $(\theta,\phi)$ for the timed Dicke
state, still neglecting saturation:
\begin{equation}\label{IscaTD}
I(r,\theta,\phi)=\left(\frac{I_0}{16\pi^2 k_0^2 r^2}\right)\frac{\Gamma^2 N^2|S_N(\theta,\phi)|^2}{4\Delta_N^2+\Gamma_N^2},
\end{equation}
and the total scattered power
\begin{equation}\label{PscaTD}
P_{s}=r^2 \int_0^{2\pi}d\phi\int_0^\pi d\theta\sin\theta \, I(r,\theta,\phi)
=P_0\frac{N\Gamma\Gamma_N}{4\Delta_N^2+\Gamma_N^2},
\end{equation}
where $I_0$ is the incident
intensity and $P_0=I_0/(4\pi k_0^2)$. In Fig. \ref{fig8}, we plot the normalized excited state population (blue solid line in the left panel) as a function of $b_0$ for $\Delta_0=100\Gamma$,
\begin{equation}\label{Pnorm}
\frac{P}{NP^{(1)}}=\frac{4\Delta_0^2+\Gamma^2}{4\Delta_N^2+\Gamma_N^2}\approx
\frac{4\Delta_0^2+\Gamma^2}{4\Delta_0^2+\Gamma^2(1+b_0/12)^2},
\end{equation}
and the normalized total scattered power (blue solid line in the right panel)
\begin{equation}\label{Psnorm}
\frac{P_s}{NP_0}\approx
\frac{\Gamma^2}{4\Delta_0^2+\Gamma^2(1+b_0/12)^2}(1+b_0/12),
\end{equation}
where $P^{(1)}=\Omega_0^2/(4\Delta_0^2+\Gamma^2)$ is the
single-atom excitation probability and we have neglected the
collective Lamb shift, $\Delta_N\approx \Delta_0$. We observe that,
as the optical thickness $b_0$ increases, the excited state population
decreases, whereas the total scattered power has a maximum around
$b_0\sim 24(\Delta_0/\Gamma)$. The normalized excited state population and total scattered power resulting from the partial wave solution (\ref{betaBTD}) in the continuous-density
approximation, obtained in Ref.~\cite{Bachelard11} for large Gaussian clouds,
\begin{equation}\label{PRA1}
\frac{P}{NP^{(1)}}=
\frac{4\Delta_0^2+\Gamma^2}{\Delta_0\Gamma(b_0/3)}\arctan\left[\frac{\Delta_0\Gamma(b_0/3)}{4\Delta_0^2+\Gamma^2(1+b_0/6)}\right],
\end{equation}
and
\begin{equation}\label{PRA2}
\frac{P_s}{NP_0}=
\frac{3}{b_0}\ln\left[1+\frac{\Gamma^2
b_0}{3}\frac{1+b_0/12}{4\Delta_0^2+\Gamma^2}\right],
\end{equation}
are also plotted in Fig.\ref{fig8} (red dashed lines): they show a
deviation from the timed Dicke approximation for very large
resonant optical thickness $b_0$.
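The curves of Fig. \ref{fig8} follow from evaluating the closed-form expressions above. The short sketch below (illustrative, units $\Gamma=1$) makes the deviation between the timed Dicke and partial wave predictions for the excited state population at large $b_0$ explicit.
\begin{verbatim}
import numpy as np

Gamma, Delta0 = 1.0, 100.0
b0 = np.array([1.0, 10.0, 100.0, 1000.0])

# Eq. (Pnorm), timed Dicke, and Eq. (PRA1), partial wave expansion:
P_TD = (4*Delta0**2 + Gamma**2) / (4*Delta0**2 + Gamma**2*(1 + b0/12)**2)
P_PW = (4*Delta0**2 + Gamma**2) / (Delta0*Gamma*b0/3) * \
       np.arctan(Delta0*Gamma*(b0/3) / (4*Delta0**2 + Gamma**2*(1 + b0/6)))
print(np.c_[b0, P_TD, P_PW])    # the deviation grows with b0
\end{verbatim}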
\begin{figure}[t]
\centering{\includegraphics[width=\linewidth, height=5cm]{fig8.pdf}}
\caption{(color online) Normalized excited state population (left) and normalized
total scattered power (right) as a function of the optical thickness $b_0$ for $\Delta_0=100 \, \Gamma$.
The blue solid lines correspond to Eqs. (\ref{Pnorm}), (\ref{Psnorm}) and the red dashed lines to Eqs. (\ref{PRA1}), (\ref{PRA2}).} \label{fig8}
\end{figure}
From these curves we can see that even for dilute clouds of cold
atoms, the long range coupling between the dipoles leads to a
cooperative modification of the excitation of the atoms and of the
related total scattered power. Note that the total scattered power initially
increases with increasing number of atoms, despite the decrease in
the normalized total population of the excited state. This can be
understood by the fact that cooperativity leads to enhanced
superradiant emission rates. For larger numbers of atoms, the
suppression of the atomic excitation dominates the enhanced
superradiant emission and the total scattering rate of the large
cloud of atoms is reduced by the dipole-dipole couplings. We
stress that even though the signature of such a suppression of
fluorescence of the cloud of $N$ atoms might bear a resemblance
to a photon blockade regime \cite{Tong, Singer, Molmer}, our
model does not take into account the optical nonlinearities required
to describe such photon-photon coupling. A suppression of
excitation can thus be obtained in the absence of a nonlinear
optical response. In contrast to dipole blockade effects in
Rydberg states with near-field ($1/r^3$) or van der Waals
($1/r^6$) couplings, we are in the presence of long-range
dipole-dipole couplings where all atoms participate in the
suppression of the atomic excitation, not only a small volume
around an excited atom.
\section{Conclusion}
In this paper, we have presented a master equation and an effective Hamiltonian approach to describe
cooperative effects in clouds of cold atoms. The master equation approach, even though still restricted
to a single excitation, allows one to go beyond the effective Hamiltonian approach. In particular, we have highlighted the possibility of cooperative suppression of the atomic excitation, via the long range dipole-dipole couplings and in the absence of any non-linear photon blockade mechanism. Future work will include the possibility of experimental observation of Dicke subradiance, long range dipole-dipole blockade and cooperative effects beyond linear optics.
\begin{acknowledgement}
Funding from IRSES project COSCALI and from USP/COFECUB is acknowledged.
\end{acknowledgement}
\section{Introduction}
\label{sec:intro}
The term ``dark matter'' was originally coined because of the need to explain the observed rotation curves of stars in galaxies and the peculiar velocities
of galaxies within clusters. The~velocities of these large gravitational systems were measured to be incompatible with the expectations
based on Newtonian mechanics and the visible amount of matter in the system~\cite{Zwicky:1933gu,Bosma:1981zz,Sofue:2000jx}. These observations seemed to provide indirect evidence for the
existence of either a~non-luminous matter component in the universe, or~a deviation from standard Newtonian mechanics as we use it to try
to make sense of the universe at large scales~\cite{Castiblanco:2019mgb}. Additional matter is also needed to understand how
the first galaxies could have formed from the small density perturbations imprinted in the cosmic microwave background (CMB)~\cite{Cyburt:2015mya} and to describe the large-scale
structure of the universe as seen today~\cite{Planelles:2014zaa,Vogelsberger:2019ynw}. Observations of gravitational lensing added to the list of evidence for the need of
dark matter~\cite{Ellis:2010aaa}.
While proposed modifications of gravity and other recent approaches~\cite{Famaey:2011kh,Deur:2017aas,Christodoulou:2015qna,Khoury:2014tka} do not need dark matter at all, in~this review we summarize the current experimental efforts
aimed at probing indirectly the most popular ``solution'' to the dark matter problem: The particle~solution.
The particle solution to the dark matter problem is based on the assumption that stable (or enough long-lived) relic particles exist, whose present density is
determined by the thermal history of the early universe~\cite{Steigman:2012nb}. Incidentally, models needed to explain certain shortcomings of the Standard Model of particle
physics do provide viable candidates for dark matter in the form of new weakly-interacting massive particles (WIMPs) in the GeV-$\mathcal{O}$(100~TeV) mass range.
If they are thermally produced in the early universe, the~upper mass limit for WIMPs arises from theoretical arguments in order to preserve unitarity~\cite{Blum:2014dca}.
There is practically no lower limit on the mass of the dark matter candidates as long as they can perform their role of being abundant enough to solve
the problem and they do not overclose the universe~\cite{Leane:2018kjk}. Higher masses than $\mathcal{O}$(100~TeV) and/or stronger interactions can be accommodated in models
where the dark matter candidates are not produced thermally~\cite{Baer:2014eja}. So the dark matter zoo encompasses a~wide class of particles with a~wide range of interactions
as illustrated in Figure~\ref{fig:candidates}. From~a theoretical point of view, the~connection between a~cosmological problem (the need for additional matter at cosmological
scales) and the possibility of explaining it with particle physics ideas is extremely~compelling.
\begin{figure}[t]
\centering\includegraphics[width=0.55\linewidth]{figures/candidates.pdf}
\caption{Popular candidates of dark matter indicating their mass and strength of interaction with ordinary matter. The~red, pink and blue colors represent
hot dark matter (HDM), warm dark matter (WDM) and cold dark matter (CDM) candidates, respectively. Reprinted from~\cite{Baer:2014eja} with permission from Elsevier.}
\label{fig:candidates}
\end{figure}
The term ``indirect'' appertaining to dark matter searches refers to those methods based on detecting the products of the annihilation or
decays of dark matter candidates gravitationally accumulated in cosmological objects, in~contrast with ``direct'' detection techniques
where the interactions of dark matter particles from the local halo with a~suitable target are searched for~\cite{Schumann:2019eaa}.
Still, these two techniques rely on detecting the dark matter that surrounds us. Usually not included in ``indirect'' searches is the use of early universe
probes (cosmic microwave background power spectrum, 21-cm astronomy and nucleosynthesis) to set constraints to the dark matter content and characteristics. There is no
more indirect way of searching for something than using signatures that might have occurred billions of years ago, so we have included a~short description of dark matter
searches using those probes at the end of this review.
Additionally, accelerator searches try to produce new
particles in the controlled conditions of collider or fixed target interactions, which could also be identified as candidates for dark matter~\cite{Buchmueller:2017qhf,Kahlhoefer:2017dnp,Bondi:2020xnc}. In~any case, the~problem remains to identify and characterize this new form of matter, if~this is
indeed the way Nature chose to~function.
Any review about dark matter, be it about direct or indirect searches, faces at least two, I think insurmountable, problems: To be sufficiently up to date in a~rapidly
evolving field, and~to give due credit to previous work by the innumerable scientists that have contributed to the field. I confess already here that I have given up in trying to solve these
two problems. Instead, I have tried to provide a~personal choice of topics and mention some general problems faced in the field of indirect dark matter searches
that I think are relevant and that sometimes can have a~bearing on the interpretation of results (the choice of halo model or the consideration of particle physics or astrophysical uncertainties
for example). This means that I have not followed a~historical approach (there are excellent reviews from that perspective~\cite{Sanders:2010aaa,Bertone:2016nfn}), nor have I tried to be complete in describing the
theoretical models being usually, or~less usually, considered (there are also excellent reviews on that~\cite{Profumo:2019ujg,Feng:2010gw,Bertone:2004pz}), or~tried to be comprehensive in mentioning the
results of every single experiment that exists in the field. I~have also favoured citing published results over preprints or conference proceedings when possible, sometimes~at the expense of not pointing the reader to the very latest results. With~these caveats in mind, I hope this review is of use to some~readers.
\section{Dark matter in galactic halos}
\label{sec:halo}
A viable particle candidate for dark matter must be stable on timescales comparable with the age of the universe and have interactions with ordinary matter such that it does
not affect the evolution of the universe in a~way that leads to a~large-scale structure incompatible with observations today. It must also be neutral,
otherwise star formation and evolution would have been disturbed in ways incompatible with observations~\cite{Gould:1989gw}, and~the formation of ``dark atoms''
in the interstellar medium should have already been revealed in spectroscopy studies of heavenly objects~\cite{Basdevant:1989fh}. However,~micro-charges, about
a~million times smaller than the charge of the electron, can still be accommodated if only a~few percent of the dark matter carries such a~charge~\cite{Munoz:2018pzp}.
Aside from these generic constraints, there is still plenty of room for dark matter candidates with characteristics that avoid these limitations. And~there are plenty of
theoretical models that can accommodate them. Even~though WIMPs are popular candidates because straightforward extensions of the Standard Model,
like many flavours of super-symmetry, predict them~\cite{Jungman:1995df}, extremely light dark matter in the
form of axion-like particles (ALPs)~\cite{Pargner:2019wxt} or super-heavy dark matter~\cite{Chung:1998zb,Chung:1998ua} is also well justified. We will therefore not use the term WIMP in this
review since many of the experimental results summarized later can be reinterpreted in terms of more general dark matter candidates. See~\cite{Palomares-Ruiz:2020ytu,Plehn:2017fdg,Khlopov:2018ttr,Zhang:2015ffa,Baer:2014eja}
and references therein for recent reviews of candidates and production scenarios beyond the vanilla WIMP~miracle.
\begin{figure}[t]
\centering\includegraphics[width=0.7\linewidth]{figures/DMprofiles04.pdf}
\caption{Commonly used Milky Way dark matter density profiles, including an~isothermal model (Iso). See the
reference below for details on the parameters used to obtain each curve. Here, it suffices to note that the choice of one profile or another can have important consequences for the
interpretation of experimental results. The~distributions are normalized to a~local density $\rho_{\odot}=0.3$ GeV/cm$^3$ in the solar neighborhood,
at $r_{\odot}=8.3$ kpc from the Galactic Centre. Figure from~\cite{Cirelli:2010xx}, reproduced by permission of IOP Publishing. Copyright SISSA Medialab Srl. All rights reserved.
}
\label{fig:DMprofiles}
\end{figure}
Accumulations of dark matter are expected in halos around galaxies, including the Milky Way, and~in galaxy clusters.
In order to detect signatures from these regions of high dark matter density, the~dark matter particles need to be able to annihilate with each other, or~decay.
It is the stable standard model particles that result from these processes that constitute the potential signature for indirect searches.
These can be neutrinos, photons, electrons, protons or even light nuclei like He and D (and their antiparticles).
The distribution of dark matter in the halos of galaxies is an~important ingredient both in calculating expected observables and eventually in
interpreting a~signal~\cite{Salucci:2018hqu,Zavala:2019gpq}. We know from N-body simulations of galaxy formation that galaxies are embedded in clumpy, rather complex dark matter halos
that extend well beyond the visible galaxy~\cite{Springel:2005nw}. There have been several proposals to describe the dark matter density distribution in galactic halos as a~function of
the distance from the centre of the galaxy~\cite{Gunn:1977aa,Navarro:1995iw,Einasto:1965czb,Salucci:2000ps,Moore:1999gc,Kravtsov:1997dp}. The~common feature of these profiles is a~denser,
typically spherically symmetric, region of dark matter in the centre of the galaxy, with~decreasing density as the radial distance to the centre increases.
Figure~\ref{fig:DMprofiles} shows a~few examples of these profiles.
Note that the slope of the density distribution varies with the distance, deviating from initial calculations in~\cite{Gunn:1977aa} that predicted a~single power law distribution
across the whole distance range.
Where the different models diverge is in the predicted shape of the central region. Profiles obtained from N-body simulations of galaxy formation and
evolution with a~collisionless dark matter contribution tend to predict a~steep power-law type behaviour of the dark matter component in the central region~\cite{Navarro:1995iw,Moore:1999gc}, while
profiles based on observational data (stellar velocity fields) tend to favour a~more constant dark matter density near the core~\cite{Einasto:1965czb,Salucci:2000ps,Kravtsov:1997dp,Burkert:1995yz}.
This discrepancy has been dubbed the core-cusp problem~\cite{deBlok:2009sp}, and~it is an~unresolved issue. A~recent fit to the motion of stars in the disk of the Milky Way~\cite{Benito:2019ngh}
gives inconclusive evidence for a~cored or cusped profile, reflecting on the observational difficulty in accurate mapping the dark matter distribution even of our galaxy. There are proposals
within the dark matter paradigm that can ameliorate the problem, or~even solve it, based on self-interacting dark matter models~\cite{Spergel:1999mh,Rocha:2012jg,Peter:2012jh,Vogelsberger:2012ku,Bernal:2019uqr}. Self-interactions among dark matter particles
while dark matter halos form will heat up the dark matter component, avoiding the otherwise strong collapse that forms cusped halos. The~strength of the interactions (and therefore the mean free path of the
dark matter particles between interactions) can be chosen to reproduce observations, including avoiding undesirable effects in early universe cosmology. Particle physics models describing the
type of interactions that are allowed can be built under these simple assumptions.
However, this solution still suffers from the fact that no dark matter candidates have been identified. A~less exotic solution to the cusp problem could arise from the fact that one can actually
describe the inner dark core from kinematic measurements of the visible inner region and global characteristics of the galaxy, like its virial mass~\cite{Salucci:2007tm}. This provides a
measurement of the size of the dark matter core which is directly correlated to the luminous counterpart and that does not show any cuspy behaviour. A~more radical claim is the recent study
in~\cite{Baushev:2018ptw} proposing that the cusp
obtained in N-body simulations represents a~numerical issue of the simulations themselves that do not reflect reality. Lacking further confirmation of the claims of the authors, their results
can be taken to illustrate the difficulties and limitations involved in performing N-body simulations, even~with today's powerful~computers.
Following~\cite{Hernquist:1990be}, a~general parametrization of the dark halo
in galaxies can be written as follows (reference~\cite{Hernquist:1990be} is not concerned with dark matter, but~with the density distribution of visible mass in
spherical galaxies; Equation~(\ref{eq:profiles}) here is an~adaptation of Equation~(43) in that paper):
\begin{equation}
\rho_{DM}(r)\,=\,\frac{\rho_0}{\left ( \delta + \frac{r}{r_s} \right )^\gamma \cdot \left ( 1 + (\frac{r}{r_s})^\alpha \right )^{(\beta-\gamma)/\alpha} }
\label{eq:profiles}
\end{equation}
where $r$ is the distance from the Galactic Centre, the~parameters $\alpha$, $\beta$ and $\gamma$ determine the shape of the halo and $\rho_0$ is a~normalization constant
depending on the model, i.e.,~depending on the choice of $\alpha$, $\beta$ and $\gamma$. Different combinations of these parameters can recover the halo profiles
mentioned above.
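As a~minimal illustration, the~Python sketch below evaluates Equation~(\ref{eq:profiles}) for a~cusped (NFW-like) and a~cored choice of parameters; the~normalization and scale radius are illustrative assumptions for the example, not fits to data.
\begin{verbatim}
import numpy as np

def rho_dm(r, rho0=0.4, rs=20.0, alpha=1.0, beta=3.0,
           gamma=1.0, delta=0.0):
    """General halo profile of Eq. (profiles);
    rho0 in GeV/cm^3 (illustrative), r and rs in kpc."""
    x = r / rs
    return rho0 / ((delta + x)**gamma
                   * (1.0 + x**alpha)**((beta - gamma) / alpha))

r = np.logspace(-2, 2, 200)   # 0.01 to 100 kpc
rho_cusped = rho_dm(r, alpha=1, beta=3, gamma=1, delta=0)  # NFW-like
rho_cored  = rho_dm(r, alpha=2, beta=3, gamma=0, delta=1)  # cored example
\end{verbatim}
The cusped choice diverges as $r^{-1}$ towards the centre, while the cored one flattens there, reproducing the qualitative behaviour discussed above.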
These profiles describe the smooth distribution of dark matter around galaxies. Any substructure (clumpiness) must be added on top in order to describe the outcome of N-body
simulations~\cite{Taylor:2002zd,Ando:2019xlm}. The~effect of clumpiness is usually parametrized as a~multiplicative boost factor to the expected signal
from annihilations without substructure~\cite{Moline:2016pbm,Penarrubia:2017nzw} and it is, by~necessity, an~average of the enhancement given by simulations of a~population of~halos.
Figure~\ref{fig:DMprofiles} shows that the different proposed halo models agree on the dark matter distribution at the location of the Solar
System (about 8 kpc from the Galactic Centre) and beyond, but~the predicted dark matter density distribution diverges considerably at distances near
the centre of the galaxy~\cite{Weber:2009pt}. This is of course also true when considering other galaxies and assuming the same type of profiles apply universally at any redshift.
However, universality might not be completely justified, and~the shape of a~dark halo can depend on local characteristics of any given galaxy. There~are indications that halo profiles
might depend both on the size of the galaxy and the structure of its surroundings, since the original mass function of nearby satellites will affect the accretion of additional dark matter to
the halo, causing the inner slope of the dark matter profile to deviate from a~universal function like the ones in Equation~(\ref{eq:profiles})~\cite{Ricotti:2002qu}. Indeed, recent results from
the Gaia mission point to a~more complex dark matter halo in the Milky Way than just a~spherical distribution. Gaia will map the position and radial velocity of about one billion stars,
providing an~unprecedented 3-dimensional picture of our galaxy and invaluable data to precisely map the dark matter halo for the first time.
Gaia's data release 2, along with data from the Sloan Digital Sky Survey (SDSS), have already provided indications
that our dark matter halo has a~non-trivial structure, with~localized substructures due to the recent history of our Galaxy that might not yet be in equilibrium. Actually, the~Gaia
results seem to be in tension with the characteristics of the dark halo derived from previous stellar data~\cite{Cautun:2019eaf,Buch:2018qdr,Necib:2018iwb,OHare:2019qxc,Banik:2019cza}.
The first implication of these results is that the Milky Way halo is more complex than initially thought, with~local substructure that now can be mapped. This can have implications for
direct dark matter searches whose expected signal depends on the assumed dark matter density in the solar~neighborhood.
However, in the bigger picture, the~Gaia results point to a~dark matter halo which
is strongly dependent on the specific history of the galaxy, determined by past accretion events from nearby satellite galaxies. If~dark matter halos strongly depend on the individual
history of each galaxy, the~assumption of universality of the dark matter profiles breaks down, with~consequences for the interpretation of indirect dark matter searches as well.
In particular there are two related aspects to take into account: The integrated rate of supernova explosions through the evolutionary history of a~galaxy and the role of baryon feedback
in the star formation cycle of the galaxy~\cite{Dekek:1986gu,Ogiya:2012jq,Weinberg:2013aya,DelPopolo:2015nda,Salucci:2018hqu}. The~ejecta from supernova explosions can heat the surrounding
interstellar gas, injecting energy into it to a~point that further star formation due to gravitational collapse significantly slows down, or~even stops. As~the affected gas expands and cools down, gravity draws it back
towards the center of the galaxy, reinitiating the star formation process and eventually leading to more supernova explosions. For~large galaxies, with~a large amount of gas, the~cycle
from star formation to explosion to re-ignition of star formation can be quite fast. During~these processes the baryonic mass distribution of the galaxy fluctuates, inducing a~change
in the gravitational potential of the gas which can have an~effect on the dark matter distribution. In~particular, this process could solve the core-cusp problem, since the net effect
on the dark matter component is to redistribute the dark matter away from the galactic center, converting an~initially cusped profile into a~cored profile. The~rate of supernova ejecta and
the strength of baryon feedback, and~their effect on the dark matter distribution of a~galaxy, depend on the size, age, evolution~and surroundings of the galaxy, which support the idea
that universal halo functions are just a~first approximation to the structure of a~galaxy, but~that individual characteristics need to be taken into account. This can affect the calculations
of $J$-factors discussed~below.
Barring the caveats just mentioned, given a~specific dark matter halo, the~flux of cosmic-rays, gamma-rays or neutrinos arising from dark matter annihilations in a~given object can
be expressed as
\begin{equation}
\frac{dN_i}{dA dt dE_i}\,=\,\frac{d\Phi_i}{dE_i}\,=\,\frac{1}{4\pi}\,\frac{\left < \sigma_{\rm A} v \right >}{2 m^2_{\rm DM}}\,\frac{dN_i}{dE_i} \times \int_{\Omega} \int_{l.o.s.} \rho^2_{DM}(r) dr d\Omega
\label{eq:indirect_galaxy}
\end{equation}
where $i$ stands for the type of particle, $\left < \sigma_{\rm A} v \right >$ is the thermally averaged product of the dark matter self-annihilation cross-section
times the dark matter velocity, $dN_i/dE_i$ is the spectrum of species $i$ produced by the annihilations, $\rho_{DM}$ is the dark matter density of the source, given
by Equation~(\ref{eq:profiles}) and $m_{\rm DM}$ is the dark matter mass. The~integral of the squared density along the line of sight (l.o.s.) is the so-called $J$-factor. As~argued above,
the $J$-factor is source-specific, and~encompasses our knowledge of the structure of the dark halo of the source in consideration. The~$J$-factor depends on the
halo profile chosen as well as on the distance to the source, and~results from indirect dark matter searches must be given under the assumption of a~specific halo model. Clearly,
distances need to be estimated independently, which for far away objects can add to the final uncertainty of the result.
$J$-factors are measured in $\mathrm{GeV^2\,cm^{-5}\,sr}$.
As Figure~\ref{fig:DMprofiles} illustrates, using one parametrization of $\rho_{DM}$ or another can give results that differ by as much as orders of~magnitude.
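To make the procedure concrete, the~sketch below numerically evaluates the line-of-sight integral that defines the $J$-factor for an~NFW-like profile, towards a~direction at angle $\psi$ off the Galactic Centre; the~solar distance, halo parameters and integration cut-offs are assumptions chosen for illustration.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

KPC_CM = 3.086e21    # kpc in cm
R_SUN  = 8.0         # solar galactocentric distance in kpc (assumption)

def rho_nfw(r, rho0=0.4, rs=20.0):
    """NFW density in GeV/cm^3, r in kpc; normalization illustrative."""
    x = r / rs
    return rho0 / (x * (1.0 + x)**2)

def j_factor(psi, l_max=100.0):
    """dJ/dOmega in GeV^2 cm^-5 sr^-1 for a direction psi (rad) off the GC."""
    def integrand(l):   # l = distance along the line of sight, in kpc
        r = np.sqrt(R_SUN**2 + l**2 - 2.0 * R_SUN * l * np.cos(psi))
        return rho_nfw(max(r, 1e-3))**2   # regularize the r -> 0 singularity
    val, _ = quad(integrand, 0.0, l_max, limit=200)
    return val * KPC_CM   # convert the kpc line element to cm

print(j_factor(np.radians(1.0)))   # about 1 degree off the Galactic Centre
\end{verbatim}
Integrating this quantity over the solid angle of the search region gives the $J$-factor entering Equation~(\ref{eq:indirect_galaxy}).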
Assuming a~particle physics model which gives the expected particle spectrum, $dN_i/dE_i$, and~a~halo model, then an~experimental
measurement of $d\Phi_i/dE_i$ can be used to probe the two independent terms in Equation~(\ref{eq:indirect_galaxy}): $\left < \sigma_{\rm A} v \right >$
versus the dark matter mass, $m_{\rm DM}$. Experimental efforts are usually kept as model-independent as possible, looking for signatures from generic candidates
over a~few benchmark masses that cover a~wide mass range and, since the exact branching ratios of dark matter annihilations into different channels
are unknown, analyses are typically optimized assuming 100\% annihilation to a~few characteristic channels. This brackets the expected spectrum from any
realistic model which would produce a~spectrum that is a~weighted mixture of all the possible annihilation~channels.
If signals from dark matter decay, instead of annihilation, are to be probed, then Equation~(\ref{eq:indirect_galaxy}) needs to be slightly modified:
The l.o.s. integral is over $\rho$ and not $\rho^2$ (only one particle participates in a~decay, while two are needed in an~annihilation), only
one $m_{\rm DM}$ factor is needed in the denominator (for the same reason), and~what is probed is $1/\tau$ instead of $\left < \sigma_{\rm A} v \right >$,
where $\tau$ is the candidate lifetime. In~this case the units of the $J$-factor are $\mathrm{GeV\,cm^{-2}\,sr}$.
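For completeness, applying exactly the modifications just listed, the~decay analogue of Equation~(\ref{eq:indirect_galaxy}) reads
\begin{equation}
\frac{d\Phi_i}{dE_i}\,=\,\frac{1}{4\pi}\,\frac{1}{\tau\, m_{\rm DM}}\,\frac{dN_i}{dE_i} \times \int_{\Omega} \int_{l.o.s.} \rho_{DM}(r)\, dr\, d\Omega\, .
\end{equation}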
It is worth pointing out that the term $\left < \sigma_{\rm A} v \right >$ can provide more information than it might seem at first sight. It is not only sensitive to
a given particle model, through what the model predicts for $\sigma_{\rm A}$, but~it is also sensitive to the structure of the host galaxy (or the medium where the annihilations take place),
through the velocity distribution from which the annihilating particles are drawn. For~2$\rightarrow$2 annihilations in the non-relativistic case, the~term $(\sigma_{\rm A} v)$ can be expanded
in partial waves as a~power series in the square of the relative velocity of the particles, $\sigma_{\rm A} v~=~\sigma_s + \sigma_p v^2 +...$, where $\sigma_s$ and $\sigma_p$ are constants
(see~\cite{Gondolo:1990dk} for the corresponding relativistic expression and a~thorough discussion on the applicability regimes of $\left < \sigma_{\rm A} v \right >$).
The previous expression explicitly shows the constant behaviour of s-wave ({\em{l}}~=~0) scattering and the quadratic dependence on the relative velocity of p-wave ({\em{l}}~=~1) scattering.
Given the typical dark matter velocity in the halos of galaxies, $\beta \sim 10^{-3}$ (so $v^2 \sim 10^{-6}$), the~relative velocity of two annihilating particles is typically low enough to make the
p-wave annihilation rate subdominant to the s-wave term. However, s-wave annihilations to some channels (fermions for example) can be chirality suppressed by a~factor $m_f/m_{\rm DM}$
at the amplitude level, where $m_f$ is the fermion mass, so the prospects of indirect detection of certain dark matter candidates can become slim.
There are, however, natural ways out of this problem that bring back the hope for some of the indirect detection channels.
One is simply to search for dark matter annihilations in objects which are violent enough for dark matter to orbit with high velocities, where p-wave annihilations naturally
dominate~\cite{Johnson:2019hsm}. Further, if~one considers annihilations with three-body final states, 2$\rightarrow$3, where the third particle arises as a~$\gamma$, Z or W bremsstrahlung
in the final state, the~helicity suppression is highly ameliorated. Besides, this type of annihilation can simultaneously produce photons and electrons, or~photons and neutrinos,
providing a~multi-messenger signal from the same process, an~interesting case study for experimentalists. Another way to avoid the suppression of s-wave annihilations is through the
Sommerfeld enhancement in models with self-interactions in the dark matter sector~\cite{ArkaniHamed:2008qn,Feng:2010zp,Campbell:2010xc,Das:2016ced,Choi:2017mkk,Clark:2019nby}. In~such a~case
the new potential enhances the s-wave cross section through the well-known Sommerfeld effect~\cite{Sommerfeld:1931aa} by factors as high as three orders of magnitude, making such models
testable with the messengers and sources mentioned in the sections~below.
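As a~rough numerical illustration of the size of the effect (a~sketch in the Coulomb limit, not tied to any specific model in the references above), the~s-wave Sommerfeld factor for an~attractive interaction with dark-sector coupling $\alpha_D$ can be evaluated as follows; the~coupling value and velocities are assumptions for the example.
\begin{verbatim}
import numpy as np

def sommerfeld_coulomb(v, alpha_d=0.01):
    """Coulomb-limit s-wave Sommerfeld factor; v is the relative
    velocity in units of c (conventions differ by factors of 2
    between references)."""
    x = np.pi * alpha_d / v
    return x / (1.0 - np.exp(-x))

for v in (1e-1, 1e-3, 1e-5):  # freeze-out-like, halo, cold substructure
    print(v, sommerfeld_coulomb(v))
\end{verbatim}
The enhancement scales as $1/v$ at low velocities, which is why slow-moving halo dark matter can be boosted by orders of magnitude relative to the freeze-out epoch.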
There is a~further point to consider concerning the use of Equation~(\ref{eq:indirect_galaxy}). As~presented, it includes only the spectrum from the source under consideration, but~it
ignores the diffuse cosmological contribution from all the sources along the line of sight, at~different redshifts. This contribution can be sometimes safely neglected, for~example in
searches from nearby objects like the center of the Milky Way where the additional signal from the far cosmos is subdominant. However, it can be of
relevance when pointing at extended sources or in analyses with poor angular resolution. In~case this effect needs to be included, a~cosmological model needs to be assumed and energies
detected at the Earth need to be properly redshifted, and~an integral over redshift performed. These effects have been clearly described, for~example for gamma-rays and neutrinos,
in~\cite{Ullio:2002pj,Beacom:2006tt}. Diffuse cosmological contributions can also be important when considering halos with substructure. Since dark matter annihilations scale as the dark matter
density squared, high-redshift but clumpy halos could contribute a~non-negligible signal at~Earth.
\section{Dark matter signatures from the cosmos}
\label{sec:signatures}
Since the expected signal from dark matter annihilation (decay) is proportional to $\rho_{DM}^2$($\rho_{DM}$), natural sources to consider in indirect searches
are regions in the universe with an~increased density of dark matter. Typically, our own Milky Way, close-by dwarf spheroidal galaxies and galaxy clusters are
considered in indirect searches. Dwarf spheroidal galaxies are good candidates because they present high mass-to-light ratios, with~little interstellar gas, and~they
therefore seem to be dominated by dark matter. Additionally, there is no evidence of non-thermal emission from any of the 20+ known dwarf spheroidals that
could produce a~background of high energy neutrinos, $\gamma$-rays or cosmic-rays, so they are a~clean target from the observational point of view. Even if they are
small systems, with~$J$-factors much smaller than that of our Galactic Centre, observations of several dwarves can be stacked in order to increase the sensitivity to a
potential dark matter~signal.
Indirect searches for dark matter have indeed been performed by
gamma-ray telescopes (MAGIC, e.g.,~\cite{Acciari:2020pno,Acciari:2018sjn,Giammaria:2016jfo}, H.E.S.S., e.g.,~\cite{Rinchiuso:2019etv,Armand:2019ive,Abdalla:2018mve,Abdalla:2016olq}, VERITAS~\cite{KhanCantlay:2019syo,Archambault:2017wyh}), X-ray telescopes (XMM-Newton, e.g.,~\cite{Boyarsky:2017wct,Ruchayskiy:2015onc,Borriello:2011un,Boyarsky:2007ay}, NuSTAR, e.g.,~\cite{Neronov:2016wdd,Perez:2016tcq,Roach:2019ctw}, Suzaku, e.g.,~\cite{Sekiya:2015jsa,Tamura:2014mta,Urban:2014yda,Kusenko:2012ch}), cosmic-ray detectors
(HAWC~\cite{Cadena:2019lor,Abeysekara:2017jxs,Albert:2017vtb}), detectors~in space (Fermi-LAT, e.g.,~\cite{Abazajian:2020tww,DiMauro:2019frs,Fermi-LAT:2016uux}, DAMPE, e.g.,~\cite{Wechakama:2019vzk}, CALET~\cite{Adriani:2018ktz}, AMS~\cite{Xu:2020zfh})
and neutrino telescopes (IceCube, e.g.,~\cite{Aartsen:2018mxl,Aartsen:2017ulx,Aartsen:2016zhm}, ANTARES, e.g.,~\cite{ANTARES:2019svn,Albert:2016dsy,Adrian-Martinez:2016gti}, Baikal, e.g.,~\cite{Avrorin:2016yhw,Avrorin:2015bct,Avrorin:2014swy}, Baksan~\cite{Boliev:2013ai} or Super-Kamiokande, e.g.,~\cite{Desai:2004pq,Abe:2020sbr,Choi:2015ara}). Many of these detectors have upgrade programs underway, in~different stages of R\&D or
implementation (CTA~\cite{Consortium:2010bc}, IceCube-Gen2~\cite{Aartsen:2020fgd}, KM3NET~\cite{Adrian-Martinez:2016fdl}, Baikal-GVD~\cite{Avrorin:2019dov}, Hyper-Kamiokande~\cite{Abe:2011ts}),
which will bring indirect dark matter searches to the next level in mass coverage and~sensitivity.
Each messenger has its quirks and features, though, from~the point of view of both propagation and detection.
In order to increase the sensitivity to specific sources that can be observed with different instruments, several collaborations have joined forces and obtained limits from
their combined observations. Combining results from different experiments requires a~consensus on the use of common $J$-factors for each target, statistical techniques and
the treatment of the systematic uncertainties of the different instruments. The~result of these efforts is usually stronger limits than those obtained by the individual
collaborations. They also help unify analysis methods and the use of common external inputs (astrophysical quantities, cross sections, dark matter models...) across the
community, which in the long run helps to build a~coherent view of the field and makes it easier to compare, and~even to combine, results from different messengers.
One can really talk of a~multi-messenger era in dark matter~searches.
\subsection{Gamma- and~X-rays}
Gamma-rays propagate without deflection, so they point back to their source, and~are relatively easy to detect, the~technology having long been available.
Gamma-ray telescopes and gamma-ray detectors in space are able to point with great accuracy (about 0.1 degree for single photons above 10 GeV in Fermi-LAT, and~
similarly for Cherenkov telescopes like MAGIC or H.E.S.S., although~at higher energies, above~about 100~GeV), and~they
cover a~wide range of photon energies, from~20~MeV in Fermi-LAT to a~few hundred TeV in Cherenkov telescopes. However, absorption and backgrounds are always a~concern when searching
for new physics with photons from space. The~Milky Way centre is a~rather crowded region, with~$\gamma$-emitting objects and a~diffuse component from unresolved sources and star
formation regions. In~this respect, the~clean structure of dwarf spheroidal galaxies is expected to provide a~favourable environment for dark matter searches with gammas, and~indeed strong limits on the dark matter annihilation cross section have been obtained from these objects. An~illustration of the results that can be obtained with this kind of
search is shown in the left plot of Figure~\ref{fig:gamma-dwarves}. The~figure shows 95\% confidence level upper limits on the velocity-averaged dark matter annihilation cross section
as a~function of dark matter mass obtained from observations of 45 dwarf galaxies with the Fermi-LAT telescope~\cite{Fermi-LAT:2016uux}. Refined limits can be obtained exploiting the different
energy reach (and~therefore dark matter mass sensitivity) of different instruments, as~it is shown in the right plot of Figure~\ref{fig:gamma-dwarves}. The~plot shows a~combined limit
on $\left < \sigma_{\rm A} v \right >$ using results from Fermi-LAT, HAWC, H.E.S.S., MAGIC~and VERITAS, and~reaches an~improvement of up to one order of magnitude for masses above 10~TeV due to
the contribution of the Cherenkov telescopes~\cite{Oakes:2019ywx}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth,height=0.40\linewidth]{figures/Fermi_dwarves_tau-tau.pdf}
\includegraphics[width=0.43\linewidth]{figures/dwarves_combined_tau-tau.pdf}
\caption{ (\textbf{Left}) The 95\% confidence level upper limits on the velocity-averaged dark matter annihilation cross section obtained from 45 dwarf galaxies with the Fermi-LAT instrument, assuming
100\% annihilation to $\tau^+\tau^-$. The~yellow and green lines represent the 95\% and 68\% quantiles respectively. The~solid red line shows
the limit from a~previous analysis using 15 sources. The~closed contours and marker show the best-fit regions (at 2$\sigma$ confidence) of several dark matter interpretations of the Galactic Centre excess. Figure reprinted with permission from~\cite{Fermi-LAT:2016uux}. Copyright 2020 AAS.
(\textbf{Right}) The 95\% confidence level upper limits on the dark matter annihilation cross from the combination of data from observations of
20 dwarf galaxies by Fermi-LAT, HAWC, H.E.S.S., MAGIC and VERITAS, assuming 100\% annihilation to $\tau^+\tau^-$. Figure from~\cite{Oakes:2019ywx}. }
\label{fig:gamma-dwarves}
\end{figure}
The uncontroversial negative results from dwarf galaxies can be confronted with the status of dark matter searches with gammas from the Galactic Centre.
An intriguing excess of gamma-rays in the few GeV region over the background expected from conventional sources has been claimed by several authors, both from EGRET
data~\cite{Hunger:1997we} and more recently from Fermi-LAT data~\cite{Goodenough:2009gk}. However, an excess is of course defined with respect to a~background, and~here is where difficulties
arise. The~region around the Galactic Centre is one of the most complex in the sky, not least in gamma-rays. It is a~star-forming region, surrounded by interstellar gas
and with plenty of individual sources, like short-period pulsars and supernova remnants, as~well as presumably unresolved sources. Gamma-rays can be produced in point sources but
also as a~diffuse glow by electron bremsstrahlung, by~inverse Compton scattering of electrons and positrons on the intense ambient radiation field in the region, or~as products of
the interactions of cosmic-rays with the interstellar gas. The~calculation of the expected gamma-ray flux from all these processes, along with the galactic model used to track the
diffusion of cosmic-rays in their way through the galaxy, is plagued with uncertainties which are reflected in the predicted gamma ray emission from the Galactic Centre (which is the
background when searching for dark matter)~\cite{Calore:2014xka,TheFermi-LAT:2017vmf,Buschmann:2020adf,Zhong:2019ycb}. Indeed the EGRET excess was promptly explained by detailed
calculations of the expected cosmic-ray flux without the need of new physics~\cite{Bergstrom:2006tk}. An~additional important uncertainty in modeling the inner region of the Galaxy
arises from the ``Fermi bubbles''~\cite{Su:2010qj}. Their contribution to the gamma ray glow is difficult to estimate since information is lacking about their structure and emitted
spectrum close to the Galactic Centre. Masking known point sources and subtracting the diffuse emission from the Galactic Centre in order to characterize the excess is therefore a
really complicated task, riddled with unknowns. However, a new method of analyzing photon sky maps, non-Poisson template fitting, has been proposed in~\cite{Lee:2015fea} and applied to the
inner galactic region. The~method is more sensitive to unresolved point sources than traditional Poisson-count based analyses, and~shows that the Fermi-LAT gamma-ray excess can be explained
by the background emission produced by a~population of sub-threshold objects, without~the need of dark~matter.
However, the location of the excess, within~1.5$^\circ$ of the Galactic Centre, is indeed a~place where an~increased density of dark matter is expected. Given the unknowns in any dark
matter model, the~excess can of course be also fitted as originating from dark matter annihilations, as~indeed it has (see, e.g.,~\cite{Hooper:2010mq,Hooper:2011ti,Agrawal:2014oha,Calore:2014nla,Achterberg:2017emt}
although the literature is vast on this topic).
The main ``result'' of these claims is that a~dark matter particle
in the mass range of a~few tens of GeV to a~few hundred GeV, annihilating~to soft (typically $b\bar{b}$ or $\tau^+\tau^-$) or hard (typically $W^+W^-$ or $t\bar{t}$) channels
can fit the data. The~wide range of masses and annihilation channels proposed simply reflects the underlying uncertainties in estimating the background, as~well as the rather
unconstrained options that the dark matter solution provides. The~allowed parameter space in
dark matter models is ample enough to be able to fit the Fermi-LAT gamma ray excess without problem, even with the oversimplifying assumption of a~single
particle species annihilating through just one channel, and~taking into account limits from new particle searches in~accelerators.
\begin{figure}[t]
\centering
\includegraphics[width=0.70\linewidth]{figures/Fermi_excess_crop.pdf}
\caption{ Spectrum of the Galactic Centre excess. The~systematic uncertainty band shows the envelope of different Galactic Centre excess fluxes obtained using different
models of diffuse gamma-ray emission and source masking. Figure reprinted with permission from~\cite{TheFermi-LAT:2017vmf}. Copyright 2020 AAS. See reference for details of the analyses
mentioned in the inset caption.}
\label{fig:Fermi_GCexcess}
\end{figure}
The authors in~\cite{TheFermi-LAT:2017vmf} have performed an~extensive study of the Galactic Centre and used several models of the galaxy to estimate the expected
gamma ray glow, finding that an~excess over the expected gamma ray flux from the Galactic Centre can be obtained in all the models considered. However, they also point out that the
excess is still compatible, given the huge uncertainties, with~the emission from standard astrophysical sources when considering contributions from cosmic-ray interactions in the
interstellar medium or unresolved populations of pulsars. It is, of~course, also compatible with ad-hoc dark matter models but, again, Occam's razor renders the dark
matter interpretation of the gamma ray excess from the Galactic Centre, at~most, a~viable solution, but~not the only one. A~summary of this study, in~comparison with the original
claims of the excess, is shown in Figure~\ref{fig:Fermi_GCexcess}.
The~figure shows the expected emission from the Galactic Centre obtained by the Fermi-LAT collaboration with a~detailed modeling of the Galaxy (points tagged as ``Sample'') compared with the
excess claimed by other authors, using different assumptions on the morphology of the Galaxy and background templates (so, in~a strict sense, the~curves are not really directly comparable).
The~shaded area represents the systematic uncertainty from the Fermi-LAT collaboration analysis, and~it is obtained as the envelope of the predictions of the gamma-ray flux under different
assumptions on diffuse gamma-ray emission, galaxy morphology, gas distributions in the galaxy or cosmic-ray distribution and propagation, but, in~any case, without~any new physics.
A similar conclusion is reached in~\cite{Abazajian:2020tww}, where diffuse galactic emission, inverse-Compton emission and a~central source of electrons can explain the Fermi-LAT data without
a dark matter component, practically ruling out thermal dark matter as the explanation of the galactic centre gamma-ray~excess.
While the rather energetic gamma-rays can probe the $(\left < \sigma_{\rm A} v \right >, m_{\rm DM})$ parameter space for dark matter masses of $\mathcal{O}$(few GeV) and above,
the softer diffuse X-ray sky can be used to probe dark matter masses down to the keV region~\cite{Essig:2013goa,Jeltema:2011bd}. X-rays can be produced directly from low mass dark matter annihilations or decays,
but they can also be produced by inverse Compton scattering of $e^+e^-$ from the annihilations or decays of dark matter of any mass on background photons. Inverse Compton scattering
of charged annihilation or decay products on CMB photons, dust or starlight will produce a~diffuse photon flux with typical photon energies of the order of
1--100 ($m_{\rm DM}$/10~GeV)$^2$~keV~\cite{Jeltema:2011bd}, where $m_{\rm DM}$ is the dark matter mass, and~the energy range covers therefore the X-ray region. X-ray~telescopes
provide, therefore, a~complementary way to search for dark matter from galaxy clusters and galaxies, including the Milky Way. One of the best theoretically justified light dark matter candidates
in the keV mass region is a~right-handed (i.e., sterile) neutrino, which can
be accommodated in extensions of the standard model without many additions to the particle spectrum of the theory~\cite{Dodelson:1993je,Adhikari:2016bei,Boyarsky:2018tvu,Abazajian:2017tcc}.
Sterile~neutrinos can be produced in the early universe through mixing with active neutrinos, so the mass $M_s$ of the sterile neutrino and the mixing angle with
active neutrinos are two fundamental parameters of any model. Probing the $(M_s,{\rm sin}^2(2\theta))$ parameter space amounts to probing the feasibility of sterile neutrinos
to be dark matter. Sterile neutrinos have a~radiative decay channel into an~active neutrino and a~photon ($\nu_s \rightarrow \nu \gamma$). The~integrated effect of decays at different redshifts during the evolution of the universe would result in a~broad X-ray band today on top of
the diffuse X-ray astrophysical background. The~non-detection of such a~feature has already made it possible to constrain the $(M_s,{\rm sin}^2(2\theta))$ parameter space of sterile neutrino dark
matter with X-ray telescopes~\cite{Roach:2019ctw,Ng:2019gch,Boyarsky:2017wct,Ruchayskiy:2015onc,Sekiya:2015jsa,Tamura:2014mta,Urban:2014yda,Borriello:2011un,Abazajian:2006jc,Boyarsky:2005us}.
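For orientation, the~radiative decay proceeds at the loop-suppressed rate (a~standard result quoted here as a~reference value, not taken from the analyses above), producing a~photon line at $E_\gamma = M_s/2$ for decays at rest:
\begin{equation}
\Gamma_{\nu_s \rightarrow \nu \gamma} \simeq 1.4 \times 10^{-29}\,{\rm s}^{-1} \left ( \frac{\sin^2(2\theta)}{10^{-7}} \right ) \left ( \frac{M_s}{1\,{\rm keV}} \right )^5 .
\end{equation}
The steep $M_s^5$ dependence is what makes X-ray observations so constraining at the upper end of the keV mass range.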
The left plot of Figure~\ref{fig:xray-limits} shows that X-ray telescopes alone have a~strong constraining power on
the $(M_s,{\rm sin}^2(2\theta))$ plane, in~this case illustrated on the $\nu$MSM model, a~minimal extension of the
standard model that includes neutrino masses and which incorporates a~dark matter candidate in the form of a~right-handed neutrino of a~few keV~\cite{Asaka:2005an}.
Additionally, the~same type of analyses can be used to constrain the annihilation cross section of a~generic low-mass dark matter candidate. The~right plot of Figure~\ref{fig:xray-limits}
shows the limit on the velocity averaged annihilation cross section of a~generic light weakly interacting dark matter candidate assuming annihilation to two photons obtained with
the same X-ray observations as in the left plot. While~this result has been obtained for a~quite favourable situation for X-ray detectors, direct annihilation into photons, the~
limit on $\left < \sigma_{\rm A} v \right >$ is quite strong if compared with the results from other~messengers.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth,height=0.40\linewidth]{figures/nustars_sterile_limit_all.pdf}
\includegraphics[width=0.43\linewidth,height=0.40\linewidth]{figures/sigma-v_NuSTAR_gg.pdf}
\caption{ (\textbf{Left}) Excluded areas of the sterile neutrino mass and mixing angle parameter space from NuSTAR observations of M31, the~Galactic Center and
the deep X-ray sky. See reference below for details.
(\textbf{Right}) Upper limit on weakly interacting dark matter annihilation cross section from NuSTAR observations of M31, assuming annihilation to two monoenergetic photons.
Figures reprinted with permission from~\cite{Ng:2019gch}. Copyright 2020 by the American Physical Society.
}
\label{fig:xray-limits}
\end{figure}
\unskip
\subsection{Neutrinos}
Neutrinos also propagate undeflected and therefore point to their source, but~they present additional challenges compared with photons. First, their extremely low cross section with matter
means that large detectors and exposure times are needed to collect just an~event from a~given source, not to mention a~handful of events that would be statistically significant by themselves.
In this sense, the~best bet for neutrino telescopes to function as dark matter detectors is to correlate their findings with another messenger or, ideally, more than one. This would, in~turn,
help interpret any signal from gamma-rays or cosmic-rays, since an~excess of events from a~single messenger might be subject to interpretation as background from
other processes rather than from dark matter, or~would require rather ad-hoc theoretical models to explain why the dark sector annihilates or decays to some standard model
particles but not others.
Besides, neutrino detectors have a~limited angular resolution, of~the order of 1$^\circ$ at $\mathcal{O}$(100~GeV) neutrino energy, only reaching below one degree at TeV energies, and~therefore
pointing becomes challenging in searches for relatively low-mass dark matter candidates. Even in a~perfect detector the angular resolution would be ultimately limited by the physics of the
neutrino-nucleon cross section. Limited pointing resolution impacts the degree of background rejection, mainly atmospheric neutrinos,
when pointing towards a~source. On~the other hand, neutrinos have the advantage of not losing energy or being absorbed during their propagation over cosmic distances. Nor is there a
significant background or foreground from astrophysical objects, so they remain an~attractive signature.
They are indeed the only possible messengers to use when looking for dark matter accumulated in dense objects like the Sun or the Earth. More on this~below.
The total number of signal events, $\mu_{\rm s}$, expected at a~neutrino telescope is given by,
\begin{equation}
\mu_{\rm s} = T_{\rm live}\sum_\alpha\int{\rm d}\Omega\int {\rm d} E_\nu A^{\rm eff}_{\nu_\alpha}(\Omega,E_\nu) \Phi_{\nu_\alpha}(E_\nu)\,,
\label{eq:dm_nevents}
\end{equation}
where $T_{\rm live}$ is the exposure time, $A^{\rm eff}_{\nu_\alpha}(\Omega,E_\nu)$ is the detector effective area for neutrino flavour $\alpha$, which~depends on the detector
response with respect to the observation angle and neutrino energy, and~$\Phi_{\nu_\alpha}(E_\nu)$ is the neutrino flux for flavour $\alpha$ arising from the annihilation processes.
Note that the effective area of neutrino telescopes is not a~fixed quantity, but~it depends not only on neutrino energy and arrival direction, but~also on the specific
event selection performed. In~the absence of a~signal, the~90\% confidence level limit on the number of signal events, $\mu_{\rm s}^{\rm 90}$, can be directly translated into a~limit
on the neutrino flux and, in~turn, into~a limit on either the velocity-averaged annihilation cross section $\langle \sigma_{\rm A}\,v \rangle$ or the dark matter lifetime as a~function
of dark matter mass, through Equation~(\ref{eq:indirect_galaxy}). As~in the case with gamma-rays, sources with similar characteristics, where the signal can be expected to be also similar,
can be stacked to increase the signal-to-noise ratio. This is typically done when considering nearby dwarf~galaxies.
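The sketch below evaluates Equation~(\ref{eq:dm_nevents}) numerically for a~toy, angle-averaged effective area and a~generic power-law signal flux; both parametrizations are placeholders chosen for illustration, not any real detector response.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

T_LIVE  = 5 * 3.15e7   # 5 years of livetime in seconds (assumption)
D_OMEGA = 1e-3         # solid angle of the search cone in sr (assumption)

def a_eff(e_nu):
    """Toy effective area in cm^2, rising with energy (placeholder)."""
    return 1.0e2 * (e_nu / 100.0)**0.8       # e_nu in GeV

def phi_nu(e_nu, norm=1e-12, index=2.0, e_max=1000.0):
    """Toy signal flux in GeV^-1 cm^-2 s^-1 sr^-1, cut off at m_DM."""
    return norm * (e_nu / 100.0)**(-index) if e_nu <= e_max else 0.0

mu_s = T_LIVE * D_OMEGA * quad(lambda e: a_eff(e) * phi_nu(e),
                               10.0, 1000.0)[0]
print(f"expected signal events: {mu_s:.2f}")
\end{verbatim}
Comparing $\mu_{\rm s}$ with the expected background in the same cone is what ultimately sets the sensitivity of a~given search.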
The current status of searches for dark matter with neutrinos from galaxies span a~large chunk of dark matter masses as illustrated in Figure~\ref{fig:neutrinos_summary}.
The figure shows the current landscape of limits on the dark matter self-annihilation cross section as a~function of dark matter mass obtained by the main players in the neutrino
dark matter search industry, including many low-energy experiments which are not the focus of this review~\cite{Arguelles:2019ouk}. Next-generation experiments are shown as dashed-lines,
while~the shaded areas show current limits, some of them obtained independently from the collaborations by the authors (marked with a~heart). Results cover an~impressive eleven orders
of magnitude in dark matter mass, extending beyond the theoretical limit for a~thermal relic, where~results should be interpreted under a~different class of models than the vanilla WIMP
scenario (see e.g.,~\cite{Palomares-Ruiz:2020ytu} and references therein).
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/sigma-v_neutrinos_summary_v02.pdf}
\caption{Summary of results on the velocity weighted dark matter annihilation cross section from different experiments. Solid lines show limits, dashed lines sensitivities of
future facilities assuming five years data taking (100 h of observation for the CTA sensitivity). The~heart symbols represent analyses performed by the authors
of~\cite{Arguelles:2019ouk} with public data, and~not by the collaborations. Figure~from~\cite{Arguelles:2019ouk} }
\label{fig:neutrinos_summary}
\end{figure}
In addition to our Galaxy and other objects at cosmological distances, the~Sun and the centre of the Earth are also two possible sources of dark matter annihilations that could be
detectable due to their proximity. The~effect of the Sun's gravitational attraction over the lifetime of the Solar System can result in the capture of dark matter from the local halo
through repeated scatterings of dark matter particles in orbits crossing the star~\cite{Press:1985ug,Krauss:1985aaa,Srednicki:1986vj,Gaisser:1986ha,Ritz:1987mh}. The~same applies
for the Earth~\cite{Gould:1987ir,Gould:1988eq}. The~only caveat is that only neutrinos from dark matter annihilations in the centre
of these objects can escape their dense interior. So it is only neutrino telescopes that can pursue dark matter searches from the centre of the Sun and Earth (with some exceptions
mentioned below).
The expected neutrino flux from dark matter annihilations inside the Sun depends, among~other factors, on~the capture rate of dark matter (which is proportional to the dark
matter-nucleon cross section), and~the annihilation rate (which is proportional to the velocity-averaged dark matter annihilation cross section). The~evolution of the number density
of dark matter particles, $N_{\rm DM}$, accumulated in the Sun interior follows the equation
\begin{equation}
\frac{dN_{\rm DM}}{dt} = \Gamma_{\rm C} - 2\langle\sigma_{\rm A} v\rangle(N_{\rm DM}^2/2)\,,
\label{eq:density}
\end{equation}
where $\Gamma_{\rm C}$ is the capture rate per unit volume. The~numerical factors take into account that annihilations consume two particles per interaction but
there are only $1/2$ of the particles available to form pairs. We have neglected a~possible ``evaporation'' term, proportional to $N_{\rm DM}$, which is only relevant for very small
dark matter masses, $\lesssim$ 4 GeV~\cite{Steigman:1997vs,Spergel:1984re,Gaisser:1986ha}, below~the range currently probed by neutrino
telescopes due to their relatively high energy threshold (precisely a~few GeV), but~that can be an~issue to take into account in future low-energy extensions of current projects.
Equation~(\ref{eq:density}) has an~equilibrium solution given by
\begin{equation}
N_{\rm DM, eq}=\sqrt{\frac{\Gamma_{\rm C}}{\langle\sigma_{\rm A} v\rangle}}\, ,
\label{eq:equilibriumDM}
\end{equation}
which represents the steady amount of dark matter accumulated in the Sun if capture and annihilation have reached equilibrium. Since the Sun is around 4.6~Gyr old, it is usually
assumed that equilibrium has indeed been achieved. In~this case, the~neutrino emission only depends on the capture rate, which~is directly related to the dark matter-nucleon scattering
cross section, $\sigma_{\rm{DM-N}}$, and~this allows one to set limits on $\sigma_{\rm{DM-N}}$ from the measured (or, rather, the lack of an) anomalous high-energy neutrino flux from the Sun.
In models where equilibrium might not have been achieved, an~assumption on the dark matter self-annihilation cross section is necessary to derive predictions on the
neutrino signal from dark matter~annihilations.
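In fact, Equation~(\ref{eq:density}) can be integrated in closed form, which makes the equilibrium condition explicit:
\begin{equation}
N_{\rm DM}(t) = N_{\rm DM, eq}\, \tanh \left ( \frac{t}{\tau_{\rm eq}} \right ), \qquad \tau_{\rm eq} = \frac{1}{\sqrt{\Gamma_{\rm C}\, \langle\sigma_{\rm A} v\rangle}}\, ,
\end{equation}
so equilibrium is reached when the age of the Sun satisfies $t_\odot \gg \tau_{\rm eq}$, in~which case the annihilation rate equals half the capture rate.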
The dark matter-nucleon cross section can be expressed in terms of spin-dependent, $\sigma_{\rm SD}$, and~spin-independent, $\sigma_{\rm SI}$, contributions~\cite{Engel:1992bf} and,
since the Sun is mainly composed of Hydrogen (75\% of H and 24\% of He in mass)~\cite{Grevesse:1998bj}, the~capture of dark matter from the halo by the solar gravitational well
occurs mainly through the spin-dependent scattering. Heavier elements constitute less than 2\% of the mass of the Sun, but~can still play
a role when considering dark matter capture, since the spin-independent cross section is proportional to the square of the atomic mass number. These heavy elements
can also take part in the spin-dependent capture process if dark matter presents momentum-dependent interactions with normal matter, giving rise to an~increased capture rate in
comparison to previous calculations, which can have a~bearing on the interpretation of experimental results~\cite{Catena:2015uha}. Even if searches for dark matter with neutrinos
from the Sun are performed in the most model-independent way, there is of course the possibility to probe specific models, and~both the main collaborations and external authors
using public data have done so. These include limits on Kaluza-Klein dark matter arising in models of universal extra dimensions~\cite{Bernadich:2019zss},
inelastic dark matter~\cite{Catena:2018vzc}, strongly~interacting dark matter~\cite{Albuquerque:2010bt}, or~specific extensions of the
MSSM~\cite{Silverwood:2012tp,Trotta:2009gr,Allahverdi:2009se}, among~others.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth,height=0.40\linewidth]{figures/secluded_ctau.pdf}
\includegraphics[width=0.45\linewidth,height=0.40\linewidth]{figures/secluded_SI.pdf}
\caption{ (\textbf{Left}) The 90\% confidence level limits on the mediator decay length as a~function of the annihilation rate in the Sun for several dark matter masses.
The mediator is assumed to decay 100\% to neutrinos. Regions to the right of the curves are disfavoured. See the reference below for other decay modes.
(\textbf{Right}) The 90\% confidence level limits on the spin-independent dark matter-nucleon cross section as a~function of dark matter mass for different mediator decay channels,
assuming a~decay length of $2.8 \times 10^{7}$~km. Figures from~\cite{Ardid:2017lry}. Copyright IOP Publishing Ltd. and Sissa Medialab. Reproduced by permission of IOP Publishing. All rights reserved. }
\label{fig:DM_secluded}
\end{figure}
There are some theoretical scenarios that predict a~flux of gamma-rays or cosmic-rays due to dark matter annihilations in the Sun. If~the dark matter ``world'' is coupled to our
usual world through long-lived mediators that can escape the Sun interior and decay into Standard Model particles in-flight to the Earth, then other techniques can also search
for dark matter accumulated in the Sun. In~such models of {\it secluded} dark matter~\cite{Pospelov:2007mp},
an excess of gamma-rays or cosmic-rays from the direction of the Sun could also be interpreted as a~signature of dark matter annihilations inside it~\cite{Ajello:2011dq,Profumo:2017obk,Cuoco:2019mlb,Leane:2017vag}.
Secluded dark matter models also present advantages for neutrino searches. Neutrinos lose energy on their way out of the dense solar interior, so the flux that would reach
the Earth from annihilations in the solar core would be mainly low-energy (up to a~few 100~GeV) independent of the mass of the dark matter annihilating inside the Sun.
The correlation between dark matter mass and highest possible neutrino energy is therefore lost. However, since the decay of the mediator in secluded models can occur outside the Sun,
the~neutrinos will reach the Earth without losing energy. This is advantageous for neutrino telescopes, which usually have better sensitivities and pointing for higher neutrino
energies, $\mathcal{O}$(100~GeV) and above. These models add, of~course, additional free parameters: The mediator mass, lifetime and decay branching ratios to different final
states. Experimental results are therefore usually provided under different assumptions for the mediator mass and lifetime, for~a given annihilation
channel~\cite{Ardid:2017lry,Adrian-Martinez:2016ujo}. Figure~\ref{fig:DM_secluded} shows an~example of the results that can be attained within secluded dark matter models.
The figure shows limits on the mediator decay length as a~function of the annihilation rate in the Sun (left plot) and limits on the spin-independent dark matter-nucleon
cross section as a~function of dark matter mass (right plot) obtained with public data of IceCube and ANTARES~\cite{Ardid:2017lry}. Note that neutrino telescopes,
IceCube in particular, provide the best limits on the spin-independent dark matter-nucleon cross section for dark matter masses above a~few 100~GeV for the model described
in the reference, improving even over direct search experiments like LUX, shown in the figure as the black dot-dashed~line.
The Earth presents a~somewhat different case, since the processes of capture of dark matter and annihilation in its core cannot be assumed to have reached equilibrium.
Additionally, the~most abundant isotopes of the main components of the Earth's inner core, mantle and crust, $^{56}$Fe, $^{28}$Si and $^{16}$O~\cite{Herndon:1980aaa},
are spin-0 nuclei, so capture in the Earth is a~good candidate to probe the $\sigma_{\rm SI}$ component of the dark matter-nucleon cross section. From~the experimental side,
the capture rate is resonantly enhanced for dark matter masses close to the masses of these isotopes, so neutrino searches for dark matter annihilations from the centre of the Earth are
most sensitive to dark matter masses near those values. This is reflected in the shape of the limits obtained by IceCube, ANTARES
and Super-Kamiokande, shown in Figure~\ref{fig:DM_earth}. The~figure shows current limits on the spin-independent dark matter-nucleon cross section as a~function of
dark matter mass~\cite{Mijakowski:2020qer}.
\begin{figure}[t]
\centering
\includegraphics[width=0.70\linewidth]{figures/Earth_limits_various.pdf}
\caption{The 90\% confidence level upper limit on the spin-independent dark matter-nucleon cross section from annihilations in the Earth core for different annihilation channels. The~ shaded areas represent the best fit if one interprets the results from DAMA/LIBRA as indeed originating from dark matter interactions. Figure from~\cite{Mijakowski:2020qer}.
See references therein for details mentioned in the inset caption.}
\label{fig:DM_earth}
\end{figure}
However, perhaps of special interest is the recent discovery of a~high-energy cosmic neutrino flux by IceCube~\cite{Aartsen:2013jdh,Aartsen:2014gkd,Kopper:2017zzm}, since it
opens new possibilities for dark matter scenarios, specifically for annihilations or decays of very heavy dark matter, $\mathcal{O}$(PeV), that is impossible to probe in accelerators.
These are called top-down scenarios, where high energy neutrinos are produced from the decay of heavy relic particles, as~opposed to bottom-up scenarios where cosmic neutrinos
are expected to be produced from the decay of accelerated protons in a~source or its surroundings. Additionally,~extremely~heavy (and extremely light) dark matter scenarios are becoming
of interest since classic WIMP models have been disfavoured by accelerator results and the persistent negative results of direct searches.
Indeed even the first two PeV cosmic neutrinos detected by IceCube were promptly interpreted within the framework of PeV mass decaying dark matter candidates (R-parity violating gravitinos,
extra dimension fermions or hidden sector gauge bosons)~\cite{Feldstein:2013kka}. Decay of such heavy particles into neutrinos would produce a~monochromatic line, compatible with
the IceCube events at the time. With~an excess of neutrinos currently well beyond 7$\sigma$ over the atmospheric neutrino background above 60 TeV, and~no sources identified except for
the blazar TXS 0506+056~\cite{IceCube:2018dnn,IceCube:2018cha}, the~interpretation of the IceCube excess as a~signature of dark matter has spurred a~lot of activity (e.g.,~\cite{Esmaili:2013gha,Bhattacharya:2019ucd,Boucenna:2015tra,Choi:2019ixb,Arguelles:2017atb,Kachelriess:2018rty,Chianese:2017nwe,Bhattacharya:2016tma,Dev:2016qbd,Murase:2015gea,Rott:2014kfa,Zavala:2014dla} among others). The~most general approach is to assume a~two-component flux where a~fraction of the IceCube neutrino flux is
assumed to originate from unidentified astrophysical sources, and~a fraction of the flux coming from heavy dark matter decays. The~low statistics of the IceCube data above 1~PeV do not for the moment allow one to establish whether the flux presents a~cut-off or not. A~sharp cut-off in the neutrino energy spectrum would favour the dark matter interpretation. Note that dark matter decays would also produce a~flux of high energy photons that can be recycled to lower energies through pair production and inverse Compton scattering on ambient radiation. So any model that explains the IceCube cosmic flux from heavy dark matter decay must be consistent with measurements of the gamma-ray sky, as~analyses like~\cite{Murase:2012xs,Cohen:2016uyg} show. Or, vice~versa, results from gamma-ray measurements can
constrain the decaying dark matter interpretation of neutrino and cosmic ray results within some dark matter models.
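A~minimal sketch of such a~two-component model is given below; the~spectral shapes and normalizations are hypothetical placeholders, meant only to illustrate how the two contributions are combined before fitting to data.
\begin{verbatim}
def astro_flux(e, norm=1e-18, index=2.5):
    """Astrophysical component: single power law,
    in GeV^-1 cm^-2 s^-1 sr^-1, with e in GeV (pivot at 100 TeV)."""
    return norm * (e / 1e5)**(-index)

def dm_decay_flux(e, m_dm=5e6, norm=1e-18):
    """Toy decay component: hard spectrum, sharp cut-off at m_DM/2."""
    return norm * (e / 1e5)**(-1.0) if e <= m_dm / 2.0 else 0.0

def total_flux(e):
    # the two normalizations are the free parameters of the fit
    return astro_flux(e) + dm_decay_flux(e)
\end{verbatim}
Whether the data prefer a~non-zero decay component then reduces to a~standard likelihood comparison between the one- and two-component hypotheses.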
Current experimental results on the lifetime of heavy dark matter from IceCube~\cite{Aartsen:2018mxl}, compared with limits from HAWC and Fermi-LAT, are shown in Figure~\ref{fig:DM_lifetime}.
The IceCube result assumes a~two-component fit, with~the astrophysical component following a~single power law. Lifetimes below
$10^{27}$--$10^{29}$~s are disfavoured at 90\% CL, depending on the candidate~mass.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\linewidth]{figures/IC_DMlifetime_comparison.pdf}
\caption{ Comparison of IceCube lower lifetime limits on heavy dark matter with results obtained by HAWC (Dwarf Spheroidal Galaxies and
Galactic Halo/Centre) and Fermi/LAT available at the time. Figure reproduced with permission from~\cite{Aartsen:2018mxl}. References to Fermi and HAWC data therein.}
\label{fig:DM_lifetime}
\end{figure}
The existence of a~high-energy cosmic neutrino flux can be used to probe conventional (i.e.,~not~extremely heavy, and~even very low mass) dark matter in another, neat, way. Astrophysical~neutrinos
reach the Earth from far away, traversing dark halos of galaxies on their way, including that of the Milky Way. Interactions of these neutrinos with the dark matter could reduce the
expected neutrino flux from specific directions, providing a~unique way of probing the neutrino coupling to the dark sector~\cite{Choi:2019ixb,Arguelles:2017atb}.
While strongly model-dependent, these approaches can shed light on specific aspects of neutrino-dark matter interactions.
An example of the type of results that can be obtained with these analyses is illustrated in Figure~\ref{fig:m_phi_vs_m_chi}. The~plot shows the result of a
search for a~depletion of cosmic neutrinos from the direction of the Galactic Centre due to neutrino-dark matter interactions. The~results are cast in terms of the mass
of a~vector-like mediator of the interaction and the dark matter mass, as~well as the logarithm of the coupling strength of the interaction (color~code). Note that this method
is sensitive to very low mass dark matter candidates, in~the MeV--GeV range, and~therefore complementary to other analyses based on the cosmic neutrino flux. The~plot also shows that
neutrino telescopes have access to an~area of the parameter space, typically high mediator masses and high coupling constants, where limits from cosmological arguments
based on analyses of the cosmic microwave background do not~reach.
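Quantitatively, the~attenuation probed in these searches can be written as a~flux suppression factor (a~simple optical-depth estimate, assuming an~energy-independent cross section for illustration),
\begin{equation}
\Phi_\nu^{\rm obs} = \Phi_\nu^{\rm src}\, e^{-\tau}, \qquad \tau = \frac{\sigma_{\nu {\rm DM}}}{m_{\rm DM}} \int_{l.o.s.} \rho_{DM}(r)\, dr\, ,
\end{equation}
so directions crossing the densest parts of the halo, like the Galactic Centre, are the most sensitive.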
\begin{figure}[t]
\centering
\includegraphics[width=0.75\linewidth]{figures/m_phi_vs_m_chi.pdf}
\caption{Maximum allowed value (color code) of the (log of the) coupling $g$ between neutrinos and dark matter, as~a function of dark matter mass, $m_{\chi}$, when the interaction is
mediated by a~new vector boson of mass $m_{\phi}$. The~horizontal magenta line shows the division between the parameter space regions where a~neutrino telescope
like IceCube can set stronger limits than analyses based on the power spectrum of the cosmic microwave background, or~vice~versa. Reprinted with permission from~\cite{Arguelles:2017atb}. Copyright 2020 by the American Physical Society.}
\label{fig:m_phi_vs_m_chi}
\end{figure}
\subsection{Cosmic-Rays}
Cosmic-rays (protons, electrons or light nuclei and their
antiparticles) can also be produced in the hadronization of $q\bar{q}$ or $W^+W^-$ pairs from dark matter annihilations or decays. However, using cosmic-rays as probes of dark matter poses challenges as
well. In~order to have access to particle identification, charge separation and energy measurement one must deploy real particle physics detectors in space, ideally with calorimetry and
a magnetic field for charged-particle tracking capabilities, a~technical challenge by itself. However, AMS~\cite{Lubelsmeyer:2011zz}, DAMPE~\cite{Fusco:2019lhc} or CALET~\cite{Torii:2019wxn} have demonstrated the feasibility of this type of detectors and produced impressive physics
results. The~limitations of using cosmic-rays as dark matter probes arise, though, from~fundamental physics. Cosmic-rays are charged particles (neutrons being too short lived to be of
relevance) and propagation through the Galaxy can randomize their arrival directions and prevent accurate pointing to a~given object. Additionally, cosmic-rays can also lose energy through
inverse Compton scattering and synchrotron radiation during propagation, so their energy spectrum at Earth might differ from that where dark matter annihilations take place.
However, the detection of antimatter (antiprotons, positrons or light antinuclei) still provides a~good handle in dark matter searches since antimatter from astrophysical sources is
relatively rare at the Earth. Due~to its high probability of annihilation with normal matter, any antimatter source needs to be relatively close to the Earth. However, this covers the
Galactic Centre and Halo, so there are plenty of predictions of what a~detection of an~excess of antiprotons or positrons would mean in terms of dark matter. Still, even antimatter
searches have to deal with backgrounds from close-by sources, like~pulsars, which can produce an~(anti)cosmic-ray flux that can be indiscernible from a~dark matter signal.
Indeed the positron flux (compared to the electron flux) has been the source of controversy since results from the PAMELA satellite~\cite{Adriani:2008zr,Adriani:2013uda,Adriani:2017bfx} and the
ATIC~\cite{Chang:2008aa} balloon experiment showed an~excess of positrons at energies above $\sim$10~GeV over the expectation from standard galactic primary cosmic-ray
propagation and interactions, including the production of secondary electrons and positrons~\cite{Moskalenko:1997gh,Delahaye:2010ji}.
AMS-02~\cite{Accardo:2014lma,Aguilar:2019owu}, CALET~\cite{Adriani:2017efm} and DAMPE~\cite{Ambrosi:2017wek} confirmed the excess and showed that
it extends to the TeV region, but~with a~cutoff at around 1~TeV. The~shape and normalization of the positron excess have given rise to a~wealth of models explaining the excess from dark matter
annihilations.
Some of these solutions are quite contrived since, for~example, the~antiproton flux does not show such a~pronounced deviation from the expected background (it remains to be seen whether there is
an antiproton excess at all, see below) and therefore an~explanation of the positron excess based on dark matter annihilations should prevent the same dark matter from producing a~significant excess
of antiprotons. These have been called leptophilic models, e.g.,~\cite{ArkaniHamed:2008qn,Haba:2010ag,Cohen:2009fz,Ibarra:2009bm,Fox:2008kb}
and they are an~example of the challenges posed by trying to explain spectral features by invoking multi-component contributions to the spectrum. A~fit will always improve as more free
parameters are added to the fitting function, without~this necessarily reflecting the physical relevance of the parameters.
For anyone willing to wield Occam's razor, there is a~simpler, standard astrophysical explanation to a~positron excess in the cosmic-ray flux: It can indeed be produced by localized
pulsar wind nebulae close enough to the Earth~\cite{Hooper:2008kg,Profumo:2008ms,Malyshev:2009tw,Blasi:2010de,Yin:2013vaa,Bartels:2015aea}. In~a pulsar wind nebula, positrons are created
by electromagnetic cascading from seed electrons that have been ejected from the rapidly spinning neutron star. This electron and positron ``wind'' eventually creates a~shock when interacting
with the older ejecta of the original supernova remnant. A~fraction of the positrons might escape this complex system into the interstellar medium and become the galactic positron flux.
Pulsars are, thus, antimatter sources in the Galaxy. There are known pulsars close enough to the Earth that can provide the positron flux needed to explain the measured excess over the
vanilla, point-source-free background. Additionally, a~pure dark matter explanation of the positron excess is also
strongly disfavoured by the fact that it should have produced a~signal in gamma-rays. The~lack of a~signal in Fermi-LAT searches for dark matter in dwarf spheroidal galaxies has set limits on the
dark matter annihilation cross section into channels that would produce an~$e^+e^-$ flux which are incompatible with the measured excess~\cite{Lopez:2015uma,Chan:2015gia}. Again, very few and very
contrived models can escape the strong Fermi-LAT constraints and still be compatible with the measurements of PAMELA, ATIC, AMS, CALET and~DAMPE.
We mentioned antiprotons above, and~the story with them as a~dark matter signal is not as simple as might have been suggested.
Antiprotons are produced in interactions of protons and light nuclei with the interstellar medium. So, given the cosmic-ray flux, the~distribution of matter in the Galaxy, the~antiproton
production cross section and a~model for propagation, the~standard background of antiprotons can be, in~principle, calculated. Antiprotons from dark matter annihilations would be produced
as final state hadrons in annihilations to $q\bar{q}$ or $W^+W^-$ pairs, and~they would appear as an~
excess over the standard background~\cite{Bergstrom:1999jc,Donato:2003xg,Fornengo:2013xda}. Figure~\ref{fig:limits_AMS} shows the result of looking for dark matter annihilations into antiprotons with AMS-02. Limits on the velocity-weighted annihilation cross section as a~function of dark matter mass for annihilation into $b\bar{b}$ (left) and $W^+W^-$ (right) are shown, along with the 1$\sigma$ and 2$\sigma$ sensitivity~bands.
One current problem is that the uncertainties in the ingredients just mentioned, which enter into the calculation of the standard background, are larger than the uncertainties of the present
measurements of the antiproton flux by AMS02. The~strongest culprit is the uncertainty on the antiproton production cross section in p-p and p-He collisions, which is known only to about 20\%,
depending slightly on energy, from~recent measurements by NA61 and LHCb~\cite{Korsmeier:2018gcy}.
Additionally, uncertainties in the propagation through the Galaxy also play a~determining role. Antiprotons, as~charged
particles, diffuse in the galactic magnetic field and a~complex transport equation, which must include diffusion, convection and reacceleration, with~uncertainties in each term, needs to be
solved numerically to estimate the flux at the Earth.
While there are recurrent claims of a~dark matter signal in cosmic-rays from the global fit industry~\cite{Cuoco:2019kuu,Cholis:2019ejx}, the~fact remains that, within~current uncertainties
on the cosmic-ray propagation and on the elementary production cross sections, the~data remain compatible with the expected background from standard processes.
The simple fact that different authors performing what in principle is a~very similar analysis~\cite{Cholis:2019ejx,Reinert:2017aga} reach contradictory conclusions on the significance of
the antiproton excess (4.7$\sigma$ versus 2.2$\sigma$) points to the effect of the uncertainties involved in this type of calculation, mainly due to the cosmic-ray propagation in the
galaxy.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth,height=0.40\linewidth]{figures/sigma-v_AMS_bb.pdf}
\includegraphics[width=0.45\linewidth,height=0.40\linewidth]{figures/sigma-v_AMS_WW.pdf}
\caption{The 95\% confidence level limit on the dark matter annihilation cross section into $b\bar{b}$ (\textbf{Left}) and $W^+W^-$ (\textbf{Right}) as a~function of dark matter
mass derived from the antiproton and B/C data of AMS-02. Figures from~\cite{Reinert:2017aga}. Copyright IOP Publishing Ltd. and Sissa Medialab. Reproduced by permission of IOP Publishing.
All rights reserved. }
\label{fig:limits_AMS}
\end{figure}
A way out of this impasse might be the precise measurement of the antideuteron and antihelium fluxes, which should be correlated with the antiproton flux.
A comprehensive analysis of the different antimatter cosmic-ray nuclei could help disentangle the standard or exotic origin of these fluxes~\mbox{\cite{Korsmeier:2017xzj,Reinert:2017aga}}.
A very different approach to using cosmic rays as tracers of dark matter is to look for deviations of the expected galactic flux of electrons and protons at the Earth due to interactions with
dark matter during their propagation through the galaxy. These interactions will result in a~loss of energy of the cosmic rays and a~distorted energy spectrum at Earth in comparison with
the scenario with no dark matter. This method has been used to set limits on the {elastic} dark-matter proton and electron cross sections in what has been called ``inverse
indirect detection''~\cite{Cappiello:2018hsu}. The~argument rests on an~idea similar to the one used with neutrinos in~\cite{Arguelles:2017atb} and illustrated in Figure~\ref{fig:m_phi_vs_m_chi},
but using the galactic cosmic rays as ``beam'' and the dark matter particles in the galaxy as ``target''. The~method is especially sensitive to light dark matter candidates, well below a~GeV,
since their number density can then be high enough for several scatterings to occur during the propagation of a~cosmic ray, leaving an~imprint in their spectrum. This approach complements direct detection experiments
that are not currently sensitive to dark matter masses below $\mathcal{O}$(GeV) due to target threshold effects. On~the other hand, the~prediction of the expected distortion of the
$p$ and $e$ spectra depends on the details of the propagation of these particles through the galaxy, which introduces a~certain degree of uncertainty. In~any case, the~analysis performed in~\cite{Cappiello:2018hsu}
obtains very competitive limits on the elastic proton and electron cross section with dark matter for dark matter masses as low as a~keV. The~results are shown in Figure~\ref{fig:sigma_invert}
for protons (left plot) and electrons (right plot). The~method can be extended to probe more complex interactions, like inelastic scattering or velocity dependent interactions, but~at
the expense of introducing further model~dependencies.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth,height=0.40\linewidth]{figures/sigma_xp_invert.pdf}
\includegraphics[width=0.45\linewidth,height=0.40\linewidth]{figures/sigma_xe_invert.pdf}
\caption{The 5$\sigma$ upper limit (red lines) on the dark matter elastic scattering cross section on protons (\textbf{left}) and electrons (\textbf{right}) obtained by comparing cosmic ray data from AMS-02 and CREAM with the expectation of cosmic ray
propagation in the Milky Way including scattering on dark matter. Figures from~\cite{Cappiello:2018hsu}.}
\label{fig:sigma_invert}
\end{figure}
\unskip
\section{Constraints from Cosmology: CMB, BBN and the 21-cm Hydrogen~Line}
In this section we summarize the effects of dark matter annihilations and decays on observables related to the evolution of the early universe. This is an~
independent way to infer the presence and characteristics of dark matter, be it through its imprint on the cosmic microwave background,
on the production of primordial light elements through Big-Bang Nucleosynthesis (BBN) or on the 21-cm power spectrum of Hydrogen (see~\cite{Clark:2019gug} for a~review and references therein).
The annihilation of dark matter present in the primordial soup injects additional energy in the form of photons,
leptons or hadrons into the expanding universe. These will heat, ionize and excite the medium leaving imprints in the expected angular power spectrum that, assuming a~given dark matter candidate,
can be calculated and compared with observations. Turning the argument around: The consistency of the $\Lambda$CDM model with observations provides a~strong constraint on the characteristics of any
new dark matter particle species. These methods have the advantage that they are free from astrophysical uncertainties like halo models of galaxies or dark matter distribution in clusters.
They also probe quite a~wide range of the history of the universe, from~the recombination era (z~=~1000) to the end of reionization (z$\sim$6). Incidentally, a~similar line of reasoning can be used
to set limits on dark matter annihilations or decays into final states including electromagnetic radiation at later stages in the evolution of the universe. A~direct comparison of the predicted
photon flux with the extragalactic background light of the universe (infrared, ultraviolet or even optical) can be used to set constraints on the characteristics of several dark matter candidates~\cite{Overduin:2004sz}.
This method is not free from astrophysical uncertainties since it is applied to the older universe, when structures have been formed, and~the expected signal depends on how the dark matter,
as well as dust and gas, is distributed in galactic halos or the intergalactic medium. The~limits obtained are also not as strong as more recent limits mentioned earlier in this~review.
Following the evolution of the universe, from~higher to lower redshifts, the~first effect to consider is the imprint of the energy dumped into the primordial plasma by annihilations or decays
of dark matter, and~its effect on element nucleosynthesis. Photons, electrons and positrons couple readily to the plasma, but~quarks or gluons will hadronize into strongly interacting particles, typically mesons and baryons (and their antiparticles).
Short-lived particles (with lifetimes much shorter than the strong-interaction timescale at the temperature of the surroundings) will decay, adding to the energy dumped into the plasma and inducing
photo-dissociation of already formed light elements. However, stable particles, like~protons (or neutrons, for all practical purposes), will become part of the expanding plasma, ready to participate in nucleosynthesis.
Since light-element synthesis depends on the baryon-to-photon ratio, additional stable baryons in the plasma due to dark matter annihilations, added to the photo-dissociation mentioned above
and pion-exchange interactions that will drive proton-neutron conversion,
can~change the rate of certain reactions and
produce primordial element abundances incompatible with observations. Turning the argument around, the~observed abundance of primordial elements can serve to set limits on the
amount of dark matter annihilations during the nucleosynthesis era~\cite{Reno:1987qw,Kawasaki:2015yya,Jedamzik:2004ip,Hisano:2009rc}. The~main uncertainty of this procedure lies in
measuring the primordial element abundances, a~difficult measurement due to the intrinsic contamination by non-primordial elements in any source considered. Measurements of some elements have reached
a precision of better than 1\% ($^{4}$He~or D), but~others like $^{7}$Li remain at about 20\%~\cite{Fields:2019pfx}. Even with these caveats, BBN arguments allow one to set quite stringent limits on $\left < \sigma_{\rm A} v \right >$ as a~function of dark matter mass, as~shown in the left plot of Figure~\ref{fig:CMB-BBN_limits}.
Changes in BBN processes will bear on the temperature and polarization power spectra of the CMB. The~CMB conveys detailed
information about the last scattering surface and, if~dark matter decayed or annihilated in the early universe, the~extra energy deposited in the plasma could delay recombination or
smooth out the primordial density fluctuations. Such an~effect would be visible in the CMB data today~\cite{Slatyer:2015jla,Slatyer:2015kla,Zhang:2006fr}. However, not
all the energy released in the annihilation of two dark matter particles is absorbed into the surrounding medium and it is usual to express results from CMB analyses in terms
of an~efficiency factor, $f(z)$, which represents the fraction of the annihilation energy that goes into heating and ionizing the surrounding plasma. This factor depends both on redshift and
the specific dark matter model through the branching ratios to particles that couple to the plasma, typically~photons, electrons and positrons, but~it lies in the range 0.001--1~\cite{Ade:2015xua}. The~redshift dependence of $f(z)$ is weaker than its dependence on the dark matter model: since the universe is evolving fast, the~effects of injecting
energy into it are relevant only during a~limited $z$ range, typically between 1000 and 600, so one can approximate $f(z)$ by an~effective constant value $f_{\rm eff}$ which practically
only depends on the properties of the dark matter model assumed. Following~\cite{Ade:2015xua}, the~energy released per unit volume from dark matter annihilations as the universe expands can
be written as
\begin{equation}
\frac{dE}{dt dV}(z)~=~2 g^2 \rho_{\rm crit}^2 c^2 \Omega_c^2 \left( 1+z \right)^6 f_{\rm eff} \frac{\left < \sigma_{\rm A} v \right >}{m_{\rm DM}},
\label{eq:CMB_injection}
\end{equation}
where $\rho_{\rm crit}$ is the critical density of the universe today, $m_{\rm DM}$ is the mass of the dark matter particle, and~$\left < \sigma_{\rm A} v \right >$ is the thermally-averaged
annihilation cross-section times the dark matter velocity. While~results from CMB observations are sometimes given as a~function of $f_{\rm eff} \left < \sigma_{\rm A} v \right >$, detailed~calculations of the effect of annihilations assuming a~simple dark matter model can be performed in order to break the degeneracy and express the results as limits on
$\left < \sigma_{\rm A} v \right >$ as a~function of the dark matter mass. This has the advantage of presenting results which are directly comparable with those of other indirect search techniques mentioned in
this review, while not being more model dependent; on~the contrary, they are rather less model dependent since, as~mentioned above, source modelling does not enter here aside from the
usual assumptions of $\Lambda$CDM cosmology. There is one caveat, though, when~comparing limits on $\left < \sigma_{\rm A} v \right >$ from indirect experiments, probing dark matter
accumulated in galactic halos, with~limits on $\left < \sigma_{\rm A} v \right >$ from CMB.
Note that the direct comparison works under the assumption of s-wave annihilations. For~higher order contributions or
more complex models, with~velocity-dependent annihilation cross section for example, the~comparison of limits should be taken with care since they probe very different dark matter velocity regimes.
A thermalized 100~GeV particle in a~3000~K plasma (the temperature of the universe at recombination) has a~typical velocity of the order of 100~m/s, while dark matter particles in galactic
halos have typical velocities of 200~km/s.
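Returning to Equation~(\ref{eq:CMB_injection}), a~rough numerical illustration may help fix orders of magnitude. The~following Python sketch evaluates the injection rate for an~assumed set of fiducial inputs; the~values of $f_{\rm eff}$, $\left < \sigma_{\rm A} v \right >$, $m_{\rm DM}$ and the degeneracy factor $g$ below are illustrative choices, not~fitted quantities.
\begin{verbatim}
# Illustrative evaluation of Eq. (eq:CMB_injection): energy injected
# per unit time and volume by dark matter annihilations at redshift z.
# All inputs are assumed fiducial values, in SI units.
rho_crit = 8.5e-27           # critical density today [kg/m^3]
c        = 3.0e8             # speed of light [m/s]
Omega_c  = 0.26              # cold dark matter density parameter
f_eff    = 0.2               # assumed effective efficiency factor
sigma_v  = 3.0e-32           # <sigma_A v> [m^3/s], thermal relic value
m_DM     = 100.0 * 1.78e-27  # a 100 GeV candidate, expressed in kg
g        = 1.0               # degeneracy factor of the equation

def dE_dtdV(z):
    """Energy injection rate [J m^-3 s^-1] at redshift z."""
    return (2.0 * g**2 * rho_crit**2 * c**2 * Omega_c**2
            * (1.0 + z)**6 * f_eff * sigma_v / m_DM)

print(dE_dtdV(1000.0))       # rate around the recombination era
\end{verbatim}
The steep $(1+z)^6$ scaling makes the injection most relevant around recombination, consistent with the limited $z$ range quoted~above.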
The right plot of Figure~\ref{fig:CMB-BBN_limits} shows limits on $\left < \sigma_{\rm A} v \right >$ as a~function of dark matter mass, where a~detailed calculation of the effects in the CMB anisotropy induced by the
injection of photons, electrons, muons and W's from dark matter annihilations in the expanding plasma has been used to calculate $f_{\rm eff}$~\cite{Kawasaki:2015peu,Kanzaki:2009hf,Kanzaki:2008qb}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth,height=0.45\linewidth]{figures/BBN_limit_sigma-v.pdf}
\includegraphics[width=0.45\linewidth]{figures/CMB_limit_sigma-v.pdf}
\caption{
(\textbf{Left}) The 95\% confidence level limits on the velocity-weighted dark matter annihilation cross section as a~function of dark matter mass for different annihilation channels,
obtained from an~analysis on the primordial abundances of light elements from big-bang nucleosynthesis. Figure from~\cite{Kawasaki:2015yya}.
(\textbf{Right}) The 95\% confidence level limits on the velocity-weighted dark matter annihilation cross section as a~function of dark matter mass for different annihilation channels,
obtained from an~analysis of CMB anisotropy data. Figure from~\cite{Kawasaki:2015peu}.
}
\label{fig:CMB-BBN_limits}
\end{figure}
After recombination, the~universe enters a~phase where primordial atoms have been formed and the ambient radiation does not have enough energy to ionize them. This is called the
``Dark Ages'', and~it lasts until about z$\sim$200, when the first large structures start to form by gravitational growth from the primordial fluctuations. As~matter collapses into structures it heats
up, emitting ultraviolet radiation that can interact with the surrounding gas, specifically inducing a~strong absorption at the resonant frequency of the hyper-fine transition between the triplet and the singlet levels of the hydrogen ground state. These two levels are separated by
$5.9 \times 10^{-6}$~eV, which corresponds to a~wavelength of 21.1~cm and a~frequency of 1420~MHz. Today, the~redshifted absorption signal appears typically in the 30--200~MHz range and opens the possibility to
use radio telescopes as instruments to study the early universe. The~21-cm line has been used to measure rotation curves of galaxies in a~complementary way to optical measurements. However, the primordial
21-cm sky follows the matter distribution at the reionization era and provides additional information to CMB cosmology~\cite{Pritchard:2011xb}. At~about z$\sim$6 reionization ends and the whole universe
remains ionized until the present epoch. However, up to redshifts about z$\sim$50, the~21-cm signal is expected to be negligible. The~presence of dark matter annihilations or decays can change this timeline,
providing a~continuous source of energetic electrons, positrons and gamma-rays that could start the reionization at an~earlier time and leave a~measurable effect in the 21-cm map~\cite{Evoli:2014pva,Belikov:2009qx,DAmico:2018sxd,Natarajan:2008pk,Natarajan:2009bm}. Ideally, one could even distinguish the time evolution of dark matter annihilations, which is different from that of decays. Since the annihilation rate is proportional to $N_{\rm DM}^2$, annihilations can dominate
at early times, when the cosmic expansion has not diluted the number density, but~they could reignite at a~later stage, when dark matter has collapsed into primordial halos, providing a~local enhancement
of the dark matter density. Since decays are just proportional to $N_{\rm DM}$, the~energy injection into the surrounding medium is uniform with time~\cite{Cumberbatch:2008rh,Chuzhoy:2007fg}.
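As a~quick check of the frequency window quoted above, the observed frequency of the redshifted line follows directly from $\nu_{\rm obs}=\nu_{\rm rest}/(1+z)$; the~short Python sketch below also evaluates it at $z\sim17$, where the EDGES absorption feature discussed later in this section sits.
\begin{verbatim}
# Observed frequency of the redshifted 21-cm hydrogen line.
nu_rest = 1420.4                  # MHz, rest-frame hyperfine frequency

def nu_obs(z):
    """Observed frequency [MHz] of the 21-cm line at redshift z."""
    return nu_rest / (1.0 + z)

print(nu_obs(17.0))               # ~78.9 MHz (EDGES absorption feature)
print(nu_obs(6.0), nu_obs(50.0))  # ~203 and ~28 MHz, the z range above
\end{verbatim}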
Given the wide energy spectrum of the annihilation/decay products of dark matter (up to the dark matter mass) and the resonance character of ionization, only a~small fraction of the injected energy
from dark matter can be used for ionization. This limits the dark matter models that can be probed by 21-cm line cosmology to low-mass, typically $\mathcal{O}$(MeV), candidates. Only a~tiny fraction of
the spectrum of annihilations of heavier candidates is effective for ionization, so the strength of the method decreases rapidly with dark matter mass~\cite{Ripamonti:2006gq,Ripamonti:2006gr,Mapelli:2006ej,Mapelli:2007kr,Lopez-Honorez:2016sur}.
As we showed for the case of the CMB, the~energy injected into the intergalactic medium due to dark matter annihilations can be related to the annihilation cross section and candidate mass. There~are several ways to express this: as~the fraction of ionized hydrogen as a~function of redshift, as~the temperature of the hydrogen gas as a~function of redshift or, what I think gives a~more visual connection
to the dark matter characteristics, as~the rate of energy deposition from annihilations as a~function of redshift~\cite{Valdes:2007cu},
\begin{equation}
\frac{dE}{dt}(z)~=~\frac{1}{2} N^2_{\rm DM,0} N_{\rm b,0} \left( 1+z \right)^3 f_{\rm abs}(z) \frac{\left < \sigma_{\rm A} v \right >}{m_{\rm DM}},
\label{eq:21cm_injection}
\end{equation}
where $N_{\rm DM,0}$ is the current number density of dark matter particles, $N_{\rm b,0}$ is the current baryon number density, $f_{\rm abs}$ is the fraction of the energy released by the dark matter
annihilations that goes into ionization and, as~in Equation~(\ref{eq:CMB_injection}), $m_{\rm DM}$ is the mass of the dark matter particle and $\left < \sigma_{\rm A} v \right >$ is the thermally-averaged
annihilation cross-section times the dark matter velocity.
The calculation of the left-hand side of Equation~(\ref{eq:21cm_injection}) is not easy since the injection of energy occurs in a~complex and expanding medium, but~
once a~model for the evolution of the intergalactic medium is in place, observations of the 21-cm primordial intensity in the sky (proportional to the injected energy from dark matter annihilations) can
be used to constrain $\left < \sigma_{\rm A} v \right >$ as a~function of $m_{\rm DM}$.
The main foreground comes from the contribution from later processes, like radio emission from synchrotron processes in our galaxy or from other galaxies and clusters of galaxies, which has to
be carefully dealt~with.
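As with Equation~(\ref{eq:CMB_injection}), the deposition rate can be evaluated numerically. The~Python sketch below implements Equation~(\ref{eq:21cm_injection}) verbatim; all inputs are to be supplied as assumed fiducial values in a~consistent system of units.
\begin{verbatim}
def dE_dt_21cm(z, N_DM0, N_b0, f_abs, sigma_v, m_DM):
    """Energy deposition rate of Eq. (eq:21cm_injection), taken
    verbatim from the text; all arguments are user-supplied
    fiducial values in a consistent system of units."""
    return (0.5 * N_DM0**2 * N_b0 * (1.0 + z)**3
            * f_abs * sigma_v / m_DM)
\end{verbatim}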
There are several radio observatories in operation that have the sensitivity to set competitive limits on dark matter annihilations and, indeed, observations from the EDGES telescope~\cite{Bowman:2018yin}
have been used to set limits on dark matter characteristics. The~EDGES telescope measured the brightness temperature of the 21-cm line relative to the CMB, finding that the absorption profile is consistent with expectations. However, it also found that the relative amplitude of the absorption is more than a~factor of two larger than the predictions from standard cosmology at a~3.8$\sigma$ confidence level. Since we are talking of a~relative measurement of $T_{\rm 21cm}$ with respect to $T_{\rm CMB}$, the~discrepancy can be interpreted either as the primordial gas at z$\sim$20 being colder than expected, or~the background radiation being hotter than expected ($T_{\rm 21cm}$ and $T_{\rm CMB}$ denote, respectively, the~temperature of the ambient gas and of the CMB at the end of the Dark Ages). Standard explanations assuming increased radiation from the early stars and/or a~
slightly different decoupling redshift between matter and radiation can (more~or less) accommodate the EDGES measurement, since there are still large uncertainties on the early star formation history of
the universe. However, the result from EDGES can also be interpreted in terms of dark matter. Indeed, interactions of dark matter with surrounding baryons or the injection of energy due to dark matter decays or annihilations can change the thermal history of the hydrogen gas and accommodate the observation by EDGES~\cite{Panci:2019zuu,Liu:2018uzy,DAmico:2018sxd,Barkana:2018lgd}.
The contribution of dark matter annihilation products to heating the background radiation can lead to a~suppression of the observed absorption (even erasing it if the energy injection is too large), so
the observation of an~anomalous absorption level in the 21-cm line sky can be translated into bounds on dark matter annihilations or decays. On~the other hand, interactions
between dark matter and surrounding baryons can lower the ambient gas temperature, and~can explain the observed feature if the dark matter species is light (below a~few GeV) and its interaction cross
section lies above a~few $10^{-21}$~cm$^2$~\cite{Barkana:2018lgd}. Figure~\ref{fig:21cm_limits} shows an~example of the constraints obtained from the EDGES observation for two annihilation channels, and~assuming that the energy deposited from the annihilations is absorbed by the surrounding plasma with some delay~\cite{DAmico:2018sxd}. This needs to be modeled and adds a~source of uncertainty in the calculations but it is justified for some annihilation products that can penetrate the gas before fully interacting with it. In~other cases, like short-lived particles, a~``prompt'' energy deposition approximation is more accurate. See~\cite{DAmico:2018sxd} for additional limits obtained under the prompt energy deposition approximation.
Note that the limits reach lower masses than other indirect searches, but~the caveat mentioned above about the different average velocities of dark matter today (as probed by annihilations in the halo of our Galaxy or nearby galaxies) and at higher redshifts (as probed by the 21-cm line) remains when trying to compare~results.
\begin{figure}[t]
\centering
\includegraphics[width=0.70\linewidth]{figures/sigma-v_21cm-EDGES_mumu.pdf}
\caption{ Limits on the velocity-weighted dark matter annihilation cross section as a~function of dark matter mass from the absorption observed in the 21-cm spectrum at $z\sim 17$, for~ two examples of annihilation channels. Figure from~\cite{DAmico:2018sxd}. }
\label{fig:21cm_limits}
\end{figure}
Constraints on the dark matter annihilation cross section can also be obtained by using radio telescopes to target specific sources, in~a similar way to what is done with gamma-ray, X-ray and neutrino telescopes. The~technique is based on looking for synchrotron radio emission from the charged products (mainly e$^-$ and e$^+$) of dark matter annihilations in the halo of a~suitable galaxy. The~author
in~\cite{Tyler:2002ux} used Draco as early as 2002 to set limits on the velocity-averaged dark matter self-annihilation cross section from VLA observations. More recently, the~LOFAR collaboration has carried out a~competitive search for dark matter in the dwarf galaxy Canes Venatici~I~\cite{Vollmann:2019boa}. The~results of this technique depend on the diffusion of the charged particles once
emitted and on the profile of the local magnetic field, two quantities that are not well known, so the results are ineluctably model-dependent. However, with~reasonable assumptions on the structure and
magnetic field strength of a~given source, this~method yields results competitive with gamma-ray, cosmic-ray or neutrino dark matter searches from dwarf galaxies, as~illustrated in Figure~\ref{fig:LOFAR}.
\begin{figure}[t]
\centering
\includegraphics[width=0.70\linewidth]{figures/sigma-v_LOFAR.pdf}
\caption{ Limits on the velocity-weighted dark matter annihilation cross section as a~function of dark matter mass from LOFAR, assuming annihilation into $e^+e^-$. Results depend on the assumed galactic diffusion model and magnetic field strength, and~are shown for three different models (blue dotted, black full and red dashed lines). Figure from~\cite{Vollmann:2019boa}. See reference for details of the diffusion models~used. }
\label{fig:LOFAR}
\end{figure}
\unskip
\section{Discussion}
Even though severe constraints from accelerator experiments have closed a~big chunk of the supersymmetric parameter space, which was
originally a~favourite to provide a~dark matter candidate with the right mass and interaction strength, the~possibilities for
particle candidates remain quite wide: Models with very low or very high masses and/or very weak or strong interactions are still viable and escape current accelerator constraints.
On the experimental side, the~sensitivity of neutrino telescopes, gamma- and X-ray telescopes and cosmic-ray detectors has improved enormously, sometimes by orders of magnitude, in~the last decades,
but the fact remains that no experiment has reported any positive signal so far.
We seem to have passed the point where we could expect a~clear, strong, unambiguous signal to be detected by any given experiment. It seems probable that, rather, any detection of particle dark
matter, if~that is the solution to our current observational conundrum, will show itself as a~slight deviation from the background.
Given the uncertainties in the backgrounds of some of the techniques mentioned in this review, it will not suffice for a~single experiment to detect an~anomaly; rather, a~consistent
picture must emerge from all the signatures that experimentalists are looking for. I~have mentioned Occam's razor a~couple of times in this review precisely because, taken individually, the~dark matter
explanation of any single observation might easily be superseded by a~hypothesis using only conventional physics. It is in the correlation between the results of different experiments that
we should look. Paraphrasing Viktor Hess in his Nobel lecture in 1936:
In order to make further progress, particularly in the field of dark matter searches, it will be necessary to apply all our resources and apparatus simultaneously and side-by-side (while the original
formulation referred to the ``field of cosmic rays''~\cite{Hess:1936aaa}). The~aim of this review is to give a~glimpse on the signatures looked for in indirect searches for dark matter,
accompanying them with examples of recent results and hopefully conveying to the reader that the indirect search for dark matter is a~complex and multidisciplinary field, currently~driven by
observations.
Searches for the products of dark matter annihilations in our galaxy or nearby galaxies with gamma-ray telescopes provide stringent limits to the dark matter annihilation cross
section, mainly~at relatively low dark matter masses, in~the region of a~few GeV to $\sim$100 GeV, but~they suffer from complex backgrounds. Dwarf spheroidal galaxies are relatively
background-free sources that are commonly used in this kind of search. X-ray telescopes extend the mass reach of gamma-ray telescopes to the few keV region since they can detect softer
signatures from final-state radiation of charged decay or annihilation products of low mass candidates. Neutrino telescopes provide a~complementary search approach, currently more sensitive to high dark matter
masses, $\mathcal{O}$(100~GeV) and above, while~searches with charged cosmic rays suffer from uncertainties in the propagation and in relevant cross sections, like~the antiproton
production cross section, needed to understand both the production mechanisms at the source and the background. However, they also provide competitive results, comparable to those of the two
other techniques just mentioned. Large radio arrays have recently joined the effort of searching for annihilations of dark matter in dwarf galaxies by looking for anomalous
radio emission from synchrotron radiation of $e^+$ and $e^-$ produced as a~result of dark matter annihilations or decays. There are new astrophysical uncertainties involved in this kind of
technique, owing to the required knowledge of the magnetic field of the source and of the propagation of the electrons in a~medium as complex as a~galactic halo. However, results look promising,
and they are competitive with the results from gamma-rays, neutrinos and cosmic-rays under certain assumptions on the structure of the source. So these five, let us call them {\em source-based},
searches provide truly complementary results and, taken together, will really help characterize any~signal.
On top of the source-based searches, early universe cosmology presents an~independent way to search for dark matter, based on the effects of dark matter annihilations
(or decays) on the evolution of the early universe. Additional energy deposition in the primordial plasma, from~the CMB era to the first formation of structures, must not
cause a~distortion of the large-scale structure of the universe as we see it today. Quite stringent limits on the annihilation or decay characteristics of dark matter
can be extracted from such a~requirement.
In the big picture of things the question remains: Is the dark sector really as simple as one stable particle species, while the visible sector comprises several
fundamental particles and families? Or,~even more radically: Is the solution to the dark matter problem really a~particle-based one? From a~purely experimental point of view,
there is currently no evidence for or against either of these options. The~particle solution was historically adopted since extensions of the Standard Model of particle physics
predict the existence of new particles, and~some of them have the characteristics needed of a~dark matter candidate (the ``WIMP miracle''), and~in order to reduce the
parameter space, experimental searches are simply interpreted in terms of a~single-particle solution. The~constraining power of the results presented in this review would
be lost if several candidates had to be taken into account. Not least, the~hierarchy of the masses, strengths of interactions and annihilation branching ratios of the
different candidates would be unknown. This poses a~limitation from the experimental point of view, but~it should not stop theorists from exploring such
scenarios~\cite{Belanger:2020hyh,Yaguna:2019cvp,Elahi:2019jeo,Poulin:2018kap,Ahmed:2017dbb,Aoki:2012ub,Zurek:2008qg}.
As for the second question above, there are proposed ways out of the dark matter conundrum without the need for dark matter.
Modified Newtonian Dynamics (MOND)~\cite{Famaey:2011kh}, and~extensions thereof, is the most completely developed approach so far, but~it requires a~modification of
Newtonian gravity (or~General Relativity for that matter), theories that are understood to be universally valid and, therefore, any proposed modification is, at~a minimum,
philosophically difficult to swallow. MOND~really solves the problem it set out to solve: Explaining the flat rotation curves of galaxies taking into account only the
visible baryonic matter. It tends to fail, though, when trying to describe complex halos with velocity streams perpendicular to the galactic plane~\cite{Lisanti:2018qam},
and has serious difficulties with early universe cosmology, the~formation of large scale structure and, specifically, the~speed of gravitational waves~\cite{McGaugh:2014nsa,Sanders:2018jiv}.
Not that $\Lambda$CDM cosmology is free from problems
(a singularity at $t=0$ and the need for an~ad-hoc cosmological constant for example; not to mention the need for a~new type of dark matter itself). It is not the aim to review here the
advantages of $\Lambda$CDM cosmology over alternative theories of gravity or vice-versa, but~I would just like to finish by reminding the reader that the lack of any
experimental signal so far from dark matter searches really forces us to keep all our options~open.
\vspace{6pt}
\section{\label{sec:intro}Introduction}
Structured proteins in their folded state possess a rich variety
of three dimensional conformations ({\it tertiary structures})
which are usually classified in terms of the
mutual arrangements of the so--called motifs and
elements of the {\it secondary structure} \cite{tooze}.
The secondary structures are usually characterized
as segments of the protein chain possessing a strong regularity in the
values of the Ramachandran angles \cite{rama1,rama2}.
The most common of such structures are the $\alpha$-helix and the $\beta$-sheet.
There is a general view that the microscopic chirality of individual amino
acids is responsible for the twisted shape of $\beta$-sheets in
globular and fibrous proteins \cite{salemme}.
Apart from the flat conformation described by Pauling and Corey \cite{linus},
a $\beta$-strand can acquire a nonzero degree of helicity
with a finite twist along the principal axis of the
polypeptide chain. Furthermore, the helical structure of a single strand
directly relates to the \emph{macroscopic} twisted
shape of the whole $\beta$-sheet.
The $\beta$-sheet conformation has been recently exploited as a
reference structure for the novel biomaterials
produced by large--scale self--assembly of oligopeptides in solution
\cite{soms1,soms2,mit1,mit2}.
In the latter Refs.\ it has been shown that oligomeric peptides can be
rationally designed so that they self--assemble unidirectionally in solution
forming helically twisted $\beta$-sheet tapes stabilized by a regular
arrangement of hydrogen bonds. The main factors which stabilise the
tape structures are: the intermolecular hydrogen bonding between
the polypeptide backbones, cross--strand attractive forces between
sidegroups (hydrogen bonding, electrostatic and hydrophobic) and
lateral recognition (steric and $\pi$-$\pi$ interactions)
between the adjacent $\beta$-strands.
The tape structure is regarded as only the
first in the hierarchy of equilibrium structures observed
with increasing oligopeptide concentration, such as double tapes,
fibrils, fibers and, eventually, nematic gels \cite{soms1,soms2,mit1,mit2}.
This novel route towards engineering of biomaterials is an alternative
to some approaches which caused controversy
and provides a possibility for simple
equilibrium control of the biomaterials' architecture, their functional and
mechanical properties, as well as the kinetics of their formation
in response to pH, temperature and other triggers.
Among different applications of these biomaterials which could be envisaged
one can mention their current use as three--dimensional scaffolds for
tissue growth \cite{mit1,mit2}.
Moreover, these oligopeptide--based assemblies serve as a simple
experimental model system which could be used for providing valuable insights
into the self--assembly and aggregation mechanisms of natural proteins and,
in particular, the formation of $\beta$-amyloid plaques \cite{amyl} and
fibrous protein structures.
In terms of computer simulations, a number of Molecular Dynamics studies
of oligopeptide systems were reported recently \cite{mit1,soms3}.
These works have explored the (meta)stability of relatively small
aggregates over fairly short computing times accessible to simulations, thereby
providing valuable structural information which is difficult
to extract from the experiment. Unfortunately, it is difficult to
establish whether those structures were sufficiently well equilibrated
over such short run times, and only a~few particular oligopeptide sequences
were considered.
At present, there is still no full understanding regarding the details of the
functional relationship between the chiral nature of the single $\beta$-strand
and the helical geometry of the $\beta$-tape.
More generally, only in recent years has the connection
between molecular handedness and the morphology of supramolecular
assemblies been examined \cite{selinger,harris,kornyshev}.
Researchers have found that chirality controls the shape
of the macroscopic assemblies not only in natural and synthetic
peptides, but also in other biological systems such as lipids, fatty acids,
and nucleic acids (see Ref. \onlinecite{selinger} for a review on models
for the formation of helical structures made up of chiral molecules).
Recently, Nandi and Bagchi \cite{nandi1,nandi2,bag} have proposed a model
for the assembly of chiral amphiphilic molecules.
The latter is based on a simplified representation of either the
geometry or the potential energy, which is then minimized
in order to find the most efficient packing.
Their results, which are consistent with the experiment, show that chiral
tetrahedral amphiphiles of the same handedness assemble at a finite
angle with respect to their neighbors, driving the formation process of
helical clusters.
In $\beta$-sheet tapes a similar behaviour is found, where the microscopic
chirality arises from the intra--molecular interactions.
The intermolecular forces then stabilize the tape structure with
a finite twist angle observed between the neighboring strands.
This twist angle transfers the chirality from the single strands to the
level of the mesoscopic assembly, which hence possesses chirality as well.
This type of secondary structure could be rationalised as a
compromise between the \emph{out--of--plane} energy term originating
from the chirality of the single peptides and the inter--molecular
(mainly hydrogen bonding) energies of the backbone atoms,
which prefer a flat arrangement.
The main goal of this work is to achieve a fundamental understanding
of the way in which the microscopic chirality of single peptide molecules
manifests itself at a larger supramolecular scale of the self--assembled tapes.
In practice, we would like to construct a \emph{minimal} model capable of
capturing the most essential features of this phenomenon.
For this, we shall adopt a simplified coarse--grained description for
the rod--like oligopeptides with a nonzero degree of helicity.
Our computational study will be based on classical Monte Carlo simulations in
continuous space with the use of the standard Metropolis algorithm
\cite{metropolis} much used in polymer simulations.
To describe the inter--molecular hydrogen bonding
occurring within two--dimensional $\beta$-tapes
we shall use a coarse--grained description via a combination of
soft--core repulsion terms and short--ranged attractive terms.
Furthermore, the microscopic chirality is introduced via an \emph{ad hoc}
quadratic term. The functional dependence of the macroscopic
twist on the strength of the latter will then be analysed.
The explicit forms for different potential energy terms, including those
describing the bonded interaction, and motivation for their choice
are detailed in the next section.
\section{\label{sec:meth}Methods}
A peptide $\beta$-sheet is a regular secondary structure
characterised by values of the Ramachandran angles lying
within the upper left quadrant of the Ramachandran plot \cite{tooze}.
Its basic units are short peptide segments (called $\beta$-strands)
which are stabilized by an ordered network of hydrogen bonds
between the atoms of the backbone. The spatial sequence of the
$\beta$-strands can follow a \emph{parallel} or
\emph{antiparallel} pattern, depending on the reciprocal arrangement of
the strands' termini.
Whether $\beta$-sheets are formed by $\beta$-strands connected
via loop regions within the same chain (intra--molecular sheets)
or from many oligomeric peptides (inter--molecular sheets), as e.g.
in synthetic peptide assemblies and
$\beta$-amyloids, they all conform to a variety of twisted and curved
geometrical surfaces \cite{salemme}. The twisting appears in $\beta$-sheets
due to the nonzero chirality of the single strands, the
backbone geometry of which can be well approximated
by a circular helix. The mathematical definition for the latter
is given by the classical differential geometry of curves \cite{struik}.
While fully atomistic computational studies of proteins are quite common,
there is also a tradition of using simplified off--lattice models
in the literature \cite{levitt1,levitt2}. The latter facilitate simulations
by retaining the {\it most essential features} of the peptides and overcome
the difficulty with rather excessive equilibration times of the fully
atomistic systems, which often raise doubt as to the validity of the final
results for them.
A number of important results on fast--folding proteins and peptides
have been obtained in the last decade via such coarse--grained approaches
\cite{thiru1,thiru2,takada,bag,marg1}.
In the search of a suitable simplified model for the $\beta$-strand capable of
generating a stable two--dimensional tape, one can use the so--called
$C_{\alpha}$ models \cite{marg2}, which retain one interaction site per residue.
Then the inter--molecular hydrogen bonds could be incorporated,
for example, via the angular--dependent effective potential \cite{klim}.
Another direction is to consider a different class of
the so--called $C_{\alpha}$-$C_{\beta}$ intermediate level models
\cite{marg2}, in which two or three interaction sites are retained
per residue in the backbone.
A two--dimensional aggregate can then be generated by
mimicking the hydrogen bonding via a sum of effective Lennard--Jones
terms \cite{takada}, also incorporating the steric effects of the
side chain.
Although this is still a crude representation for the hydrogen bond, we find
the second route quite satisfactory for our purposes.
\subsection{Geometry of the simplified model}
In our model, the coarse--grained geometry of the single $\beta$-strand
retains three interaction sites per residue.
The backbone of the single amino acid is represented by two beads named
$C$ and $N$, standing for the moieties $C_{\alpha}HC'O$ and $NH$,
respectively.
Each sidegroup (amino acid residue) is then modeled by a
bead $S$ bonded to $C$.
One could also easily introduce many types of the sidegroups, but
we shall defer studying more complicated sequences to the future
publications until we are fully satisfied with the performance of
the simplest models of homogeneous sequences.
We shall introduce all of the energy parameters of our coarse--grained
models in units of $k_B T$. It should be noted, however, that these parameters
are {\it effective} as we have reduced the number of degrees of
freedom considerably by introducing united atoms and by describing
the solvent implicitly.
Thus, these parameters are temperature--dependent, and so are the equilibrium
conformations, even though $k_B T$ cancels out formally from the Boltzmann
weight $\exp(-E/k_B T)$. In principle, one could determine how the
coarse--grained parameters are related to the fully atomistic ones at a given
temperature via a procedure analogous to that of Ref. \onlinecite{btg}.
However, as any inverse problem, it is a considerably difficult task.
In practice, we have chosen the temperature equal to $300$ K and
the numerical values of most of the energy parameters so that they broadly
correspond to the typical values in the fully atomistic force fields.
Concerning the purely phenomenological parameters, such as the chirality
parameter, their values were chosen so that a reasonable experimental range
of the twist in the structures is reproduced.
\subsection{Potential energy function of the model $A$}
The first choice for the potential energy model follows the guidelines of
the model proposed by Thirumalai and Honeycutt
\cite{thiru1,thiru2} in their mesoscopic simulations of $\beta$-barrels.
This \emph{minimal} forcefield model adopts functional forms
of interactions akin to those typically employed in fully atomistic
molecular mechanics models.
\subsubsection{Bond length potential}
The length of each bond connecting two monomers is restrained towards
the equilibrium value via a harmonic potential:
\begin{equation}
U_{bond}=\frac{K_{b}}{2}(r-r_{eq})^{2},
\end{equation}
in which $K_{b}=200.0\ k_{B}T\cdot\mbox{\AA}^{-2}$ and $r_{eq}=2.0\ \mbox{\AA}$.
\subsubsection{Bond angle potential}
Bond angles defined via triplets $C_{i}-N_{i+1}-C_{i+1}$,
$N_i-C_i-N_{i+1}$,
$N_{i}-C_{i}-S_{i}$ and $S_{i}-C_{i}-N_{i+1}$ are controlled
via a harmonic potential of the form:
\begin{equation}
U_{angle}=\frac{K_{\theta}}{2}(\theta-\theta_{eq})^{2},
\end{equation}
where $K_{\theta}=40.0\text{ } k_{B}T$ and $\theta_{eq}=120^{\circ}$,
$\theta_{eq}=0^{\circ}$ for angles centered at $C$ and $N$
respectively \cite{Baumketner}.
\subsubsection{Dihedral angle potential}
Dihedral angle $\alpha$ is defined by the following formula:
\begin{equation} \label{eq:dihe}
\alpha=\mbox{sign}(\alpha)\cdot \text{arccos}\Bigg(\frac{(\bm{r}_{12}\times\bm{r}_{32})\cdot(\bm{r}_{32}\times\bm{r}_{34})}
{\|\bm{r}_{12}\times\bm{r}_{32}\|\|\bm{r}_{32}\times\bm{r}_{34}\|}\Bigg),
\end{equation}
where $\bm{r}_{ij} = \bm{r}_{i}-\bm{r}_{j}$ and
\begin{equation}
\mbox{sign}(\alpha)=\mbox{sign}(\bm{r}_{12}\cdot\bm{r}_{32}\times\bm{r}_{34}).
\end{equation}
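A minimal Python/NumPy sketch of this signed dihedral, following Eq.~(\ref{eq:dihe}) and the convention $\bm{r}_{ij}=\bm{r}_{i}-\bm{r}_{j}$, could read as follows (this is an illustration, not part of the simulation code used here):
\begin{verbatim}
import numpy as np

def signed_dihedral(p1, p2, p3, p4):
    """Signed dihedral angle alpha (radians) of Eq. (eq:dihe),
    with r_ij = r_i - r_j; assumes non-degenerate geometry."""
    r12, r32, r34 = p1 - p2, p3 - p2, p3 - p4
    n1 = np.cross(r12, r32)          # normal of plane (1,2,3)
    n2 = np.cross(r32, r34)          # normal of plane (2,3,4)
    cos_a = np.dot(n1, n2) / (np.linalg.norm(n1)
                              * np.linalg.norm(n2))
    sign = np.sign(np.dot(r12, np.cross(r32, r34)))
    return sign * np.arccos(np.clip(cos_a, -1.0, 1.0))
\end{verbatim}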
Torsional degrees of freedom are constrained by a sum of terms associated
with quadruplets of successive $C$ beads and having the form
\cite{Baumketner}:
\begin{equation}
U_{tors}=-A \cos(3\alpha)-B \cos(\alpha),
\end{equation}
where $A=B=4.0\text{ } k_{B}T$.
An additional dihedral term is introduced to force planarity
between pairs of subsequent $S$ beads:
\begin{equation}
U_{plane}=D \cos(\alpha).
\end{equation}
It is applied to the quadruplets $S_{i}-C_{i}-C_{i+1}-S_{i+1}$
and $D=4.0\text{ } k_{B}T$.
The presence of this term increases the stability of the structures
by enhancing the steric hindrance due to the side chains.
As mentioned above, this excluded volume effect is quite essential for
generating a two--dimensional tape as the inter--molecular
hydrogen bonds have no directional dependencies in our model.
\subsubsection{Chirality}
Handedness is introduced in the model
via a quadratic term involving only quadruplets of
successive $C$ beads, that is:
\begin{equation}\label{eq:tau1}
U_{chiral}=\frac{K_{\tau}}{2}(\tau-\tau_{0})^{2},
\end{equation}
where $K_{\tau}= 10\,k_B\,T$
and $\tau$ is equal to the normalized numerator of the analogous
quantity defined in Frenet--Serret picture of spatial curves
\cite{struik}:
\begin{equation}\label{eq:tau2}
\tau = \frac{\bm{r}_{12}\cdot\bm{r}_{23}\times\bm{r}_{34}}
{\|\bm{r}_{12}\|\|\bm{r}_{23}\times\bm{r}_{34}\|}.
\end{equation}
The dependence of the chirality parameter on the temperature is expected
to be relatively weak.
This is in qualitative correspondence with the experimental data \cite{soms1},
which has revealed high structural stability of tape assemblies,
in a wide range of temperatures, essentially while water remains liquid.
However, more experimental data is required in order to determine how exactly
$K_{\tau}/k_B T$ depends on the temperature.
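In the same NumPy notation as above, a minimal sketch of the chirality measure of Eq.~(\ref{eq:tau2}), on which the harmonic energy of Eq.~(\ref{eq:tau1}) acts, could be:
\begin{verbatim}
import numpy as np

def chirality_tau(p1, p2, p3, p4):
    """Normalized torsion-like measure tau of Eq. (eq:tau2) for
    four successive C beads, with r_ij = r_i - r_j."""
    r12, r23, r34 = p1 - p2, p2 - p3, p3 - p4
    cross = np.cross(r23, r34)
    return np.dot(r12, cross) / (np.linalg.norm(r12)
                                 * np.linalg.norm(cross))
\end{verbatim}
The chiral energy is then $\frac{K_{\tau}}{2}(\tau-\tau_{0})^{2}$ evaluated on this quantity.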
\subsubsection{Non--bonded Interactions}
A short--ranged Lennard--Jones term is used here in order to represent,
in a highly simplified way, the inter--molecular hydrogen bonding typical
of $\beta$-sheet structures:
\begin{equation}
U_{LJ}=\epsilon_{1}\Big[\Big( \frac{\sigma^{LJ}}{r}\Big)^{12}
-\Big( \frac{\sigma^{LJ}}{r}\Big)^{6}\Big],
\end{equation}
where the energy constant is $\epsilon_{1}=5.0\text{ } k_{B}T$ for all
the interactions involving the backbones beads.
The van der Waals radius is taken as $\sigma^{LJ}=2.0\text{ }\mbox{\AA}$
for $C-C$ and $N-N$ inter--molecular interactions
and as $\sigma^{LJ}=3.0\text{ }\mbox{\AA}$ for $C-N$ pairs. A small attractive
well is introduced between pairs of $S$ united atoms by
choosing $\epsilon_{2}=1.0\text{ } k_{B}T$
and $\sigma^{SC}=2.0\text{ }\mbox{\AA}$ in order to mimic various attractive
forces between the sidegroups.
Finally, a soft core (steric) repulsion,
for all pairs involving $C-S$ and $N-S$ is added:
\begin{equation}
U_{repul}=\epsilon_{1}\Big(\frac{\sigma^{SC}}{r}\Big)^{12}.
\end{equation}
We should emphasise that the intra--molecular non--bonded interactions
are only included for pairs of sites connected by three or more bonds.
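A minimal sketch of these non--bonded terms (Python, with energies in units of $k_B T$ and the parameter values listed above passed in as arguments) could read:
\begin{verbatim}
def u_lj(r, eps, sigma):
    """Lennard-Jones term mimicking the inter-molecular
    hydrogen bonding between backbone beads."""
    x = (sigma / r) ** 6
    return eps * (x * x - x)

def u_repul(r, eps, sigma):
    """Purely repulsive soft-core term for C-S and N-S pairs."""
    return eps * (sigma / r) ** 12
\end{verbatim}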
\subsection{Potential energy function of the model $B$}
The second choice for the potential energy function
outlined here provides a more phenomenological
approach to the behaviour of the
coarse--grained systems from the basic geometrical principles.
For details of the Frenet--Serret picture of curves
we refer the interested reader to Ref. \onlinecite{struik}.
We shall attempt to exploit and generalise Yamakawa's geometrical ideas for
helical wormlike chain model \cite{yamakawa}.
Here the bond length potential and the non--bonded interactions
retain the same functional form as in the model A.
The remaining bonded interactions are modeled as follows.
\subsubsection{Curvature}
The bond angle potential consists of a sum of harmonic terms involving
the curvature $\kappa$:
\begin{equation}
U_{angle}=\sum_{angles}\frac{K_{\kappa}}{2}(\kappa-\kappa_{eq})^{2},
\end{equation}
where $K_{\kappa}=40.0\text{ } k_{B}T\cdot\mbox{\AA}^{-2}$ and
$\kappa_{eq}=2.0\text{ }\mbox{\AA}$, $\kappa_{eq}=0.0\text{ }\mbox{\AA}$,
for angles centered at $C$ and $N$ respectively, and
with the curvature $\kappa$ defined as:
\begin{equation}
\kappa^{2} = 2\lbrack1-\cos(\theta)\rbrack.
\end{equation}
This definition of curvature is slightly different from the one
used by Yamakawa as his definition also depends on
the pitch of the helix due to a different normalisation condition
\cite{yamakawa}.
\subsubsection{Torsion}
Backbone is constrained towards a planar conformation by the terms:
\begin{equation}
U_{tors}=\sum_{dihedrals}\frac{K_{\tau}}{2}\tau^{2},
\end{equation}
involving only $C$ monomers, with $K_{\tau}=20.0\text{ }
k_{B}T$.
\subsubsection{Chirality}
Chirality is introduced here via the term:
\begin{equation}
U_{chiral}=\frac{K_{\tau}}{2}(\tau-\tau_{0})^{2},
\end{equation}
applied to the quadruplets $S_{i}-C_i-C_{i+1}-S_{i+1}$
with $K_{\tau}=20.0\text{ } k_{B}T$.
\subsection{Procedure for Metropolis Monte Carlo simulations
in continuous space}
For simulations we used the in--house
Monte Carlo (MC) code named PolyPlus with the standard Metropolis
algorithm \cite{metropolis} and local monomer moves,
which represents a straightforward extension to a
more generic force (energy) field of the implementation
described by us in Ref.~\onlinecite{CorFunc}.
This was extensively used in the past and was quite successful
in tackling a wide range of problems for different heteropolymers
in solution.
Note that periodic boundary conditions were unnecessary
in the present study as we are dealing with an attractive
cluster; instead, the centre--of--mass of the system was
maintained at the origin.
First, single coarse--grained $\beta$-strands made of $N=11$ residues are
placed into a planar, antiparallel arrangement.
Starting from this initial conformation, systems of three different sizes
(namely composed of $M=7,\ 15$ and $45$ strands) have been studied,
using either the potential energy model $A$ or $B$ and
with varied values of the chirality parameter $\tau_{0}$.
Specifically, we ran simulations in which $\tau_{0}$ takes values
in the two sets $\{0.0,0.25,0.5,0.75,1.0\}$ and $\{0.0,0.1,0.2,0.3\}$
within the potential energy models $A$ and
$B$ respectively.
The difference between the two chosen sets is due to
the different way in which the bonded interactions are
implemented in both models.
Simulation times varied from $1\cdot10^{7}$ to $5\cdot10^{7}$
Monte Carlo sweeps.
About one fifth of that was required to achieve good equilibration,
which was carefully monitored by analysing the trends in the potential energy,
the radius of gyration of the tape, and the wave number $k$;
the remaining four fifths of the run time were production
sweeps used for sampling all observables. Thus, we were able to
achieve both a good equilibration and a good sampling statistics for the
observables of interest.
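To make the procedure concrete, a generic single sweep of local monomer moves with the Metropolis acceptance rule could be sketched as follows; this illustrates the standard algorithm with an assumed maximum displacement, and it is not the actual PolyPlus implementation:
\begin{verbatim}
import numpy as np

def metropolis_sweep(coords, energy_fn, step=0.2, rng=None):
    """One sweep of local single-monomer displacement moves.
    coords: (n, 3) array; energy_fn returns the total energy
    in units of k_B T; step is the maximum displacement."""
    rng = np.random.default_rng() if rng is None else rng
    for i in rng.permutation(len(coords)):
        old = coords[i].copy()
        e_old = energy_fn(coords)
        coords[i] = old + rng.uniform(-step, step, size=3)
        e_new = energy_fn(coords)
        # Metropolis rule: accept with probability min(1, e^{-dE}).
        if e_new > e_old and rng.random() >= np.exp(e_old - e_new):
            coords[i] = old          # reject: restore old position
    return coords
\end{verbatim}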
\section{\label{sec:defs}Definitions of some observables}
The circular helicoid is the minimal surface having a circular helix
as its boundary \cite{struik}.
It can be described in the parametric form by:
\begin{equation}\label{eq:helicoid}
\begin{array}{ccl}
x & = & u\cos(kv), \\
y & = & u\sin(kv), \\
z & = & v.
\end{array}
\end{equation}
The circular helicoid (see Fig.~\ref{fig:0}a)
can be swept out by moving a segment in space,
the length of this segment being equal to the length of the interval
(domain) over which the parameter $u$ is defined.
The corresponding circular helix can be defined in a similar way
(thick lines in Fig.~\ref{fig:0}a):
\begin{equation}\label{eq:helix}
\begin{array}{ccl}
x & = & \rho\cos(kv), \\
y & = & \rho\sin(kv), \\
z & = & v.
\end{array}
\end{equation}
Here, the fixed radius $\rho$ is related to the parameter $u$,
with $u \in [-\rho, \rho]$ and $k$ being defined as the {\it pitch wave number}
so that the {\it pitch} of a circular helix is $P=\frac{2\pi}{k}$.
The pitch wave number is, by convention, negative if the helix is
left--handed and positive in the opposite case.
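As a small illustration, points on the helix of Eq.~(\ref{eq:helix}) can be generated as follows (Python/NumPy sketch):
\begin{verbatim}
import numpy as np

def circular_helix(rho, k, z_values):
    """Points (x, y, z) on the circular helix of Eq. (eq:helix);
    k < 0 gives a left-handed helix, k > 0 a right-handed one."""
    z = np.asarray(z_values, dtype=float)
    return np.column_stack((rho * np.cos(k * z),
                            rho * np.sin(k * z), z))
\end{verbatim}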
In order to find a connection between the helicoid and the tapes
generated in our simulations, we require a consistent definition
of the segment the motion of which in space sweeps the surface.
For this purpose, we can state that each strand's backbone
lies along the vector
\begin{equation}
\bm{n}=[2\rho \cos(kz), 2\rho \sin(kz),z],
\end{equation}
where $\bm{z}$ is taken as the pitch axis of the tape.
Furthermore, we have to assume that $z$ varies in a discrete way
along the tape with $\Delta z=d$, where $d$ is the {\it
inter--strand distance} (i.e. the distance between
nearest--neighbor strands). This somewhat modifies the calculation of
the pitch, namely $P=\frac{2\pi d}{k}$.
Finally, three parameters are necessary to fully identify the vector
$\bm{n}$ and the circular helix which delimits the surface.
The instantaneous values of $\rho$ (the {\it tape radius}),
$k$ and $d$ are calculated by
taking the vector $\bm{x}_{ij}=(C_{2}^{i}-C_{10}^{j})$,
where $C_{l}^{i}$ is the position of monomer $C$ in the $l$th residue
within the $i$th strand (see Fig.~\ref{fig:0}b for details).
The use of the vector $\bm{x}_{ii}$ is justified
because the molecules behave essentially as rigid rods.
In more detail:
\begin{eqnarray}
\label{eq:rho}
\rho_{i} & = & \frac{\|\bm{x}_{ii}\|}{2}, \\
\label{eq:kappai}
k_{i,i+1} & = & \text{arccos}\Bigg(- \frac{\bm{x}_{ii}}
{\|\bm{x}_{ii}\|}\cdot \frac{\bm{x}_{i+1,i+1}}{\|\bm{x}_{i+1,i+1}\|}
\Bigg),\\
\label{eq:dd}
d_{i,i+1} & = & \frac{\|\bm{x}_{i,i+1} \cdot \bm{x}_{ii}
\times \bm{x}_{i+1,i+1}\|}{\|\bm{x}_{ii} \times \bm{x}_{i+1,i+1}\|}
\end{eqnarray}
Since this calculation of the parameter $k$ is based on a cosine,
it misses the sign of the twist.
To overcome this, an analogous measure related to a dihedral angle is needed.
The \emph{local dihedral angle} $(LDA)$ is defined as in Eq.~(\ref{eq:dihe})
with
\begin{eqnarray}
\label{eq:lda1}
\bm{r}_{12} & = & -\bm{x}_{ii}, \\
\label{eq:lda2}
\bm{r}_{32} & = & \bm{x}_{i,i+1}, \\
\label{eq:lda3}
\bm{r}_{34} & = & \bm{x}_{i+1,i+1}.
\end{eqnarray}
Monitoring the sign of this quantity gives information about
the handedness of the helical cluster.
Thus, the complete formula for the calculation of the pitch wave number is:
\begin{equation}\label{eq:kappa}
k_{i,i+1} = \mbox{sign}(LDA)\cdot\text{arccos}\Bigg(-
\frac{\bm{x}_{ii}}{\|\bm{x}_{ii}\|}\cdot
\frac{\bm{x}_{i+1,i+1}}{\|\bm{x}_{i+1,i+1}\|}
\Bigg).
\end{equation}
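Putting Eqs.~(\ref{eq:rho}), (\ref{eq:dd}) and (\ref{eq:kappa}) together, the per--pair analysis of two consecutive strands can be sketched as follows (Python/NumPy; an illustration of the formulas above, not the analysis code itself):
\begin{verbatim}
import numpy as np

def tape_parameters(x_ii, x_jj, x_ij):
    """Tape radius rho, signed pitch wave number k and
    inter-strand distance d for strands i and i+1.
    x_ii, x_jj: strand vectors; x_ij: inter-strand vector."""
    rho = np.linalg.norm(x_ii) / 2.0
    u = x_ii / np.linalg.norm(x_ii)
    v = x_jj / np.linalg.norm(x_jj)
    lda_sign = np.sign(np.dot(-x_ii, np.cross(x_ij, x_jj)))
    k = lda_sign * np.arccos(np.clip(-np.dot(u, v), -1.0, 1.0))
    cross = np.cross(x_ii, x_jj)
    d = abs(np.dot(x_ij, cross)) / np.linalg.norm(cross)
    return rho, k, d       # the pitch follows as P = 2*pi*d/k
\end{verbatim}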
In Tab.~\ref{tab:exp} we exhibit typical experimental values of the pitch
wave number $k$ for some of oligopeptide--based supramolecular clusters.
These values were obtained from atomic force microscopy and
transmission electron microscopy data.
\section{\label{sec:res}Results}
Tape structures in both models $A$ and $B$ appear
to be perfectly stable with single strands packed side by side along the
backbone axis. Therefore, the simplistic representation of the
hydrogen bonding adopted by us is successful in keeping the strands aligned.
With increasing values of the chirality parameter $\tau_0 \neq 0$,
the single strands acquire a somewhat regular twisted geometry.
The handedness and the magnitude of this twist have been
quantified by calculating the value of the dihedral angles
$\chi_{i}$ defined by the quadruplets
$S_{i}-C_{i}-C_{i+2}-S_{i+2}$ where \cite{shamov} $i=2,3,4,...,8$.
Right--handed twist and left--handed twist are associated with
positive and negative values of $\chi$ respectively.
The averaged values (over the production Monte
Carlo sweeps, over all but the terminal strands, and over the
7 different $i$) of these
dihedral values and the standard deviations are shown in Tabs.~\ref{tab:0a}
and \ref{tab:0b} for the models A and B respectively.
One can see a systematic increase of $\chi$ with the chirality
parameter $\tau_0$, irrespective of the model choice.
Increasing the chirality parameter $\tau_0$ leads also, as expected,
to a persistent macroscopic twist of the tapes, which monotonically increases
with the value of $\tau_{0}$.
The numerical measure of the handedness of this twist
could be expressed in terms of $LDA$ defined by Eqs.~(\ref{eq:dihe},%
\ref{eq:lda1},\ref{eq:lda2},\ref{eq:lda3}).
Fig.~\ref{fig:1} shows that, when chirality is introduced, the sign of LDA
becomes well--defined and that the absolute value of LDA increases
with $\tau_{0}$.
It is worthwhile to remark also, as a proof of the consistency of
our procedure, that the achiral structure has no well--defined
sign for $LDA$ (see Fig.~\ref{fig:1}).
Moreover, one can observe that after changing the sign
of the chirality parameter $\tau_{0}$ the
sign of $LDA$, and hence the handedness of the tape, will be
reversed (data not shown).
Figs.~\ref{fig:2},\ref{fig:3} also show averaged
equilibrium snapshots related
to the systems with different values of the chirality parameter $\tau_{0}$
in the models $A$ and $B$ respectively.
From these one can see how the structures change from flat into
increasingly twisted tapes as $\tau_{0}$ increases.
Next, we would like to compare our simulated structures with
the geometry of a left--handed circular helicoid, the definition for which was
given in Sec.~\ref{sec:defs}. Specifically, we are
interested in characterizing the circular helices which sweep
the boundaries of that surface.
The details of the calculation of the three parameters, $k$, $d$ and $\rho$,
which are necessary for connecting the \emph{idealized}
geometry with that of the simulated tapes, can also be found in
Sec.~\ref{sec:defs} and in Fig.~\ref{fig:0}.
These values can be calculated from the coordinates
of each pair of consecutive strands.
Clearly, the chains at the boundaries of the tape behave
in a somewhat different way from those buried inside.
More generally, despite the intra--molecular origin of chirality,
the conformation of single strands within a tape also depends on their
interactions with nearest--neighboring strands. As chirality is increased,
the deformation of the flat geometry of a single strand progresses.
Therefore, for fairly short tapes with small $M$ we could expect quite
significant finite size effects, leading to considerable deviations from
the regular geometrical surfaces.
Thus, we shall compute the average values and the standard deviations
of the quantities $k$, $d$ and $\rho$ over the span of the tape
(with the exception of the two terminal strands on both edges
to reduce the boundary effects), as well as over the production sweeps,
in order to understand to what extent they vary along the tape.
These values are presented in Tabs.~\ref{tab:1} and \ref{tab:2}
for the models $A$ and $B$ respectively.
The value of the helix radius $\rho$ does not seem to vary significantly
along the tape, which is reflected in a relatively small value
of its standard deviation $\sigma_{\rho}$. Evidently, the average
value of $\rho$ is essentially independent of the number of strands
$M$ or the chirality parameter $\tau_0$ as it is related to the
conformation of a single strand. While the standard deviation
of the distance between two strands $d$ is relatively large, we
do not observe any systematic dependencies of its values on either
the location within the tape or the value of the chirality
parameter $\tau_0$. A large value of $\sigma_d$ could be
attributed to the contribution of inter--molecular interactions
to the distances between nearest close--packed strands.
However, the pitch wave number $k$ is strongly dependent on the
location of the strands pair used in its calculation inside the tape,
which is especially striking for small systems made of $M=7$ and $M=15$
chains since they do not as yet complete a full turn of the helicoid.
The results of the calculations of the average and
standard deviation of the pitch wave number $k$ shown in Tab.~\ref{tab:1},
Tab.~\ref{tab:2} were thus obtained by taking only the three
central strands, which has the advantage that the results become
less sensitive to the edge--effects.
Clearly, as far as the pitch wave number is concerned, the central
area of tapes of different sizes $M$ behaves in a similar way
and numerically approaches the value of $k$ in the tape of $M=45$ strands.
This can also be seen from the histograms (probability
distributions) of $k$ in Fig.~\ref{fig:4}. Note that the location of
the peak of these distributions shifts somewhat to the right with increasing $M$.
The values of $k$ obtained over the studied range of the chirality
parameter $\tau_0$ correspond to the experimental
values of $k$ shown in Tab.~\ref{tab:exp}.
Thus, we need about $3-5\ k_B T$ for $K_{\tau}$ in our model
in order to obtain the highest of the known experimental values of $k$.
To check the quality of our parameters, we then performed,
for each system, a self--consistent fitting procedure, in which we
considered two data sets, $r_1$ and $r_2$,
which are related to the two respective edges
of the tape. These data sets comprised a sequence of
the average positions \cite{footnote1}
of the penultimate monomers in consecutive strands within the
tape. Note that the penultimate monomers were considered rather
than the end monomers to reduce the ``end effects''. Also, because
the strands were assembled in an antiparallel pattern, one typical
sequence of positions would comprise monomers
$C_{2}^{1}$, $C_{10}^{2}$, $C_{2}^{3}$, $C_{10}^{4}$, \dots, $C_{2}^{M-1}$.
These were fitted with a regular helix described by Eq.~(\ref{eq:helix})
sweeping the end of the regular helicoid with the
parameters $d$, $\rho$ and $k$ calculated from the simulation data.
The fitting procedure for each system produces, as the final output,
the two mean displacements $\langle\Delta\rangle_{r_1}$ and $\langle\Delta
\rangle_{r_2}$
between the \emph{regular} helix and the data sets $r_1$ and $r_2$ respectively.
Since the data sets $r_1$ and $r_2$ are statistically equivalent
we calculated the mean displacement of our points from the regular
helix, $\langle\Delta\rangle$, averaged over the two values.
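Schematically, this fit can be expressed as follows (a minimal Python sketch assuming SciPy and a tape whose pitch axis is already aligned with $z$; the full procedure also handles the axis orientation):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_regular_helix(points, rho, k, d):
    # points: (M-1) x 3 array of penultimate-monomer positions, one per
    # strand; the model helix advances by d in z and by the angle k per
    # strand, consistent with the pitch P = 2*pi*d/k.
    j = np.arange(len(points))
    def mean_disp(p):
        phi, x0, y0 = p      # phase and in-plane offset of the helix
        model = np.column_stack([rho * np.cos(k * j + phi) + x0,
                                 rho * np.sin(k * j + phi) + y0,
                                 j * d])
        return np.mean(np.linalg.norm(points - model, axis=1))
    res = minimize(mean_disp, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
    return res.fun           # the mean displacement <Delta>
\end{verbatim}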
As can be seen from Tab.~\ref{tab:3}, the resulting mean displacements
are relatively small compared to the size of the van der Waals radius
of various monomers for the systems made of $M=7$ and $M=15$.
Therefore, our overall procedure is quite satisfactory.
Note that the somewhat larger values of $\langle\Delta\rangle$ for the
systems made of $M=45$
strands can be related to the need for a better equilibration in this
largest of the studied systems, which was also seen from
monitoring the trends in the global observables such as the
squared radius of gyration of the tape.
Thus, overall, we conclude
that both of the potential models suggested here are successful in generating
chirality within a stable tape cluster.
\section{Conclusion}
In this paper we have proposed two coarse--grained models for
short peptides in an extended $\beta$--strand--like conformation.
We have also studied these strands self--assembled into
a supramolecular $\beta$-tape in the case of model oligopeptides
with identical side groups attached.
A fine tuned combination of Lennard--Jones potential terms was
successful in stabilizing the chains within such a
two--dimensional structure.
Chirality was then introduced on a molecular level, resulting in
a regular twist of the surface of the tape.
Within the two different models we have investigated
the effect of changing the values of the chirality parameter $\tau_{0}$.
As we only considered homogeneous
sequences yielding tapes with identical sides,
the equilibrium structures obtained at the end of the simulations
had a geometry of a circular helicoid with the pitch wave number
$k$ increasing linearly with $\tau_{0}$.
In model $A$ the chirality term is added as a simple asymmetric
contribution to the three--fold dihedral potential which is typically used
in coarse--grained models of $\beta$-sheets \cite{thiru1,thiru2}.
The analytical expressions and numerical strengths of the remaining
bonded interactions were taken akin to those of
fully atomistic force fields. Therefore, in essence, here we were introducing
chirality into a well--established potential energy model.
Model $B$, conversely, is more coarse--grained and relies on the
principles of the differential geometry of curves and surfaces
\cite{harris}. Importantly, in this model we still have obtained the
results comparable to those of the more detailed model $A$.
This establishes a degree of universality in the transfer of chirality
from the intra--molecular to the supramolecular level.
Despite the difference in the way chirality was introduced
in the two models, a macroscopic regular twist was generated
equivalently.
Both models could be easily extended to include hydrophobic/hydrophilic
and explicitly charged sidegroups, making the two sides of the tape
distinct, something we would like to study in the future.
Such an extended study of different coarse--grained
oligopeptide sequences of interest should allow us to describe
higher order self--assembled structures (ribbons, fibers and fibrils)
in detail, providing valuable insights for the experiment.
\begin{acknowledgments}
The authors would like to thank Professor Neville Boden,
Professor Alexei Kornyshev,
Professor Alexander Semenov, Dr Amalia Aggeli, Dr Colin Fishwick,
as well as our colleagues
Dr Ronan Connolly and Mr Nikolaj Georgi for useful discussions.
Support from the IRCSET basic research grant SC/02/226
is also gratefully acknowledged.
\end{acknowledgments}
Carnatic music can be seen as a set of protocols coalesced with a musical scale, whose reference point is the fundamental frequency of the rendering vocalist/instrument. A musical scale, paired with its unique set of protocols, is called a \textit{r\=aga}. Due to the oral traditions of Carnatic music, the renditions differ for every performer. Moreover, a single \textit{r\=aga} may contain many different forms of compositions, making their standardisation a tough task. As a result, analysis of Carnatic music poses a lot of non-trivial challenges. In this article, we aim to establish causal relationships between different \textit{M\=e\d{l}akarta r\=aga} and their \textit{Janya} in Carnatic music.
The research in Carnatic music analysis has been able to achieve \textit{r\=aga} classification \cite{madhusudhan2019deepsrgm}, using modern Transformer-based architectures. While \textit{r\=aga} classification is an interesting problem, the analysis of Carnatic music is not limited to that. Understanding the melody of a Carnatic composition \cite{koduri2011computational} is a non-trivial problem in itself. Other research problems include \textit{gamaka} and motif identification \cite{krishna2012carnatic} and tonic pitch identification \cite{salamon2012multipitch, gulati2014automatic, bellur2012knowledge}, wherein the cited articles present different approaches to the same problem. \cite{garani2019} shows a representation of Carnatic \textit{r\=aga} as Markov Chains. We have used a Markov chain representation inspired by the one introduced by Garani et al. in \cite{garani2019}.
Causal analysis of music is generally studied for its neurological and psychological effects. The Mozart Effect \cite{jenkins2001mozart, perlovsky2012mozart} is essentially the study of the cognitive function of the brain as a response to the usage of Mozart compositions as stimuli. \cite{van_Esch_2020} uses Granger Causality for inference of effective connectivity in brain networks for detecting the Mozart effect.
\subsection{Carnatic Music: A Primer}
\textit{Svataha Ranjayitihi S\'varaha} is a popular Sanskrit phrase, which means that the \textit{s\'vara} sounds sweet even without embellishments. Basically, \textit{s\'vara}\footnote{\textit{S\'vara} and \textit{r\=aga} are used here for both the singular and the plural, as the vernacular terms do not distinguish the two forms.} is a melodious sound. The foundations of the Indian system of music are built upon the \textit{Sapta-s\'vara}, i.e. $\{Sa, Ri, Ga, Ma, Pa, Dha, Ni\}$, also denoted as $\{S, R, G, M, P, D, N\}$. Each note is independent and has its own position, known as the \textit{S\'varasthana}.
The \textit{Abhinava R\=aga Manjari} defines the concept of \textit{\'sruti} as
\begin{quote}
\textit{N\=ityam G\=it\=opay\=ogitvam Bhigyeatvam Py\=atu, Lak\d{s}ae Proktam Supariy\=aptam Sa\.ng\=it \'Sritu Lak\d{s}a\d{n}a\.m}
\end{quote}
or, ``a certain distinctly identifiable sound that can be used in music is called \textit{\'sruti}''. Basically, \textit{\'sruti} refers to the position of the musical notes on the frequency spectrum, relative to the fundamental frequency. \textit{\'Sruti} is loosely analogous to the pitch of the sound. Usually, in Western music, the pitch of $A4 = 440~Hz$, also known as the concert pitch, is used as a reference point for tuning musical instruments \cite{Mueller15_FMP_SPRINGER}. In Indian classical music, the \textit{\'sruti} is tuned with reference to the tonic pitch of the lead singer/instrument. The central frequency of the tonic pitch is almost always more than an octave lower than $440~Hz$, and is different for every singer/instrument.
The concept of the \textit{r\=aga} is perhaps the crown jewel in the innovations of Indian classical music.
\textit{Brihaddesi} by M\=atanga Muni is the first text to define the concept of \textit{r\=aga}, according to which, it is a group of luminous notes with an integrated discipline of \textit{\'sruti} relationship, a power to steer the mind and evoke sentiment, and a built in capacity for infinite expansion \cite{ayyangar1972history}.
Furthermore, a \textit{r\=aga} is required to have at least four \textit{s\'vara}. \textit{R\=aga} can broadly be classified into \textit{Janaka} or \textit{Janya}. \textit{Janaka} or parent \textit{r\=aga} contain the \textit{Sapta-s\'vara} in both \textit{\=ar\=oha\d{n}a} and \textit{avar\=oha\d{n}a}, i.e., the ascending and the descending progressions respectively.
\begin{equation}
\label{eq: aroha}
\{S, R, G, M, P, D, N\}
\end{equation}
Equation \eqref{eq: aroha} shows the \textit{\=ar\=oha\d{n}a krama} of a \textit{Janaka} r\=aga. Equation \eqref{eq: avaroha} shows the \textit{avar\=oha\d{n}a krama} of the same \textit{r\=aga}.\footnote{Overdot on $S$ ($\dot{S}$) refers to the higher octave. Refer Section \ref{sec: auto_parse} for the details of the notations used.}
\begin{equation}
\label{eq: avaroha}
\{\dot{S}, N, D, P, M, G, R\}
\end{equation}
The \textit{s\'vara} in a \textit{Janaka r\=aga} must be present in a sequential order, as shown in Equation \eqref{eq: aroha} and \eqref{eq: avaroha}. The \textit{avar\=oha\d{n}a} must contain the same \textit{s\'vara} as the \textit{\=ar\=oha\d{n}a}. These are the essential qualities of a \textit{Janaka r\=aga}.
\textit{Janya r\=aga}, or ``child'' \textit{r\=aga}, are \textit{r\=aga} whose scale is derived from a \textit{M\=e\d{l}akarta} scale. These scales can be \textit{vakra} (zig-zag in pattern), or \textit{varja} (absence of certain notes).
\textit{Janya r\=aga} are derivative \textit{r\=aga}, and if all the \textit{s\'vara} are taken from the parent \textit{r\=aga}, it is known as the \textit{Up\=anga Janya}. \textit{R\=aga Ha\d{m}sadhwani} is an example of this kind of \textit{Janya r\=aga}. In case an \textit{anya s\'vara} (foreign note) is present, it is known as a \textit{Bh\=a\d{s}\=anga r\=aga}. \textit{R\=aga K\=ambh\=oji} is an example of a \textit{Bh\=a\d{s}\=anga r\=aga}.
A \textit{Bh\=a\d{s}\=anga r\=aga} can manifest in two ways. One, where the foreign note appears in the scale itself. The \textit{r\=aga Bhairavi} scale contains both $D_1$ and $D_2$, while its \textit{Janaka r\=aga Natabhairavi} contains only $D_1$. So, in this case, $D_1$ is a \textit{sv\=akiya s\'vara} (note that belongs to the \textit{Janaka} scale), while $D_2$ is the \textit{anya s\'vara}. The other way of manifestation is where the \textit{anya s\'vara} appears only in rendition/performance. For example, \textit{r\=aga Bilahari}, \textit{Janya} of \textit{r\=aga Dh\=ira\'sankar\=abhara\d{n}a\.m} (commonly referred to as \textit{{\'Sankar\=abhara\d{n}a\.m}}), employs $N_2$ in certain phrases during the rendition, whereas $N_3$ is the \textit{sv\=akiya s\'vara}.
The \textit{Janaka}, which are also described as the \textit{M\=e\d{l}akarta r\=aga}, were devised using the \textit{Katapay\=adi Sa\.nkhy\=a} \cite{carnaticBook} system, by Govindacharya in $17^{th}$ C. E. The 72 \textit{r\=aga} were devised employing a twelve-tonal scale, as shown in Table \ref{tab: twelve-tonal}.
The \textit{Janya} \textit{r\=aga} did not really originate from a \textit{M\=e\d{l}a}. Some of the \textit{Janya} \textit{r\=aga} are chronologically older than their \textit{M\=e\d{l}akarta} because the \textit{M\=e\d{l}akarta} were created after these \textit{Janya} \textit{r\=aga} were, similarly to how grammatical structures may have developed after the spoken language did. At that point, the \textit{M\=e\d{l}akarta} were named after the popular \textit{r\=aga} that were categorised under them.
\subsubsection{Compositional Structure}
Carnatic music is fundamentally based on compositions. Compositions are a reflection of a composer's musical prowess, and usually are structured to emphasize the true essence of a \textit{r\=aga}. There are various compositional formats in Carnatic music, including, but not limited to \textit{g\=ita\d{m}}, \textit{s\'varajati}, \textit{var\d{n}am}, \textit{kriti}, \textit{j\=ava\d{l}i}, \textit{padam} and \textit{till\=ana}. These vary in structure, size, style and features.
\textit{Kriti} is the most widely performed format in performances. It is structured in three main sections: the \textit{pallavi}, the \textit{anupallavi} and the \textit{cara\d{n}a}. Some derivatives of \textit{kriti} employ \textit{sama\d{s}\d{t}hi cara\d{n}a} or \textit{ci\d{t}\d{t}ai s\'vara} as well. Likewise, \textit{var\d{n}am} begins with the \textit{pallavi}, followed by an \textit{anupallavi, ci\d{t}\d{t}ai s\'vara, cara\d{n}a} and \textit{cara\d{n}a s\'vara}.
\subsection{Causal Analysis}
Causal analysis is a study of correlation and directional influence between different variables (as measured using time-series or sequences). The effect that a measurable perturbation in one variable (time-series/ sequence) has on the another variable helps in evaluating the causal influence that exists between the two variables or systems under study (or in a network of interacting variables). According to Judea Pearl, a pioneer in the field of causal inference, causality can be pictured as a ladder with three rungs \cite{pearl2018book}. The lowest rung is called \textit{association}, which is the process of finding correlations between variables/time-series sequences. Quite often, two highly correlated time-series may not be causally related at all, or might share a mutual cause in a third completely different variable/time series. To solve these conundrums, the middle rung of the ladder of causality, called \textit{intervention} is used. Intervention is the process of finding the effect that a measurable perturbation of one variable has on another in a multi-variate system. The highest rung in the ladder of causality is \textit{counterfactuality}. In simple terms, counterfactual reasoning asks the question ``{\it what if}'' a particular intervention was carried out (hypothetically) - and aims to {\it imagine} its repercussions or consequences to determine the directional causal linkage between two or more variables.
For example, consider three variables producing time-series measurements: $T_t, T_s$ and $T_c$, where $T_t$ contains data about the atmospheric temperature of a region for the past few weeks, $T_s$ represents the trends in ice-cream sales and $T_c$ represents the trends in electricity consumption per day. Let's say a non-decreasing trend is observed in all of them. Association will tell us that each pair is highly correlated. Let's say a natural intervention occurs, and it rains for the next week. As a result, $T_t$ now becomes a decreasing sequence. If this results in both $T_s$ and $T_c$ having decreasing trends, then we can say that $T_t$ is the causal series, whereas $T_s$ and $T_c$ are effects. Counterfactual reasoning suggests considering alternate hypothetical situations like doubling the price of ice-creams or increasing the energy charges. In the former case, $T_s$ might get affected, but $T_t$ does not get affected. Moreover, a perturbation in $T_t$ will still affect $T_s$. Similarly, in the latter case, $T_c$ might get affected, but that does not affect $T_t$. In this scenario too, a perturbation in $T_t$ will still affect $T_c$. This verifies that $T_t$ is actually the causal series.
The discovery of causal relationships between time-series sequences is useful in many scientific domains including but not limited to psychiatry, economics, neuroscience, climatology, medicine, physics and artificial intelligence. Refer \cite{kathpalia2021measuring} for a detailed report of these applications of causality.
\section{Methodology}
Several sets of \emph{M\=e\d{l}akarta Janaka--\textit{Janya}} \textit{r\=aga} pairs were chosen for this analysis. The chosen \textit{r\=aga} are listed in Table \ref{tab: raga_data}. The number of compositions found in each \textit{r\=aga}, derived from \cite{shivkumarwebsite}, has also been shown. In this article, we will represent a \textit{M\=e\d{l}akarta r\=aga} as $\mathcal{R}_{\eta}$
where $\eta$ stands for the \textit{M\=e\d{l}akarta} Index of that \textit{r\=aga} (also specified in the column \texttt{\textit{M\=e\d{l}akarta Sa\.nkhy\'a}} of Table \ref{tab: raga_data}). Similarly, a \textit{Janya r\=aga} will be represented as $\mathcal{R}_{\eta}^{(\alpha)}$, where $\eta$, as before, specifies the \textit{M\=e\d{l}akarta} Index of the corresponding \textit{M\=e\d{l}akarta r\=aga}, and $\alpha$ is the first letter of the \textit{r\=aga} name. For example, \textit{r\=aga Kharaharapriya}, which is the $22^{nd}$ \textit{M\=e\d{l}akarta r\=aga}, will be represented as $\mathcal{R}_{22}$, and \textit{r\=aga \=Abh\=ogi}, its \textit{Janya}, will be represented as $\mathcal{R}_{22}^{(a)}$.
$\mathcal{R}_{\eta}$ and $\mathcal{R}_{\eta}^{(\alpha)}$ are essentially sets that contain all the compositions in the corresponding \textit{r\=aga}. Hence, the cardinality $|\mathcal{R}_{\eta}|$ or $|\mathcal{R}_{\eta}^{(\alpha)}|$ equals the number of compositions in that set. This is what the \texttt{Number of Compositions} column in Table \ref{tab: raga_data} reports. Let $\textbf{C}$ be the set of all the compositions in the dataset. Then, $\textbf{C}$ is a super-set/container set of all the \textit{r\=aga}:
\begin{align}
\label{eq: superset}
\textbf{C} := \bigcup_{i \in \textbf{I}, \alpha \in \textbf{A}} \{\mathcal{R}_{i} \cup \mathcal{R}_{i}^{(\alpha)}\},
\end{align}
where $\textbf{I} := \{8, 15, 22, 28, 29, 65\}$ and $\textbf{A} := \{d, m, a, k, h\}$. Naturally, the cardinality $|\textbf{C}| = 57$.
The column \texttt{Identifier} of Table \ref{tab: raga_data} contains the tags that were used to uniquely identify a \textit{r\=aga} in the code-base. Naturally, it is inspired by the $\mathcal{R}_{\eta}$/$\mathcal{R}_{\eta}^{(\alpha)}$ naming convention.
Notice there does not exist a specific \textit{Janya r\=aga} for \textit{r\=aga Mecakaly\=a\d{n}i} (commonly referred to as \textit{Kaly\=a\d{n}i}) in Table \ref{tab: raga_data}. \textit{R\=aga Ha\d{m}sadhwani} is considered a \textit{Janya} of \textit{r\=aga \'Sa\.nkar\=abhara\d{n}a\.m}, but it can also be derived from \textit{r\=aga Kaly\=a\d{n}i}. \textit{R\=aga \'Sa\.nkar\=abhara\d{n}a\.m} employs the \textit{\'Suddha Madhyama} ($M_1$) while \textit{r\=aga Kaly\=a\d{n}i} employs the \textit{Prat\=i Madhyama} ($M_2$). Notice that both of those \textit{Madhyama} are missing from the \textit{r\=aga Ha\d{m}sadhwani} scale. This paves the way for the aforementioned classification.
\begin{table*}[!hbt]
\caption{The list of all the \textit{r\=aga} used in this analysis. The symbol $\hookrightarrow$ represents the \textit{\=ar\=oha krama} while the symbol $\hookleftarrow$ represents the \textit{avar\=oha krama}.}
\begin{tabular}{||>{\raggedleft}p{1.5cm}|c|l|P{1.5cm}|P{2cm}||}
\hline \hline
\textbf{Identifier} & \textbf{\textit{R\=aga}} & \textbf{Scale} & \textbf{Number of Compositions} & \textbf{\textit{M\=e\d{l}akarta Sa\.nkhy\'a}} \\
\hline \hline
\verb|8| & \textit{R\=aga Hanumat\=odi} & $\hookrightarrow$: $\{S, R_1, G_2, M_1, P, D_1,N_2, \dot{S}\}$ & $8$ & 8 \\
& & $\hookleftarrow$: $\{\dot{S},N_2, D_1, P, M_1, G_2, R_1, S\}$ & & \\\hline
\verb|8_d| & \textit{R\=aga Dhanyasi} & $\hookrightarrow$: $\{S, G_2, M_1, P, N_2, \dot{S}\}$ & $5$ & 8 \\
& & $\hookleftarrow$: $\{\dot{S},N_2, D_1, P, M_1, G_2, R_1, S\}$ & & \\\hline
\verb|15| & \textit{R\=aga M\=ayama\b{l}avagau\b{l}a} & $\hookrightarrow$: $\{S, R_1, G_3, M_1, P, D_1, N_3, \dot{S}\}$ & $6$ & 15 \\
& & $\hookleftarrow$: $\{\dot{S},N_3, D_1, P, M_1, G_3, R_1, S\}$ & & \\\hline
\verb|15_m| & \textit{R\=aga Malahari} & $\hookrightarrow$: $\{S, R_1, M_1, P, D_1, \dot{S}\}$ & $4$ & 15 \\
& & $\hookleftarrow$: $\{\dot{S}, D_1, P, M_1, G_3, R_1, S\}$ & & \\\hline
\verb|22| & \textit{R\=aga Kharaharapriya} & $\hookrightarrow$: $\{S, R_2, G_2, M_1, P, D_2, N_2, \dot{S}\}$ & $6$ & 22 \\
& & $\hookleftarrow$: $\{\dot{S}, N_2, D_2, P, M_1, G_2, R_2, S\}$ & & \\\hline
\verb|22_a| & \textit{R\=aga \=Abh\=ogi} & $\hookrightarrow$: $\{S, R_2, G_2, M_1, D_2, \dot{S}\}$ & $3$ & 22 \\
& & $\hookleftarrow$: $\{\dot{S}, D_2, M_1, G_2, R_2, S\}$ & & \\\hline
\verb|28| & \textit{R\=aga Harik\=ambh\=oji} & $\hookrightarrow$: $\{S, R_2, G_3, M_1, P, D_2, N_2, \dot{S}\}$ & $4$ & 28 \\
& & $\hookleftarrow$: $\{\dot{S}, N_2, D_2, P, M_1, G_3, R_2, S\}$ & & \\\hline
\verb|28_k| & \textit{R\=aga K\=ambh\=oji} & $\hookrightarrow$: $\{S, R_2, G_3, M_1, P, D_2, \dot{S}\}$ & $5$ & 28 \\
& & $\hookleftarrow$: $\{\dot{S}, N_2, D_2, P, M_1, G_3, R_2, S\}$ & & \\\hline
\verb|29| & \textit{R\=aga Dh\=ira\'sankar\=abhara\d{n}a\.m} & $\hookrightarrow$: $\{S, R_2, G_3, M_1, P, D_2, N_3, \dot{S}\}$ & $6$ & 29 \\
& & $\hookleftarrow$: $\{\dot{S}, N_3, D_2, P, M_1, G_3, R_2, S\}$ & & \\\hline
\verb|29_h| & \textit{R\=aga Ha\d{m}sadhwani} & $\hookrightarrow$: $\{S, R_2, G_3, P, N_3, \dot{S}\}$ & $4$ & 29 \\
& & $\hookleftarrow$: $\{\dot{S}, N_3, P, G_3, R_2, S\}$ & & \\\hline
\verb|65| & \textit{R\=aga Mecakaly\=a\d{n}i} & $\hookrightarrow$: $\{S, R_2, G_3, M_2, P, D_2, N_3, \dot{S}\}$ & $6$ & 65 \\
& & $\hookleftarrow$: $\{\dot{S}, N_3, D_2, P, M_2, G_3, R_2, S\}$ & & \\\hline
\end{tabular}
\label{tab: raga_data}
\end{table*}
\subsection{Data Pre-processing}
\label{sec: auto_parse}
Figure \ref{fig: representation} shows a typical way of annotating compositions of Carnatic music. An automatic parser was developed to parse these notations into symbolic sequences. The \textit{s\'vara} $\{S, R, G, M, P, D, N\}$ or $\{s, r, g, m, p, d, n\}$ (the difference between the capital and small letters is that the capitals (e.g. $S$) occupy one count whereas the smalls (e.g. $s$) occupy half a count) were denoted by indices $\{0, 1, \ldots, 6\}$ in a seven-tonal scale, and by indices from the set $\{0, 1, 2, \ldots, 11\}$ if a twelve-tonal scale was used, depending upon the notes used in the \textit{r\=aga}. The rests ($;$ and $,$) are given the index $\infty$. Table \ref{tab: seven-tonal} shows the seven-tonal representation of the indices. Table \ref{tab: twelve-tonal} depicts notes used across Carnatic music and their corresponding indices. Hence, $S$ in the higher octave (usually denoted as $\dot{S}$) will be denoted by the Index $7$ in the seven-tonal scale, and by Index $12$ in a twelve-tonal one, $\dot{R}$ will be $8$ or $13/14/15$ respectively (the latter depending upon the \textit{r\=aga}) and so on. Similarly, notes in the lower octave will have negative indices ($\underdot{N} \rightarrow -1/-2$ depending upon the \textit{r\=aga}).
\begin{table*}[h!]
\caption{The seven-tonal scale, as has been used in the scope of this article. The actual note selected is decided by the scale of the (chosen) \textit{M\=e\d{l}akarta r\=aga}}
\begin{tabular}{||l|c|c|c|c|c|c|c||}
\hline \hline
\textbf{Index} & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ \\ \hline
\textbf{Symbol} & $S$ or $s$ & $R$ or $r$ & $G$ or $g$ & $M$ or $m$ & $P$ or $p$ & $D$ or $d$ & $N$ or $n$ \\ \hline \hline
\end{tabular}
\label{tab: seven-tonal}
\end{table*}
For example, consider \textit{r\=aga} \textit{M\=ayama\b{l}avagau\b{l}a}, whose ``scale''\footnote{Both \textit{\=ar\=oha\d{n}a} and \textit{avar\=oha\d{n}a} \textit{krama} use the same notes.} can be defined with the notes \\$\{S, R_1, G_3, M_1, P, D_1, N_3\}$. In a seven-tonal scale, it will simply be represented as $\{0, 1, 2, 3, 4, 5, 6\}$, since it is a \textit{samp\=ur\d{n}a M\=e\d{l}akarta r\=aga}. However, in a twelve-tonal scale, it will be represented as $\{0, 1, 4, 5, 7, 8, 11\}$, where the indexing follows the scheme given in Table \ref{tab: twelve-tonal}. Now consider \textit{r\=aga Malahari}, the \textit{Janya} of \textit{r\=aga} \textit{M\=ayama\b{l}avagau\b{l}a}. As shown in Table \ref{tab: raga_data}, the \textit{\=ar\=oha} and \textit{avar\=oha} sequences are not equal. In a seven-tonal scale, the \textit{\=ar\=oha} will be $\{0, 1, 3, 4, 5\}$ and the \textit{avar\=oha} will be $\{0, 1, 2, 3, 4, 5\}$ (in the reverse order). In a twelve-tonal scale, they will be $\{0, 1, 5, 7, 8\}$ and $\{0, 1, 4, 5, 7, 8\}$. For the purpose of parsing such a scale, we simply use the larger set amongst \textit{\=ar\=oha} and \textit{avar\=oha}. The absence of notes in the latter will affect the probability of occurrence of those notes in the \textit{r\=aga}'s compositions.
\begin{table*}[h!]
\caption{The twelve-tonal scale in Carnatic music.}
\begin{tabular}{||l|c|c|c|c|c|c|c|c|c|c|c|c||}
\hline \hline
\textbf{Index} & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ \\ \hline
\textbf{Symbol} & $S$ & $R_1$ & $R_2$ & $R_3$ & & $M_1$ & $M_2$ & $P$ & $D_1$ & $D_2$ & $D_3$ & \\ \hline
& & & $G_1$ & $G_2$ & $G_3$ & & & & & $N_1$ & $N_2$ & $N_3$ \\ \hline \hline
\end{tabular}
\label{tab: twelve-tonal}
\end{table*}
The \textit{M\=e\d{l}akarta--Janya} pairs have been parsed in the twelve-tonal scale. For the causal discovery method that we employ~\cite{pranay2021causal}, the choice does not matter, as the \texttt{Lempel-Ziv} algorithm works with the total number of different symbols, and does not look at their magnitudes. However, it should be noted that a seven-tonal scale makes sense only when comparing \textit{M\=e\d{l}akarta}--\textit{Janya} \textit{r\=aga} pairs, wherein the scale of the latter is actually a subset of the former. Care must be taken that both a \textit{M\=e\d{l}akarta} and its \textit{Janya} have been parsed using the same scale. In short, as long as the compared sequences are parsed using the same scale, it works.
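For illustration, the mapping and the vocabulary construction can be sketched in a few lines of Python (the dictionary below simply transcribes Table~\ref{tab: twelve-tonal}; the symbol names are ours):
\begin{verbatim}
TWELVE_TONE = {"S": 0, "R1": 1, "R2": 2, "G1": 2, "R3": 3, "G2": 3,
               "G3": 4, "M1": 5, "M2": 6, "P": 7, "D1": 8, "D2": 9,
               "N1": 9, "D3": 10, "N2": 10, "N3": 11}

def raga_vocabulary(aroha, avaroha):
    # Union of the ascending and descending scales, as indices.
    return sorted({TWELVE_TONE[s] for s in aroha}
                  | {TWELVE_TONE[s] for s in avaroha})

# Raga Malahari (Janya of Mayamalavagaula):
aroha = ["S", "R1", "M1", "P", "D1"]
avaroha = ["S", "R1", "G3", "M1", "P", "D1"]
print(raga_vocabulary(aroha, avaroha))  # [0, 1, 4, 5, 7, 8]
\end{verbatim}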
\begin{figure*}[!hbt]
\centering
\includegraphics[scale=0.4]{images/composition-notation.png}
\caption{A typical Carnatic notation. This specific image is an excerpt from \textit{\'Sivak\=am\=e\'svari} by D\=ik\d{s}hit\=ar in the \textit{r\=aga} \textit{Kaly\=a\d{n}i}, taken from~\cite{shivkumarwebsite}.}
\label{fig: representation}
\end{figure*}
As Figure \ref{fig: representation} depicts, a typical Carnatic notation consists of the \textit{s\'vara} used, the \textit{s\=ahitya} (the lyrics of the composition), and the corresponding meaning of the \textit{s\=ahitya}. Also, to guide the reader about the rendition, an attempt is made to place a copy of the \textit{s\=ahitya} in line with the \textit{s\'vara} (Figure \ref{fig: representation} shows that). However, the \textit{s\=ahitya}, their corresponding meanings and the rendition techniques currently add no information to our analysis. Hence, only the \textit{s\'vara} part is considered as the input.
Another problem that we faced when analysing the notations obtained from \cite{shivkumarwebsite} was that the octave of a given note had to be inferred by the reader. The automatic identification of the octave of the note thus became a challenge. To get around this, we utilised the observation that note jumps with an index difference of more than seven semitones do not occur frequently in Carnatic music. Consider a note-event sequence belonging to some composition $\textbf{c} \in \textbf{C}$, and let $\|\textbf{c}\|$ denote the length of the sequence. Let $i \in [2:\|\textbf{c}\|-1]$. Now, if $|c_i - c_{i - 1}| > 7$ semitones\footnote{Seven semitones usually corresponds to five indices in the seven-tonal scale.}, then it is highly likely that $c_i$ was changed to the lower or the higher octave, depending upon the index of the note $c_i$. For $c_i \geq D$ (pitches at or above $D$), $c_i = c_i - x$ and for pitches lower than $D$, $c_i = c_i + x$, when an $x$-tonal scale is used, $x \in \{7, 12\}$. In case $c_i = \infty$ (i.e., it is a rest), this comparison with $c_{i-1}$ was not performed. In case $c_{i-1} = \infty$, the $c_i$ was compared with $c_{i-2}$ and so on. In essence, the fact that a typical Carnatic composition spans two octaves $[\underdot{P}, \dot{P}]$ is harnessed. So, when a high-index note (such as $D$ or $N$) is immediately followed by a low-index note (such as $S$ or $R$), it usually means that either the former is in a lower octave or the latter is in a higher octave. This heuristic occasionally led to inconsistencies in the parsed compositions, but it was retained as it tracked the octave changes properly.
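A minimal Python sketch of this octave-resolution heuristic is given below (shown for the twelve-tonal case with $D_1$ at index $8$; the actual parser works statefully on the full notation):
\begin{verbatim}
import math

REST = math.inf
D_INDEX = 8                     # D1 in the twelve-tonal scale

def resolve_octaves(seq, x=12):
    # Fold back jumps of more than seven semitones relative to the
    # last non-rest note; high-index notes drop an octave, low-index
    # notes rise one (a sketch of the rule described in the text).
    out = list(seq)
    prev = None                 # position of the last non-rest note
    for i, c in enumerate(out):
        if c == REST:
            continue
        if prev is not None and abs(c - out[prev]) > 7:
            out[i] = c - x if (c % x) >= D_INDEX else c + x
        prev = i
    return out

# e.g. ... D N S ... notated without octave marks:
print(resolve_octaves([8, 10, 0]))  # [8, 10, 12]: S moves up an octave
\end{verbatim}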
Each composition is structured according to a rhythmic cycle, known as the \textit{t\=a\d{l}a}. A few popular Carnatic \textit{t\=a\d{l}a} are \textit{\=Adi} (eight-beat rhythmic cycle), \textit{Kha\d{n}\d{d}a C\=apu} (five-beat rhythmic cycle), \textit{R\=upaka} (six-beat rhythmic cycle) and likewise. \textit{\=Avartana} in Carnatic music is one whole rhythmic cycle within a \textit{t\=a\d{l}a} (e.g. completion of eight beats in \textit{\=Adi t\=a\d{l}a} marks the completion of an \textit{\=avartana}). The use of the \textit{da\d{n}\d{d}\=a} ($|$) indicates the separation between the different divisions of the \textit{\=avartana}. The use of the ``double \textit{da\d{n}\d{d}\=a}'' ($||$) (\cite{carnaticBook}) in the notations marks the end of the \textit{\=avartana} of the \textit{t\=a\d{l}a} being used. In the parser, only the double \textit{da\d{n}\d{d}\=a} is used to track the number of measures encountered thus far.
The issue of octave resolution also makes it rather tough to distinguish between the \textit{\=ar\=oha} phrases and the \textit{avar\=oha} phrases. Consequently, for the final parse, a vocabulary built from the union of all the notes in the \textit{\=ar\=oha} and the \textit{avar\=oha} scales is ultimately used. For \textit{M\=e\d{l}akarta r\=aga}, this is not a problem, as both the \textit{\=ar\=oha krama} and the \textit{avar\=oha krama} use all the seven \textit{s\'vara}. In case of \textit{Janya r\=aga}, some \textit{s\'vara} might be missing from the \textit{\=ar\=oha}, while others might be missing from the \textit{avar\=oha}. Redundant \textit{s\'vara}, which may occur in some \textit{vakra prayoga}, are eliminated. In this case, a union of both is used as the vocabulary. For example, consider \textit{r\=aga Malahari} from Table \ref{tab: raga_data}. The vocabulary of \textit{r\=aga Malahari} will look like
\begin{align}
\label{eq: scale}
\{S, R_1, G_3, M_1, P, D_1\}.
\end{align}
The parsed notations are represented in three dimensions: the frequency space (the note index), the length space (relative note duration/number of beats per note) and the temporal space (the measure indices). Every note-event can thus be represented as
\begin{align}
\label{eq: 3d-space}
\Vec{n} \in \mathbf{S} \iff \Vec{n} &:= n_a \hat{a} + n_b \hat{b} + n_c \hat{c},
\end{align}
where $\hat{a}$, $\hat{b}$ and $\hat{c}$ are unit vectors in the directions of note-indices, number of counts and measure indices respectively. $(\mathbf{S})$ is a discrete three-dimensional vector-space with basis vectors $\hat{a}$, $\hat{b}$ and $\hat{c}$, capable of representing any song/composition from parsed notations.
It is quite likely that the Carnatic parser will encounter the occurrence of more counts than the \textit{\=avartana} count. For example, try counting the number of counts in each \textit{\=avartana} in Fig \ref{fig: representation}. Roughly half the time, this is the case. This is because the underlined notations (e.g. $\underline{dnsn}$) or even notations without spaces (e.g. $mg-$) are supposed to be counted as $1$ count. However, the parser is not developed enough to handle this. As of now, every symbol is given its designated duration, and if the total exceeds the \textit{\=avartana} counts, a normalization is done. Let $\textbf{n} := (\Vec{n_1}, \Vec{n_2}, \ldots, \Vec{n_k})$ represent a complete \textit{\=avartana} for some composition $c \in \textbf{C}$. Let $\theta$ be the number of beats per \textit{\=avartana}. If $\sum_{i=1}^k (n_i)_b > \theta$, then:
\begin{align}
\label{eq: beat-norm}
(n_i)_b' = \frac{(n_i)_b}{\sum_{i=1}^k (n_i)_b} \times \theta ~~~ \forall i \in [1:k]
\end{align}
where $(n_i)_b'$ denotes the new $\hat{b}$ component in every $\Vec{n_i}$.
For example, consider the phrase $G ~ \underline{dsns} ~ d ~\underline{pdpP,} ~ g ~ M ~ P~;||$. The specified \textit{t\=ala} is \textit{\=Adi}, so this should be $\theta = 8$ counts. However, the total is $1 + 0.5*4 + 0.5 + (0.5*4+1) + 0.5 + 1 + 1 + 1 = 10$. Then the original duration of every event in the sequence is divided by $10$ and then multiplied by $8$, so the new durations are $0.8 + 0.4*4 + 0.4 + (0.4*4+0.8) + 0.4 + 0.8 + 0.8 + 0.8 = 8$.
As a result, the parsed duration may not be equal to the notated duration. However, even in the real world, the note duration is adjusted during rendition, and hence often differs from the notated duration.
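The normalization of Equation \eqref{eq: beat-norm} is straightforward to express in code; the following minimal Python sketch reproduces the worked example above:
\begin{verbatim}
def normalize_avartana(durations, theta):
    # Rescale the note-event durations so that one avartana sums to
    # theta counts; applied only when the raw total overshoots.
    total = sum(durations)
    if total <= theta:
        return list(durations)
    return [b * theta / total for b in durations]

# The example from the text: raw counts summing to 10 in Adi tala.
raw = [1] + [0.5] * 4 + [0.5] + [0.5] * 3 + [1, 0.5] + [0.5, 1, 1, 1]
norm = normalize_avartana(raw, theta=8)
print(sum(raw), round(sum(norm), 6))  # 10.0 and 8.0
\end{verbatim}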
\subsubsection{Pre-processing for \textit{R\=aga K\=ambh\=oji}}
\label{sec: kambhoji}
\textit{R\=aga K\=ambh\=oji} ($\mathcal{R}_{28}^{(k)}$) is a prime example of a \textit{bh\=a\d{s}\=a\.nga r\=aga}, wherein a foreign note appears in the scale itself. As established earlier, it is hard to distinguish between an \textit{\=ar\=oha} and an \textit{avar\=oha} phrase. In \textit{r\=aga K\=ambh\=oji}, the foreign \textit{s\'vara} appears only as an extension of the \textit{avar\=oha}. Even if we add $N_3$ to the vocabulary, we need a way of differentiating between the two $N$ variants and deciphering them in the notations.
Fortunately, $N_3$ occurs only in the \textit{vakra prayoga}, which is a variation of $PND$ (e.g. $pnd$, $p;nd$, $p,n;,d$ etc). So, each time the parser encounters the \textit{s\'vara} $D$ or $d$ in a \textit{r\=aga K\=ambh\=oji} composition, two of the previously encountered \textit{s\'vara} are checked. In case the current $D$ (or $d$) follows an $N$ (or $n$), which is preceded by a $P$ (or $p$), the $N$ (or $n$) is the foreign $N_3$. This specific implementation was realized because the parser has knowledge of the past: i.e. it remembers all the notes it has encountered in the measure so far, but it lacks the knowledge of the future. When we encounter an $N$, we cannot check whether the next note in the sequence is $D$ or not.
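A minimal Python sketch of this lookback is shown below (rests are assumed to have been skipped when building the history, in line with the $p;nd$-type phrases above; the symbol labels are ours):
\begin{verbatim}
def is_anya_nishada(history):
    # On encountering a D in a raga Kambhoji composition, look back
    # two notes: in the vakra prayoga P-N-D the N is the foreign N3.
    return len(history) >= 2 and history[-1] == "n" and history[-2] == "p"

# ... inside the parsing loop, on seeing 'd':
history = ["m", "g", "p", "n"]
if is_anya_nishada(history):
    history[-1] = "n3"          # relabel the previous nishada as N3
print(history)                  # ['m', 'g', 'p', 'n3']
\end{verbatim}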
\subsection{Data Processing}
\label{sec: data_proc}
We can emulate the melody of the song by mapping the duration ($\hat{b}$) of every note-event to an integer and repeating that note that many times. Let \textbf{N} be the note-event sequence of a composition in $\textbf{C}$, such that $\textbf{N} := (\Vec{n_1}, \Vec{n_2}, \ldots, \Vec{n_K})$, where each $\Vec{n_i} \in \mathbf{S}$. Then we define a mapping $f: \mathbf{S} \rightarrow \mathbf{S}$ such that:
\begin{align}
f(\Vec{n}) &:= (n_a + 36) \hat{a} + \lceil n_b \cdot 480 \rceil \hat{b} + n_c \hat{c}.
\end{align}
Observe that the duration of every note-event gets multiplied by $480$. This was done because, commonly, the number of beats every note-event lasts for will be an inverse power of two, and in some cases, might be as small as $\frac{1}{32}$. However, other beat-lengths like \textbf{triplets} (that last for $\frac{1}{3}$ of a beat) or \textbf{fifth-beat} (that lasts for $\frac{1}{5}$ of a beat) are also encountered frequently. By multiplying the note-event duration by $480 = 32 \times 3 \times 5$, we hope to convert almost all the $n_b$'s to an integer. We might also encounter some special case scenarios (e.g., seventh-beats or derivatives of triplets and fifth-beats) in which case, just multiplying by $480$ would not convert $n_b$ to an integer. That is why the ceiling operation ($\lceil \cdot \rceil$) was used.
If we now repeat each $f(\Vec{n}) \cdot \hat{a}$ (which represents a note index) $f(\Vec{n}) \cdot \hat{b}$ times (which is now an integer), we then obtain a new sequence $N^\dagger$ which emulates the melody of the song. For example, consider three random note events $\Vec{n}_1, \Vec{n}_2, \Vec{n}_3$. Let $\Vec{n}_1 = 4\hat{a} + 0.5\hat{b} + 1\hat{c}$, $\Vec{n}_2 = 5\hat{a} + 1 \hat{b} + 5 \hat{c}$ and $\Vec{n}_3 = 0\hat{a} + 1\hat{b} + 2\hat{c}$. Then, with the contribution of $\Vec{n}_1, \Vec{n}_2, \Vec{n}_3$ (written here before the octave transposition described below), $N^\dagger = \{ \ldots, 4, 4, 4\ldots(240 \text{ times}), \ldots, 0, 0, 0\ldots(480 \text{ times}), \ldots, 5, 5, 5\ldots(480 \text{ times}), \ldots \}$. Equation \eqref{eq: n_dagger} shows how the final song might be represented. The song has been transposed by three octaves (by adding $36$ to $n_a$) because the causal discovery algorithm (implemented in the \texttt{ETCPy} library) does not support sequences with integers less than zero.
\begin{align}
\label{eq: n_dagger}
\mathbf{N}^\dagger = \{&\underbrace{f(\vec{n_1})_a, f(\vec{n_1})_a, \ldots}_{f(\vec{n_1})_b \text{ times}}, \underbrace{f(\vec{n_2})_a, f(\vec{n_2})_a, \ldots}_{f(\vec{n_2})_b \text{ times}}, \ldots,
\underbrace{f(\vec{n_K})_a, f(\vec{n_K})_a, \ldots}_{f(\vec{n_K})_b \text{ times}} \}
\end{align}
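A minimal Python sketch of this expansion (the note events below are arbitrary):
\begin{verbatim}
import math

def melody_sequence(events):
    # Expand note events (n_a, n_b, n_c) into the sequence N-dagger:
    # transpose the note index up three octaves and repeat it
    # ceil(480 * n_b) times.
    out = []
    for n_a, n_b, _n_c in events:
        out.extend([n_a + 36] * math.ceil(n_b * 480))
    return out

events = [(4, 0.5, 1), (0, 1.0, 2), (5, 1.0, 5)]
seq = melody_sequence(events)
print(len(seq))                 # 240 + 480 + 480 = 1200
print(seq[0], seq[-1])          # 40 41
\end{verbatim}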
\subsection{Causal Inference}
As highlighted earlier, the \texttt{ETCPy} library\footnote{Available on \url{https://github.com/pranaysy/ETCPy}, open source, with the Apache 2.0 licence.} has been used to compute the causality between pairs of sequences. The length of $N^\dagger$ varies from composition to composition. The lowest lengths $N_{min}$ for each of the chosen \textit{M\=e\d{l}akarta--Janya} pools are depicted in Table \ref{tab: surr_test_results}.
\begin{figure*}[!hbt]
\centering
\includegraphics[width=\textwidth]{images/raga_caus_65_nameshade.png}
\caption{Results of LZP causal analysis on $\{\mathcal{R}_{65} \cup \mathcal{R}_{29}^{(h)}\}$. The black background highlights the \textit{M\=e\d{l}akarta} compositions.}
\label{fig: caus_graph}
\end{figure*}
The \texttt{Lempel-Ziv-Penalty} (LZP for short, please see Appendix 1 for a detailed description) model has been used, which implies that the algorithm used for compressing the sequences is \texttt{LZ-76} \cite{lempel1976complexity}. We then find the causal direction between each pair and output the result as a Directed Acyclic Graph (DAG). The graph in Figure \ref{fig: caus_graph} depicts one such DAG. Precisely, it features a result with \textit{r\=aga Kaly\=a\d{n}i} as the \textit{M\=e\d{l}akarta} and \textit{r\=aga Ha\d{m}sadhwani} as its \textit{Janya}. It has a top-down structure. Hence, a node never connects to any other node above it, but may connect with multiple nodes below it. Moreover, the black shade represents compositions in the \textit{M\=e\d{l}akarta} \textit{r\=aga}.
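For reference, the exhaustive \texttt{LZ-76} parsing that underlies the compression step can be sketched in a few lines of Python (phrase counting only; the actual LZP causality computation is done with \texttt{ETCPy}):
\begin{verbatim}
def lz76_complexity(seq):
    # Number of phrases in the LZ-76 parsing: each phrase is the
    # shortest extension not yet seen as a substring of the preceding
    # text (overlap allowed). Symbols are mapped to characters.
    s = "".join(chr(65 + x) for x in seq)
    n, i, phrases = len(s), 0, 0
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

print(lz76_complexity([0, 1, 0, 1, 0, 1, 0, 1]))  # 3: periodic, compressible
print(lz76_complexity([0, 1, 1, 0, 1, 0, 0, 0]))  # 5: more irregular
\end{verbatim}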
\subsection{Surrogate Testing}
\label{sec: surr_test}
\subsubsection{Surrogate Data Generation}
Following the parsing step highlighted in Section \ref{sec: auto_parse}, we end up with a data stream that can be completely represented in the vector space $\textbf{S}$. This data-stream is fairly accurate in the $\hat{b}$ dimension, not so accurate in the $\hat{a}$ dimension and exact in the $\hat{c}$ dimension. The parser ends up covering $(\underdot{S}, \Ddot{S})$ in compositions, using between $19$ and $23$ symbols as vocabulary in the $\hat{a}$ (note-index) dimension of a single \textit{r\=aga}. As \cite{garani2019} shows, a Carnatic \textit{r\=aga} can be represented as a layered graph. However, due to inconsistencies in the $\hat{b}$ (duration) dimension, we cannot follow their exact approach.
Hence, using just the $\hat{a}$ (note-index) dimension, we formulate Markov Chains (MC) for each \textit{r\=aga}. The stationary distribution $\pi$ (calculated using the transition matrix of an order-1 MC) for several \textit{r\=aga} is shown in Fig \ref{fig: some_stationary_dists}. As we are not using the $\hat{b}$ (duration) and $\hat{c}$ (measure-index) dimensions for this, we coalesce all the compositions in $\mathcal{R}_{\eta}$ or $\mathcal{R}_{\eta}^{(\alpha)}$ to form one single sequence $\textbf{R} = (\Vec{r_1}, \Vec{r_2}, \ldots, \Vec{r_{\gamma}})$, where $\Vec{r_i} \in \textbf{S}$. This way, we are able to define the \textit{r\=aga}'s statistics. To save memory space, the transition probabilities are found as-is; i.e. the transition is added to the matrix only if it is not already there, or if it is, its frequency is updated. For example, consider an order-3 MC. One possible transition is $P\underdot{D}\dot{S} \rightarrow \dot{R}$. Even if it is quite likely that $p(\dot{R}|\dot{S}) >0$, the occurrence of the phrase $P\underdot{D}\dot{S}$ itself is very unlikely. As it never occurs, it is never added to the transition matrix ($\rho$). Consequently, we avoid a sparse transition matrix.
After calculating the required transition matrix using $\textbf{R}$, new compositions $\Phi := \{\phi_1, \phi_2, \ldots \phi_s\}$ (where $s$ is the number of surrogates) are generated, primarily using the transition probability matrix. Firstly, a ``key'' is uniformly selected at random from rows of $\rho$. In case of an order-1 MC, $\rho$ ends up being a square matrix. Also, in case of an order-1 MC:
\begin{align}
\label{eq: stationary_dist}
\pi = \pi \rho.
\end{align}
So, instead of using the uniform distribution, the stationary distribution ($\pi$) of the MC is used for key selection. Once the key $\kappa$ is selected, the next element is generated using $\rho[\kappa]$ as a weighted distribution. Let $\Vec{n_1}$ be the first element of some $\phi$. Then, $(n_1)_a \sim \rho[\kappa]$. After a valid $(n_1)_a$ generation, it is added to the end of $\kappa$ and one element is removed from the front of $\kappa$. Naturally, in case of order-1 MC, it is simply $\kappa = (n_1)_a$.
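The order-1 construction and sampling can be sketched as follows (a minimal Python/NumPy sketch; the random stand-in sequence replaces a parsed stream $\textbf{R}$, and every symbol is assumed to occur at least once as a predecessor):
\begin{verbatim}
import numpy as np

def order1_mc(seq, n_symbols):
    # Row-stochastic transition matrix of an order-1 Markov chain.
    rho = np.zeros((n_symbols, n_symbols))
    for a, b in zip(seq[:-1], seq[1:]):
        rho[a, b] += 1.0
    return rho / rho.sum(axis=1, keepdims=True)

def stationary(rho, iters=1000):
    # pi = pi * rho by power iteration (assumes an aperiodic chain).
    pi = np.full(rho.shape[0], 1.0 / rho.shape[0])
    for _ in range(iters):
        pi = pi @ rho
    return pi

def generate_notes(rho, pi, length, rng):
    # Key drawn from pi, successors drawn from the rows of rho.
    kappa = rng.choice(len(pi), p=pi)
    notes = [kappa]
    for _ in range(length - 1):
        kappa = rng.choice(len(pi), p=rho[kappa])
        notes.append(kappa)
    return notes

rng = np.random.default_rng(0)
seq = rng.integers(0, 5, size=2000)   # stand-in for a parsed raga
rho = order1_mc(seq, 5)
print(generate_notes(rho, stationary(rho), 10, rng))
\end{verbatim}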
The problem with order-2 or higher order MCs is that after several iterations, the generation seems to get stuck in an infinite loop of a single note. It is very likely that, for example, after several samples, all the new generations will be rests $;$. That is because $p(;|;;) > 0$ or maybe even $p(;|;;;;) > 0$, while $p(\text{any other element}|;;;;) \approx 0$. Consequently, order-1 MCs tend to give the best results.
As explained in Section \ref{sec: auto_parse}, the event durations obtained by the parser are not necessarily accurate. So, we define a new distribution:
\begin{align}
\label{eq: dur_dist}
\mathbbm{2} := 2^{-\lfloor \mathcal{N}^+(2, 1) \rfloor}
\end{align}
where $\mathcal{N}^+(2, 1)$ stands for the positive part of the normal distribution $\mathcal{N}(2, 1)$. So, $(n_1)_b \sim \mathbbm{2}$. As shown in Figure \ref{fig: dur_dists}, Equation \eqref{eq: dur_dist} gives out the values at which $(n_i)_b$ values peak.
The \textit{\=avartana} ($\tau$) count for $\phi$ is chosen uniformly at random from the set $\{6, 7, 8, 10, 12, 14, 16\}$. After several valid generations, if $|\sum_{i=t_1}^{t_2} (n_i)_b - \tau| \leq \epsilon$, then $(n_{t_2})_c = (n_{t_2 - 1})_c + 1$, where $(n_{t_1})_c = (n_{t_1 - 1})_c + 1$ (i.e. the last measure). $\epsilon$ is a small constant, which has been given the value $0.0078125 = 2^{-7}$ for efficient generation. This makes $\mathbbm{2}$ the limiting distribution; i.e. $\mathbbm{2}$ is the only distribution that might reject a generated sample, given it does not meet the aforementioned conditions. Sometimes, the generated sample $g \sim \mathbbm{2}$ is too high to fit in the \textit{\=avartana} counts. In that case, the entire note event $\vec{n}$ is deleted, and new values are obtained for $\vec{n}_a$ and $\vec{n}_b$. Initially, for $\Vec{n_1}$, $(n_1)_c = 0$.
In this manner, a thousand note events are generated for every $\phi$. The number of measures $\max n_c ~ \forall \Vec{n} \in \phi$ gets adjusted according to the selected \textit{\=avartana} count. Fig \ref{fig: surr_statdists} shows the stationary distribution of fifty surrogates as compared to the original stationary distribution of the corresponding \textit{r\=aga}.
\begin{figure*}[t]
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\columnwidth]{images/pi_8.png}
\caption{$\pi(\mathcal{R}_8)$}
\label{fig: stationary_8}
\end{subfigure}
\hspace{0.02\textwidth}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\columnwidth]{images/pi_15.png}
\caption{$\pi(\mathcal{R}_{15})$}
\label{fig: stationary_15}
\end{subfigure}
\hspace{0.03\textwidth}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\columnwidth]{images/pi_22.png}
\caption{$\pi(\mathcal{R}_{22})$}
\label{fig: stationary_22}
\end{subfigure}
\hspace{0.03\textwidth}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[width=\columnwidth]{images/pi_28.png}
\caption{$\pi(\mathcal{R}_{28})$}
\label{fig: stationary_28}
\end{subfigure}
\caption{The stationary distribution ($\pi$) for several \textit{r\=aga}. Note the singular point of high probability towards the end, which represents the rest. The other points indicate notes along approximately three octaves. The peaks are usually found in the central octave. The X-axis zero corresponds to the lowest pitch encountered in the \textit{r\=aga}.}
\label{fig: some_stationary_dists}
\end{figure*}
\begin{figure*}[!hbt]
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\columnwidth]{images/duration-comparison-8.png}
\caption{Overall frequency distribution of note-event durations for $\mathcal{R}_8$ (blue) and $\Phi(\mathcal{R}_{8})$ (orange).}
\label{fig: dur_dist_8}
\end{subfigure}
\hspace{0.1\textwidth}
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\columnwidth]{images/duration-comparison-28_k.png}
\caption{Overall frequency distribution of note-event durations for $\mathcal{R}_{28}^{(k)}$ (blue) and $\Phi(\mathcal{R}_{28}^{(k)})$ (orange).}
\label{fig: dur_dist_28_k}
\end{subfigure}
\hspace{0.1\textwidth}
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\columnwidth]{images/duration-comparison-15.png}
\caption{Overall frequency distribution of note-event durations for $\mathcal{R}_{15}$ (blue) and $\Phi(\mathcal{R}_{15})$ (orange).}
\label{fig: dur_dist_15}
\end{subfigure}
\caption{Frequency distributions of note-event durations obtained from the parser (blue) and from the surrogate generations (orange). Even though they do not follow the same distributions, notice that the blue ones peak at the inverse powers of 2.}
\label{fig: dur_dists}
\end{figure*}
\begin{figure*}[!hbt]
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.5\columnwidth]{images/22-surr-stationary.png}
\caption{$\pi(\mathcal{R}_{22})$ (blue) and $\pi(\Phi(\mathcal{R}_{22}))$ (orange). }
\label{fig: surr_stat_22}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.5\columnwidth]{images/8-surr-stationary.png}
\caption{$\pi(\mathcal{R}_{8})$ (blue) and $\pi(\Phi(\mathcal{R}_{8}))$ (orange). }
\label{fig: surr_stat_8_d}
\end{subfigure}
\caption{The stationary distribution of the specified \textit{r\=aga} compared to the stationary distribution of its surrogates.}
\label{fig: surr_statdists}
\end{figure*}
\subsubsection{Data Processing for Causal Analysis with Surrogate Data}
\label{sec: surr_data_proc}
The complete sequence (as shown in Equation \eqref{eq: n_dagger}) is used only if its length is the minimum throughout the set. Otherwise, a random sub-sequence of the minimum length is chosen from the given sequence. e.g. let $\textbf{c}_i ~ \in ~ \{\mathcal{R}_{29} \cup \mathcal{R}_{29}^{(h)}\}$, where $i \in [1:10]$. Let $N_{min} = \min(\|N^{\dagger}_1\|, \|N^{\dagger}_2\|, \ldots, \|N^{\dagger}_{10}\|)$. Then contiguous sub-sequences of length $N_{min}$ elements will be chosen uniformly at random from all $N^{\dagger}_i$ for the final comparison. This is important, as longer sequences have more opportunities to build a comprehensive CFG. Table \ref{tab: surr_test_results} also depicts the value of $N_{min}$ for every \textit{r\=aga} pool. If we choose $N_{min}$ to be constant over all the pools, then an insight about the overall complexity of compositions in the pool can be gathered by observing the time taken for compression ($\overline{t}_{calc}^{(n)}$, where $n$ is the number of surrogates.)
As a result, we end up introducing non-determinism into the procedure, for we do not control which sub-sequence gets selected. Consequently, the results of LZP causality differ across iterations. In Table \ref{tab: surr_test_results}, the rows with number of surrogates $s = 0$ show the results of causal analysis amongst the compositions of the dataset (refer Table \ref{tab: raga_data}) using this methodology.
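A minimal Python sketch of this trimming step (the random start index is precisely the source of the run-to-run variability):
\begin{verbatim}
import numpy as np

def equal_length_pool(sequences, rng):
    # Trim every melody sequence in a raga pool to the pool minimum
    # N_min by drawing a contiguous sub-sequence at random.
    n_min = min(len(s) for s in sequences)
    trimmed = []
    for s in sequences:
        start = rng.integers(0, len(s) - n_min + 1)
        trimmed.append(s[start:start + n_min])
    return trimmed, n_min

rng = np.random.default_rng(2)
pools = [list(range(100)), list(range(150)), list(range(120))]
trimmed, n_min = equal_length_pool(pools, rng)
print(n_min, [len(t) for t in trimmed])  # 100 [100, 100, 100]
\end{verbatim}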
\section{Results}
In Tables \ref{tab: surr_test_results} and \ref{tab: surr_res_unif}, $\mathcal{E}_{s}$, where $s$ is the number of surrogates generated, stands for the set of edges connecting a \textit{M\=e\d{l}akarta} composition and a \textit{Janya} composition. The results were calculated over \textbf{10} iterations, and as expected, they were not the same at every iteration. This is because:
\begin{enumerate}
\item Different minimum-length sequences get selected at every iteration.
\item Different surrogates are generated at every iteration.
\end{enumerate}
$N_{min}$ was defined previously in Section \ref{sec: surr_data_proc}, and it defines the length of all the sequences in the given \textit{r\=aga} pool.
A correct connection is an edge that points from a \textit{M\=e\d{l}akarta} composition to a \textit{Janya} composition. Then $\mathcal{E}' \subseteq \mathcal{E}$ is the subset of edges that point correctly. The mean cardinality of this set over 10 iterations is depicted as $\overline{\mathcal{E}'_{s}}$ in Table \ref{tab: surr_test_results}.
The quantity $\overline{\mathcal{E}'_{s}}\%$ in Tables \ref{tab: surr_test_results} and \ref{tab: surr_res_unif} represents the \textbf{causal accuracy}: it is the percentage of the ratio of $\overline{\mathcal{E}'_{s}}$ and $\mathcal{E}_{s}$.
\begin{align}
\label{eq: causal_accuracy}
\overline{\mathcal{E}'_{s}}\% = \frac{\overline{\mathcal{E}'_{s}}}{\mathcal{E}_{s}} \times 100\%
\end{align}
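For completeness, a minimal Python sketch of this accuracy computation on a hypothetical edge list (the node labels are ours):
\begin{verbatim}
def causal_accuracy(edges, melakarta):
    # Percentage of Melakarta--Janya edges pointing the right way.
    # edges: (source, target) pairs from the DAG;
    # melakarta: the set of Melakarta composition ids.
    cross = [(u, v) for u, v in edges
             if (u in melakarta) != (v in melakarta)]
    correct = sum(1 for u, v in cross if u in melakarta)
    return 100.0 * correct / len(cross) if cross else 0.0

edges = [("m1", "j1"), ("j2", "m2"), ("m1", "j2")]
print(causal_accuracy(edges, {"m1", "m2"}))  # ~66.7 (2 of 3 correct)
\end{verbatim}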
The vector $\textbf{t}_{gen}^{(s)}$ contains the amount of time it takes to generate $s$ surrogates for each $\mathcal{R}_{\eta_1}$ and $\mathcal{R}_{\eta_2}^{(\alpha)}$ pair at every iteration. This value is expected to be stochastic in nature, primarily controlled by $\mathbbm{2}$ (Equation \eqref{eq: dur_dist}). The average over 10 iterations is depicted as $\overline{\textbf{t}}_{gen}^{(s)}$ in Table \ref{tab: surr_test_results}. The vector $\textbf{t}_{calc}^{(s)}$ contains the amount of time it takes to calculate the causality of the set $\{\mathcal{R}_{\eta} \cup \mathcal{R}_{\eta}^{(\alpha)}\}$ at every iteration. The average over 10 iterations is depicted as $\overline{\textbf{t}}_{calc}^{(s)}$. Note that all the experiments were performed on AMD's Ryzen 9 3950x, wherein the surrogate generation was done on a single thread, and all the 32 threads were used for causal discovery.
\begin{table*}[!h]
\caption{Results of Causal Analysis}
\begin{tabular}{||>{\raggedright}p{1cm}|c|c|c|c|c|c||}
\hline \hline
\textbf{\textit{R\=aga} pool} & $\{\mathcal{R}_{8} \cup \mathcal{R}_{8}^{(d)}\}$ & $\{\mathcal{R}_{15} \cup \mathcal{R}_{15}^{(m)}\}$ & $\{\mathcal{R}_{22} \cup \mathcal{R}_{22}^{(a)}\}$ & $\{\mathcal{R}_{28} \cup \mathcal{R}_{28}^{(k)}\}$ & $\{\mathcal{R}_{29} \cup \mathcal{R}_{29}^{(h)}\}$ & $\{\mathcal{R}_{65} \cup \mathcal{R}_{29}^{(h)}\}$ \\ \hline
$N_{min}$ & $84960$ & $43680$ & $112240$ & $98556$ & $84960$ & $114620$ \\ \hline
$\mathcal{E}_{0}$ & $40$ & $24$ & $18$ & $20$ & $24$ & $24$ \\ \hline
$\overline{\mathcal{E}'_{0}}$ & $10.4$ & $23.38$ & $16.72$ & $4.48$ & $21.58$ & $18.78$\\ \hline
$\overline{\mathcal{E}'_{0}}\%$ & $21.66$ & $97.04$ & $92.88$ & $22.4$ & $89.91$ & $78.25$\\ \hline
$\mathcal{E}_{50}$ & $3248$ & $3024$ & $2968$ & $2970$ & $3024$ & $3024$ \\ \hline
$\overline{\mathcal{E}'_{50}}$ & $340.1$ & $2389.1$ & $2083.8$ & $1214.3$ & $2630.6$ & $1927.2$ \\ \hline
$\overline{\mathcal{E}'_{50}}\%$ & $10.47$ & $79.0$ & $70.21$ & $40.89$ & $86.99$ & $63.73$ \\ \hline
$\mathcal{E}_{100}$ & $11448$ & $11024$ & $10918$ & $10920$ & $11024$ & $11024$ \\ \hline
$\overline{\mathcal{E}'_{100}}$ & $976$ & $8522.7$ & $7798.3$ & $4559.4$ & $9934.1$ & $7687$ \\ \hline
$\overline{\mathcal{E}'_{100}}\%$ & $8.53$ & $77.31$ & $71.43$ & $41.75$ & $90.11$ & $71.39$\\
\hline \hline
\end{tabular}
\label{tab: surr_test_results}
\end{table*}
As Table \ref{tab: surr_test_results} shows, $N_{min}$ plays an important role in the results. A larger value of $N_{min}$ presents LZ with a longer data stream, which results in a more accurate compression; however, it also directly affects $\overline{\textbf{t}}_{calc}^{(s)}$. The next logical step was to use a constant $N_{min}$ for all the \textit{r\=aga} pools. The results are depicted in Table \ref{tab: surr_res_unif}. Now the $\overline{\textbf{t}}_{calc}^{(s)}$ are ordered differently. The \textit{r\=aga} pool $\{\mathcal{R}_{8} \cup \mathcal{R}_{8}^{(d)}\}$ (\textit{Hanumat\=odi--Dhanyasi}) is the most ancient pool of the lot, and effectively contains some of the most complex compositional forms, so it is natural that the Lempel--Ziv algorithm spends the most time on that pool. Consequently, $\overline{\textbf{t}}_{calc}^{(s)}$ is a good indicator of the overall complexity of a \textit{r\=aga} pool. Moreover, $\overline{\mathcal{E}}'_{s}$ in Table \ref{tab: surr_res_unif} shows the effect of the less accurate compression by LZ.
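Since the compression time tracks how quickly the Lempel--Ziv parsing discovers new phrases, the quantity underlying this complexity reading is essentially the LZ76 phrase count. A reference implementation is sketched below; it is our own illustrative version (note indices mapped to characters), not the optimized routine used in the experiments:
\begin{verbatim}
def lz76_phrases(note_indices):
    """Count LZ76 phrases of a sequence of small non-negative
    integers (map rests to a finite sentinel index first)."""
    s = ''.join(chr(x) for x in note_indices)
    n, i, phrases = len(s), 0, 0
    while i < n:
        k = 1
        # Grow the phrase while s[i:i+k] still occurs inside the
        # prefix ending one symbol before the current endpoint.
        while i + k <= n and s.find(s[i:i + k], 0, i + k - 1) >= 0:
            k += 1
        phrases += 1
        i += k
    return phrases
\end{verbatim}
A structurally richer pool yields a higher phrase count per symbol, and correspondingly a longer $\overline{\textbf{t}}_{calc}^{(s)}$.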
\begin{table*}[!h]
\centering
\caption{Results of Causal Analysis, with $N_{min} = 43680$ for all the \textit{r\=aga} pools}
\begin{tabular}{||>{\raggedright}p{1cm}|c|c|c|c|c|c||}
\hline \hline
\textbf{\textit{R\=aga} pool} & $\{\mathcal{R}_{8} \cup \mathcal{R}_{8}^{(d)}\}$ & $\{\mathcal{R}_{15} \cup \mathcal{R}_{15}^{(m)}\}$ & $\{\mathcal{R}_{22} \cup \mathcal{R}_{22}^{(a)}\}$ & $\{\mathcal{R}_{28} \cup \mathcal{R}_{28}^{(k)}\}$ & $\{\mathcal{R}_{29} \cup \mathcal{R}_{29}^{(h)}\}$ & $\{\mathcal{R}_{65} \cup \mathcal{R}_{29}^{(h)}\}$ \\ \hline
$\overline{\mathcal{E}'_{50}}$ & $529.3$ & $2358.6$ & $1866$ & $1337.2$ & $2376$ & $1716.9$ \\ \hline
$\overline{\textbf{t}}_{gen}^{(50)}$ & $94.12$ & $51.17$ & $43.46$ & $61.03$ & $116.35$ & $56.66$ \\ \hline
$\overline{\textbf{t}}_{calc}^{(50)}$ & $197.82$ & $151.8$ & $163.07$ & $183.0$ & $172.83$ & $160.8$ \\ \hline
$\overline{\mathcal{E}'_{50}}\%$ & $16.3$ & $78$ & $62.87$ & $45.02$ & $78.57$ & $56.78$ \\ \hline
$\overline{\mathcal{E}'_{100}}$ & $1568.2$ & $8232.8$ & $6749.5$ & $4657.4$ & $8332.0$ & $5957.2$ \\ \hline
$\overline{\textbf{t}}_{gen}^{(100)}$ & $160.90$ & $127.86$ & $139.93$ & $182.30$ & $157.9$ & $115.58$ \\ \hline
$\overline{\textbf{t}}_{calc}^{(100)}$ & $713.93$ & $557.18$ & $601.73$ & $671.83$ & $631.51$ & $589.89$\\ \hline
$\overline{\mathcal{E}'_{100}}\%$ & $13.7$ & $74.68$ & $61.82$ & $41.83$ & $75.58$ & $54.0$ \\
\hline \hline
\end{tabular}
\label{tab: surr_res_unif}
\end{table*}
A homogeneous pattern can be observed in the results depicted across Tables \ref{tab: surr_test_results} and \ref{tab: surr_res_unif}. The causal inference patterns observed from these results are tabulated in Table \ref{tab: inferences}. The overall inference does not add up for $\{\mathcal{R}_{8} \cup \mathcal{R}_{8}^{(d)}\}$ (the \textit{r\=aga Hanumat\=odi--Dhanyasi} pair) and $\{\mathcal{R}_{28} \cup \mathcal{R}_{28}^{(k)}\}$ (the \textit{r\=aga Harik\=ambh\=oji--K\=ambh\=oji} pair). However, four out of the six chosen \textit{Janaka--Janya} pairs show the desired result: the \textit{Janaka r\=aga} is a structural cause of the \textit{Janya r\=aga}.
\begin{table*}[!h]
\centering
\caption{Causal inference observed from the results depicted in Table \ref{tab: surr_test_results} and Table \ref{tab: surr_res_unif}. The arrow shows the causal direction, i.e. the $\mathcal{R}$ towards which the arrow points is the effect while the $\mathcal{R}$ from which it originates is the cause. }
\begin{tabular}{||>{\raggedright}p{1cm}|c|c|c|c|c|c||}
\hline
\textbf{\textit{R\=aga} pool} & $\{\mathcal{R}_{8} \cup \mathcal{R}_{8}^{(d)}\}$ & $\{\mathcal{R}_{15} \cup \mathcal{R}_{15}^{(m)}\}$ & $\{\mathcal{R}_{22} \cup \mathcal{R}_{22}^{(a)}\}$ & $\{\mathcal{R}_{28} \cup \mathcal{R}_{28}^{(k)}\}$ & $\{\mathcal{R}_{29} \cup \mathcal{R}_{29}^{(h)}\}$ & $\{\mathcal{R}_{65} \cup \mathcal{R}_{29}^{(h)}\}$ \\ \hline
& $\{\mathcal{R}_{8} \leftarrow \mathcal{R}_{8}^{(d)}\}$ & $\{\mathcal{R}_{15} \rightarrow \mathcal{R}_{15}^{(m)}\}$ & $\{\mathcal{R}_{22} \rightarrow \mathcal{R}_{22}^{(a)}\}$ & $\{\mathcal{R}_{28} \leftarrow \mathcal{R}_{28}^{(k)}\}$ & $\{\mathcal{R}_{29} \rightarrow \mathcal{R}_{29}^{(h)}\}$ & $\{\mathcal{R}_{65} \rightarrow \mathcal{R}_{29}^{(h)}\}$ \\ \hline
\hline
\end{tabular}
\label{tab: inferences}
\end{table*}
\section{Discussion}
Mathematically, it is possible to obtain $100\%$ causal accuracy for any \textit{Janaka--Janya r\=aga} pair. However, that case would undermine the very reason for the existence of the \textit{Janya r\=aga}, especially since we are comparing different sections of the compositions and using textual inputs. The \textit{Janya r\=aga} exist because they have a unique flavour and express a different mood compared to the corresponding \textit{M\=e\d{l}akarta}. In some cases, they might exhibit a unique combination of two different \textit{M\=e\d{l}akarta} (e.g. \textit{r\=aga Dhanyasi}, as explained later).
In the case of $\{\mathcal{R}_{28} \cup \mathcal{R}_{28}^{(k)}\}$ (the \textit{r\=aga Harik\=ambh\=oji--K\=ambh\=oji} pair), \textit{r\=aga K\=ambh\=oji} is a \textit{Bh\=a\d{s}\=anga r\=aga}. The presence of the \textit{K\=akal\=i Ni\d{s}\=ada} ($N_3$) along with the \textit{Kai\d{s}ik\=i Ni\d{s}\=ada} ($N_2$) in \textit{r\=aga K\=ambh\=oji} affects the context free grammar of the \textit{r\=aga}. The notations do not explicitly distinguish between the two, but, as shown in Section \ref{sec: auto_parse}, they can be detected through the occurrence of certain phrases. The two $Ni$'s are naturally represented by different note indices ($N_2 \rightarrow 10$, $N_3 \rightarrow 11$) on the twelve-tone scale.
When the $Ni$'s were not explicitly mined, the results for $\{\mathcal{R}_{28} \cup \mathcal{R}_{28}^{(k)}\}$ (the \textit{r\=aga Harik\=ambh\=oji--K\=ambh\=oji} pair) were $\overline{\mathcal{E}'_{0}}\% \approx 44\%$, $\overline{\mathcal{E}'_{50}}\% \approx 32\%$ and $\overline{\mathcal{E}'_{100}}\% \approx 36\%$. By parsing the $N_3$ explicitly, we are effectively adding a symbol to the grammar of $\mathcal{R}_{28}^{(k)}$ which does not exist in $\mathcal{R}_{28}$. In an ideal scenario, one could expect a further decrease in the causal accuracies ($\overline{\mathcal{E}'_{n}}\%$). However, we observe a slight increase overall (refer Table \ref{tab: surr_test_results}). This means that \textit{r\=aga K\=ambh\=oji} compositions can be shown to be derived from \textit{r\=aga Harik\=ambh\=oji} if we sieve off the \textit{bh\=a\d{s}\=a\.nga} phrases.
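For concreteness, the sieving experiment can be emulated as below. The fixed-length phrase window is a simplification we adopt only for this sketch; the actual phrase boundaries come from the parser of Section \ref{sec: auto_parse}:
\begin{verbatim}
N2, N3 = 10, 11  # Kaisiki and Kakali Nishada on the 12-tone index

def sieve_bhashanga(note_indices, foreign=(N3,), window=8):
    """Drop every phrase touching a foreign (bhashanga) note,
    keeping only material compatible with the Melakarta scale."""
    kept = []
    for start in range(0, len(note_indices), window):
        phrase = note_indices[start:start + window]
        if not any(x in foreign for x in phrase):
            kept.extend(phrase)
    return kept
\end{verbatim}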
Another possible reason for the lower causality on $\{\mathcal{R}_{28} \cup \mathcal{R}_{28}^{(k)}\}$ is that \textit{r\=aga K\=ambh\=oji} is an ancient \textit{r\=aga}, which has been in usage since long before its \textit{Janaka}, \textit{r\=aga Harik\=ambh\=oji}. As a result, \textit{r\=aga K\=ambh\=oji} has experienced more musicological experimentation than its \textit{M\=e\d{l}akarta}.
But that is not the case with $\{\mathcal{R}_{8} \cup \mathcal{R}_8^{(d)}\}$ (the \textit{r\=aga Hanumat\=odi--Dhanyasi} pair). \textit{R\=aga T\=odi} (here known as \textit{r\=aga Hanumat\=odi}) is an ancient \textit{r\=aga} that has been extensively experimented upon by composers of every era. However, observe the \textit{\=ar\=oha} sequence of $\mathcal{R}_{8}^{(d)}$ (\textit{r\=aga Dhanyasi}) in Table \ref{tab: raga_data}: it might as well be derived from $\mathcal{R}_{22}$ (\textit{r\=aga Kharaharapriya}). In fact, \textit{r\=aga Dhanyasi} uses the \textit{\=ar\=oha} sequence of \textit{r\=aga \'Suddha Dhanyasi}, a \textit{Janya} of \textit{r\=aga Kharaharapriya}, and the \textit{avar\=oha} sequence of \textit{r\=aga Hanumat\=odi}. Hence, LZP might struggle on $\{\mathcal{R}_{8} \cup \mathcal{R}_{8}^{(d)}\}$ because \textit{r\=aga Dhanyasi} is a complex combination of two differently coloured \textit{r\=aga}. Consequently, LZP sees \textit{r\=aga Hanumat\=odi} as a ``purer/simpler'' scale and often ends up predicting it as an effect rather than a cause. Also, \textit{r\=aga T\=odi} and \textit{r\=aga Dhanyasi} are complex \textit{r\=aga}, and their renditions usually require heavy usage of embellishments and ornamentation, which are not accurately annotated.
Moreover, not only $\mathcal{R}_{8}$ (\textit{r\=aga Hanumat\=odi}) but almost all the \textit{r\=aga} above suffer from representational limitations, which are explained in detail below.
\subsection{Representational Limitations}
\label{sec: rep_lim}
Representational limitations refer to the shortcomings of using text as a representation. A primary example is the \textit{gamaka}: complex ornamented glides that revolve around the central frequency of a \textit{s\'vara}. There are \textit{da\'savidha gamaka}, i.e. ten different styles of \textit{gamaka}, and there are massive differences in their rendering style amongst different traditions. Naturally, they are very difficult to annotate. As such, text representations of Carnatic compositions seldom contain any \textit{gamaka} references; these musical embellishments are usually taught orally to the students of Carnatic music. In many \textit{r\=aga} of Carnatic music, \textit{gamaka} are inherent to the very existence of the \textit{r\=aga}. For example, \textit{r\=aga Sah\=an\=a} cannot exist without its \textit{gamaka}.
One of the reasons why the \textit{r\=aga Hanumat\=odi--Dhanyasi} pair $\{\mathcal{R}_{8} \cup \mathcal{R}_{8}^{(d)}\}$ might fail is that the text representation of compositions does not capture \textit{gamaka}. \textit{R\=aga Hanumat\=odi}, more popular by the name \textit{r\=aga T\=odi}, gives importance to every note: in typical \textit{r\=aga T\=odi} renditions, every note has its own peculiarity and unique usage in each phrase. Representing these peculiarities in text is a tough task.
Another representational limitation is that, while using text notations, distinguishing between the \textit{\=ar\=oha} and \textit{avar\=oha} phrases becomes difficult. Quite often, text representations are not presented in the \texttt{UTF-8} rich-text format, as a result of which $\dot{S}$ gets represented just as $S$, and octave resolution is annulled. This problem and its workaround were discussed in Section \ref{sec: auto_parse}. However, this representational limitation only reinforces the difficulty of the \textit{\=ar\=oha--avar\=oha} phrase distinction, i.e. the ability to evaluate the direction of the current phrase and to predict the direction of upcoming phrases.
\subsection{Parser Limitations}
\label{sec: parse_lim}
Parser limitations refer to the shortcomings of the parser described in Section \ref{sec: auto_parse} and Appendix \ref{app: auto-parser}.
In Carnatic music, a composition is divided into several sections. The reason behind this division is that the \textit{r\=aga} needs to be exposed gradually during a rendition, for the best effect. However, the parser does not guarantee this structural bifurcation. Moreover, as explained in Section \ref{sec: data_proc}, a random section of the composition is considered for the final causal inference. In the worst case, this means that a fast-moving section (e.g. the \textit{cara\d{n}a}) of one composition might get contested against a more serene section (e.g. the \textit{pallavi}) of another. Hence, even if the former is a \textit{Janya} composition, \texttt{LZP} will tend to evaluate it as the cause.
Naturally, the length of each section varies across compositions. The bifurcation of these sections is lost primarily because the parser outputs a single sequence of note events, as shown in Equation \eqref{eq: n_dagger}. Moreover, depending upon the chosen $N_{min}$, a variable number of sections might get selected, which can lead to further complications. For example, consider a scenario where the \textit{pallavi} and \textit{anupallavi} are selected along with a very small portion of the \textit{cara\d{n}a}. That small \textit{cara\d{n}a} portion might add unnecessary complexity for \texttt{LZP}.
As highlighted in Section \ref{sec: auto_parse}, as long as notations are not represented in a \texttt{UTF-8} format, absolute resolution of the length (in time) of each note event (represented by $\hat{b}$ in Equation \eqref{eq: 3d-space}) and of the octave of each \textit{s\'vara} are non-trivial problems. Because of the way the parser works, we quite often encounter notes like $\underdot{S}$ or $\dot{N}$ (outside the $[\underdot{P}, \dot{P}]$ interval), which are seldom used in any Carnatic music rendition. Such notes add a faux vocabulary, which increases the \texttt{LZ} complexity of the compositions, making them appear more complex than they actually are. This also affects the stationary distribution ($\pi$) of the \textit{r\=aga}. Consequently, during surrogate generation, a problem is sometimes encountered when the randomly selected pitch belongs to this faux vocabulary. These notes act as ``traps'' for surrogate generation because they might transition only to themselves, or to one other pitch which in turn transitions back to the original faux pitch, leading to a vicious cycle\footnote{An example of the latter case: suppose the parser has erratically decoded the last phrase of a particular composition as $P \dot{D} \dot{N} \dot{D}$, and that those are the only occurrences of $\dot{N}$ and $\dot{D}$ in the whole \textit{r\=aga}. Now imagine that, during surrogate generation, a $\dot{D}$ gets selected. That $\dot{D}$ can only transition to the $\dot{N}$, which in turn only transitions back to the $\dot{D}$. That is a trap.}. To tackle this challenge, whenever a faux pitch is encountered, the generator first tries to move towards the real vocabulary, and if that is not possible, the whole note event is discarded and the generation starts over.
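A minimal sketch of this escape-and-restart logic is given below, assuming the row-stochastic pitch transition matrix \texttt{T} and the set of faux pitches have already been estimated from the notations; the retry budget is an illustrative choice:
\begin{verbatim}
import numpy as np

def sample_note_walk(T, start, length, faux, rng, retries=50):
    """Random walk on the pitch transition matrix T, restarting
    whenever the walk cannot escape the faux vocabulary."""
    for _ in range(retries):
        state, walk = start, [start]
        for _ in range(length - 1):
            p = T[state].copy()
            if state in faux:
                p[list(faux)] = 0.0  # steer back to real pitches
            if p.sum() == 0.0:       # trapped: discard and retry
                walk = None
                break
            state = int(rng.choice(len(p), p=p / p.sum()))
            walk.append(state)
        if walk is not None:
            return walk
    raise RuntimeError("cannot escape the faux vocabulary")
\end{verbatim}
Here \texttt{rng} is, e.g., \texttt{np.random.default\_rng()}.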
In typical Carnatic notations (refer Figure \ref{fig: representation}), when a group of \textit{s\'vara} is clubbed together, a Carnatic practitioner usually understands that all those notes need to be sung together to fit a specific number of counts. Unfortunately, that specific number of counts always varies. That is why, for a given parsed note event $\vec{n}$ (refer Equation \eqref{eq: 3d-space}), $n_b$ is often not accurate. However, the workaround discussed in Section \ref{sec: auto_parse} serves as a good approximation.
\subsection{Future Directions}
One way to attack the problems highlighted in Sections \ref{sec: rep_lim} and \ref{sec: parse_lim} is to use audio of several renditions along with the text notations. Analysing the contours in a Fourier transform of the audio would help in detecting \textit{gamaka}. These audio files can then be used to train specialized architectures, like the auto-encoder network used in \cite{dhariwal2020jukebox}, to find a more elegant representation for each \textit{r\=aga}. Historically, some ancient \textit{r\=aga} existed long before their classification as a \textit{Janya} of some specific \textit{M\=e\d{l}akarta} in the $17^{th}$ century C.E. Specialized architectures can be trained to learn different features of each \textit{r\=aga}, and these features can hopefully be used for counterfactual reasoning about the historical causal events.
Augmenting the current text dataset with audio renditions can also help us redefine the concept of \textit{r\=aga} in a more abstract manner. This abstract definition might help in reducing ambiguities inherent to Carnatic music, especially ones which cause difficulties in its analysis. For example, in Carnatic practice, the rendition protocol of \textit{r\=aga Ha\.msadhwani} is often considered to be similar to that of \textit{r\=aga Kaly\=a\d{n}i}, even though it is considered a \textit{Janya} of \textit{r\=aga \'Sa\.nkar\=abhara\d{n}a\.m}. As we have shown in this article, causal analysis might be able to answer that conundrum. Redefinition of the concept of \textit{r\=aga} using hybrid data might aid in solidifying our claim.
The use of hybrid data can also help capture the nuances in each note of a \textit{r\=aga T\=odi} rendition. With this ability, the true complexity of \textit{r\=aga T\=odi} can be highlighted, which may improve the accuracy of causal discovery.
Carnatic music is constantly evolving, and hence a full causal analysis of the subject is next to impossible. But we believe that causal analysis can act as a useful tool for studying the past and present evolution of the subject. In this regard, our study initiates the line of causal analysis of Carnatic music for the very first time.
\section{Conclusions}
Carnatic music is a huge mine of data, but a large part of it tends to be inaccessible for analysis. This is primarily because modern Carnatic music stands firmly on the works of the Trinity\footnote{The Trinity refers to the three most prolific Carnatic composers of the $18^{th}$ century: \'Syama \'Sastri, Ty\=agar\=aja Sw\=ami and Muttusw\=ami D\=ik\d{s}itar.} and other well-known composers of the same and previous eras. As a result, even though the scientific principles on which it is based are solid, the number of compositions in practice is limited. This is one of the main disadvantages of the oral tradition.
Knowledge in traditional disciplines has been orally passed across generations in India, usually in the form of \textit{\'sruti} (here, listening) and \textit{smriti} (here, remembering). So has it been in the case of music, which has had a rich traditional \textit{gur\=u--\'si\d{s}ya parampar\=a}, wherein the \textit{gur\=u}, or teacher, imparts knowledge to the \textit{\'si\d{s}ya}, or disciple. This knowledge is learnt orally by the disciple, memorised, and passed on to the next generation. As for written texts, most of this knowledge was recorded on perishable media such as palm leaves and paper manuscripts. Most of these sources did not survive, owing to the fragile nature of the materials used, leading to the loss of documented knowledge. As a result, no specific rule-book exists for Carnatic music; every school tends to follow its own practices, which may vary accordingly.
This article gives an insight into how the \textit{M\=e\d{l}akarta r\=aga} are structurally related to their \textit{Janya r\=aga}. These causal relationships are a step towards re-discovering the lost scientific principles on which the whole system of Carnatic music is founded. From the evolution of the concepts of \textit{gr\=ama} and \textit{m\=urchana} to the concept of the \textit{r\=aga}, which itself went through innate changes in every century, a clear causal path can be traced. Hence, upon further development, causal analysis of Hindustani and Carnatic music together can be employed to find the insights behind the genesis of the \textit{r\=aga} itself.
\section*{Supplementary Material}
\subsection*{Causal Inference for all \textit{M\=e\d{l}akarta--Janya r\=aga} pairs}
This section depicts typical results of causal inference for all the \textit{M\=e\d{l}akarta--Janya r\=aga} pairs. As described in the article, the results of causal discovery in each \textit{M\=e\d{l}akarta--Janya} pair can be visualized in the form of a Directed Acyclic Graph (DAG). Outgoing arrows from a node make it a cause, and likewise, incoming arrows make it an effect.
\begin{itemize}
\item Figure \ref{fig: caus_graph_8} shows a causal inference DAG for the \textit{r\=aga Hanumat\=odi--Dhanyasi} pair.
\item Figure \ref{fig: caus_graph_15} shows a causal inference DAG for the \textit{r\=aga M\=ayama\b{l}avagau\b{l}a--Malahari} pair.
\item Figure \ref{fig: caus_graph_22} shows a causal inference DAG for the \textit{r\=aga Kharaharapriya--\=Abh\=ogi} pair. This DAG has been structured in the Left-Right direction, i.e. leftmost nodes are the ultimate causes and the rightmost nodes are the ultimate effects.
\item Figure \ref{fig: caus_graph_28} shows a causal inference DAG for the \textit{r\=aga Harik\=ambh\=oji--K\=ambh\=oji} pair.
\item Figure \ref{fig: caus_graph_29} shows a causal inference DAG for the \textit{r\=aga \'Sa\.nkar\=abhara\d{n}a\.m--Ha\.msadhwani} pair.
\end{itemize}
\begin{figure*}[!hbt]
\includegraphics[scale=0.112]{images/surrogate/raga_caus_8_nameshade.png}
\caption{Result of LZP causal analysis on $\{\mathcal{R}_{8} \cup \mathcal{R}_{8}^{(d)}\}$. Black background highlights the \textit{M\=e\d{l}akarta} compositions.}
\label{fig: caus_graph_8}
\end{figure*}
\begin{figure*}[!hbt]
\centering
\includegraphics[scale=0.2]{images/surrogate/raga_caus_15_nameshade.png}
\caption{Result of LZP causal analysis on $\{\mathcal{R}_{15} \cup \mathcal{R}_{15}^{(m)}\}$. Black background highlights the \textit{M\=e\d{l}akarta} compositions.}
\label{fig: caus_graph_15}
\end{figure*}
\end{landscape}
\begin{landscape}
\begin{figure*}[!hbt]
\centering
\includegraphics[scale=0.15]{images/surrogate/raga_caus_22_nameshade_LR.png}
\caption{Result of LZP causal analysis on $\{\mathcal{R}_{22} \cup \mathcal{R}_{22}^{(a)}\}$. Black background highlights the \textit{M\=e\d{l}akarta} compositions. This DAG has a Left-Right structure instead of a Top-Down structure.}
\label{fig: caus_graph_22}
\end{figure*}
\begin{figure*}[!hbt]
\centering
\includegraphics[scale=0.2]{images/surrogate/raga_caus_28_nameshade.png}
\caption{Result of LZP causal analysis on $\{\mathcal{R}_{28} \cup \mathcal{R}_{28}^{(k)}\}$. Black background highlights the \textit{M\=e\d{l}akarta} compositions.}
\label{fig: caus_graph_28}
\end{figure*}
\end{landscape}
\begin{landscape}
\begin{figure*}[!hbt]
\centering
\includegraphics[scale=0.175]{images/surrogate/raga_caus_29_nameshade.png}
\caption{Result of LZP causal analysis on $\{\mathcal{R}_{29} \cup \mathcal{R}_{29}^{(h)}\}$. Black background highlights the \textit{M\=e\d{l}akarta} compositions.}
\label{fig: caus_graph_29}
\end{figure*}
\subsection*{Stationary Distribution of each \textit{r\=aga}}
In this section, the stationary distribution ($\pi(\mathcal{R})$) of each \textit{r\=aga}, along with that of its surrogate compositions ($\pi(\Phi(\mathcal{R}))$), is depicted for all $\mathcal{R} \in \mathbf{C}$, where $\mathbf{C}$ is the set of all \textit{r\=aga} in the dataset. The Y-axis in these plots represents the probability of occurrence of the note corresponding to the index shown on the X-axis. Notice that the zero of the X-axis does not correspond to a pitch index of zero; it corresponds instead to the lowest pitch index found in that \textit{r\=aga}. The last point in each plot gives the probability of occurrence of rests (which were given a pitch index of $\infty$).
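For reference, the plotted distributions are fixed points $\pi = \pi T$ of the empirical pitch transition matrix $T$ (with rests folded in as an ordinary state). A short power-iteration sketch, purely illustrative, is:
\begin{verbatim}
import numpy as np

def stationary_distribution(T, tol=1e-12, iters=100000):
    """Left fixed point pi = pi T of a row-stochastic matrix T."""
    pi = np.full(T.shape[0], 1.0 / T.shape[0])
    for _ in range(iters):
        nxt = pi @ T
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi / pi.sum()
\end{verbatim}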
\begin{figure*}[!hbt]
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.9\columnwidth]{images/surrogate/8_d-surr-stationary.png}
\caption{$\pi(\mathcal{R}_{8}^{(d)})$ (blue) and $\pi(\Phi(\mathcal{R}_{8}^{(d)}))$ (orange). }
\label{fig: surr_stat_8_d}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.9\columnwidth]{images/surrogate/15-surr-stationary.png}
\caption{$\pi(\mathcal{R}_{15})$ (blue) and $\pi(\Phi(\mathcal{R}_{15}))$ (orange). }
\label{fig: surr_stat_15}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.9\columnwidth]{images/surrogate/15_m-surr-stationary.png}
\caption{$\pi(\mathcal{R}_{15}^{(m)})$ (blue) and $\pi(\Phi(\mathcal{R}_{15}^{(m)}))$ (orange). }
\label{fig: surr_stat_15_m}
\end{subfigure}
\caption{The stationary distribution of the specified \textit{r\=aga} compared to the stationary distribution of its surrogates.}
\label{fig: surr_statdists}
\end{figure*}
\end{landscape}
\begin{landscape}
In Figure \ref{fig: surr_stat_15_m}, rests are not the most frequently occurring element. This can be justified by a look at the dataset of \textit{r\=aga Malahari}, which mostly contains short and succinct \textit{g\=itha}. These \textit{g\=itha} are usually used as beginner lessons: a note event quite often lasts for one count, and the use of \textit{gamaka} is minimal.
\begin{figure*}[!hbt]
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.8\columnwidth]{images/surrogate/22_a-surr-stationary.png}
\caption{$\pi(\mathcal{R}_{22}^{(a)})$ (blue) and $\pi(\Phi(\mathcal{R}_{22}^{(a)}))$ (orange). }
\label{fig: surr_stat_22_a}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.8\columnwidth]{images/surrogate/28-surr-stationary.png}
\caption{$\pi(\mathcal{R}_{28})$ (blue) and $\pi(\Phi(\mathcal{R}_{28}))$ (orange). }
\label{fig: surr_stat_28}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.8\columnwidth]{images/surrogate/28_k-surr-stationary.png}
\caption{$\pi(\mathcal{R}_{28}^{(k)})$ (blue) and $\pi(\Phi(\mathcal{R}_{28}^{(k)}))$ (orange). }
\label{fig: surr_stat_28_k}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.8\columnwidth]{images/surrogate/29-surr-stationary.png}
\caption{$\pi(\mathcal{R}_{29})$ (blue) and $\pi(\Phi(\mathcal{R}_{29}))$ (orange). }
\label{fig: surr_stat_29}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=0.8\columnwidth]{images/surrogate/29_h-surr-stationary.png}
\caption{$\pi(\mathcal{R}_{29}^{(h)})$ (blue) and $\pi(\Phi(\mathcal{R}_{29}^{(h)}))$ (orange). }
\label{fig: surr_stat_29_h}
\end{subfigure}
\caption{The stationary distribution of the specified \textit{r\=aga} compared to the stationary distribution of its surrogates.}
\label{fig: surr_statdists_cont}
\end{figure*}
As discussed earlier, \textit{R\=aga K\=ambh\=oji} employs the \textit{K\=akali Ni\d{s}\=ada} ($N_3$), which explains Figure \ref{fig: surr_stat_28_k}'s similarity with Figure \ref{fig: surr_stat_29}.
\end{landscape}
\end{document}
\section{Brief introduction}
A fascinating area of study within strong interactions at high energies is the Balitsky-Fadin-Kuraev-Lipatov (BFKL) approach. The key idea in this formalism is that, when the center-of-mass energy $\sqrt{s} \to \infty$, terms of the form $\alpha_s^n \log^n{\left(s\right)} \sim \alpha_s^n \left(y_A-y_B\right)^n$ (where $y_{A,B}$ are the rapidities of tagged particles in the final state) must be resummed in order to accurately describe experimental observables. In this limit there is a decoupling between transverse and longitudinal degrees of freedom which allows one to evaluate cross sections in the factorized form:
\begin{eqnarray}
\sigma^{\rm LL} &=& \sum_{n=0}^\infty {\cal C}_n^{\rm LL} \alpha_s^n
\int_{y_B}^{y_A} d y_1 \int_{y_B}^{y_1} d y_2 \dots \int_{y_B}^{y_{n-1}} d y_n \nonumber\\
&=& \sum_{n=0}^\infty \frac{{\cal C}_n^{\rm LL}}{n!}
\underbrace{\alpha_s^n \left(y_A-y_B\right)^n }_{\rm LL} \nonumber
\end{eqnarray}
where LL stands for the leading log approximation and $y_i$ correspond to the rapidity of emitted particles. The LL BFKL formalism allows us to calculate the coefficients ${\cal C}_n^{\rm LL}$~\cite{Lipatov:1985uk,Balitsky:1978ic,Kuraev:1977fs,Kuraev:1976ge,Lipatov:1976zz,Fadin:1975cb}. The next-to-leading log approximation (NLL)~\cite{Fadin:1998py,Ciafaloni:1998gs} is more complicated since it is sensitive to the running of the strong coupling and to the choice of energy scale in the logarithms~\cite{Forshaw:2000hv,Chachamis:2004ab,Forshaw:1999xm,Schmidt:1999mz}. We can parametrize the freedom in the choice of these two scales, respectively, by introducing the constants
${\cal A}$ and ${\cal B}$ in the previous expression:
\begin{eqnarray}
\sigma^{LL+NLL} &=&
\sum_{n=1}^\infty \frac{{\cal C}_n^{\rm LL} }{n!} \left(\alpha_s- {\cal A} \alpha_s^2\right)^n \left(y_A-y_B - {\cal B}\right)^n \nonumber\\
&&\hspace{-2.4cm}= \sigma^{\rm LL} - \sum_{n=1}^\infty \frac{\left({\cal B} \, {\cal C}_n^{\rm LL} + (n-1) \, {\cal A}
\, {\cal C}_{n-1}^{\rm LL} \right)}{(n-1)!} \underbrace{ \alpha_s^n
\left(y_A-y_B\right)^{n-1}}_{\rm NLL} + \dots \nonumber
\end{eqnarray}
We see that at NLL a power of $\log{s}$ is lost with respect to the power of the coupling. In this formalism we can then calculate cross sections using the factorization formula (with $Y\simeq \ln{s}$)
\begin{eqnarray}
\sigma (Q_1,Q_2,Y) = \int d^2 \vec{k}_A d^2 \vec{k}_B \, \underbrace{\phi_A(Q_1,\vec{k}_A) \,
\phi_B(Q_2,\vec{k}_B)}_{\rm PROCESS-DEPENDENT} \, \underbrace{f (\vec{k}_A,\vec{k}_B,Y)}_{\rm UNIVERSAL}, \nonumber
\end{eqnarray}
where $\phi_{A,B}$ are process-dependent impact factors, which are functions of an external scale, $Q_{1,2}$, and of an internal reggeized-gluon momentum, $\vec{k}_{A,B}$. The Green function $f$ is universal and depends on $\vec{k}_{A,B}$ and on the energy of the process $\sim e^{Y/2}$. It corresponds to the solution of the BFKL equation, which at LL can be written in iterative form~\cite{Schmidt:1996fg} in the transverse momentum representation as
\begin{eqnarray}
f &=& e^{\omega \left(\vec{k}_A\right) Y} \Bigg\{\delta^{(2)} \left(\vec{k}_A-\vec{k}_B\right) + \sum_{n=1}^\infty \prod_{i=1}^n \frac{\alpha_s N_c}{\pi} \int d^2 \vec{k}_i
\frac{\theta\left(k_i^2-\lambda^2\right)}{\pi k_i^2} \nonumber\\
&&\hspace{-1.2cm}\int_0^{y_{i-1}} \hspace{-.3cm}d y_i e^{\left(\omega \left(\vec{k}_A+\sum_{l=1}^i \vec{k}_l\right) -\omega \left(\vec{k}_A+\sum_{l=1}^{i-1} \vec{k}_l\right)\right) y_i} \delta^{(2)}
\left(\vec{k}_A+ \sum_{l=1}^n \vec{k}_l - \vec{k}_B\right)\Bigg\}, \nonumber
\end{eqnarray}
where the gluon Regge trajectory reads
\begin{eqnarray}
\omega \left(\vec{q}\right) &=& - \frac{\alpha_s N_c}{\pi} \log{\frac{q^2}{\lambda^2}}. \nonumber
\end{eqnarray}
$\lambda$ is a regulator of infrared divergences. This solution has been studied at length in a series of papers. It serves as the basis to construct the Monte Carlo event generator {\tt BFKLex}, which has had multiple applications in collider phenomenology and in more formal studies~\cite{Chachamis:2013rca,Caporale:2013bva,Chachamis:2012qw,Chachamis:2012fk,Chachamis:2011nz,Chachamis:2011rw}. In the following Section we focus on a discussion of this iterative solution at NLL and, in particular, on those terms which drive its collinear behavior.
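As a concrete illustration of this iterative structure (and a convenient cross-check for any numerical implementation), once the delta functions are used, the $n=1$ term collapses to a closed expression. The following stand-alone Python sketch evaluates it for fixed coupling; it is our own toy evaluation, not the {\tt BFKLex} code:
\begin{verbatim}
import math

def omega(k2, abar, lam2):
    """LL gluon Regge trajectory, k2 = transverse momentum squared."""
    return -abar * math.log(k2 / lam2)

def f_one_emission(kA2, kB2, q2, Y, abar, lam2):
    """n = 1 term of the iterative solution: a single emission with
    transverse momentum q = kB - kA; the rapidity integral of
    exp((omega(kB) - omega(kA)) y1) is performed analytically."""
    if q2 <= lam2:
        return 0.0                    # theta(k_i^2 - lambda^2)
    wA, wB = omega(kA2, abar, lam2), omega(kB2, abar, lam2)
    if abs(wB - wA) < 1e-14:
        rap = Y
    else:
        rap = (math.exp((wB - wA) * Y) - 1.0) / (wB - wA)
    return math.exp(wA * Y) * abar * rap / (math.pi * q2)
\end{verbatim}
Summing such terms over $n$, with the intermediate transverse-momentum integrals sampled by Monte Carlo, is essentially what {\tt BFKLex} automates.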
\section{Resummation of the leading collinear contributions}
At NLL there is a dominant term in the BFKL kernel in the collinear regions, which correspond to the limits where $\vec{k}_A^2$ is much bigger or much smaller than $\vec{k}_B^2$; it takes the form of a double log contribution. This term can be introduced in the previous iterative representation of the gluon Green function by making the replacement
\begin{eqnarray}
\theta \left(k_i^2-\lambda^2\right) \to \theta \left(k_i^2-\lambda^2\right)
- \underbrace{\frac{\bar{\alpha}_s}{4} \ln^2{\left(\frac{\vec{k}_A^2}{\left(\vec{k}_A+\vec{k}_i\right)^2}\right)}}_{\rm NLL}.
\end{eqnarray}
This new contribution generates a large instability in the BFKL formalism when it is applied to cross sections whose impact factors are not very narrowly localized in transverse momentum space. Since a typical impact factor allows for configurations where $\vec{k}_A^2$ can be quite different from $\vec{k}_B^2$, this is an important problem which needs to be investigated in detail. The original BFKL formulation can be applied to processes where $\vec{k}_A^2 \simeq \vec{k}_B^2$; if we want to extend its applicability, it is necessary to introduce all-order collinear corrections to its kernel. This has been addressed in~\cite{Salam:1998tj,Ciafaloni:2003ek}. In~\cite{Vera:2005jt} it was shown that the collinear corrections in transverse momentum space have the following structure when expanded in a perturbative series:
\begin{eqnarray}
\theta \left(k_i^2-\lambda^2\right) \to \theta \left(k_i^2-\lambda^2\right) + \sum_{n=1}^\infty
\frac{\left(-\bar{\alpha}_s\right)^n}{2^n n! (n+1)!} \ln^{2n}{\left(\frac{\vec{k}_A^2}{\left(\vec{k}_A+\vec{k}_i\right)^2}\right)}.
\label{SumBessel}
\end{eqnarray}
In order to understand the effect of this sum, we have plotted in Fig.~\ref{Coll-Bessel} the gluon Green function versus $k_a$ for a fixed coupling $\bar{\alpha}_s=0.2$, $Y=3$ and $k_b = 20$ (all transverse momenta in this paper are given in GeV).
\begin{figure}
\includegraphics[height=10cm]{Coll-Doug.pdf}
\vspace{-1cm}
\caption{Collinear behavior of gluon Green function including collinear contributions}
\label{Coll-Bessel}
\end{figure}
In this figure we compare the LL result in the collinear and anti-collinear regions with the higher order corrections. We can add the terms in the sum of Eq.~(\ref{SumBessel}) one by one and observe that the convergence is very good when we are not very far from the region $\vec{k}_A^2 \simeq \vec{k}_B^2$. We see that the higher order collinear corrections are needed to avoid a negative Green function (the NLO truncation goes negative rather quickly). The continuous line corresponds to the summation of an infinite number of terms in Eq.~(\ref{SumBessel}). It was shown in~\cite{Vera:2005jt} that in this case the sum resums to a Bessel function of the first kind (this approach, in $\gamma$-space, has already been successfully applied in many phenomenological applications~\cite{Vera:2006un,Vera:2007kn,Caporale:2007vs,Vera:2007dr,Caporale:2008fj,Hentschinski:2012kr,Hentschinski:2013id,Caporale:2013uva,Chachamis:2015ona}). A similar conclusion has recently been reached in coordinate space in~\cite{Iancu:2015vea}.
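This resummation is easy to verify numerically. The snippet below (a stand-alone check, not part of {\tt BFKLex}) compares the truncated series of Eq.~(\ref{SumBessel}), including the overall power of $\bar{\alpha}_s$ carried by the emission vertex, against the closed Bessel form of the next Section at $a=b=0$:
\begin{verbatim}
import math
from scipy.special import j1

def truncated(abar, L, N):
    """abar * [theta term plus the sum of Eq. (SumBessel) to order N]."""
    s = 1.0
    for n in range(1, N + 1):
        s += ((-abar) ** n) * L ** (2 * n) / (
             2 ** n * math.factorial(n) * math.factorial(n + 1))
    return abar * s

def bessel(abar, L):
    """sqrt(2 abar / L^2) * J_1(sqrt(2 abar L^2)): the a = b = 0 form."""
    z = math.sqrt(2.0 * abar) * abs(L)
    return math.sqrt(2.0 * abar) / abs(L) * j1(z)

for N in (1, 2, 4, 8):               # abar = 0.2, L = ln(kA^2/kB^2) = 3
    print(N, truncated(0.2, 3.0, N), bessel(0.2, 3.0))
\end{verbatim}
Already at $N=4$ the truncation agrees with the Bessel form to high accuracy for these values.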
What we have discussed in this short Section accounts for the leading collinear contributions. We will show next how to deal with a more complete set of them.
\section{Bessel resummation in the {\tt BFKLex} Monte Carlo event generator}
We have implemented the collinear resummation in transverse momentum space in the {\tt BFKLex} Monte Carlo event generator, including subleading logarithmic contributions and the full NLL BFKL kernel. The prescription was given in~\cite{Vera:2005jt} and we explain it here for completeness. We keep the NLO BFKL kernel untouched and only add higher order corrections of NNLO and beyond. To do so, we remove from the original NLO kernel the double log
\begin{eqnarray}
- \frac{\bar{\alpha}^2_s}{4} \frac{1}{(\vec{q}-\vec{k})^2}
\ln^2{\left(\frac{q^2}{k^2}\right)}
\label{singlelog}
\end{eqnarray}
and we replace it by the resummed expression
\begin{eqnarray}
\Bigg\{\left(\frac{q^2}{k^2}\right)^{- b \, \bar{\alpha}_s
\frac{|k-q|}{k-q}} \sqrt{\frac{2(\bar{\alpha}_s + a \, \bar{\alpha}_s^2)}{\ln^2{\left(\frac{q^2}{k^2}\right)}}} J_1 \left(\sqrt{2(\bar{\alpha}_s + a \, \bar{\alpha}_s^2) \ln^2{\left(\frac{q^2}{k^2}\right)}}\right)\nonumber\\
&&\hspace{-9cm}- \bar{\alpha}_s - a \, \bar{\alpha}_s^2 + b \, \bar{\alpha}_s^2
\frac{|k-q|}{k-q} \ln{\left(\frac{q^2}{k^2}\right)} \Bigg\}\frac{1}{(\vec{q}-\vec{k})^2}
\end{eqnarray}
where $J_1$ is the Bessel function of the first kind and
\begin{eqnarray}
a &=& \frac{5}{12} \frac{\beta_0}{N_c} - \frac{13}{36}\frac{n_f}{N_c^3} - \frac{55}{36};\\
b &=& - \frac{1}{8} \frac{\beta_0}{N_c} - \frac{1}{6}\frac{n_f}{N_c^3} - \frac{11}{12};
\end{eqnarray}
take into account the subleading collinear logs (the previous Section corresponds to the simple case $a=b=0$; in this work we focus on the scale invariant part of the NLO kernel and take $\beta_0=0$ and $n_f=0$). Notice that the perturbative expansion of the Bessel function generates back, as its first term, the double log of Eq.~(\ref{singlelog}) which we had removed. This ensures that, up to NLO, we are in agreement with the results of Fadin and Lipatov.
The effect of this correction on the gluon Green function is quite dramatic: it greatly reduces its growth with energy, as can be seen in Fig.~\ref{Growth-Energy}.
\begin{figure}
\includegraphics[height=10cm]{Growth-Energy.pdf}
\vspace{-1cm}
\caption{Growth with energy of the gluon Green function}
\label{Growth-Energy}
\end{figure}
This reduction is noticeable already at very low values of $Y$. The effect is also very large, as it should be, in the collinear regions, which are strongly modified, while remaining positive, once the Bessel resummation is introduced, as can be seen in Fig.~\ref{Collinear-Y}, where we show the collinear behavior for two different energy values.
\begin{figure}
\begin{center}
\includegraphics[height=10cm]{Collinear-Y4.pdf}
\vspace{-1cm}\\
\includegraphics[height=10cm]{Collinear-Y8.pdf}
\end{center}
\vspace{-1cm}
\caption{Collinear behavior of gluon Green function for different values of the scattering energy}
\label{Collinear-Y}
\end{figure}
In the following Section we discuss the effects of the double log resummation for a couple of more exclusive quantities: the so-called diffusion cigar and the multiplicity distributions.
\section{The diffusion picture and multiplicities}
An interesting feature arises when the collinear terms are included in our analysis: the typical transverse scales in the gluon ladder tend to be more localized around the external transverse scales, and the diffusion towards the infrared and ultraviolet regions is largely reduced. The quantity we call $\rho$ measures the mean value of the average transverse scale (the straight lines in Fig.~\ref{Melon-Y}) of those reggeized gluons propagating in a rapidity gap centered at a typical rapidity $y$.
The curved lines in the same figure indicate the root mean square deviation from the mean value. We can see that, as is well known, the diffusion, or width, of the cigar-shaped figures broadens as the total available energy, or total rapidity span, increases (in the figure we chose two plots with $Y=2$ and $Y=4$; note that $y$ varies from 0 to $Y$). In the upper and lower plots of the figure we chose asymmetric and symmetric external transverse scales, respectively.
\begin{figure}
\hspace{-1.5cm}
\includegraphics[height=6.5cm]{Melon-Y2-10-50.pdf}\includegraphics[height=6.5cm]{Melon-Y4-10-50.pdf}
\hspace{-1.5cm}\includegraphics[height=6.5cm]{Melon-Y2-10-15.pdf}\includegraphics[height=6.5cm]{Melon-Y4-10-15.pdf}
\caption{Diffusion plots for different external transverse momenta configurations and different center-of-mass energies.}
\label{Melon-Y}
\end{figure}
A striking new feature is revealed when looking at the number of iterations of the kernel needed to construct the solution to the BFKL equation. This is shown in Fig.~\ref{Multiplicities} at LL (left) and NLL+Bessel (right). The Green function at each $Y$ corresponds to the area under the corresponding curve. In both cases we find Poissonian distributions in the number of iterations of the kernel, with a huge suppression in the case with collinear resummation (right plot). The remarkable fact is the position of the maxima of these distributions: in the collinearly-improved case, at larger rapidities these maxima sit at rather smaller $N$ than at LL. For rapidity $Y=4$, at LL the maximum is placed at $N\simeq 5$, while with NLL+Bessel it lies at $N \simeq 3$. For $Y=6$ we have $N \simeq 8$ at LL versus $N\simeq 5$ in the NLL+Bessel case. The maximal $N$ with a significant contribution to the Green function also decreases by a significant amount when moving from LL to the calculation with higher orders: at $Y=4,6$ we find $N_{\rm max}=16,22$, respectively, at LL, while $N_{\rm max}=12,15$ at higher orders.
\begin{figure}
\hspace{-1.cm}\includegraphics[height=6.5cm]{Multiplicity-LO.pdf}\includegraphics[height=6.5cm]{Multiplicity-NLO.pdf}
\caption{Distribution in the number of iterations of the BFKL kernel needed to construct the gluon Green function.}
\label{Multiplicities}
\end{figure}
This reduction of the effective final-state multiplicity has phenomenological consequences which we will describe in a future work. We believe the implementation of higher order corrections into a Monte Carlo event generator can be a useful tool to test the high energy limit of QCD at present and future colliders. It is necessary to define new, more exclusive, observables in order to find imprints of this interesting kinematical region.
\section{Summary \& Outlook}
We have presented a collinearly improved version of the NLL BFKL kernel which is valid in the transverse momentum representation and implemented it in the {\tt BFKLex} Monte Carlo event generator. This offers us the possibility to study its consequences within the global resummation program at a very exclusive level. We have found that its net effect is to improve the convergence in the collinear regions, thus stabilizing the perturbative expansion even beyond the original multi- and quasi-multi-Regge kinematics. The new collinear contributions take the form of a Bessel function of the first kind which reduces the total growth with energy of BFKL cross sections. Taking advantage of the event generator, we have found that the spread in transverse momentum of the typical scales within the BFKL ladder is greatly reduced by the collinear improvements. Quite remarkably, we have shown how the mean mini-jet multiplicity is also reduced when these higher order corrections are taken into account. It is now mandatory to further investigate how this improved resummation program affects exclusive observables at the Large Hadron Collider.
\begin{flushleft}
{\bf \large Acknowledgements}
\end{flushleft}
G.C. acknowledges support from MICINN, Spain, under contract FPA2013-44773-P.
A.S.V. acknowledges support from the Spanish Government (MICINN, contracts FPA2010-17747 and FPA2012-32828) and from the Spanish MINECO Centro de Excelencia Severo Ochoa Programme (SEV-2012-0249).
\section{Introduction.}
\indent The radial gauge (or Poincar\'e gauge) was introduced by Fock and
Schwinger
\cite{FS,Sch} in the context
of quantum electrodynamics and often used in problems connected with QCD at
the non-perturbative level \cite{QCD}. The main feature of such
a gauge is the possibility of expressing correlation
functions in terms of v.e.v. of physical fields like the field strength
$F_{\mu\nu}$.
Recently an extension of the radial gauge to general relativity has been
given by
Modanese and Toller \cite{M.T.}. Here the meaning of the gauge is more
profound than in the simpler case of gauge
theories, because it involves also the reparametrization group. In fact the
radial
coordinates in which such a gauge is realized have a well defined physical
meaning
as geodesic or Riemann-Eisenhart coordinates that radiate from a given
origin.
The hamiltonian approach in Riemann normal coordinates has been developed
by Nelson and Regge \cite{N.R.}. Here we shall look at the problem from the
lagrangian point of view, i.e. starting from the functional integral.
In the functional integral approach the correlation functions are computed
by averaging the products of fields like ${\cal O}(x) {\cal O}(y)$ over all
geometries, weighted by the exponential of the gravitational action.
Fixing the gauge to the radial gauge gives $x,y,\dots$ a well defined
meaning as
the points that acquire geodesic coordinates $x,y,\dots$ in each of the
geometries we are summing over.\par
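For orientation we recall the abelian prototype: in electrodynamics the radial condition $x^\mu A_\mu(x)=0$ is solved explicitly in terms of the field strength by the Fock-Schwinger formula
\begin{equation}
A_\mu(x)=\int_0^1 ds\, s\, x^\nu F_{\nu\mu}(sx),
\end{equation}
where the antisymmetry of $F_{\nu\mu}$ guarantees radiality; it is in this sense that radial gauge correlators are expressed through v.e.v. of physical fields, and the gravitational construction discussed below generalizes this formula.\par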
\indent The problem of computing physical correlation functions has become
recently
acute when people started performing Monte Carlo simulations in general
relativity models
\cite{reticoli}. In the
usually adopted method of the Regge calculus, one resorts to computing
expectation
values of products of fields defined on the sample geometry at a given
geodesic distance \cite{H.W.}. This amounts in the continuum to fixing the
gauge as the geodesic gauge.
A similar approach developed from the Mandelstam path space formulation
\cite{Mandelstam} has been recently given \cite{T.W.,Teitel}.
The aim of the present paper is to compute the propagators in such a gauge.
As we shall see
such a problem is not at all trivial because a simple-minded computation of
the propagator gives meaningless, i.e. infinite, results.\par
\indent It is well known that gravity can be formulated in the second or first
order approach and we shall derive the propagators in both formulations.
Formally in the second order approach one starts from the functional
integral
\begin{equation}
\int e^{\int\sqrt{g} R(x) d^N x}\mu[g_{\mu\nu}]{\cal D}[g_{\mu\nu}]
\end{equation}
where the fundamental variable is the metric tensor $g_{\mu\nu}$ and $
\mu[g_{\mu\nu}]$
is the integration
measure which we do not need to specify here \cite{misura}.
We shall show in section 2 that
the radial gauge condition in the second order formalism can be written as
$x^\mu g_{\mu\nu}=x^\mu\eta_{\mu\nu}$. Thus the sharp gauge fixing will be
given by
$\delta(x^\mu(g_{\mu\nu}-\eta_{\mu\nu}))$ which is to be accompanied by the
relative Faddeev-Popov terms by adding to the lagrangian the expression
\begin{equation}
\bar{c}^\nu(x)x^\mu(\nabla_\mu c_\nu+\nabla_\nu c_\mu)(x).
\end{equation}
Note that contrary to what happens in the usual gauge theories
formulated
in the radial gauge, the ghosts do not decouple. In the first order
formalism on the other hand, one considers as fundamental variables the
vierbeins
$e^a_\mu$ and the connections $\Gamma^{ab}_\mu$. The functional integral
becomes
\begin{equation}
\int e^{-{1\over 2\kappa^2}\int(d\Gamma+\Gamma\wedge\Gamma)^{ab}
\wedge e^c \dots \epsilon_{abc\dots}}\mu[e^a_\mu]{\cal
D}[\Gamma^{ab}_\mu]
{\cal D}
[e^a_\mu].
\end{equation}
The natural way to introduce the radial gauge fixing without breaking the
symmetry
between the connections and the vierbeins is obtained by the
$N + N(N-1)/2$ gauge conditions $\delta(x^\mu \Gamma^{ab}_\mu)~
\delta(x^\mu (e^a_\mu-\delta^a_\mu))$
and supplying the relative Faddeev-Popov terms. As it happens in
Yang-Mills theory,
the ghosts associated to the local Lorentz symmetry formally decouple,
while those
related to the diffeomorphisms survive as in the second order case.\par
\indent The technique for deriving the propagator in the radial gauge is a
generalization of
the gauge projection method developed in \cite{M.S.} for the usual gauge
theory case. We
recall that given a generic field one can define several projections to the
radial
gauge that differ by the behavior of the projected field at the origin
and at
infinity. One has to keep in mind that the whole problem is to give a
solution to
the radial Green's equation, i.e. the equation arising when one performs in
the
gravitational field lagrangian in the presence of sources, an arbitrary
variation of
the gravitational field subject to the radial gauge conditions; such a
solution has to be symmetric in the field arguments and radial. In section 2
we give a
general construction of the propagator in any ``sharp gauge'' of which the
radial
gauge is an example, starting from the propagator in a generic gauge (e.g.
a gauge
of the Feynman-Landau type). Care however has to be exercised in the projection
process,
because due to the singular nature of the correlation function at short
distances,
spurious infinite solutions of the homogeneous equation can develop. In
fact
the
most natural propagator in the radial gauge would be the correlator
\begin{equation}
\langle P^0[h]_{\mu\nu}(x) P^0[h]_{\rho\sigma}(y)\rangle
\end{equation}
that corresponds to the selection, in the projection procedure, of the
field which
is regular at the origin. In practice such a field is given by a proper
average
of the Feynman field along the geodesics joining the points $x$ and $y$ to
the
origin
and due to the singular nature of the Riemann correlators at short
distances,
$d^{-N-2}$, it makes the written correlation function divergent in all
dimensions.\par
\indent Similarly the radial projected field that is regular at infinity,
$P^\infty
[h]_{\mu\nu}$, leads to
a propagator divergent in all $N\leq 4$ due to infrared problems. On the
other
hand
it is shown in section 2 that one can consider a projected equation that
treats
the
origin and the infinity in a symmetrical way. A singular behavior of the
field
$g_{\mu\nu}$
at the origin and/or at infinity is unavoidable as can be seen by
considering a
special family of Wilson loops described in section 4. A formal solution to
the radial
$P^S$ projected Green's equation is given by
\begin{equation}
\langle P^S[h]_{\mu\nu}(x) P^S[h]_{\rho\sigma}(y)\rangle
\end{equation}
where $P^S=(P^0+P^\infty)/2$,
which however still contains an infinite gauge term, solution of the
homogeneous
equation that has to be subtracted away. The result of such a subtraction
procedure
gives for the solution of the radial $P^S$ projected Green's equation
\begin{equation}
{1\over 2}\langle P^0[h]_{\mu\nu}(x) P^\infty[h]_{\rho\sigma}(y)\rangle+
{1\over 2}\langle P^\infty[h]_{\mu\nu}(x) P^0[h]_{\rho\sigma}(y)\rangle
\end{equation}
which is symmetric in the field arguments and finite for all $N>2$ and
explicitly
solves the problem of finding the propagator in the radial gauge.
In addition it is immediately verified that the given propagator satisfies
the radiality condition $x^\mu G_{\mu\nu,\rho\sigma}(x,y)=0$. A similar
procedure
works
successfully also in the first order formalism.
{}From the technical viewpoint the transition from the propagator in a gauge
of
the
Feynman-Landau type to the radial gauge propagator is obtained by
expressing the
radial
fields in terms of the linearized Riemann and torsion tensors which are
invariant
under linearized gauge transformations; analytically the radial propagators
are
naturally expressed in terms of
hypergeometric functions. In three dimensions the propagators acquire a
particularly
simple structure: in the first order approach the correlators of two
connections
vanish
identically while the correlator of two dreibeins or of a dreibein and a
connection
reduce to collinear contributions. In the second order formalism the
metric-metric
correlation function is also zero except for collinear contributions. This
is
clearly
related to the absence of propagating gravitons in three dimensions and to
the
physical
nature of the considered gauge.\par
\indent The paper is structured as follows: in section 2 we develop the general
procedure
for
deriving the propagators in sharp gauges from the ones in a generic gauge.
In
section 3
we formulate the radial condition in the second order formalism and relate
it to
the
usual definition of the Riemann-Eisenhart coordinates; special attention
is
paid to
the allowed singularities of the metric at the origin and behavior at
infinity.
Then
we write
down the radial projectors in terms of the linearized Riemann and torsion
tensors and
finally we derive the propagator. In section 4 we repeat the same procedure in
first
order case, underlining the most important differences between the two
approaches.
Section 5 is devoted to the conclusions. In Appendix A we show how to
express the
propagators
in terms of hypergeometric functions and work out explicitly the behavior
of the
propagators at the origin and infinity. Appendix B outlines the $\beta\neq
0$
(i.e. non-sharp) radial gauge in the second order formalism.
\section{Propagators.}
In this section we shall develop a general procedure for the construction
of the propagators in a class of ``sharp'' gauge conditions starting
from the propagator given in a generic gauge. Subsection 2.A contains
the general treatment; in subsections
2.B, 2.C, 2.D we display the concrete form
of the various operators in the case of Electrodynamics and linearized
Einstein theory in the second and first order formalism, respectively.
\subsection{General formalism.}
Let the equation of motion for a generic gauge field $A(x)$ be written in the
form
\begin{equation}
K_x \, A(x) = J(x),
\label{dft}
\end{equation}
where $K$ is a linear, non-invertible, hermitean ``kinetic'' operator and
$J(x)$ is an external source coupled to $A$. The gauge transformations
of $A$ have the form
\begin{equation}
A(x) \rightarrow A(x) + C_x \, f(x).
\label{njh}
\end{equation}
Here $C_x$ is another linear operator, which has the properties
\begin{equation}
K_x \, C_x = 0 \qquad \mbox{(gauge invariance)}
\label{hre}
\end{equation}
and
\begin{equation}
C^\dagger_x \, K_x = 0, \quad {\rm which\ implies}\quad C^\dagger_x \, J(x) = 0
\qquad \mbox{(``source conservation'')}.
\label{dou}
\end{equation}
We assume that we can always add to the kinetic operator $K$ an
operator $K^{\cal F}$ that makes $K + K^{\cal F}$ invertible, as happens, for example,
in the Feynman gauge. We assume $K^{\cal F}$
to be of the form ${\cal F}^\dagger {\cal F}$, as deriving from a
quadratic gauge fixing $\int d x \, {\cal F}^2(A(x))$. ${\cal F}$ is meant
to be a linear operator on $A$. The propagator $G^{\cal F}$ corresponding to
this gauge fixing satisfies
\begin{equation}
(K_x + K^{\cal F}_x) \, G^{\cal F}(x,y) = \delta(x-y)
\label{gre}
\end{equation}
and has the following property
\begin{equation}
\int dy \, K^{\cal F}_x \, G^{\cal F}(x,y) \, J(y)=0
\qquad \mbox{if} \ \ C^\dagger_y \, J(y) = 0.
\label{bsp}
\end{equation}
In other words, $K^{\cal F}_x$ vanishes when applied to the fields
generated by physical sources. In fact, applying $C^\dagger$ to (\ref{gre})
we get
\begin{equation}
C^\dagger_x \, K_x \, G^{\cal F}(x,y) +
C^\dagger_x \, {\cal F}^\dagger_x \, {\cal F}_x \, G^{\cal F}(x,y)
= C^\dagger_x \, \delta(x-y).
\end{equation}
But using (\ref{dou})
and integrating on a conserved source $J$ we obtain
\begin{equation}
\int dy \, C^\dagger_x \, {\cal F}^\dagger_x \, {\cal F}_x \,
G^{\cal F}(x,y) \, J(y) = 0.
\label{chy}
\end{equation}
We notice that ${\cal F} \, C$ is the kinetic ghost operator and as
such invertible. Then from (\ref{chy}) we get
\begin{equation}
\int dy \, {\cal F}_x \, G^{\cal F}(x,y) \, J(y) = 0
\end{equation}
and multiplying by ${\cal F}^\dagger_x$ we finally prove (\ref{bsp}).
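To keep a concrete example in mind (the detailed treatment is deferred to subsection 2.B), in Euclidean electrodynamics, with action $\frac{1}{4}\int dx\, F_{\mu\nu}F^{\mu\nu}$ and gauge fixing term $\frac{1}{2}\int dx\,(\partial^\mu A_\mu)^2$, the operators introduced above read
\begin{eqnarray}
(K A)_\nu &=& -\partial^\mu F_{\mu\nu} = -\Box A_\nu + \partial_\nu \partial^\mu A_\mu,
\qquad (C f)_\nu \;=\; \partial_\nu f, \nonumber \\
{\cal F}(A) &=& \partial^\mu A_\mu, \qquad
(K^{\cal F} A)_\nu \;=\; ({\cal F}^\dagger {\cal F} A)_\nu \;=\; -\partial_\nu \partial^\mu A_\mu,
\nonumber
\end{eqnarray}
so that eqs. (\ref{hre}) and (\ref{dou}) express gauge invariance and current conservation $\partial^\mu J_\mu = 0$, $K + K^{\cal F} = -\Box$ is the invertible Feynman kinetic operator and ${\cal F}\,C = \Box$ is the ghost operator.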
Let us now impose on $A$ a generic ``sharp'' gauge condition ${\cal G}$
\begin{equation}
A^{\cal G}=\{A(x):\, {\cal G}(A(x))=0\}.
\label{bhe}
\end{equation}
${\cal G}$ is meant to be a linear function of $A$ and of its derivatives.
The field $A^{\cal G}(x)$ can be obtained from a generic field $A(x)$
through a (generally non-local) projector $P^{\cal G}$
\begin{equation}
A^{\cal G}(x) = P^{\cal G}[A](x) = A(x) + C_x \, F^{\cal G} [A](x).
\label{puy}
\end{equation}
We require this projector to be insensitive to any ``previous gauge''
of the field, namely to satisfy
\begin{equation}
P^{\cal G}[C_x \, f](x) = 0, \qquad \mbox{for any} \ f(x).
\label{cht}
\end{equation}
We consider now the adjoint projector ${P^{\cal G}}^\dagger$ that in general
does not coincide with $P^{\cal G}$; an exception is given by
gauges of the Feynman-Landau type.
We shall now prove the following properties of the adjoint projector.
\medskip
\noindent {\it (1) ${P^{\cal G}}^\dagger$ produces conserved
sources} \cite{B.D.}.\par
\medskip
\noindent For, integrating (\ref{cht}) on a current $J(x)$ we have
\begin{equation}
\int dx \, P^{\cal G}[C_x \, f](x) \, J(x) = 0;
\end{equation}
by definition of the adjoint projector, this means that
\begin{equation}
\int dx \, f(x) \, C^\dagger_x \, {P^{\cal G}}^\dagger[J](x)= 0
\end{equation}
and due to the arbitrariness of $f(x)$
\begin{equation}
C^\dagger_x {P^{\cal G}}^\dagger[J](x) = 0 \qquad \mbox{for any} \ J(x).
\label{xpf}
\end{equation}
\noindent {\it (2) ${P^{\cal G}}^\dagger$ leaves
a conserved source unchanged.}\par
\medskip
\noindent Let us suppose that $J$ is conserved. Using (\ref{puy})
we have for a generic $A$
\begin{equation}
\int dx \, A(x) \, {P^{\cal G}}^\dagger [J](x) =
\int dx \, A(x) \, J(x) + \int dx \, \{ C_x \, F^{\cal G}[A](x) \} \, J(x) .
\end{equation}
Integrating by parts the second term on the r.h.s. and using
(\ref{dou}) and the arbitrariness of $A$ we have
\begin{equation}
{P^{\cal G}}^\dagger[J](x) = J(x) \qquad \mbox{if} \ C^\dagger_x \, J(x) = 0.
\label{vls}
\end{equation}
\bigskip
The equation of motion obtained by varying the action under the constraint
(\ref{bhe}) is
\begin{equation}
{P^{\cal G}}^\dagger[K_x \, A^{\cal G}](x) = {P^{\cal G}}^\dagger[J](x).
\label{vro}
\end{equation}
{}From (\ref{dou}) and Property 2 we have
\begin{equation}
K_x \, A^{\cal G}(x) = {P^{\cal G}}^\dagger[J](x),
\label{vpe}
\end{equation}
or for the propagator
\begin{equation}
K_x \, G^{\cal G}(x,y) = {P^{\cal G}}^\dagger[\delta(x-y)],
\label{lpi}
\end{equation}
where the meaning of the r.h.s. is
\begin{equation}
\int dy \, {P^{\cal G}}^\dagger[\delta(x-y)] \, J(y) =
{P^{\cal G}}^\dagger[J](x).
\end{equation}
Next we show that a solution of (\ref{lpi}) is
\begin{equation}
G^{\cal G}(x,y) = i\left< P^{\cal G}[A^{\cal F}](x) \,
P^{\cal G}[A^{\cal F}](y) \right>_0,
\label{jhg}
\end{equation}
where $A^{\cal F}$ denotes the field in the original gauge.
Integrating on a source $J(y)$ we have
\begin{eqnarray}
& & i\int dy \, K_x \left< P^{\cal G}[A^{\cal F}](x) \,
P^{\cal G}[A^{\cal F}](y) \right>_0 \, J(y) = \nonumber \\
& & = i\int dy \, K_x \left< P^{\cal G}[A^{\cal F}](x) \,
A^{\cal F}(y) \right>_0 \, {P^{\cal G}}^\dagger[J](y) = \nonumber \\
& & = i\int dy \, K_x \left< \{ A^{\cal F}(x) +
C_x \, F^{\cal G}[A^{\cal F}](x) \} \,
A^{\cal F}(y) \right>_0 \, {P^{\cal G}}^\dagger[J](y) = \nonumber \\
& & = i\int dy \, K_x \left< A^{\cal F}(x) \,
A^{\cal F}(y) \right>_0 \, {P^{\cal G}}^\dagger[J](y) = \nonumber \\
& & = \int dy \, (K_x + K^{\cal F}_x) \, G^{\cal F}(x,y) \,
{P^{\cal G}}^\dagger[J](y) -
\int dy \, K^{\cal F}_x \, G^{\cal F}(x,y) \, {P^{\cal G}}^\dagger[J](y)
= \nonumber \\
& & = \int dy \, \delta(x-y) \, {P^{\cal G}}^\dagger[J](y).
\label{cmd}
\end{eqnarray}
In the last step we have used (\ref{bsp}) and Property 1.
Note however that $G^{\cal G}$ defined in
(\ref{jhg}) remains unchanged if we replace the original field with
any other gauge equivalent field.
Given two different projectors $P_1$ and $P_2$ that project on the same
gauge (which as a rule differ for different boundary conditions), one has
$P_1 \, P_2 = P_1$ and $P_2 \, P_1 = P_2$ due to eq.
(\ref{cht})\footnotemark. \footnotetext{The definition of $P^0$ and
$P^{\infty}$ is different from the one
adopted in ref. \cite{M.S.}, and they obey a different algebra. The
definition adopted here privileges the invariance under gauge
transformations on the original fields, while the ones given in \cite{M.S.}
privilege the radiality. These projectors give the same results when
applied to regular fields.} Thus
also $P_{12}=\alpha P_1 + (1-\alpha) P_2$ is
a projector on the
considered gauge and one can write down the $P_{12}$-projected Green
function equation (we omit the suffix ${\cal G}$)
\begin{equation}
K_x \, G(x,y) = P^\dagger_{12}[\delta(x-y)].
\label{xgh}
\end{equation}
It is immediate to verify that a solution of (\ref{xgh}) is also given by
\begin{equation}
\alpha \, \left< P_2[A](x) \, P_1[A](y) \right>_0 +
(1-\alpha) \left< P_1[A](x) \, P_2[A](y) \right>_0.
\label{fgz}
\end{equation}
In fact
\begin{equation}
K_x \, \alpha \, \left< P_2[A](x)\, P_1[A](y) \right>_0 =
\alpha \, K_x \, \left< A(x) \, P_1[A](y) \right>_0 = \alpha \, P^\dagger_1
[\delta(x-y)],
\end{equation}
repeating the same procedure of eq. (\ref{cmd}). Acting similarly with
the $(1-\alpha)$ term in (\ref{fgz}), we get (\ref{xgh}). We notice that
for $\alpha = \frac{1}{2}$, \ (\ref{fgz}) is symmetric in the exchange
of the field arguments.
In the following we write the explicit form of the operators appearing in
electrodynamics and in the linearized Einstein theory.
\subsection{Electrodynamics and linearized Yang-Mills theory.}
This is the simplest case. We have the following identifications
\begin{eqnarray}
& & A \rightarrow A_\mu; \\
& & J \rightarrow J_\mu; \\
& & C f \rightarrow \partial_\mu \, f; \\
& & K \rightarrow \eta_{\mu \nu} \,
\Box - \partial_\mu \partial_\nu; \\
& & K^{\cal F} \rightarrow \partial_\mu \partial_\nu,
\end{eqnarray}
where $K^{\cal F}$ is the operator produced by the usual
Feynman gauge fixing $\frac{1}{2}(\partial^\mu A_\mu)^2$.
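As a concrete illustration of the projectors discussed above, we recall
that in electrodynamics the radial (Fock--Schwinger) gauge
$x^\mu A_\mu(x)=0$ admits the well-known reconstruction formula
\begin{equation}
A_\mu(x) = x^\nu \int_0^1 d\lambda \, \lambda \, F_{\nu \mu}(\lambda x),
\end{equation}
expressing the radial potential in terms of the gauge invariant field
strength $F_{\nu \mu}$; it is the abelian analogue of the radial projectors
constructed in the following sections for gravity.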
\subsection{Linearized Einstein gravity in the second-order formalism.}
In this case we have
\begin{eqnarray}
& & A \rightarrow h_{\mu \nu}; \\
& & J \rightarrow T_{\mu \nu}; \\
& & C f \rightarrow \left( \delta_{\alpha \sigma}
\partial_\rho + \delta_{\alpha \rho} \partial_\sigma \right)
f_\alpha ; \\
& & K \rightarrow
K_{\mu \nu \rho \sigma} = \frac{1}{4} [2 \, \eta_{\rho \sigma} \,
\partial_\mu \partial_\nu +
2 \, \eta_{\mu \nu} \, \partial_\rho \partial_\sigma + \nonumber \\
\label{mkw}
& & \qquad \qquad \qquad -
( \eta_{\mu \rho} \, \partial_\nu \partial_\sigma +
\eta_{\nu \rho} \, \partial_\mu \partial_\sigma
+ \eta_{\mu \sigma} \, \partial_\nu \partial_\rho +
\eta_{\nu \sigma} \, \partial_\mu \partial_\rho ) + \nonumber \\
& & \qquad \qquad \qquad + (\eta_{\mu \rho} \, \eta_{\nu \sigma}
+ \eta_{\mu \sigma} \, \eta_{\nu \rho}
- 2 \, \eta_{\mu \nu} \, \eta_{\rho \sigma}) \, \Box ] ; \\
& & K^{\cal F} \rightarrow
K^{\cal F}_{\mu \nu \rho \sigma} =
\frac{1}{4} [ - ( 2 \, \eta_{\rho \sigma} \,
\partial_\mu \partial_\nu +
2 \, \eta_{\mu \nu} \, \partial_\rho \partial_\sigma ) +
\nonumber \\
& & \qquad \qquad \qquad
+ ( \eta_{\mu \rho} \, \partial_\nu \partial_\sigma +
\eta_{\nu \rho} \, \partial_\mu \partial_\sigma
+ \eta_{\mu \sigma} \, \partial_\nu \partial_\rho +
\eta_{\nu \sigma} \, \partial_\mu \partial_\rho) ] .
\end{eqnarray}
Here $K^{\cal F}$ is the operator produced by the harmonic gauge fixing
\begin{equation}
\frac{1}{2} \left( \partial^\mu h_{\mu \nu} -
\frac{1}{2} \partial^\nu h^\mu_\mu \right)^2.
\label{cre}
\end{equation}
\subsection{Linearized Einstein gravity in the first-order formalism.}
The quadratic part of the lagrangian has the form
\begin{equation}
L^{(2)} = - (
\delta^{\mu \nu \gamma}_{abc} \, \partial_\mu \, \Gamma^{ab}_\nu
\, \tau^c_\gamma + \delta^{\mu \nu}_{ab}
\, \Gamma^a_{c \mu} \, \Gamma^{cb}_\nu
+ T^\mu_a \, \tau^a_\mu + \Sigma^\mu_{ab} \, \Gamma^{ab}_\mu ) ,
\end{equation}
where
\begin{equation}
\tau^a_\mu = e^a_\mu - \delta^a_\mu
\end{equation}
and $T^\mu_a$ and $\Sigma^\mu_{ab}$ are the energy-momentum source and
the spin-torsion source, respectively. The gauge transformations have the form
\begin{equation}
Cf \rightarrow \left(
\begin{array}{cc}
0 & \ \ \delta^{ab}_{cd} \, \partial_\mu\\
& \\
\partial_\mu & \ \ - \eta_{\mu b} \, \delta^{ab}_{cd}
\end{array} \right)
\left(
\begin{array}{c}
\Lambda^a \\
\\
\theta^{cd}
\end{array} \right).
\end{equation}
The field equations are given by
\begin{equation}
K \, A = J \rightarrow \left(
\begin{array}{cc}
\frac{1}{2} (\eta_{c \sigma} \delta^{\nu \sigma \mu}_{dab}
- \eta_{d \sigma} \delta^{\nu \sigma \mu}_{cab}) & \ \
- \delta^{\lambda \gamma \mu}_{rab} \, \partial_\lambda \\
& \\
\delta^{\mu \lambda \nu}_{acd} \, \partial_\lambda & 0
\end{array} \right)
\left(
\begin{array}{c}
\Gamma^{cd}_{\nu} \\
\\
\tau^r_\gamma
\end{array} \right) =
\left(
\begin{array}{c}
\Sigma^\mu_{ab} \\
\\
T^\mu_a
\end{array} \right)
\end{equation}
and the gauge-fixing term has the form
\begin{equation}
K^{\cal F} \, A \rightarrow
\left(
\begin{array}{cc}
0 & \ \ 0 \\
& \\
0 & \ \ 4K^{\cal F}_{a \mu r \nu} \, \eta^{\nu \gamma}
+ 2\beta \, (\eta_{ar} \eta^{\mu \gamma} - \delta^\mu_r \delta^\gamma_a)
\end{array} \right)
\left(
\begin{array}{c}
\Gamma^{cd}_{\nu} \\
\\
\tau^r_\gamma
\end{array} \right),
\end{equation}
and is produced by the harmonic gauge fixing (\ref{cre}) with
$h_{\mu \nu}=\tau^a_\mu \eta_{a \nu} + \tau^a_\nu \eta_{a \mu}$,
to which the symmetric gauge fixing $ \frac{\beta}{2}
(\tau^a_\mu \eta_{a \nu} - \tau^a_\nu \eta_{a \mu})^2$ has been
added.
\section{Second order formalism.}
\subsection{Radial gauge.}
We first give a discussion at the classical level.
In the second order formalism we define the radial gauge through
the condition
\begin{equation}
\label{radsec}
\xi^\mu g_{\mu\nu}(\xi)=\xi^\mu\eta_{\mu\nu}
\end{equation}
($\eta_{\mu\nu}={\rm diag}(1,-1,-1,\dots)$ and $N$ denotes the space-time dimension).
Taking the derivative of (\ref{radsec}) with respect to $\xi^\lambda$ we
have
\begin{equation}
\label{dereq1}
g_{\lambda\nu}(\xi)+\xi^\mu \partial_\lambda
g_{\mu\nu}(\xi)=\eta_{\lambda\nu}.
\end{equation}
Under a regularity hypothesis on $g_{\mu\nu}(\xi)$, setting $\xi=0$ we obtain
\begin{equation}
\label{g(0)}
g_{\lambda\nu}(0)=\eta_{\lambda\nu}.
\end{equation}
Contracting the Levi-Civita connection on $\xi^\alpha$ and $\xi^\beta$ we get,
using (\ref{radsec}) and (\ref{dereq1})
\begin{eqnarray}
\label{LevCiv}
& & \xi^\alpha\xi^\beta\Gamma_{\mu,\alpha\beta}(\xi) =
\frac{1}{2}\xi^\alpha\xi^\beta
[ \partial_\beta g_{\mu\alpha}(\xi)+\partial_\alpha
g_{\mu\beta}(\xi)-\partial_\mu g_{\alpha\beta}(\xi)]=\nonumber\\
& & \qquad = \frac{1}{2} \xi^\beta[\eta_{\beta\mu} -g_{\beta\mu}(\xi)]
+ \frac{1}{2} \xi^\alpha[\eta_{\alpha\mu}-g_{\alpha\mu}(\xi)]
- \frac{1}{2} \xi^\beta[\eta_{\beta\mu}-g_{\beta\mu}(\xi)]=0.
\end{eqnarray}
Thus coordinates satisfying (\ref{radsec}) are Riemann's normal
coordinates, in the sense that the lines $\xi^\mu = \lambda n^\mu$
are autoparallel lines \cite{H.E.}. Furthermore (\ref{LevCiv})
combined with (\ref{radsec}) tells us that $\xi^\mu = \lambda n^\mu$
are also geodesics in the sense that they are extrema of the distance
between two events. In fact from (\ref{radsec}) we have that
\begin{equation}
(ds)^2 = d\xi^\mu \, g_{\mu \nu}(\xi) \, d\xi^\nu =
(d\lambda)^2 \, n^\mu \, g_{\mu \nu}(\xi) \, n^\nu =
(d\lambda)^2 \, n^\mu \, g_{\mu \nu}(\xi)\vert_{\xi=0} \, n^\nu
\end{equation}
and thus $\xi^\mu(s)$ satisfies the equation
\begin{equation}
\frac{d^2\xi^\mu}{d
s^2}+\Gamma^{\mu}_{\alpha\beta}(\xi)\frac{d\xi^\alpha}{d s} \frac{d
\xi^\beta}{d s}=0.
\end{equation}
\noindent The usual definition of the geodesic gauge is \cite{M.T.W.}
\begin{equation}
\label{radsec1}
\xi^\alpha \xi^\beta\Gamma_{\mu,\alpha\beta}(\xi)=0,
\end{equation}
as given by eq. (\ref{LevCiv}).
We want to prove that eq. (\ref{radsec}) follows from (\ref{radsec1})
(apart from a global linear transformation). In fact from (\ref{radsec1})
we have
\begin{eqnarray}
\label{Delta}
& & \frac{1}{2}\xi^\alpha\xi^\beta
[\partial_\beta g_{\mu\alpha}(\xi)+\partial_\alpha
g_{\mu\beta}(\xi)-\partial_\mu g_{\alpha\beta}(\xi)] = \nonumber\\
& & \qquad = \xi^\alpha \partial_\alpha[\xi^\beta g_{\beta\mu}(\xi)]
-\frac{1}{2}\partial_\mu [\xi^\alpha\xi^\beta
g_{\alpha\beta}(\xi)]=0.
\end{eqnarray}
Multiplying (\ref{Delta}) by $\xi^\mu$ we obtain
\begin{equation}
\frac{1}{2}\xi^\alpha\partial_\alpha[\xi^\beta\xi^\mu
g_{\beta\mu}(\xi)]-\xi^\beta\xi^\mu g_{\beta\mu}(\xi)=0,
\end{equation}
i.e. $\xi^\mu\xi^\beta g_{\mu\beta}(\xi)$ is a homogeneous
function of degree 2 in $\xi$.
In the following (see subsection 3.C) we shall consider metrics $g_{\mu\nu}$
which a priori are not regular at the origin but such that $\xi^\mu
g_{\mu\beta}(\xi) $ is a regular ($C^2$) function in a domain containing
the origin and which vanishes for $\xi^\mu=0$. Under such an assumption we
have
\begin{equation}
\xi^\beta \xi^\mu g_{\beta \mu}(\xi) = c_{\mu \beta} \xi^\beta \xi^\mu .
\end{equation}
In fact taking the second derivative we have
\begin{equation}
\partial_\rho \partial_\lambda
[ \xi^\beta \xi^\mu g_{\beta \mu}(\xi) ] =
H^0_{\rho \lambda}(\xi) = c_{\rho \lambda} ,
\end{equation}
because the only continuous homogeneous function of degree $0$ is
a constant.
\noindent Substituting this relation into (\ref{Delta}) we have
\begin{equation}
\xi^\alpha\partial_\alpha[\xi^\beta g_{\beta\mu}(\xi)
- \xi^\beta c_{\beta\mu}] = 0,
\end{equation}
i.e. $\xi^\beta g_{\beta\mu}(\xi)-\xi^\beta c_{\beta\mu}$ must be a
homogeneous function of degree 0, which has to vanish for $\xi=0$ and is
thus identically zero
\begin{equation}
\xi^\beta g_{\beta\mu}(\xi)=\xi^\beta c_{\beta\mu}.
\end{equation}
In conclusion we have found that (\ref{radsec1}) implies (\ref{radsec})
up to a global linear transformation of the coordinates.
In the following we shall adopt (\ref{radsec}) as the definition of the radial
gauge in the second order formalism.
\subsection{Radial Projectors.}
In this section we shall derive, given the linearized Riemann tensor, radial
fields $h^{0}_{\mu\nu}(x)$ and $h^\infty_{\mu\nu}(x)$ which generate such a
Riemann tensor. $h^{0}_{\mu\nu}(x)$ and $h^\infty_{\mu\nu}(x)$ differ in
their regularity properties at the origin and infinity and they will be the
ingredients out of which we construct the Green's function.
We now want to express the radial metric
$g_{\mu\nu}(x)=\eta_{\mu \nu}+h_{\mu \nu}(x)$ in
terms of the Riemann tensor, at the linearized level.
Let us start from the expression of the linearized Riemann tensor
\begin{equation}
\label{Riemann}
R^{L}_{\mu\nu,\alpha\beta}(x) = \frac{1}{2}
[\partial_\alpha\partial_\nu h_{\mu\beta}(x)
- \partial_\alpha\partial_\mu h_{\nu\beta}(x)
- \partial_\beta\partial_\nu h_{\mu\alpha}(x)
+ \partial_\beta\partial_\mu h_{\alpha\nu}(x)].
\end{equation}
$R^{L}_{\mu\nu,\alpha\beta}(x)$ is invariant under the
linearized reparametrization transformations. Using the radial condition
(\ref{radsec}) in the form
\begin{equation}
\label{radsec2}
{\cal G}(h)_\nu(x) = x^\mu h_{\mu\nu}(x) = 0
\end{equation}
we can rewrite the contraction of (\ref{Riemann}) with $x^\mu \, x^\alpha$
as follows
\begin{equation}
2 \, x^\mu x^\alpha R^{L}_{\mu\nu,\alpha\beta}(x) =
- x^\mu \partial_\mu [x^\alpha \partial_\alpha h_{\nu\beta}(x)] -
x^\alpha \partial_\alpha h_{\nu\beta}(x).
\end{equation}
Applying the usual change of variables $x\rightarrow\lambda x$ we get
\begin{eqnarray}
\label{Delta1}
& & -2 \, \lambda^2 x^\mu x^\alpha
R^{L}_{\mu \nu,\alpha \beta}(\lambda x) = \nonumber \\
& & \qquad =\lambda x^\mu \partial_\mu [\lambda x^\alpha
\partial_\alpha h_{\nu \beta}(\lambda x)] +
\lambda x^\alpha \partial_\alpha h_{\nu \beta}(\lambda x)=
\frac{d}{d\lambda} \left[ \lambda^2 x^\alpha \partial_\alpha
h_{\nu\beta}(\lambda x) \right].
\end{eqnarray}
Integrating (\ref{Delta1}) in $\lambda$ from $0$ to $1$ we obtain
\begin{equation}
\label{Delta2}
x^\alpha \partial_\alpha h_{\nu\beta}( x) =
- 2 \, x^\alpha x^\mu \int^1_0 d\lambda \, \lambda^2
R^{L}_{\mu\nu,\alpha\beta}(\lambda x),
\end{equation}
provided the metric is such that $|R^L( x)| \le | x|^{-3+\varepsilon}$
as $ x \to 0$. By $\vert x \vert^\alpha$ we mean here a quantity that
under $x^\mu \to \lambda x^\mu$ behaves like $\lambda^\alpha $ for $\lambda\to 0$.
We notice again that
\begin{equation}
\tau x^\alpha \partial_\alpha h_{\nu \beta}(\tau x) =
\tau \frac{d}{d\tau} \left[ h_{\nu \beta}(\tau x) \right]
\end{equation}
and thus we have
\begin{eqnarray}
\label{Delta3}
h^0_{\nu\beta}( x) & = & -2 \, x^\alpha x^\mu \int^1_0
d\tau \, \tau \int^1_0 d\lambda \, \lambda^2 \,
R^{L}_{\mu \nu,\alpha \beta}(\lambda \tau x) = \nonumber \\
& = & -2 \, x^\alpha x^\mu \int^1_0 d\lambda \, \lambda (1-\lambda)
\, R^{L}_{\mu \nu,\alpha \beta}(\lambda x) = \nonumber \\
& = & P^0[h]_{\nu \beta}(x),
\end{eqnarray}
provided $\vert R^L(x)\vert< \vert x\vert^{-2+\varepsilon}$ for $x\to 0$.
The superscript ``0'' on $h_{\nu\beta}( x)$ specifies the integration limits.
Another possibility to solve equation (\ref{Delta1}) is to integrate twice
from $1$ to $\infty$, provided the metric is such that
$|R^L( x)| \le | x|^{-3-\varepsilon}$ as $ x \to \infty$. Then we have
\begin{eqnarray}
\label{Delta4}
h^{\infty}_{\nu\beta}( x) & = & - 2 \, x^\alpha x^\mu
\int^\infty_1 d\tau \, \tau \int^\infty_1 d\lambda \, \lambda^2 \,
R^{L}_{\mu\nu,\alpha\beta}(\lambda\tau x) = \nonumber \\
& = & - 2 \, x^\alpha x^\mu \int^\infty_1 d\lambda \, \lambda
(\lambda-1) \, R^{L}_{\mu\nu,\alpha\beta}(\lambda x) = \nonumber \\
& = & P^\infty[h]_{\nu \beta}(x).
\end{eqnarray}
Eqs. (\ref{Delta3}), (\ref{Delta4}) define projectors from an arbitrary
gauge to the radial gauge. The projector nature of $P^0$ and $P^\infty$
is proven by showing that e.g. $h$ and $P^0[h]$ give rise to the same
Riemann tensor. In fact by using the linearized Bianchi identities
\begin{equation}
\partial_\lambda R_{\mu \nu , \alpha \beta}(x) +
\partial_\mu R_{\nu \lambda , \alpha \beta}(x) +
\partial_\nu R_{\lambda \mu , \alpha \beta}(x) = 0
\end{equation}
and the familiar symmetry properties of the Riemann tensor, one reduces
${R^L}^0_{\mu \nu , \alpha \beta}(x)$ to
\begin{equation}
{R^L}^0_{\mu \nu , \alpha \beta}(x) = \int_0^1 d\lambda \,
\frac{d}{d\lambda} [\lambda^2 \,
R^L_{\mu \nu , \alpha \beta}(\lambda x)].
\label{cxz}
\end{equation}
Whenever $P^0$ is defined on $h$ we have from (\ref{cxz})
\begin{equation}
{R^L}^0_{\mu \nu , \alpha \beta}(x) = R^L_{\mu \nu , \alpha \beta}(x).
\end{equation}
Furthermore from the gauge invariance of $R^L_{\mu \nu , \alpha \beta}(x)$
appearing in the definitions (\ref{Delta3}), (\ref{Delta4}) we have the
properties
\begin{equation}
P^0 \, P^\infty = P^0, \qquad P^\infty \, P^0 = P^\infty.
\label{fut}
\end{equation}
The propagator
\begin{equation}
\left< P^0[h]_{\mu \nu}(x) \, P^0[h]_{\rho \sigma}(y) \right>_0
\label{sog}
\end{equation}
computed by means of (\ref{Delta3}) is divergent in all dimensions due
to the $(x-y)^{-N-2}$ behavior of the correlation of two Riemann tensors
at short distances. On the other hand
\begin{equation}
\left< P^\infty[h]_{\mu \nu}(x) \, P^\infty[h]_{\rho \sigma}(y)
\right>_0
\end{equation}
diverges for $N\le 4$ due to the infrared behavior of the correlator
of two Riemann tensors. For this reason we shall adopt
a different gauge projector given by
\begin{equation}
P^S = \frac{1}{2} (P^0 + P^\infty)
\end{equation}
whose projector nature is immediately seen from (\ref{fut}). We elucidate
the nature of the boundary condition at $0$ and $\infty$ of the projector
$P^S$ at the end of section 3.C.
As we have shown in section 2 the solution of the projected Green's
function equation is given by
\begin{equation}
\frac{1}{2} \left< P^0[h]_{\mu \nu}(x) \, P^\infty [h]_{\rho \sigma}(y)
\right>_0 +
\frac{1}{2} \left< P^\infty[h]_{\mu \nu}(x) \, P^0 [h]_{\rho \sigma}(y)
\right>_0
\end{equation}
which as we shall show in the next subsection is convergent for $N>2$.
In order to write the source term in the projected Green's function
equation we must compute the adjoint of $P^S$, i.e. the adjoints of
$P^0$ and $P^\infty$. We have
\begin{eqnarray}
& & {P^0}^\dagger[T]^{\mu \nu}(x) = T^{\mu \nu}(x) +
\int_1^\infty d\tau \, \tau^N [x^\nu \partial_\rho T^{\mu \rho}(\tau x)
+ x^\mu \partial_\rho T^{\nu \rho}(\tau x)] + \nonumber \\
& & \qquad \qquad \qquad + x^\mu x^\nu \int_1^\infty d\tau \, \tau^N (\tau-1)
\partial_\alpha \partial_\beta T^{\alpha \beta}(\tau x), \nonumber \\
& & \nonumber \\
& & {P^\infty}^\dagger[T]^{\mu \nu}(x) = T^{\mu \nu}(x) -
\int_0^1 d\tau \, \tau^N [x^\nu \partial_\rho T^{\mu \rho}(\tau x)
+x^\mu \partial_\rho T^{\nu \rho}(\tau x)] + \nonumber \\
& & \qquad \qquad \qquad - x^\mu x^\nu \int_0^1 d\tau \, \tau^N (\tau-1)
\partial_\alpha \partial_\beta T^{\alpha \beta}(\tau x)
\end{eqnarray}
and
\begin{equation}
{P^S}^\dagger = \frac{1}{2} ({P^0}^\dagger + {P^\infty}^\dagger).
\end{equation}
\subsection{Radial propagators.}
We shall now derive the propagators using the general procedure explained
in section 2. The kinetic operator $K$ is given by formula (\ref{mkw}).
The equation for the propagator is
\begin{equation}
(K_x)_{\mu \nu}^{\rho \sigma} \, G_{\rho \sigma \alpha \beta}(x,y) =
P^\dagger[\Delta_{\mu \nu \alpha \beta}(x-y)],
\end{equation}
where
\begin{equation}
\Delta_{\mu \nu \alpha \beta}(x-y) =
\frac{1}{2} \, (\eta_{\mu \alpha} \, \eta_{\nu \beta}
+ \eta_{\mu \beta} \, \eta_{\nu \alpha}) \, \delta^N(x-y).
\label{qpf}
\end{equation}
As a preliminary step we must compute the correlator of two Riemann
tensors; this is most simply obtained by using the harmonic gauge.
We find
\begin{eqnarray}
\label{RR2}
& & \left< R_{\mu \nu , \alpha \beta}(x) \, R_{\rho \sigma , \lambda
\gamma}(y) \right>_0 =
\frac{1}{4} \delta^{\mu' \nu'}_{\mu \nu} \eta_{\nu' \nu''} \, (
\delta^{\rho' \nu''}_{\rho \nu} \delta^{\alpha' \beta'}_{\alpha \beta}
\delta^{\lambda' \beta''}_{\lambda \gamma}
\eta_{\beta' \beta''}+ \nonumber \\
& &+ \, \delta^{\lambda' \nu''}_{\lambda \gamma}
\delta^{\alpha' \beta'}_{\alpha \beta} \delta^{\rho' \beta''}_{\rho \sigma}
\eta_{\beta' \beta''} -
\mbox{$\frac{2}{N-2}$} \,
\delta^{\alpha' \nu''}_{\alpha \beta} \delta^{\rho' \sigma'}_{\rho \sigma}
\delta^{\lambda' \sigma''}_{\lambda \gamma}
\eta_{\sigma' \sigma''} ) \
\partial_{\mu'} \partial_{\alpha'} \partial_{\rho'} \partial_{\lambda'}
D(x-y)
\label{mpj}
\end{eqnarray}
where $D(x-y)$ is the usual Feynman propagator
\begin{equation}
D(x)=\frac{\Gamma (N/2-1)}{4 \pi^{\frac{N}{2}}}\frac{i}{(x^2- i \varepsilon
)^{N/2-1}}.
\end{equation}
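For instance, in $N=4$ this reduces to the familiar expression
$D(x) = \frac{i}{4 \pi^2 \, (x^2 - i \varepsilon)}$.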
If we choose $P=P^0$ and
substitute eq. (\ref{mpj}) in (\ref{jhg}) we find that the propagator can be
expressed as a linear combination of derivatives of integrals of the type
\begin{equation}
F^0_{\alpha\beta}(x,y)= \int_0^1 d\lambda \, (1-\lambda) \int_0^1 d\tau \,
(1-\tau) \partial_\alpha \partial_\beta D(\lambda x - \tau y).
\label{fsd}
\end{equation}
With the method explained below one sees that such an integral diverges
in any dimension due to the bad ultraviolet behavior of
$\partial_\alpha \partial_\beta D(\lambda x - \tau y)$. Similarly
\begin{equation}
G^\infty_{\mu \nu \rho \sigma}(x,y) =
\left< P^\infty[h]_{\mu \nu}(x) \, P^\infty[h]_{\rho \sigma}(y)
\right>_0
\end{equation}
computed along the same lines is expressed in terms of derivatives of
\begin{equation}
F^\infty_{\alpha\beta}(x,y)=\int_1^\infty d\lambda \, (1-\lambda)
\int_1^\infty d\tau \,
(1-\tau) \partial_\alpha \partial_\beta D(\lambda x - \tau y)
\label{gff}
\end{equation}
and as shown below it diverges for $N \leq 4$.
On the other hand the correlator
\begin{equation}
\left< P^0[h]_{\mu \nu}(x) \, P^\infty[h]_{\rho \sigma}(y)
\right>_0
\label{fuy}
\end{equation}
used to construct the solution of the $P^S$ projected Green's equation
converges for $N > 2$.
We prove now the above stated results: the correlator (\ref{fuy}) can
be expressed as a linear combination of derivatives of the double integral
\begin{equation}
\label{FS2}
F^S_{\alpha \beta}(x,y) =
\int_0^1 d\lambda \, (1-\lambda) \int_1^\infty d\tau \,
(\tau - 1) \partial_\alpha \partial_\beta D(\lambda x - \tau y)
\label{dkc}
\end{equation}
where
\begin{equation}
\partial_\alpha \partial_\beta D(x) =
-(N-2) \, \frac{\eta_{\alpha \beta}}{\ (x^2)^{N/2}} +
N(N-2) \, \frac{x_\alpha x_\beta}{\ \ (x^2)^{N/2+1}}.
\label{vgr}
\end{equation}
The first term of (\ref{vgr}) substituted in (\ref{dkc}) gives
\begin{equation}
\int_0^1 d\lambda \, \lambda^{-N+1}(1-\lambda) \int_0^\lambda
d\rho \, \left( \frac{\lambda}{\rho} - 1 \right)
\frac{\rho^{N-2}}{(\rho x - y)^N}
\end{equation}
and converges for $N>2$. Similarly one proves the convergence for
$N>2$ of the second term of (\ref{vgr}) substituted in (\ref{dkc}).
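The convergence for $N>2$ can also be checked numerically. The following
minimal Python sketch, using SciPy, is an illustration of ours and not part
of the derivation; it uses scalar stand-ins for the radial variables and
drops the constant tensor prefactor of (\ref{vgr}). It evaluates the above
$\lambda$--$\rho$ integral for $N=3$, $x=1$, $y=2$ and returns a finite
value:
\begin{verbatim}
# Numerical sanity check of the lambda-rho integral for N = 3;
# x and y are scalar stand-ins and the tensor prefactor is dropped.
from scipy.integrate import dblquad

N, x, y = 3, 1.0, 2.0

def integrand(rho, lam):
    # lam^{1-N} (1-lam) (lam/rho - 1) rho^{N-2} / (rho*x - y)^N
    return lam**(1 - N) * (1 - lam) * (lam / rho - 1) \
           * rho**(N - 2) / (rho * x - y)**N

# outer variable lam runs over (0,1), inner variable rho over (0,lam)
val, err = dblquad(integrand, 0, 1, lambda lam: 0.0, lambda lam: lam)
print(val, err)   # finite value with small quadrature error
\end{verbatim}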
Thus the solution of the $P^S$ projected Green's equation
\begin{equation}
(K_x)_{\mu \nu}^{\rho \sigma} \, G^S_{\rho \sigma \alpha \beta}(x,y) =
{P^S}^\dagger[\Delta_{\mu \nu \alpha \beta}(x-y)]
\end{equation}
given by
\begin{equation}
-iG^S_{\mu \nu \rho \sigma}(x,y) =
\frac{1}{2} \left< P^0[h]_{\mu \nu}(x) \, P^\infty[h]_{\rho \sigma}(y)
\right>_0 +
\frac{1}{2} \left< P^\infty[h]_{\mu \nu}(x) \, P^0[h]_{\rho \sigma}(y)
\right>_0
\end{equation}
converges for all $N>2$.
With regard to $G^\infty(x,y)$, expressed through (\ref{gff}), one reaches
for the analogue $F^\infty_{\alpha \beta}(x,y)$ of
$F^S_{\alpha \beta}(x,y)$ the integral
\begin{equation}
(N-2)\int_1^\infty d\lambda \, \lambda^{-N+1}(1-\lambda) \int_0^\lambda
d\rho \, \left( \frac{\lambda}{\rho} - 1 \right)
\frac{\rho^{N-2}}{(\rho x - y)^N}\left ( -\eta_{\alpha\beta}+N
\frac{x_\alpha x_\beta}{(\rho x-y)^2}\right )
\end{equation}
which converges only for $N>4$. Finally the
function $F^0_{\alpha \beta}(x,y)$ given by
\begin{equation}
F^0_{\alpha \beta}(x,y) =
\int_0^1 d\lambda \, (1-\lambda) \int_0^1 d\tau \,
(1-\tau) \partial_\alpha \partial_\beta D(\lambda x - \tau y)
\end{equation}
leads to integrals of the form
\begin{equation}
\int_0^1 d\lambda \, (1-\lambda) \, \lambda^{-N+1}
\int_\lambda^\infty d\rho \, \left( \frac{\lambda}{\rho} - 1 \right)
\frac{\rho^{N-2}}{(\rho x - y)^N}
\end{equation}
which diverge for all $N \geq 2$.
In conclusion, we produced a solution of the radial Green's function
equation that is radial, symmetric and finite for $N>2$. Such a
solution treats infinity and the origin in a symmetrical way in the
sense that the field
\begin{equation}
h^S_{\mu \nu}(x) = \int G^S_{\mu \nu \rho \sigma}(x,y) \,
t^{\rho \sigma}(y) \, d^Ny ,
\end{equation}
computed on a physical (i.e. conserved) source $t^{\rho \sigma}$,
behaves like
\begin{equation}
h^S(x) = K^0(x) + \frac{H^0(x)}{r} + O(r^\varepsilon)
\qquad \mbox{for} \ \ r \rightarrow 0
\label{lsu}
\end{equation}
and
\begin{equation}
h^S(x) = - K^0(x) - \frac{H^0(x)}{r} + O(r^{-1-\varepsilon})
\qquad \mbox{for} \ \ r \rightarrow \infty ,
\label{kdj}
\end{equation}
where $K^0$ and $H^0$ are homogeneous functions of $x$ of degree 0,
and $r=|x|$.
In fact, given a conserved source $t^{\rho \sigma}(y)$ we have
\begin{eqnarray}
& & -i \int G^S_{\mu \nu \rho \sigma}(x,y) \,
t^{\rho \sigma}(y) \, d^Ny = \frac{1}{2}
\int \left< P^0[h^F]_{\mu \nu}(x) \, h^F_{\rho \sigma}(y) \right>_0
\, {P^\infty}^\dagger[t]^{\rho \sigma}(y) \, d^Ny +
\nonumber \\
& & \qquad \qquad \qquad + \frac{1}{2} \int \left<
P^\infty[h^F]_{\mu \nu}(x)
\, h^F_{\rho \sigma}(y) \right>_0
\, {P^0}^\dagger[t]^{\rho \sigma}(y) \, d^Ny
\label{jds}
\end{eqnarray}
and using the fact that ${P^\infty}^\dagger$ and ${P^0}^\dagger$ act like
the identity on conserved sources, we have that the l.h.s. of (\ref{jds})
is given by $P^S[h^F]_{\mu \nu}(x)$, where $h^F_{\mu \nu}$ is the field,
computed in the harmonic gauge, associated with the conserved source
$t^{\rho \sigma}$. For $r \rightarrow \infty$, $h^0_{\mu \nu}(x)$
behaves from (\ref{Delta3}) like
\begin{eqnarray}
h^0_{\mu \nu}(x) & = & - \frac{2 x^\alpha x^\beta}{r^2} \int_0^r dr' \,
r' \, R^L_{\alpha \mu , \beta \nu}(r' \hat{x}) +
\frac{2 x^\alpha x^\beta}{r^3} \int_0^r dr' \, {r'}^2
R^L_{\alpha \mu , \beta \nu}(r' \hat{x}) = \nonumber \\
& = & K^0_{\mu \nu}(x) + \frac{H^0_{\mu \nu}(x)}{r}
+ O(r^{-1-\varepsilon})
\end{eqnarray}
while $h^\infty_{\mu \nu}(x)$ from (\ref{Delta4}) is given by
\begin{equation}
h^\infty_{\mu \nu}(x) = - \frac{2 x^\alpha x^\beta}{r^2}
\int_r^\infty dr' \, r' \, R^L_{\alpha \mu , \beta \nu}(r' \hat{x}) +
\frac{2 x^\alpha x^\beta}{r^3} \int_r^\infty dr' \, {r'}^2
R^L_{\alpha \mu , \beta \nu}(r' \hat{x})
\end{equation}
and is regular in the same limit, in the sense that
it vanishes at least as quickly as $h^F_{\mu \nu}$
itself. Similarly in the limit $r \rightarrow 0$
we find that $h^0_{\mu \nu}(x)$
vanishes and
\begin{equation}
h^\infty_{\mu \nu}(x) = - K^0_{\mu \nu}(x) - \frac{H^0_{\mu \nu}(x)}{r}
+ O(r^\varepsilon),
\end{equation}
from which one obtains the two relations (\ref{lsu}) and (\ref{kdj}).
In Appendix A it is shown how one can express the propagators
$G^S$ in terms of the familiar hypergeometric functions, while in Appendix
B we treat the $\beta\neq 0$ (i.e. non-sharp) radial gauge in the second
order formalism.
\section{First order formalism.}
\subsection{Radial gauge.}
We recall for completeness the main formulae for the radial gauge in the
first order formalism given in \cite{M.T.}. The gauge is defined by
\begin{equation}
\xi^\mu\Gamma^{ab}_\mu(\xi)=0
\end{equation}
\begin{equation}
\xi^\mu e^{a}_\mu(\xi)=\xi^\mu \delta^a_\mu
\end{equation}
and $\Gamma^{ab}_\mu (\xi)$ and $e^a_{\mu}(\xi)$ are expressed in terms of
the radial Riemann and torsion two-forms as follows
\begin{equation}
\label{eqrad1}
\Gamma^{a}_{b,\mu}(\xi)=\xi^\nu\int^1_0 d\lambda ~\lambda
R^{a}_{b,\nu\mu}(\lambda\xi)
\end{equation}
\begin{equation}
\label{eqrad2}
e^{a}_{\mu}(\xi)=\delta^a_\mu+\xi^\nu\xi^b\int^1_0 d\lambda ~\lambda
(1-\lambda) R^{a}_{b,\nu\mu}(\lambda\xi)
+\xi^\nu \int^1_0 d\lambda ~\lambda S^{a}_{\nu\mu}(\lambda\xi)
\end{equation}
for the derivation of which we refer to \cite{M.T.}. These formulae hold
under the
hypothesis of regularity at the origin for the field $(\Gamma^{ab}_\mu (x),
e^a_\mu (x))$. Radial projections corresponding to different assumptions about
the behavior of the field at the origin will be considered in the following
subsection.
\subsection{Radial Projectors.}
Similarly to what was done in section 3, at the linearized level we can use
relations (\ref{eqrad1}) and (\ref{eqrad2}) to project the general field
$(\Gamma,e)$ into the radial field $(\Gamma^R, e^R)$, since the r.h.s. of
eqs. (\ref{eqrad1}) and (\ref{eqrad2}) is invariant under linearized gauge
transformations
\begin{equation}
\left (
\begin{array}{c}
\Gamma^{ab}_{\mu}(x)\\
\\
\tau^{a}_{\mu}(x)
\end{array}
\right )
\longrightarrow
\left (
\begin{array}{c}
\Gamma^{ab}_{\mu}(x)+\partial_\mu \Theta^{ab}(x)\\
\\
\tau^{a}_{\mu}(x)-\Theta^{ab}(x)\eta_{b\mu}+\partial_{\mu} \Lambda^a (x)
\end{array}
\right )
\end{equation}
where $\tau^{a}_\mu (x)$ is related to $e^a_\mu(x)$ by
\begin{equation}
e^a_\mu (x)=\delta^a_\mu+\tau^a_\mu (x).
\end{equation}
$\Theta^{ab}(x)$ is the infinitesimal Lorentz-transformation and
$\Lambda^a(x)$ the infinitesimal translation. We shall be interested in the
projectors $P^{0}$ and $P^{\infty}$ to construct the projector
\begin{equation}
\label{1PS}
P^S=\frac{1}{2}(P^{0}+P^{\infty}).
\end{equation}
$P^0$ is given by (\ref{eqrad1}) and (\ref{eqrad2}) by substituting for
$R^{ab}_{\mu\nu}$ and $S^a_{\mu\nu}$ their linearized expressions
\begin{equation}
R^{L a b}_{\mu\nu}(x)=\partial_\mu \Gamma^{a b}_\nu (x) -\partial_\nu
\Gamma^{a b}_\mu (x)
\end{equation}
\begin{equation}
S^{L a }_{\mu\nu}(x)=\partial_\mu \tau^{a }_\nu (x) -\partial_\nu
\tau^{a}_\mu (x)
+\Gamma^{a b}_\mu (x)\eta_{b\nu}-\Gamma^{a b}_\nu (x)\eta_{b\mu}
\end{equation}
which are invariant under linearized gauge transformations; i.e.
\begin{equation}
\label{1P0}
P^{0}
\left (
\begin{array}{c}
\Gamma^{a}_{b,\mu}(x)\\
\\
\\
\tau^{a}_{\mu}(x)
\end{array}
\right )
=
\left (
\begin{array}{c}
x^\nu\displaystyle{\int}^1_0 d\lambda ~\lambda
R^{La}_{b,\nu\mu}(\lambda x)\\
\\
x^\nu x^b\displaystyle{\int}^1_0 d\lambda ~\lambda (1-\lambda)
R^{La}_{b,\nu\mu}(\lambda x)
+x^\nu \displaystyle{\int}^1_0 d\lambda ~\lambda S^{La}_{\nu\mu}(\lambda x)
\end{array}
\right )
\end{equation}
\begin{equation}
\label{1Pinfinity}
P^{\infty}
\left (
\begin{array}{c}
\Gamma^{a}_{b,\mu}(x)\\
\\
\\
\tau^{a}_{\mu}(x)
\end{array}
\right )
=
\left (
\begin{array}{c}
-x^\nu\displaystyle{\int}^\infty_1 d\lambda ~\lambda
R^{La}_{b,\nu\mu}(\lambda x)\\
\\
-x^\nu x^b\displaystyle{\int}^\infty_1 d\lambda ~\lambda (1-\lambda)
R^{La}_{b,\nu\mu}(\lambda x)
-x^\nu \displaystyle{\int}^\infty_1 d\lambda ~\lambda
S^{La}_{\nu\mu}(\lambda x)
\end{array}
\right ).
\end{equation}
$P^0$ projects on radial fields that are regular at the origin (i.e.
behaving like the original field) and gives a connection
$\Gamma^{ab}_\mu (x)$ behaving like ${1/r}$ at $\infty$.
On the other hand $P^\infty$ projects on a radial field that is regular
at $\infty$ and gives a connection
$\Gamma^{ab}_\mu (x)$ behaving like ${1/r}$ at the
origin. As we noticed in the introduction, the radial connection has to
behave like ${1/r}$ at the origin and/or at $\infty$;
otherwise the Wilson loop given by two radii and closed by two arcs, one
going to the origin and the other going to infinity (see Fig.~1), would be
identically 1, due to the radial nature of the connection;
but this would be contradictory, as one cannot fix at the kinematical level
the value of a gauge invariant quantity. $P^S$, which allows us to
construct a finite propagator for $N>2$, treats the origin and infinity in
a symmetrical way by giving the same ${1/r}$ behavior in
the two limits with opposite coefficients. This is seen in the same way as
shown at the end of section 3 and in Appendix A in the second order formalism.
Eq. (\ref{1PS}) together with eqs. (\ref{1P0}) and (\ref{1Pinfinity}) will
allow us to compute the radial propagator associated with the $P^S$
projector, in terms of the correlation functions of the linearized Riemann
and torsion two forms, which are invariant under linearized gauge
transformations and as such can be computed in any gauge (e.g. the harmonic
gauge). First we must derive the action of the adjoint projectors on the
torsion source $\Sigma^\mu_{ab}=-\Sigma^\mu_{ba}$ and on the energy
momentum tensor $T^\mu_a$.
\begin{eqnarray}
\label{P0dagS}
&&P^{0\dagger}(\Sigma)^\mu_{ab}=\Sigma^\mu_{ab}(x)+x^\mu \int^\infty_1
d\lambda \lambda^{N-1}\left (\partial_\rho \Sigma^\rho_{ab}(\lambda
x)+\frac{1}{2}\left (
T^\rho_a(\lambda x)\eta_{\rho b} -T^\rho_b(\lambda x)\eta_{\rho
a}\right)\right)\nonumber\\
&&+\frac{x^\mu}{2}\int^\infty_1 d\lambda
(\lambda^N-\lambda^{N-1})(x_b\partial_\rho T^\rho_a-x_a\partial_\rho T^\rho_b)
\end{eqnarray}
\begin{equation}
\label{P0dagT}
P^{0\dagger}(T)^\mu_a=T^\mu_a (x)+x^\mu\int^\infty_1 d\lambda \lambda^{N-1}
\partial_\rho T^\rho_a (\lambda x)
\end{equation}
The $P^{\infty\dagger}$ are given by changing in (\ref{P0dagS}) and
(\ref{P0dagT}) the integration limits from $(1,\infty)$ to $(1,0)$.
One easily verifies from (\ref{P0dagS}) and
(\ref{P0dagT}) that the projected sources $P^{0\dagger}(\Sigma,T)$ satisfy
the linearized Bianchi identities, as proved on general grounds in section 2
eq. (2.17)
\begin{equation}
\label{Bianchi1}
\partial_\mu \Sigma^\mu_{ab}+\frac{1}{2}(T_{a}^\rho\eta_{\rho b}-T_{b}^\rho
\eta_{\rho a} )=0
\end{equation}
\begin{equation}
\label{Bianchi2}
\partial_\mu T^\mu_a=0.
\end{equation}
In addition one notices that the r.h.s. of (\ref{P0dagS}) and
(\ref{P0dagT}) are integrals of the linearized Bianchi identities
(\ref{Bianchi1}) and (\ref{Bianchi2}) and thus $P^{0\dagger}$ and
$P^{\infty\dagger}$ leave unchanged the sources satisfying the Bianchi
identities, in agreement with the general argument of section 2.
\subsection{Radial Propagators.}
Similarly to what happens in the second order formalism, the propagators
constructed from $\langle P^0(\Gamma,\tau) P^0(\Gamma,\tau)\rangle_0$
are divergent, while $\langle P^\infty(\Gamma,\tau)
P^\infty(\Gamma,\tau)\rangle_0$ diverges for $N\le 4$. Thus we shall
construct
the solution for the $P^S$ projected Green's function equation
\begin{equation}
\label{EQS}
\left (
\begin{array}{ccc}
\frac{1}{2}(\delta_{am}^{\mu\nu}\eta_{bn}-\delta_{an}^{\mu\nu}\eta_{bm}) &
& \delta^{\lambda\mu\sigma}_{abd}\partial_\lambda\\
& & \\
\delta^{\mu\lambda\nu}_{pmn}\partial_\lambda & & 0
\end{array}
\right)
\left (
\begin{array}{ccc}
G^{mn,rs}_{\nu,\gamma} & & G^{mn,g}_{\nu,\beta}\\
& \\
G^{d,rs}_{\sigma,\gamma} & & G^{d,g}_{\sigma,\beta}
\end{array}
\right )
=
P^{S\dagger}\left (
\begin{array}{ccc}
\delta^\mu_\gamma \delta^{rs}_{ab} & & 0\\
& &\\
0 & &\delta^\mu_\beta \delta^g_p
\end{array}
\right )\delta^N (x-y)
\end{equation}
by means of the general technique described in sections 2 and 3. One proves that
\begin{eqnarray}
\label{1GPS}
&&-i G^S (x,y)=\\
&&\frac{1}{2}\left \langle P^{0}
\left (
\begin{array}{c}
\Gamma^{a}_{b,\mu}(x)\\
\\
\tau^{a}_{\mu}(x)
\end{array}
\right )
P^{\infty}
\left (
\Gamma^{a}_{b,\mu}(y)\;
\tau^{a}_{\mu}(y)
\right )\right \rangle
+\frac{1}{2}
\left \langle P^{\infty}
\left (
\begin{array}{c}
\Gamma^{a}_{b,\mu}(x)\\
\\
\tau^{a}_{\mu}(x)
\end{array}
\right )
P^{0}
\left (
\Gamma^{a}_{b,\mu}(y)\;
\tau^{a}_{\mu}(y)
\right )\right \rangle\nonumber
\end{eqnarray}
is a convergent symmetric radial solution of (\ref{EQS}) for all
$N>2$. The explicit form of the solution (\ref{1GPS}) of the Green's function
equation (\ref{EQS}) is easily computed by using (\ref{1P0}) and
(\ref{1Pinfinity}), where the correlators between Riemann and torsion
two-forms, which are invariant under linearized gauge transformations, can be
obtained using e.g. the usual symmetric harmonic gauge ($\beta=\infty$) in
which
\begin{equation}
\label{TT}
\langle
\tau_{a\mu}(x)\tau_{b\nu}(y)\rangle=-i\left(\eta_{ab}\eta_{\mu\nu}+\eta_{a\nu}
\eta_{b\mu}-\frac{2}{N-2}\eta_{a\mu}\eta_{b\nu}\right)D(x-y).
\end{equation}
We get
\begin{equation}
\label{RR}
\langle R^{Lab}_{\mu\nu}(x) R^{Lcd}_{\lambda\rho}(y)\rangle=
\langle R^{Lab}_{\mu\nu}(x) R^{Lcd}_{\lambda\rho}(y)\rangle^{II}+
[(\partial_{\mu}\partial_{\lambda} M^{ab,cd}_{\nu,\rho}-
\partial_{\mu}\partial_{\rho} M^{ab,cd}_{\nu,\lambda})-
(\mu\leftrightarrow\nu)]
\end{equation}
where $\langle\ \rangle^{II} $ means the correlator in the second order
formalism (see section 4) and
\begin{equation}
\label{GG}
M^{ab,cd}_{\nu,\rho}=-\frac{i}{4}\left(
\delta_{\nu\rho}\delta^{ab}_{c'd'}\eta^{c'c}\eta^{d'd}
+\left (\delta^{ab}_{\nu\alpha}
\delta^{cd}_{\rho\beta}-\frac{2}{N-2}\delta^{ab}_{\rho\alpha}
\delta^{cd}_{\nu\beta}\right )\eta^{\alpha\beta}\right)\delta^N(x-y)
\end{equation}
is the usual ultralocal part of the propagator of $\Gamma$ in the first
order formalism \cite{M.P.}.
\begin{equation}
\label{SS}
\langle S^{La}_{\mu\nu}(x) S^{Lb}_{\rho\sigma}(y)\rangle=
[ (M^{a c,b d}_{\mu,\rho}\eta_{c\nu}\eta_{d\sigma}-
M^{a c,b d}_{\mu,\sigma}\eta_{c\nu}\eta_{d\rho})-
(\mu\leftrightarrow\nu)]
\end{equation}
\begin{equation}
\label{RS}
\langle R^{Lab}_{\mu\nu}(x) S^{Lc}_{\rho\sigma}(y)\rangle=
[(\partial_{\mu} M^{ab,cd}_{\nu,\rho}\eta_{d\sigma}-
\partial_{\nu} M^{ab,cd}_{\mu,\rho}\eta_{d\sigma})-
(\rho\leftrightarrow\sigma)
].
\end{equation}
The local nature of the correlators (\ref{SS}) and (\ref{RS}) reflects the
well-known fact that the torsion does not propagate in the Einstein-Cartan
theory.
The propagators are obtained by substituting the correlators of the Riemann
and torsion tensors given by (\ref{RR}), (\ref{SS}) and (\ref{RS}) into
(\ref{1GPS}).
The integrals over
$\lambda$ and $\tau$ of the $\langle R R \rangle^{II}$ correlators are the
same as those given in the second order formalism.
The contact terms generated by the expression of $M^{ab,cd}_{\nu,\rho}$
and appearing on the r.h.s. of eqs. (\ref{RR}), (\ref{SS}) and (\ref{RS})
lead to integrals in $\lambda$ and $\tau$ which are
convergent. In fact the generic integral is of the form
\begin{eqnarray}
&&\int^1_0 d\lambda \lambda^A \int^\infty_1 d\tau \tau^B
\delta^N(\lambda x-\tau y)=\nonumber \\
&&\int^1_0 d\lambda \lambda^A \int^\infty_1 d\tau \tau^B \delta (\lambda
|x|-
\tau |y|)(\lambda |x|)^{-N+1} \delta^{N-1}(\Omega_x-\Omega_y)=\\
&&\delta^{N-1}(\Omega_x-\Omega_y) \Theta (|x|-|y|)\frac{1}{A+B-N+2}
\left ( |y|^{-B-1}|x|^{B-N+1}-|y|^{A-N+1}|x|^{-A-1} \right ).\nonumber
\end{eqnarray}
We notice that in three dimensions the Riemann-Riemann correlator
(\ref{RR}) is identically zero, as can be explicitly verified by
substituting (\ref{TT}) and (\ref{GG}) into (\ref{RR}) or, alternatively,
by observing that the (2,1) component of eq. (\ref{EQS}) in three
dimensions reads $\langle R^L(x)\Gamma(y)\rangle \equiv 0$, which implies
that $\langle R^L(x) R^L(y)\rangle\equiv 0$. This means, through
(\ref{1P0}) and (\ref{1Pinfinity}), that
$\langle\Gamma(x)\Gamma(y)\rangle\equiv 0$ and that
$\langle \tau(x) \tau(y)\rangle$ is zero except for singular contributions
that arise when the origin, $x$ and $y$ are collinear; such contributions
are given by (\ref{RR}), (\ref{SS}) and (\ref{RS}).
Also in the second order approach the correlator $\langle h(x)h(y)\rangle$
in three dimensions reduces to the collinear singular terms. Thus in the
radial gauge one reads directly the non-existence of propagating gravitons,
while in the harmonic gauge one propagates a pure gauge.
The surviving collinear singularity of the radial propagator in three
dimensions reflects the topological nature of the theory.
\section{Conclusions.}
In the present paper we considered the quantization of the gravitational
field in the radial gauge.
The main advantage of this gauge is to provide physical correlation
functions, i.e. correlation functions defined at points described
invariantly in terms of a geodesic system of coordinates radiating from a
given origin.
Given a generic field, we developed general projection formulae for
extracting from it a radial field that is gauge equivalent to the given
one; the projection is obviously not unique due to the presence of the
residual gauge. For reasons intrinsic to the gauge nature of the considered
theory, which can be easily understood by considering a special family of
Wilson loops, such projected fields develop some kind of singular behavior
at the origin or a very slow decrease at infinity. The best choice is to
treat the origin and infinity in a symmetrical way by sharing such
non-regular behavior equally, and this fixes the gauge completely.
Technically the projection procedure is obtained through the Riemann and
torsion tensors, which are invariant at the linearized level. Still the
direct computation of the v.e.v. of the symmetrically projected field
contains an infinite solution of the homogeneous radially projected
Green's function equation, due to the singular behavior of the
Riemann-Riemann correlator, which has to be subtracted away. In this way
one obtains an explicit solution of the radially projected Green's
function equation that is symmetric in the field arguments, radial and
finite for all dimensions larger than 2.
In three dimensions the propagators vanish identically except for collinear
contributions; this is in agreement with the absence of propagating
gravitons in three dimensions, while the collinear contributions are
remnants at the perturbative level of topological effects
(conical defects) introduced by matter
in three dimensional gravity \cite{Flatland}.
\section{Introduction}
Let $\Omega$ be a convex and bounded subset of $\mathbb{R}^{d}~(d\geq 1)$ with smooth boundary $\partial \Omega$, and let $[0,T]$ be a fixed finite time interval. We consider the following integro-differential equation of Kirchhoff type involving a fractional time derivative of order $\alpha ~(0<\alpha <1)$ for non-homogeneous materials
\begin{align*}\tag{$D_{\alpha}$}
^{C}D^{\alpha}_{t}u- \nabla\cdot\left(M\left(x,t,\|\nabla u\|^{2}\right)\nabla u\right)=f(x,t)+\int_{0}^{t}b(x,t,s)u(x,s)ds \quad \text{in}~ \Omega \times (0,T],
\end{align*}
with initial and boundary conditions
\begin{align*}
u(x,0)&=u_{0}(x) \quad \text{in} ~\Omega,\\
u(x,t)&=0 \quad \text{on} ~\partial \Omega \times [0,T],
\end{align*}
where $u:=u(x,t) :\Omega \times [0,T]\rightarrow \mathbb{R} $ is the unknown function, $M:\Omega\times [0,T]\times \mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$, the initial data $ u_{0}$ and the source term $f$ are known functions, and $b(x,t,s)$ is a memory operator to be defined later. $^{C}D_{t}^{\alpha}u$ in the problem $(D_{\alpha})$ is the fractional time derivative of order $\alpha$ in the Caputo sense, which is defined in \cite{podlubny1998fractional} as
\begin{equation}\label{LS1}
^{C}D_{t}^{\alpha}u:=\frac{1}{\Gamma(1-\alpha)} \int_{0}^{t}\frac{1}{(t-s)^{\alpha}}\frac{\partial u(s)}{\partial s}ds,
\end{equation}
where $\Gamma(\cdot)$ denotes the gamma function. For the case $\alpha=1$, Lalit et al. \cite{kumar2020finite} proposed a linearized backward Euler-Galerkin FEM and a linearized Crank-Nicolson-Galerkin FEM with accuracy $O(h+k)$ and $O(h+k^{2})$, respectively.\\
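\noindent As a simple illustration of definition \eqref{LS1}, for the power function $u(t)=t^{\beta}$ with $\beta>0$ one finds $^{C}D_{t}^{\alpha}t^{\beta}=\frac{\Gamma(\beta+1)}{\Gamma(\beta+1-\alpha)}t^{\beta-\alpha}$, which recovers the classical first derivative in the limit $\alpha \rightarrow 1$.\\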
\noindent There are various notions of fractional derivatives other than the Caputo derivative, which include the Riemann-Liouville, Gr\"{u}nwald-Letnikov, Weyl, Marchaud, and Riesz fractional derivatives \cite{podlubny1998fractional, miller1993introduction}. Among these, the Caputo fractional derivative and the Riemann-Liouville fractional derivative are the most commonly found in the literature \cite{podlubny1998fractional}. These two fractional derivatives are related to each other by the following relation, see \cite{podlubny1998fractional}
\begin{equation}\label{1.2}
^{C}D_{t}^{\alpha}u(t)=~^{R}D_{t}^{\alpha}\left(u(t)-u(0)\right),
\end{equation}
where $^{R}D_{t}^{\alpha}u$ is the Riemann-Liouville fractional derivative defined by
\begin{equation}
^{R}D_{t}^{\alpha}u(t):=\frac{1}{\Gamma(1-\alpha)}\frac{d}{dt}\int_{0}^{t}\frac{1}{(t-s)^{\alpha}}u(s)ds .
\end{equation}
It is worth noting that fractional derivatives other than the
Caputo fractional derivative require initial conditions containing the limiting values of fractional derivatives at $t=0$ \cite{podlubny1998fractional}, which have no physical interpretation. An advantage of choosing the Caputo fractional derivative in the problem $(D_{\alpha})$ is that it allows the initial and boundary conditions in the same way as those for integer order differential equations. \\
\noindent There are many physical and biological processes in which the mean-squared displacement of the particle motion grows only sublinearly with time $t$, instead of linearly. For instance, acoustic wave propagation in viscoelastic materials \cite{mainardi2010fractional}, cancer invasion systems \cite{manimaran2019numerical}, and anomalous diffusion transport \cite{metzler2000random}, which cannot be described accurately by classical models having integer order derivatives. Therefore the study of fractional differential equations has evolved immensely in recent years.\\
\noindent Problems involving fractional time derivatives have been studied by many researchers, for instance, see \cite{schneider1989fractional,huang2005time, giga2017well}.
Analytical solutions of fractional differential equations are expressed in terms of Fox $H$-functions, Green functions, and hypergeometric functions. Such special functions are more complex to compute, which restricts the applications of fractional calculus in applied sciences. This motivates researchers to develop numerical algorithms for solving fractional differential equations.\\
\noindent There are two predominant discretization techniques in time for fractional differential equations, namely the Gr\"{u}nwald-Letnikov approximation \cite{dimitrov2013numerical} and the L1 type approximation \cite{alikhanov2015new,lin2007finite}. The second category, namely the L1 type approximation scheme, is based on piecewise interpolation of the integrand in definition \eqref{LS1} of the Caputo fractional derivative. Lin and Xu \cite{lin2007finite} developed the L1 scheme based on piecewise linear interpolation for the Caputo fractional derivative and a Legendre spectral method in space for the following time fractional PDE in one space dimension
\begin{equation}\label{1.3}
\begin{aligned}
^{C}D^{\alpha}_{t}u-\frac{\partial^{2}u}{\partial x^{2}}&=f(x,t)\quad x \in (0,L),~ t \in (0,T],\\
u(x,0)&=g(x) \quad x \in (0,L),\\
u(0,t)=u(L,t)&=0 \quad 0 \leq t \leq T,
\end{aligned}
\end{equation}
and achieved convergence estimates of $O(h^{2}+k^{2-\alpha})$ for solutions belonging to $C^{2}\left([0,T];H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\right)$. Recently, Alikhanov \cite{alikhanov2015new} proposed a modification of the L1 type scheme in the time direction and a difference scheme in the space direction for the problem \eqref{1.3}. In his work, the author proved that the convergence rate is $O(h^{2}+k^{2})$ for solutions belonging to $C^{3}\left([0,T];C^{4}(\Omega)\right)$.\\
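\noindent To fix ideas, the following minimal Python sketch (an illustration under the assumption of a sufficiently smooth $u$; the function name and the test data are ours) implements the L1 approximation of \eqref{LS1} on a uniform mesh with step size $k$ and compares it with the exact Caputo derivative of $u(t)=t^{2}$:
\begin{verbatim}
import math

def caputo_l1(u_vals, k, alpha):
    # L1 approximation of the Caputo derivative of order alpha at
    # t_n = n*k from the nodal values u_vals = [u^0, ..., u^n]:
    # (k^{-alpha}/Gamma(2-alpha)) * sum_j b_j (u^{n-j} - u^{n-j-1}),
    # with weights b_j = (j+1)^{1-alpha} - j^{1-alpha}.
    n = len(u_vals) - 1
    b = [(j + 1)**(1 - alpha) - j**(1 - alpha) for j in range(n)]
    s = sum(b[j] * (u_vals[n - j] - u_vals[n - j - 1]) for j in range(n))
    return s / (k**alpha * math.gamma(2 - alpha))

alpha, k, n = 0.5, 1.0e-3, 1000                 # so that t_n = 1
u = [(j * k)**2 for j in range(n + 1)]          # u(t) = t^2
exact = math.gamma(3) / math.gamma(3 - alpha)   # Caputo derivative at t = 1
print(caputo_l1(u, k, alpha), exact)
\end{verbatim}
Halving $k$ reduces the observed error by roughly a factor of $2^{2-\alpha}$, in line with the $O(k^{2-\alpha})$ estimates quoted above.\\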
\noindent On a similar note, considerable attention has been devoted to nonlocal diffusion problems, where the diffusion depends on the entire domain rather than on pointwise values. Lions \cite{lions1978some} studied the following problem
\begin{equation*}
\frac{\partial^{2}u}{\partial t^{2}}-M\left(x,\int_{\Omega}|\nabla u|^{2}dx\right)\Delta u=f(x,t) \quad \text{in}\quad \Omega \times [0,T],
\end{equation*}
which models transversal oscillations of an elastic string or membrane by taking into account the change in length during vibrations. Nonlocal terms also appear in various physical and biological systems, for instance, the temperature in a thin region during friction welding \cite{kavallaris2007behaviour}, Kirchhoff equations with magnetic field \cite{mingqi2018critical}, Ohmic heating with variable thermal conductivity \cite{tzanetis2001nonlocal}, and many more. We cite \cite{mishra2015existence,goel2019kirchhoff,correa2004existence} for some contemporary works related to the existence, uniqueness, and regularity of problems involving nonlocal terms. \\
\noindent The models discussed above behave accurately only for a perfectly homogeneous medium, but in real-life situations a large number of heterogeneities are present, which cause some memory effect or feedback term. These phenomena cannot be described by classical PDEs, which motivates us to study time fractional PDEs for non-homogeneous materials. We note that this class of equations has not been analyzed in the literature yet, and this is the first attempt to establish new results for the model $(D_{\alpha})$.\\
\noindent The salient feature of our problem is its doubly nonlocal nature due to the presence of the Kirchhoff term and of the fractional derivative and memory terms. The memory term incorporates the history of the phenomena under investigation, which is crucial in evolutionary processes. This structure induces additional difficulties in the use of classical methods of PDEs. In the first part of this article, we show the existence and uniqueness of weak solutions for the problem $(D_{\alpha})$ by applying the Galerkin method. Further, we discretize the domain in the space direction by using a conforming FEM \cite{thomee1984galerkin}, keeping the time variable continuous. We define a new Ritz-Volterra type operator as an intermediate function and derive error estimates of $O(h)$ for semi-discrete solutions of the problem $(D_{\alpha})$.\\
\noindent Finally, we find numerical solutions of $(D_{\alpha})$ by discretizing the domain in the time direction also. Qiao et al. \cite{qiao2019adi} studied $(D_{\alpha})$ with a weakly singular kernel and without the Kirchhoff term and derived an $O(h^{2}+k^{2-\alpha})$ convergence rate by using difference schemes in space and an L1 type scheme in the time direction. For the case $b=0$ in $(D_{\alpha})$, Manimaran et al. \cite{manimaran2021finite} used L1 type methods for the fractional derivative and Newton's method for the nonlinearity to obtain $O(h^{2}+k^{2-\alpha})$ accuracy for sufficiently smooth solutions. \\
\noindent The novelty of our work is to obtain the optimal rate of convergence $O(k^{2})$ in time for the numerical solutions of the problem $(D_{\alpha})$. For nonlinear PDEs, linearization techniques have an advantage over Newton's iteration method in terms of computational cost and storage. This is the motivation for us to develop a linearized fully discrete formulation based on the Galerkin FEM in space and the L1 type scheme in the time direction. This scheme is proved to be accurate of order $O(h+k^{2-\alpha})$. Further, to increase the accuracy in time, we propose a new linearized numerical algorithm based on the L2-1$_{\sigma}$ scheme. In this scheme, we prove that numerical solutions converge to the exact solution of $(D_{\alpha})$ with the optimal rate of accuracy $O(h+k^{2})$. The presence of the Kirchhoff term makes the problem nonlinear, which requires high computational effort to obtain the numerical solutions of ($D_{\alpha}$) at each time step. Also the discretization of the memory term demands large computer storage. To resolve these issues, we come up with a linearization technique for the approximation of the Kirchhoff term, and the memory term is approximated by the modified Simpson's rule \cite{pani1992numerical}. Further, some numerical experiments are carried out to confirm the theoretical findings. In addition, we remark that to the best of our knowledge, there is no article that has established convergence of $O(h+k^{2})$ for time fractional PDEs involving a memory term.\\
\noindent Turning to the layout of this paper, in Section 2, we provide some preliminaries and state the main contributions of this work. Section 3 includes the proof of the existence and uniqueness of weak solutions for the problem $(D_{\alpha})$. In Section 4, the corresponding semi-discrete formulation is studied. A priori bounds and error estimates for semi-discrete solutions are derived, which are uniform in time. In Section 5, we develop a new L1 type Galerkin FEM, derive a priori bounds on numerical solutions of $(D_{\alpha})$, and prove that our numerical algorithm has a convergence rate of $O(h+k^{2-\alpha})$. In Section 6, we establish the optimal rate of convergence $O(h+k^{2})$ in time by developing a new fractional Crank-Nicolson-Galerkin FEM. Section 7 contains numerical experiments which reveal that the theoretical results are sharp. Finally, we conclude this work in Section 8. \\
\noindent In this paper, for any two quantities, $a$ and $b$, the symbol $a \lesssim b$ means that there is a generic positive constant $K$ such that $a\leq K b$. Here $K$ may vary at different stages but is independent of mesh parameters $h$ and $k$.
\section{Preliminaries and main results}
This section of the article is intended to provide some preliminaries for time fractional PDEs, which will be used in subsequent sections. Further, we state the main results of this article. \\
\noindent Let $L^{2}(\Omega)$ be the standard Lebesgue space of square integrable functions with norm $\|\cdot\|$, induced by the inner product $(\cdot,\cdot)$, and let $H^{1}_{0}(\Omega)$ be the standard Sobolev space of $H^{1}(\Omega)$ functions vanishing on $\partial \Omega$. Let us denote
\begin{equation}\label{2.1}
k(t):=\frac{t^{-\alpha}}{\Gamma(1-\alpha)},
\end{equation}
and $\ast$ denotes the convolution of two integrable functions $g$ and $h$ on $[0,T]$ as follows
\begin{equation}\label{354}
(g\ast h)(t)=\int_{0}^{t}g(t-s)h(s)ds\quad \text{for}~~t~~\text{in}~~[0,T].
\end{equation}
\begin{rmk}\label{rmk1}
Note that $l(t)$ defined by $l(t):=\frac{t^{\alpha-1}}{\Gamma(\alpha)}$ satisfies $k\ast l=1$.
\end{rmk}
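\noindent The identity in Remark \ref{rmk1} follows from the Beta function: for $t>0$,
\begin{equation*}
(k\ast l)(t)=\frac{1}{\Gamma(1-\alpha)\Gamma(\alpha)}\int_{0}^{t}(t-s)^{-\alpha}s^{\alpha-1}ds=\frac{\Gamma(1-\alpha)\Gamma(\alpha)}{\Gamma(1-\alpha)\Gamma(\alpha)}=1.
\end{equation*}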
\noindent Now using \eqref{2.1}, \eqref{354}, and \eqref{1.2}, we rewrite the problem $(D_{\alpha})$ in $\Omega \times (0,T]$ as
\begin{equation}\tag{$R_{\alpha}$}
\begin{aligned}
\frac{d}{dt}\left[k\ast (u-u_{0})\right](t)- \nabla\cdot\left(M\left(x,t,\|\nabla u\|^{2}\right)\nabla u\right)&=f(x,t)+\int_{0}^{t}b(x,t,s)u(x,s)ds,
\end{aligned}
\end{equation}
with initial and boundary conditions
\begin{equation*}
\begin{aligned}
u(x,0)&=u_{0}(x) \quad \text{in} ~\Omega,\\
u(x,t)&=0 \quad \text{on} ~\partial \Omega \times [0,T].\\
\end{aligned}
\end{equation*}
\noindent Throughout the paper, we assume the following hypotheses on data:\\
(H1)\quad $u_{0}\in H^{1}_{0}(\Omega)$ and $f \in L^{\infty}\left(0,T,L^{2}(\Omega)\right)$.\\
(H2)\quad $M: \Omega \times (0,T]\times \mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$ is a Lipschitz continuous function such that there exists a constant $m_{0}$ which satisfies
\begin{equation*}\label{1.1}
M(x,t,s)\geq m_{0} >0 ~\text{for all }~ (x,t,s) \in \Omega \times (0,T]\times \mathbb{R}^{+} ~\text{and} \left(m_{0}-4L_{M}\hat{K}^{2}\right)>0,
\end{equation*}
where $\hat{K}=\left(\|\nabla u_{0}\|+\|f\|_{L^{\infty}\left(0,T,L^{2}(\Omega)\right)}\right)$ and $L_{M}$ is a Lipschitz constant.\\
(H3)\quad $b(x,t,s)$ is a second order partial differential operator of the form
\begin{equation*}
b(x,t,s)u(x,s)=-\nabla\cdot(b_{2}(x,t,s)\nabla u(x,s))+\nabla \cdot(b_{1}(x,t,s)u(x,s))+b_{0}(x,t,s)u(x,s),
\end{equation*}
with $b_{2}:\Omega\times(0,T]\times (0,T] \rightarrow \mathbb{R}^{d\times d}$ is a matrix with entries $[b_{2}^{ij}(x,t,s)]$, $b_{1}:\Omega \times(0,T]\times (0,T] \rightarrow \mathbb{R}^{d}$ is a vector with entries $[b_{1}^{j}(x,t,s)]$ and $b_{0}: \Omega\times (0,T]\times (0,T] \rightarrow \mathbb{R}$ is a scalar function. We assume that $b_{2}^{ij}, b_{1}^{j}, b_{0}$ are smooth functions in all variables for $i,j=1,2,\dots,d$.\\
\noindent The notion of weak formulation for the problem $(R_{\alpha})$ is described as follows:\\
find
$u$ in $L^{\infty}\left(0,T,L^{2}(\Omega)\right) \cap L^{2}\left(0,T,H^{1}_{0}(\Omega)\right)$ and $\frac{d}{dt}\left[k\ast (u-u_{0})\right](t)$ in $L^{2}\left(0,T,H^{-1}(\Omega)\right) $ such that the following holds for all $v$ in $H^{1}_{0}(\Omega)$ and a.e. $t$ in $(0,T]$
\begin{equation}\tag{$W_{\alpha}$}
\begin{aligned}
\left(\frac{d}{dt}\left[k\ast (u-u_{0})\right](t),v\right)&+ \left(M\left(x,t,\|\nabla u\|^{2}\right)\nabla u,\nabla v\right)\\
&=\left(f,v\right)+\int_{0}^{t}B(t,s,u,v)ds~ \text{in}~ \Omega \times (0,T],
\end{aligned}
\end{equation}
with initial and boundary conditions
\begin{equation*}
\begin{aligned}
u(x,0)&=u_{0}(x) \quad \text{in} ~\Omega,\\
u(x,t)&=0 \quad \text{on} ~\partial \Omega \times [0,T],
\end{aligned}
\end{equation*}
where
\begin{equation*}
\begin{aligned}
B(t,s,u,v)&=\left( b_{2}(x,t,s)\nabla u(s),\nabla v\right)+\left(\nabla \cdot( b_{1}(x,t,s)u(s)),v\right)+\left(b_{0}(x,t,s)u(s),v\right),\\
\left(b_{2}(x,t,s)\nabla u(s),\nabla v\right)&=\int_{\Omega}b_{2}(x,t,s)\nabla u(s)\cdot \nabla v dx,\\
\left(\nabla \cdot (b_{1}(x,t,s)u(s)),v\right)&=\int_{\Omega}\left(\nabla \cdot (b_{1}(x,t,s)u(s))\right)v dx,\\
\left(b_{0}(x,t,s)u(s),v\right)&
=\int_{\Omega}b_{0}(x,t,s)u(s)v dx.
\end{aligned}
\end{equation*}
The assumptions on the coefficients $b_{0}, b_{1}$, and $b_{2}$ imply that there exists a positive constant $B_{0}$ such that for $(t,s)$ in $(0,T)\times (0,T)$ and $u,v$ in $H^{1}_{0}(\Omega)$ the following inequality holds
\begin{equation}\label{2.3}
|B(t,s,u,v)|\leq B_{0}\|\nabla u\|\|\nabla v\|.
\end{equation}
We remark that the following examples of $b(x,t,s)$ satisfy (H3) and \eqref{2.3}.
\begin{exam}
Let $\Omega$ be a bounded domain in $\mathbb{R}^{2}, x=(x_{1},x_{2})$ in $\Omega$ and $t,s$ in $[0,T]$
\begin{enumerate}
\item $b_{2}(x,t,s)=e^{t-s}\begin{bmatrix}
-1& 0\\
0 &-1
\end{bmatrix}$; $b_{1}(x,t,s)=b_{0}(x,t,s)=0.$
\item $b_{2}(x,t,s)=e^{t-s}\begin{bmatrix}
0& -x_{1}x_{2}\\
-x_{1}x_{2} &0
\end{bmatrix}$; $b_{1}(x,t,s)=e^{t-s}\left(x_{1}^{2},x_{2}^{2}\right);b_{0}(x,t,s)=0.$
\end{enumerate}
These examples were used for numerical experiments in the context of parabolic integro-differential equations in \cite{barbeiro2011h1}.
\end{exam}
\noindent We use the following version of continuous Gr\"{o}nwall's inequality to derive the bounds on the weak solutions of $(D_{\alpha})$.
\begin{lem}\label{lem2.5}\cite{almeida2017gronwall}
Let $\alpha$ be such that $0<\alpha<1$ and suppose that $u$ and $v$ are two non-negative integrable functions on $[a,b]$, with $v$ nondecreasing and $g$ a continuous function on $[a,b]$. If
\begin{equation*}
u(t)\leq v(t)+g(t)\int_{a}^{t}(t-s)^{\alpha-1}u(s)ds \quad \text{for all}~ t ~~\text{in}~~ [a,b],
\end{equation*}
then
\begin{equation*}
u(t)\leq v(t)E_{\alpha}\left[g(t)\Gamma(\alpha)(t-a)^{\alpha}\right] \quad \text{for all}~ t ~~\text{in}~~ [a,b],
\end{equation*}
where $E_{\alpha}(\cdot)$ is the one parameter Mittag-Leffler function \cite{podlubny1998fractional}.
\end{lem}
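\noindent For the reader's convenience, we recall that the one-parameter Mittag-Leffler function is defined by the series
\begin{equation*}
E_{\alpha}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+1)},
\end{equation*}
which reduces to the exponential function $e^{z}$ when $\alpha=1$.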
\noindent Further, for the semi-discrete formulation of the problem $(D_{\alpha})$, we discretize the domain in space variable by a conforming FEM \cite{thomee1984galerkin}.
Let $\mathbb{T}_{h}$ be a shape regular (non overlapping), quasi-uniform triangulation of $\Omega$ and $h$ be the discretization parameter in the space direction. We define a finite dimensional subspace $X_{h}$ of $H^{1}_{0}(\Omega)$ as
$$X_{h}=\{v_{h}\in C(\bar \Omega)~:~v_{h}|_{\tau}~ \text{is a linear polynomial for all}~ \tau \in \mathbb{T}_{h} ~\text{and} ~v_{h}=0 ~\text{on}~\partial \Omega\}.$$
\noindent Now the semi-discrete formulation of the problem $(D_{\alpha})$ is to find $u_{h}$ in $X_{h}$ such that the following holds for all $v_{h}$ in $X_{h}$ and a.e. $t$ in $(0,T]$
\begin{equation}\tag{$S_{\alpha}$}
\begin{aligned}
\left(^{C}D^{\alpha}_{t}u_{h},v_{h}\right) &+\left(M\left(x_{h},t,\|\nabla u_{h}\|^{2}\right)\nabla u_{h},\nabla v_{h}\right)\\
&=(f_{h},v_{h})+\int_{0}^{t}B(t,s,u_{h},v_{h})ds ~~~\text{in}~~\mathbb{T}_{h} \times (0,T],
\end{aligned}
\end{equation}
with initial and boundary conditions
\begin{equation*}
\begin{aligned}
u_{h}(x,0)&=R_{h}u_{0} ~~\text{in} ~~\Omega,\\
u_{h}&=0 ~~\text{on}~~\partial \Omega\times [0,T],
\end{aligned}
\end{equation*}
where $f_{h}=f(x_{h},t)$ for $x_{h}$ in $\mathbb{T}_{h}$ and $R_{h}u_{0}$ is the Ritz projection of $u_{0}$ on $X_{h}$ defined by
\begin{equation}\label{Ad2.3}
\left(\nabla R_{h}u_{0},\nabla v_{h}\right)=\left(\nabla u_{0},\nabla v_{h}\right)~~\text{ for all}~~ v_{h} ~~\text{in}~~X_{h}.
\end{equation}
\noindent Now we move to the fully discrete formulation of $(D_{\alpha})$. For this, we divide the interval $[0,T]$ into subintervals of uniform step size $k$, with nodes $t_{n}=nk$ for $n=1,2,3,\dots,N$ and $t_{N}=T$. We approximate the Caputo fractional derivative by the following schemes.\\
\noindent \textbf{L1 Approximation Scheme \cite{li2016analysis}:} In this scheme, the Caputo fractional derivative is approximated at the point $t_{n}$ using piecewise linear interpolation (a backward Euler type difference) as follows
\begin{equation}\label{2.9}
\begin{aligned}
^{C}D^{\alpha}_{t_{n}}u~&=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t_{n}}\frac{1}{(t_{n}-s)^{\alpha}}\frac{\partial u}{\partial s}ds\\
&=\frac{1}{\Gamma(1-\alpha)}\sum_{j=1}^{n}\frac{u^{j}-u^{j-1}}{k}\int_{t_{j-1}}^{t_{j}}\frac{1}{(t_{n}-s)^{\alpha}}~ds+\mathbb{Q}^{n}\\
&=\frac{k^{-\alpha}}{\Gamma(2-\alpha)}\sum_{j=1}^{n}a_{n-j}\left(u^{j}-u^{j-1}\right)+\mathbb{Q}^{n}
\end{aligned}
\end{equation}
where $a_{i}=(i+1)^{1-\alpha}-i^{1-\alpha},~i\geq0$, $u^{j}=u(x,t_{j})$, and $\mathbb{Q}^{n}$ is the truncation error. \\
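\noindent To make the L1 weights concrete, the following Python sketch (an illustrative helper, not part of the analysis) evaluates \eqref{2.9} without the truncation term for a scalar trajectory on a uniform grid; for $u(t)=t$ it returns exactly $t_{n}^{1-\alpha}/\Gamma(2-\alpha)$, the Caputo derivative of $t$.
\begin{verbatim}
import numpy as np
from math import gamma

def l1_caputo(u, k, alpha):
    # Discrete Caputo derivative at t_n = n*k, n = 1..N, built from
    # the L1 weights a_i = (i+1)^(1-alpha) - i^(1-alpha).
    N = len(u) - 1
    i = np.arange(N)
    a = (i + 1.0) ** (1 - alpha) - i ** (1 - alpha)
    c = k ** (-alpha) / gamma(2 - alpha)
    du = np.diff(u)                          # u^j - u^{j-1}
    return np.array([c * np.dot(a[:n][::-1], du[:n])
                     for n in range(1, N + 1)])
\end{verbatim}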
\noindent \textbf{L2-1$_{\sigma}$ Approximation Scheme \cite{alikhanov2015new}:} In this scheme, the Caputo fractional derivative is approximated at the point $t_{n-\frac{\alpha}{2}}$ using quadratic interpolation on $[0,t_{n-1}]$ and linear interpolation on $[t_{n-1},t_{n-\frac{\alpha}{2}}]$ as follows
\begin{equation*}\label{2.11}
^{C}D^{\alpha}_{t_{n-\frac{\alpha}{2}}}u=\tilde{\mathbb{D}}^{\alpha}_{t_{n-\frac{\alpha}{2}}}u+\tilde{\mathbb{Q}}^{n-\frac{\alpha}{2}},
\end{equation*}
where
\begin{equation}\label{1.20} \tilde{\mathbb{D}}^{\alpha}_{t_{n-\frac{\alpha}{2}}}u=\frac{k^{-\alpha}}{\Gamma(2-\alpha)}\sum_{j=1}^{n}\tilde{c}_{n-j}^{(n)}\left(u^{j}-u^{j-1}\right)
\end{equation}
with weights $\tilde{c}_{n-j}^{(n)}$ satisfying $\tilde{c}_{0}^{(1)}=\tilde{a}_{0}$ for $n=1$ and for $n\geq 2$
\begin{equation}
\tilde{c}_{j}^{(n)}=\begin{cases}
\tilde{a}_{0}+\tilde{b}_{1}, & j=0,\\
\tilde{a}_{j}+\tilde{b}_{j+1}-\tilde{b}_{j}, & 1\leq j \leq n-2,\\
\tilde{a}_{j}-\tilde{b}_{j},& j=n-1,
\end{cases}
\end{equation}
where
\begin{equation*}
\tilde{a}_{0}=\sigma^{1-\alpha}~\text{and}~
\tilde{a}_{l}=\left(l+\sigma\right)^{1-\alpha}-\left(l-\frac{\alpha}{2}\right)^{1-\alpha} \quad l\geq 1,
\end{equation*}
\begin{equation*}
\tilde{b}_{l}=\frac{1}{(2-\alpha)}\left[\left(l+\sigma\right)^{2-\alpha}-\left(l-\frac{\alpha}{2}\right)^{2-\alpha}\right]-\frac{1}{2}\left[\left(l+\sigma\right)^{1-\alpha}+\left(l-\frac{\alpha}{2}\right)^{1-\alpha}\right]~l\geq 1,
\end{equation*}
with $\sigma=\left(1-\frac{\alpha}{2}\right)$ and $\tilde{\mathbb{Q}}^{n-\frac{\alpha}{2}}$ is the truncation error.\\
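\noindent For concreteness, the weights $\tilde{c}_{j}^{(n)}$ can be assembled as in the following Python sketch (an illustrative helper following the formulas above, not a reference implementation); recall that $l-\frac{\alpha}{2}=l-1+\sigma$.
\begin{verbatim}
import numpy as np

def l21sigma_weights(n, alpha):
    # Returns c[j] = \tilde c_j^{(n)}, j = 0..n-1, on a uniform grid.
    sigma = 1.0 - alpha / 2.0
    if n == 1:
        return np.array([sigma ** (1 - alpha)])      # \tilde c_0^{(1)}
    l = np.arange(1, n)
    a = np.concatenate(([sigma ** (1 - alpha)],
        (l + sigma) ** (1 - alpha) - (l - alpha / 2) ** (1 - alpha)))
    b = ((l + sigma) ** (2 - alpha)
         - (l - alpha / 2) ** (2 - alpha)) / (2 - alpha) \
        - ((l + sigma) ** (1 - alpha)
           + (l - alpha / 2) ** (1 - alpha)) / 2.0
    c = np.empty(n)
    c[0] = a[0] + b[0]                               # j = 0
    for j in range(1, n - 1):                        # 1 <= j <= n-2
        c[j] = a[j] + b[j] - b[j - 1]
    c[n - 1] = a[n - 1] - b[n - 2]                   # j = n-1
    return c
\end{verbatim}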
\noindent Similar to the continuous case (Remark \ref{rmk1}), there exist discrete kernels corresponding to the above schemes which satisfy the following properties.
\begin{lem}\label{L1.2} \textbf{(For L1 type approximation \cite{li2016analysis})}
Let $p_{n}$ be a sequence defined by
\begin{equation*}
p_{0}=1,~~ p_{n}=\sum_{j=1}^{n}(a_{j-1}-a_{j})p_{n-j}~~\text{for}~~n\geq 1.
\end{equation*}
Then $p_{n}$ satisfies
\begin{equation}
0<p_{n}<1,
\end{equation}
\begin{equation}\label{2.81}
\sum_{j=k}^{n}p_{n-j}a_{j-k}=1,\quad 1\leq k \leq n,
\end{equation}
\begin{equation}\label{2.91}
\Gamma(2-\alpha)\sum_{j=1}^{n}p_{n-j}\leq \frac{n^{\alpha}}{\Gamma(1+\alpha)}.
\end{equation}
\end{lem}
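\noindent The properties in Lemma \ref{L1.2} are easy to check numerically; the following Python sketch (an illustrative verification only) builds $p_{n}$ from its defining recursion and tests \eqref{2.81} and \eqref{2.91}.
\begin{verbatim}
import numpy as np
from math import gamma

alpha, N = 0.5, 50
i = np.arange(N + 1)
a = (i + 1.0) ** (1 - alpha) - i ** (1 - alpha)        # a_i
p = np.empty(N + 1)
p[0] = 1.0
for n in range(1, N + 1):                              # defining recursion
    p[n] = np.dot(a[:n] - a[1:n + 1], p[n - 1::-1])
assert np.all((0 < p[1:]) & (p[1:] < 1))
for m in (1, N // 2, N):                               # identity (2.81), k = m
    s = sum(p[N - j] * a[j - m] for j in range(m, N + 1))
    assert abs(s - 1.0) < 1e-10
assert gamma(2 - alpha) * p[:N].sum() <= N ** alpha / gamma(1 + alpha)
\end{verbatim}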
\begin{lem}\label{L1.21} \textbf{(For L2-1$_{\sigma}$ type approximation \cite{liao2019discrete})} Define
\begin{equation*}
\tilde{p}_{0}^{(n)}=\frac{1}{\tilde{c}_{0}^{(n)}},~~ \tilde{p}_{j}^{(n)}=\frac{1}{\tilde{c}_{0}^{(n-j)}}\sum_{k=0}^{j-1}\left(\tilde{c}_{j-k-1}^{(n-k)}-\tilde{c}_{j-k}^{(n-k)}\right)\tilde{p}_{k}^{(n)}~~\text{for}~~1\leq j\leq n-1.
\end{equation*}
Then $\tilde{p}_{j}^{(n)}$ satisfies
\begin{equation}
0< \tilde{p}_{n-j}^{(n)} <1,
\end{equation}
\begin{equation}\label{2.812}
\sum_{j=k}^{n}\tilde{p}_{n-j}^{(n)}\tilde{c}_{j-k}^{(j)}=1,\quad 1\leq k \leq n \leq N,
\end{equation}
\begin{equation}\label{2.914}
\Gamma(2-\alpha)\sum_{j=1}^{n}\tilde{p}_{n-j}^{(n)}\leq \frac{n^{\alpha}}{\Gamma(1+\alpha)}, \quad 1\leq n \leq N.
\end{equation}
\end{lem}
\noindent With this background, we state the main contributions of this article.
\begin{thm}\label{t2.5}
\textbf{(Well-posedness of weak formulation (W$_{\alpha}$))} Under the hypotheses of (H1), (H2), and (H3), the problem $(W_{\alpha})$ admits a unique solution which satisfies the following a priori bounds
\begin{equation*}
\|u\|^{2}_{L^{\infty}\left(0,T,L^{2}(\Omega)\right)}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}\|\nabla u(s)\|^{2}ds \lesssim \left(\|\nabla u_{0}\|^{2}+\|f\|^{2}_{L^{\infty}(0,T,L^{2}(\Omega))}\right),
\end{equation*}
\begin{equation*}
\|u\|^{2}_{L^{\infty}(0,T,H^{1}_{0}(\Omega))}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}\|\Delta u(s)\|^{2}ds \lesssim \left(\|\nabla u_{0}\|^{2}+\|f\|^{2}_{L^{\infty}(0,T,L^{2}(\Omega))}\right).
\end{equation*}
\end{thm}
\noindent Now we derive the error estimates of semi-discrete solutions and fully discrete solutions by assuming that the solution of $(D_{\alpha})$ is in $C^{4}\left([0,T];H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\right)$.
\begin{thm}\label{t2.6}
\textbf{(Error estimates for semi-discrete formulation (S$_{\alpha}$))} Suppose that the hypotheses (H1), (H2), and (H3) hold. Then we have the following error estimates for the solution $u_{h}$
of the problem $(S_{\alpha})$, which is uniform in time
\begin{equation*}
\|u-u_{h}\|_{L^{\infty}\left(0,T,L^{2}(\Omega)\right)}+\left(\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}\|\nabla u(s)-\nabla u_{h}(s)\|^{2}ds\right)^{1/2} \lesssim h
\end{equation*}
provided that $u(t)$ is in $H^{2}(\Omega)\cap H^{1}_{0}(\Omega)$ for a.e. $t$ in $[0,T]$ and triangulation of the domain is quasi-uniform.
\end{thm}
\noindent Next, we develop two different numerical algorithms to compute numerical solutions of the problem $(D_{\alpha})$.
\noindent \textbf{ Backward Euler-Galerkin FEM based on L1 scheme:} Find $u_{h}^{n} ~(n=1,2,3,\dots,N)$ in $X_{h}$ with $\bar{u}_{h}^{n-1}=2u_{h}^{n-1}-u_{h}^{n-2}$ such that the following equations hold for all $v_{h}$ in $X_{h}$ \\
For $n\geq 2,$
\begin{equation}\tag{$E_{\alpha}$}
\left(\mathbb{D}^{\alpha}_{t}u_{h}^{n},v_{h}\right)+\left(M\left(x_{h},t_{n},\|\nabla \bar{u}_{h}^{n-1}\|^{2}\right)\nabla u_{h}^{n},\nabla v_{h}\right)=\left(f_{h}^{n},v_{h}\right)+\sum_{j=1}^{n-1}w_{nj}B\left(t_{n},t_{j},u_{h}^{j},v_{h}\right).
\end{equation}
For $n=1$,
\begin{equation*}
\begin{aligned}
\left(\mathbb{D}^{\alpha}_{t}u_{h}^{1},v_{h}\right)+\left(M\left(x_{h},t_{1},\|\nabla u_{h}^{1}\|^{2}\right)\nabla u_{h}^{1},\nabla v_{h}\right)=\left(f_{h}^{1},v_{h}\right)+kB\left(t_{1},t_{0},u_{h}^{0},v_{h}\right),
\end{aligned}
\end{equation*}
with initial and boundary conditions
\begin{equation*}
\begin{aligned}
u_{h}^{0}&=R_{h}u_{0}~~\text{in}~~~X_{h},\\
u_{h}^{n}&=0 ~~\text{on}~~ \partial \Omega ~~~\text{for}~~n=1,2,3,\dots,N,
\end{aligned}
\end{equation*}
where $\mathbb{D}^{\alpha}_{t}u_{h}^{n}$ is the approximation operator defined in \eqref{2.9} as
\begin{equation}\label{S1.18}
\begin{aligned}
\mathbb{D}^{\alpha}_{t}u_{h}^{n}&=\frac{k^{-\alpha}}{\Gamma(2-\alpha)}\sum_{j=1}^{n}a_{n-j}\left(u_{h}^{j}-u_{h}^{j-1}\right),
\end{aligned}
\end{equation}
\noindent and $w_{nj}$ are the quadrature weights satisfying the following modified Simpson's rule \cite{pani1992numerical}. Let $m_{1}=[k^{-1/2}]$, where $[\cdot]$ denotes the greatest integer function. Set $k_{1}=m_{1}k$ and $\bar{t}_{j}=jk_{1}$. Let $j_{n}$ be the largest even integer such that $\bar{t}_{j_{n}} <t_{n}$ and introduce quadrature points
\[
\bar{t}_{j}^{n}=\begin{cases}
&jk_{1},\quad 0\leq j\leq j_{n},\\
&\bar{t}_{j_{n}}^{n}+(j-j_{n})k,\quad j_{n}\leq j\leq J_{n},
\end{cases}
\]
where $\bar{t}_{J_{n}}^{n}=t_{n-1}$.
Then the quadrature rule for a function $g$ is given by
\begin{equation}\label{S1.14}
\begin{aligned}
\int_{0}^{t_{n}}g(s)~ds&=\sum_{j=0}^{n-1}w_{nj}g(t_{j})+q^{n}(g)\\
&=\frac{k_{1}}{3}\sum_{j=1}^{j_{n}/2}\left[g(\bar{t}_{2j}^{n})+4g(\bar{t}_{2j-1}^{n})+g(\bar{t}_{2j-2}^{n})\right]\\
&+\frac{k}{2}\sum_{j=j_{n}+1}^{J_{n}}\left[g(\bar{t}_{j}^{n})+g(\bar{t}_{j-1}^{n})\right]+kg(\bar{t}_{J_{n}}^{n})+q^{n}(g)
\end{aligned}
\end{equation}
where $q^{n}(g)$ is the quadrature error associated with the function $g$ at $t_{n}$.
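\noindent A direct transcription of \eqref{S1.14} is given in the following Python sketch (an illustrative helper under the stated choices $m_{1}=[k^{-1/2}]$, $k_{1}=m_{1}k$; not a reference implementation). Note that for $n=1$ it degenerates to the single rectangle $kg(0)$, consistent with the term $kB(t_{1},t_{0},u_{h}^{0},v_{h})$ in $(E_{\alpha})$.
\begin{verbatim}
def modified_simpson(g, n, k):
    # Approximates int_0^{t_n} g(s) ds using only the nodes
    # t_0, ..., t_{n-1}: Simpson blocks on the coarse mesh of width k1,
    # trapezoids on the remaining fine mesh, and a left rectangle on
    # the last step [t_{n-1}, t_n].
    m1 = max(int(k ** (-0.5)), 1)            # m1 = [k^{-1/2}]
    k1 = m1 * k
    jn = 2 * ((n - 1) // (2 * m1))           # largest even j, j*m1 <= n-1
    Q = 0.0
    for j in range(1, jn // 2 + 1):
        Q += k1 / 3 * (g(2*j*k1) + 4*g((2*j - 1)*k1) + g((2*j - 2)*k1))
    for i in range((n - 1) - jn * m1):       # trapezoid on [jn*k1, t_{n-1}]
        t = jn * k1 + i * k
        Q += k / 2 * (g(t) + g(t + k))
    Q += k * g((n - 1) * k)                  # rectangle on [t_{n-1}, t_n]
    return Q
\end{verbatim}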
\begin{thm}\label{J1}
\textbf{(Convergence Estimates for the Scheme (E$_{\alpha}$))} Under the hypotheses of (H1), (H2), and (H3), the fully discrete solution $u_{h}^{n}~(1\leq n \leq N)$ of $(E_{\alpha})$ converges to the solution $u$ of $(D_{\alpha})$ with the following rate of accuracy
\begin{equation*}
\max_{1\leq n \leq N}\|u(t_{n})-u_{h}^{n}\|+\left(k^{\alpha}\sum_{n=1}^{N}p_{N-n}\|\nabla u(t_{n}) -\nabla u_{h}^{n}\|^{2}\right)^{1/2}\lesssim (h+k^{2-\alpha}).
\end{equation*}
\end{thm}
\noindent At this point we observe that the convergence rate is $O(k^{2-\alpha})$ in the temporal direction, which is not optimal. Thus, to obtain the optimal rate of convergence, we propose the following numerical scheme.
\noindent \textbf{Fractional Crank-Nicolson-Galerkin FEM based on L2-1$_\sigma$ scheme:} Find $u_{h}^{n}~(n=1,2,3\dots,N)$ in $X_{h}$ with $\bar{u}_{h}^{n-1,\alpha}=\left(2-\frac{\alpha}{2}\right)u_{h}^{n-1}-\left(1-\frac{\alpha}{2}\right)u_{h}^{n-2}$ and $\hat{u}_{h}^{n,\alpha}=\left(1-\frac{\alpha}{2}\right)u_{h}^{n}+\left(\frac{\alpha}{2}\right)u_{h}^{n-1}$ such that the following equations hold for all $v_{h}$ in $X_{h}$\\
For $n\geq 2$,
\begin{equation}\tag{$F_{\alpha}$}
\begin{aligned}
\left(\tilde{\mathbb{D}}^{\alpha}_{t_{n-\frac{\alpha}{2}}}u_{h}^{n},v_{h}\right)+\left(M\left(x_{h},t_{n-\frac{\alpha}{2}},\|\nabla \bar{u}_{h}^{n-1,\alpha}\|^{2}\right)\nabla \hat{u}_{h}^{n,\alpha},\nabla v_{h}\right)&=\sum_{j=1}^{n-1}\tilde{w}_{nj}B\left(t_{n-\frac{\alpha}{2}},t_{j},u_{h}^{j},v_{h}\right)\\
&+\left(f_{h}^{n-\frac{\alpha}{2}},v_{h}\right).
\end{aligned}
\end{equation}
For $n=1$,
\begin{equation*}
\begin{aligned}
\left(\tilde{\mathbb{D}}^{\alpha}_{t_{1-\frac{\alpha}{2}}}u_{h}^{1},v_{h}\right)+\left(M\left(x_{h},t_{1-\frac{\alpha}{2}},\|\nabla u_{h}^{1}\|^{2}\right)\nabla u_{h}^{1},\nabla v_{h}\right)&=\left(1-\frac{\alpha}{2}\right)kB\left(t_{1-\frac{\alpha}{2}},t_{0},u_{h}^{0},v_{h}\right)\\
&+\left(f_{h}^{1-\frac{\alpha}{2}},v_{h}\right),
\end{aligned}
\end{equation*}
with initial and boundary conditions
\begin{equation*}
\begin{aligned}
u_{h}^{0}&=R_{h}u_{0}~~\text{in}~~~X_{h},\\
u_{h}^{n}&=0 ~~\text{on}~~ \partial \Omega ~~~\text{for}~~n=1,2,3,\dots,N,
\end{aligned}
\end{equation*}
where $\tilde{\mathbb{D}}^{\alpha}_{t_{n-\frac{\alpha}{2}}}u_{h}^{n}$ is the approximation operator defined in \eqref{1.20} and $\tilde{w}_{nj}$ are the quadrature weights satisfying a quadrature formula similar to \eqref{S1.14}, with a small modification, as follows
\begin{equation}\label{S1.14AD}
\begin{aligned}
\int_{0}^{t_{n}}g(s)~ds&=\sum_{j=0}^{n-1}\tilde{w}_{nj}g(t_{j})+\tilde{q}^{n-\frac{\alpha}{2}}(g)\\
&=\frac{k_{1}}{3}\sum_{j=1}^{j_{n}/2}\left[g(\bar{t}_{2j}^{n})+4g(\bar{t}_{2j-1}^{n})+g(\bar{t}_{2j-2}^{n})\right]\\
&+\frac{k}{2}\sum_{j=j_{n}+1}^{J_{n}}\left[g(\bar{t}_{j}^{n})+g(\bar{t}_{j-1}^{n})\right]+\left(1-\frac{\alpha}{2}\right)kg(\bar{t}_{J_{n}}^{n})+\tilde{q}^{n-\frac{\alpha}{2}}(g),
\end{aligned}
\end{equation}
where $\tilde{q}^{n-\frac{\alpha}{2}}(g)$ is the quadrature error associated with the function $g$ at $t_{n-\frac{\alpha}{2}}$.
\begin{thm}\label{J2}
\textbf{(Convergence Estimates for the Scheme (F$_{\alpha}$))} Under the assumptions of (H1), (H2), and (H3), the fully discrete solution $u_{h}^{n}~(1\leq n \leq N)$ of $(F_{\alpha})$ satisfies the following convergence estimates
\begin{equation*}
\max_{1\leq n \leq N}\|u(t_{n})-u_{h}^{n}\|+\left(k^{\alpha}\sum_{n=1}^{N}\tilde{p}_{N-n}^{(N)}\|\nabla u(t_{n})-\nabla u_{h}^{n}\|^{2}\right)^{1/2}\lesssim (h+k^{2}).
\end{equation*}
\end{thm}
\begin{rmk}
We observe that as $\alpha \rightarrow 1$, the schemes $(E_{\alpha})$ and $(F_{\alpha})$ reduce to the linearized backward Euler-Galerkin FEM and the linearized Crank-Nicolson-Galerkin FEM, respectively \cite{kumar2020finite}.
\end{rmk}
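\noindent Indeed, for the L1 weights one checks directly that $a_{0}=1$ and, for $i\geq 1$,
\begin{equation*}
\lim_{\alpha\rightarrow 1^{-}}a_{i}=\lim_{\alpha\rightarrow 1^{-}}\left[(i+1)^{1-\alpha}-i^{1-\alpha}\right]=0,
\qquad\text{so that}\qquad
\lim_{\alpha\rightarrow 1^{-}}\mathbb{D}^{\alpha}_{t}u_{h}^{n}=\frac{u_{h}^{n}-u_{h}^{n-1}}{k},
\end{equation*}
the backward Euler difference quotient; an analogous computation with the weights $\tilde{c}_{j}^{(n)}$ and the evaluation point $t_{n-\frac{\alpha}{2}}\rightarrow t_{n-\frac{1}{2}}$ recovers the Crank-Nicolson scheme.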
\section{Weak solutions of ($D_{\alpha}$)}
In this section, we prove the existence and uniqueness of weak solutions for the problem $(R_{\alpha})$ using the Galerkin method. For this, we study the variational formulation of $(R_{\alpha})$ and derive a priori bounds on every Galerkin sequence. These a priori bounds establish the convergence of the Galerkin sequence via a compactness lemma for time fractional PDEs \cite{li2018some}.
\noindent The following lemma discusses the approximation of the kernel $k$ defined in \eqref{2.1}.
\begin{lem}\cite{zacher2009weak} \label{L3.1}Let $k$ be the kernel defined in \eqref{2.1}, then there exists a sequence of kernels $k_{n}$ in $W^{1,1}(0,T)$ such that
\begin{enumerate}
\item $k_{n}=ns_{n}$, with $s_{n}$ being the solution of scalar-valued Volterra equations
\begin{equation*}
s_{n}(t)+n(l\ast s_{n})(t)=1~~~t>0,~~~n\in \mathbb{N},
\end{equation*}
\item $k_{n}$ converges to $k$ in $L^{1}(0,T)$ as $n$ tends to infinity,
\item $k_{n}$ is non-negative and non-increasing in $(0,\infty)$.
\end{enumerate}
\end{lem}
\begin{subsection}{Proof of the Theorem \ref{t2.5}}
\begin{proof}
\textbf{(Existence)}~Let $\{\lambda_{i}\}_{i=1}^{m}$ be the eigenvalues and $\{\phi_{i}\}_{i=1}^{m}$ be the corresponding eigenfunctions of the Dirichlet problem for the standard Laplacian operator in $H^{1}_{0}(\Omega)$. We consider the finite dimensional subspace $\mathbb{V}_{m}$ of $H^{1}_{0}(\Omega)$ such that $\mathbb{V}_{m}= \text{span} \{\phi_{1}, \phi_{2}, \dots , \phi_{m}\}.$ We assume that
\begin{equation}\label{3.1}
u_{m}(\cdot,t)=\sum_{j=1}^{m}\alpha_{mj}(t)\phi_{j}\quad \text{and }\quad u_{m}(\cdot,0)=\sum_{j=1}^{m}(u_{0},\phi_{j})\phi_{j},
\end{equation}
then $u_{m}(\cdot,0)$ converges to $u_{0}$ in $L^{2}(\Omega)$ as $m\rightarrow\infty$. Now we show that for each $m$ there is $u_{m} \in \mathbb{V}_{m}$ satisfying the following equation for all $v_{m} \in \mathbb{V}_{m}$ and a.e. $t \in (0,T]$
\begin{equation}\label{3.2}
\begin{aligned}
\left(\frac{d}{dt}\left[k\ast (u_{m}-u_{m}(0))\right](t),v_{m}\right)&+\left(M(x,t,\|\nabla u_{m}\|^2)\nabla u_{m}, \nabla v_{m}\right)\\
&=(f,v_{m})+ \int_{0}^{t}B(t,s,u_{m},v_{m})ds.
\end{aligned}
\end{equation}
Substituting the expressions \eqref{3.1} for $u_{m}$ and $u_{m}(0)$ in \eqref{3.2}, we obtain a system of fractional order differential equations. By the theory of fractional order differential equations, this system has a continuous solution $u_{m}(t)$ on some interval $[0,t_{m}^{*})$, $0<t_{m}^{*}< T$, with vanishing trace of $k\ast(u_{m}-u_{m}(0))$ at $t=0$ \cite{zacher2009weak}. These local solutions $u_{m}(t)$ are extended to the whole interval by means of the following a priori bounds on $u_{m}$. Putting $v_{m}=u_{m}(t)$ in \eqref{3.2}, we obtain
\begin{equation}\label{3.3}
\begin{aligned}
\left(\frac{d}{dt}\left[k\ast (u_{m}-u_{m}(0))\right](t),u_{m}\right)&+\left(M(x,t,\|\nabla u_{m}\|^2)\nabla u_{m}, \nabla u_{m}\right)\\
&=(f,u_{m})+ \int_{0}^{t}B(t,s,u_{m},u_{m})ds.
\end{aligned}
\end{equation}
Let $k_{n}$, $n\in \mathbb{N}$, be the kernels defined in Lemma \ref{L3.1}; then equation \eqref{3.3} can be rewritten as
\begin{equation}\label{3.4Ad}
\begin{aligned}
\left(\frac{d}{dt}(k_{n}\ast u_{m})(t),u_{m}\right)&+\left(M(x,t,\|\nabla u_{m}\|^2)\nabla u_{m}, \nabla u_{m}\right)\\
&=(f,u_{m})+ \int_{0}^{t}B(t,s,u_{m},u_{m})ds+k_{n}(t)(u_{m}(0),u_{m})+h_{mn}(t),
\end{aligned}
\end{equation}
with
\begin{equation}
h_{mn}(t)= \left(\frac{d}{dt}\left[k_{n}\ast (u_{m}-u_{m}(0))\right](t)- \frac{d}{dt}\left[k\ast (u_{m}-u_{m}(0))\right](t),u_{m}\right).
\end{equation}
Using Lemma 2.1 from \cite{zacher2009weak}, (H2), and (H3) together with the Cauchy-Schwarz and Young inequalities, we get
\begin{equation}\label{3.4}
\begin{aligned}
\frac{d}{dt}\left(k_{n}\ast \|u_{m}\|^{2}\right)(t)+m_{0}\|\nabla u_{m}\|^{2}&\leq \|f\|^{2}+ \|u_{m}\|^{2}+k_{n}(t)\|u_{m}(0)\|^{2}\\
&+\frac{B_{0}^{2}T}{m_{0}}\int_{0}^{t}\|\nabla u_{m}(s)\|^{2}ds+2h_{mn}(t).
\end{aligned}
\end{equation}
Now convolve \eqref{3.4} with the kernel $l$ defined in Remark \ref{rmk1} and use the fact that
\begin{equation}
l\ast \frac{d}{dt}\left(k_{n}\ast \|u_{m}\|^{2}\right)(t)=\frac{d}{dt}\left(k_{n}\ast l \ast \|u_{m}\|^{2}\right)(t)\rightarrow \frac{d}{dt}\left(k\ast l\ast \|u_{m}\|^{2}\right)(t)=\|u_{m}\|^{2},
\end{equation}
with
\begin{equation}
(l\ast k_{n})(t)\rightarrow (l \ast k)(t)= 1 ~~\text{and}~~(l\ast h_{mn})(t)\rightarrow 0~~\text{as}~~n \rightarrow \infty ~~\text{in}~~L^{1}(0,T),
\end{equation}
we get
\begin{equation*}
\begin{aligned}
\|u_{m}\|^{2}+\left(l\ast\|\nabla u_{m}\|^{2}\right)(t)&\lesssim \int_{0}^{t}(t-s)^{\alpha-1}\left(\|u_{m}(s)\|^{2}+\left(l\ast\|\nabla u_{m}\|^{2}\right)(s)\right)ds\\ &+\left(l\ast\|f\|^{2}\right)(t)+\|u_{0}\|^{2}.
\end{aligned}
\end{equation*}
Using Lemma \ref{lem2.5} and the Poincar\'{e} inequality, we conclude that
\begin{equation}\label{3.5}
\begin{aligned}
\|u_{m}\|^{2}_{L^{\infty}\left(0,T,L^{2}(\Omega)\right)}+\left(l\ast\|\nabla u_{m}\|^{2}\right)(t)&\lesssim \left(\| \nabla u_{0}\|^{2}+\|f\|^{2}_{L^{\infty}\left(0,T,L^{2}(\Omega)\right)}\right).
\end{aligned}
\end{equation}
Further, we introduce the discrete Laplacian $\Delta_{m}: \mathbb{V}_{m}\rightarrow \mathbb{V}_{m}$ as follows
\begin{equation}
\left(-\Delta_{m}u_{m},v_{m}\right):=\left(\nabla u_{m},\nabla v_{m}\right)\quad \text{for all}~ u_{m},v_{m} ~\text{in}~ \mathbb{V}_{m}.
\end{equation}
Now take $v_{m}=-\Delta_{m}u_{m}$ in \eqref{3.2} and follow the same arguments as in the proof of estimate \eqref{3.5} to achieve
\begin{equation}\label{3.9}
\|u_{m}\|^{2}_{L^{\infty}\left(0,T,H^{1}_{0}(\Omega)\right)}+\left(l\ast\|\Delta_{m}u_{m}\|^{2}\right)(t) \lesssim\left(\|\nabla u_{0}\|^{2}+\|f\|^{2}_{L^{\infty}\left(0,T,L^{2}(\Omega)\right)}\right)\leq\hat{K}^{2}.
\end{equation}
Moreover, for any $v \in H^{1}_{0}(\Omega)$, we write $v=v^{1}+v^{2}$ where $v^{1}\in$ span$\{\phi_{j}\}_{j=1}^{m}$ and $v^{2}$ satisfies $(\nabla v^{2},\nabla \phi_{j})=0$ for $j=1,2,3,\dots,m$, as $\{\phi_{j}\}_{j=1}^{m}$ are orthogonal in $H^{1}_{0}(\Omega)$. So from equation \eqref{3.2} and (H2) we have
\begin{equation*}
\begin{aligned}
\left(\frac{d}{dt}\left[k\ast (u_{m}-u_{m}(0))\right](t),v^{1}\right)+m_{0}(\nabla u_{m}, \nabla v^{1})&\leq (f,v^{1})+\int_{0}^{t}B(t,s,u_{m},v^{1})ds.
\end{aligned}
\end{equation*}
Using estimates \eqref{3.5} and \eqref{3.9}, we deduce that
\begin{equation}\label{3.11}
\left\|\frac{d}{dt}\left[k\ast(u_{m}-u_{m}(0))\right](t)\right\|_{L^{2}\left(0,T,H^{-1}(\Omega)\right)}\leq K.
\end{equation}
Thus estimates \eqref{3.5} and \eqref{3.11} provide a subsequence of $(u_{m})$, again denoted by $(u_{m})$, such that $u_{m}\rightharpoonup u$ in $L^{2}\left(0,T,H^{1}_{0}(\Omega)\right)$ and $\frac{d}{dt}\left[k\ast (u_{m}-u_{m}(0))\right](t) \rightharpoonup \frac{d}{dt}\left[k\ast (u-u_{0})\right](t)$ in $L^{2}\left(0,T,H^{-1}(\Omega)\right)$. In the light of estimates \eqref{3.9} and \eqref{3.11}, we apply the compactness lemma \cite{li2018some} to conclude $u_{m} \rightarrow u$ in $L^{2}\left(0,T,H^{1}_{0}(\Omega)\right)$. Now, using the continuity of $M(x,t,\|\nabla u_{m}\|^{2})$ and $B(t,s,u_{m},v_{m})$ and applying the Lebesgue dominated convergence theorem, we pass to the limit in \eqref{3.2},
which establishes the existence of weak solutions of the problem $(D_{\alpha})$.\\
\noindent \textbf{(Initial Condition)} The weak solution $u$ satisfies the following equation for all $v$ in $H^{1}_{0}(\Omega)$
\begin{equation}\label{N2}
\begin{aligned}
\left(\frac{d}{dt}\left[k\ast (u-u_{0})\right](t),v\right)+ \left(M\left(x,t,\|\nabla u\|^{2}\right)\nabla u,\nabla v\right)&=\left(f,v\right)+\int_{0}^{t}B(t,s,u,v)ds.\\
\end{aligned}
\end{equation}
Let $\phi$ be in $C^{1}\left([0,T];H^{1}_{0}(\Omega)\right)$ with $\phi(T)=0$. Multiplying \eqref{N2} by $\phi$ and integrating by parts, we get
\begin{equation}\label{N24}
\begin{aligned}
-\int_{0}^{T}\left((k\ast (u-u_{0}))(t),v\right)\phi'(t)dt&+ \int_{0}^{T}\left(M\left(x,t,\|\nabla u\|^{2}\right)\nabla u,\nabla v\right)\phi(t)dt\\
&=\int_{0}^{T}\left(f,v\right)\phi(t)dt+\int_{0}^{T}\int_{0}^{t}B(t,s,u,v)\phi(t)dsdt\\
&+((k\ast(u-u_{0}))(0),\phi(0)).
\end{aligned}
\end{equation}
Since $C^{1}\left([0,T];H^{1}_{0}(\Omega)\right)$ is dense in $L^{2}\left(0,T,H^{1}_{0}(\Omega)\right)$, using \eqref{3.2} and the fact that $(k\ast(u_{m}-u_{m}(0)))$ has vanishing trace at $t=0$, we have
\begin{equation}\label{N248}
\begin{aligned}
-\int_{0}^{T}\left((k\ast (u_{m}-u_{m}(0)))(t),v\right)&\phi'(t)dt+ \int_{0}^{T}\left(M\left(x,t,\|\nabla u_{m}\|^{2}\right)\nabla u_{m},\nabla v\right)\phi(t)dt\\
&=\int_{0}^{T}\left(f,v\right)\phi(t)dt+\int_{0}^{T}\int_{0}^{t}B(t,s,u_{m},v)\phi(t)dsdt.
\end{aligned}
\end{equation}
Letting $m$ tend to infinity in \eqref{N248} and comparing with \eqref{N24}, we get $((k\ast(u-u_{0}))(0),\phi(0))=0$. Since $\phi(0)$ is arbitrary, we have $(k\ast(u-u_{0}))(0)=0$, which implies $u=u_{0}$ at $t=0$ \cite{zacher2009weak}.\\
\noindent \textbf{(Uniqueness)} Suppose that $u_{1},u_{2}$ are solutions of equation $(W_{\alpha})$, then $z=u_{1}-u_{2}$ satisfies the following equation for all $ v \in H_{0}^{1}(\Omega)$ and $a.e. ~ t \in (0,T]$
\begin{equation}\label{3.13}
\begin{aligned}
\left(\frac{d}{dt}(k\ast z)(t),v\right)+\left(M\big(x,t,\|\nabla u_{1}\|^2\big)\nabla u_{1}, \nabla v\right)&=\left(M\big(x,t,\|\nabla u_{2}\|^2\big)\nabla u_{2}, \nabla v\right)\\
&+ \int_{0}^{t}B(t,s,z,v)ds.
\end{aligned}
\end{equation}
Put $v=z(t)$ in the above equation; using (H2), (H3) along with the Cauchy-Schwarz and Young inequalities, we obtain
\begin{equation}
\begin{aligned}
2\left(\frac{d}{dt}(k\ast z)(t),z\right)+(m_{0}-4L_{M}\hat{K}^{2})\|\nabla z\|^{2} \lesssim
\int_{0}^{t}\|\nabla z(s)\|^{2}ds.
\end{aligned}
\end{equation}
Now following the same lines as in the proof of estimate \eqref{3.5} and using (H2)
we conclude $\|z\|_{L^{2}\left(0,T,H^{1}_{0}(\Omega)\right)} =\|z\|_{L^{\infty}\left(0,T,L^{2}(\Omega)\right)}=0$. Thus uniqueness follows.
\end{proof}
\end{subsection}
\section{Error estimates for semi-discrete solution}
In this section, we derive error estimates for semi-discrete solutions of the problem $(S_{\alpha})$ by discretizing the domain in space variable while keeping the time variable continuous.\\
\noindent The existence, uniqueness, and a priori bounds for semi-discrete solutions are established along the same lines as for the weak solutions of $(D_{\alpha})$ and are therefore omitted here. For the error estimates of semi-discrete solutions, we define a new Ritz-Volterra type projection operator $W:[0,T]\rightarrow X_{h}$ by
\begin{equation}\label{2.5}
\left(M\left(x,t,\|\nabla u\|^{2}\right)\nabla (u-W),\nabla v_{h}\right)=\int_{0}^{t}B(t,s,u-W,v_{h})~ds\quad \text{for all}~ v_{h}~\text{in}~ X_{h}.
\end{equation}
The Ritz-Volterra type projection operator $W$ is well defined due to the positivity of the Kirchhoff term $M$.
\noindent The estimates on $u-W$, $W$, and $u-R_{h}u$ are given by the following theorems.
\begin{thm}\label{thm2.7} \cite{rannacher1982some}
Under the assumption of uniform triangulation of $\Omega$, there exists a positive constant $C$ independent of $h$ such that the following hold for any function $u$ in $H^{2}(\Omega) \cap H_{0}^{1}(\Omega)$
\begin{equation*}
\|u-R_{h}u\|+h\|u-R_{h}u\|_{1} \leq Ch^{2}\|u\|_{2},
\end{equation*}
\begin{equation*}
\|R_{h}u\|_{1}\leq C\|u\|_{1},
\end{equation*}
where $\|p\|_{1}=\|p\|+\|\nabla p\|.$
\end{thm}
\begin{thm} \label{thm2.8}\cite{cannon1988non}
There exists a positive constant $C$ independent of $h$ such that the following estimates hold
\begin{equation*}
\begin{aligned}
\|\rho(t)\|+h\|\rho(t)\|_{1} &\leq Ch^{2}\left(\|u(t)\|_{2}+\int_{0}^{t}\|u(s)\|_{2}ds\right),\\
\|\rho_{t}(t)\|+h\|\rho_{t}(t)\|_{1} &\leq Ch^{2}\left(\|u(t)\|_{2}+\|u_{t}\|_{2}+\int_{0}^{t}\|u_{t}(s)\|_{2}ds\right),
\end{aligned}
\end{equation*}
where $\rho=u-W$ and $\|p\|_{1}=\|p\|+\|\nabla p\|.$
\end{thm}
\begin{lem}\cite{kumar2020finite}\label{lem2.8}
Let $W$ be the Ritz-Volterra projection defined by \eqref{2.5}, then $\|\nabla W\|$ is bounded for every $t$ in $[0,T]$, i.e.
\begin{equation*}
\|\nabla W\|\lesssim\|\nabla u\|.
\end{equation*}
\end{lem}
\begin{subsection}{Proof of the Theorem \ref{t2.6}}
\begin{proof}
Putting $u_{h}=W-\theta$ in $(S_{\alpha})$, we have
\begin{equation*}
\left(^{C}D^{\alpha}_{t}(W-\theta),v_{h}\right) +\left(M\left(x_{h},t,\|\nabla u_{h}\|^{2}\right)\nabla (W-\theta),\nabla v_{h}\right)=(f_{h},v_{h})+\int_{0}^{t}B(t,s,W-\theta,v_{h})ds.
\end{equation*}
Since $f$ is in $L^{2}(\Omega)$, we have $(f,v_{h})=(f_{h},v_{h})$ for all $v_{h}$ in $X_{h}$; then by using (H2), $(W_{\alpha})$, and the definition \eqref{2.5} of the Ritz-Volterra projection $W$, we get
\begin{equation}\label{4.4}
\begin{aligned}
\left(^{C}D^{\alpha}_{t}\theta,v_{h}\right) +m_{0}(\nabla \theta,\nabla v_{h})&\leq L_{M}\left(\|\nabla u\|+\|\nabla u_{h}\|\right)\left(\|\nabla u-\nabla u_{h}\|\right)(\nabla W,\nabla v_{h})\\
&-\left(^{C}D^{\alpha}_{t}\rho,v_{h}\right)+\int_{0}^{t}B(t,s,\theta,v_{h})ds.
\end{aligned}
\end{equation}
Setting $v_{h}=\theta$ in \eqref{4.4} and using (H3) and estimate \eqref{3.9} together with the Cauchy-Schwarz and Young inequalities, we obtain
\begin{equation}
\begin{aligned}
\left(^{C}D^{\alpha}_{t}\theta,\theta\right)+(m_{0}-4L_{M}\hat{K}^{2})\|\nabla \theta\|^{2} &\lesssim \|\nabla \rho\|^{2}+\|~^{C}D^{\alpha}_{t}\rho\|^{2}\\
&+\|\theta\|^{2}+\int_{0}^{t}\|\nabla \theta(s)\|^{2}ds.
\end{aligned}
\end{equation}
Using (H2) and applying the same arguments as in the proof of estimate \eqref{3.5}, we obtain
\begin{equation*}
\|\theta\|^{2}_{L^{\infty}\left(0,T,L^{2}(\Omega)\right)}+\left(l\ast\|\nabla \theta\|^{2}\right)(t) \lesssim \left[l\ast \left(\|\nabla \rho\|^{2}+\|~^{C}D^{\alpha}_{t}\rho\|^{2}\right)\right](t)+\|\theta(0)\|^{2}_{1}.
\end{equation*}
Theorems \ref{thm2.7} and \ref{thm2.8} together with the triangle inequality conclude
\begin{equation*}
\|u-u_{h}\|^{2}_{L^{\infty}\left(0,T,L^{2}(\Omega)\right)}+\left(l\ast\|\nabla (u-u_{h})\|^{2}\right)(t) \lesssim h^{2}.
\end{equation*}
\end{proof}
\end{subsection}
\begin{section}{Backward Euler-Galerkin FEM based on L1 type scheme}
In this section, we prove the well-posedness and derive convergence rate for the proposed fully discrete formulation $(E_{\alpha})$.\\
\noindent For $n\geq 2$, the linear system is positive definite due to the non-degeneracy of the diffusion coefficient $M$, which gives a unique solution of $(E_\alpha)$; the existence of $u_{h}^{1}$ is established by the following variant of the Brouwer fixed point theorem.
\begin{thm}\cite{thomee1984galerkin}\label{T1.3}
Let $H$ be a finite dimensional Hilbert space and let $G:H\rightarrow H$ be a continuous map such that $\left(G(w),w\right)>0$ for all $w$ in $H$ with $\|w\|=r$, $r>0$. Then there exists a $\tilde{w}$ in $H$ such that $G(\tilde{w})=0$ and $\|\tilde{w}\|\leq r.$
\end{thm}
\begin{thm}\label{T1.4}
Suppose that (H1), (H2), and (H3) hold. Then there exists a unique solution $u_{h}^{1}$ to the problem $(E_{\alpha})$ which satisfies the following a priori bound
\begin{equation}\label{S1.21}
\|u_{h}^{1}\|^{2}+k^{\alpha}\|\nabla u_{h}^{1}\|^{2}\lesssim \left( \|f_{h}^{1}\|^{2}+\|\nabla u_{h}^{0}\|^{2}\right)\approx \tilde{K}_{1},
\end{equation}
provided $k< k_{0}$ for some positive constant $k_{0}$.
\end{thm}
\begin{proof}
\textbf{(Existence)}~Let $\alpha_{0}=k^{\alpha}\Gamma(2-\alpha)$. Substituting the definition of $\mathbb{D}^{\alpha}_{t}u_{h}^{1}$ in $(E_{\alpha})$, we get
\begin{equation*}
\begin{aligned}
\left(u_{h}^{1}-u_{h}^{0},v_{h}\right)&+\alpha_{0}\left(M\left(x_{h},t_{1},\|\nabla u_{h}^{1}\|^{2}\right)\nabla u_{h}^{1},\nabla v_{h}\right)=\alpha_{0}\left(f_{h}^{1},v_{h}\right)
+\alpha_{0}kB\left(t_{1},t_{0},u_{h}^{0},v_{h}\right).
\end{aligned}
\end{equation*}
Define a map $G:X_{h}\rightarrow X_{h}$ by
\begin{equation}\label{S1.19}
\begin{aligned}
\left(G\left(u_{h}^{1}\right),v_{h}\right)&=\left(u_{h}^{1},v_{h}\right)-\left(u_{h}^{0},v_{h}\right)+\alpha_{0}\left(M\left(x_{h},t_{1},\|\nabla u_{h}^{1}\|^{2}\right)\nabla u_{h}^{1},\nabla v_{h}\right)\\
&-\alpha_{0}\left(f_{h}^{1},v_{h}\right)-\alpha_{0}kB\left(t_{1},t_{0},u_{h}^{0},v_{h}\right).
\end{aligned}
\end{equation}
Then using (H2), (H3), and Cauchy-Schwarz inequality, we have
\begin{equation*}
\begin{aligned}
\left(G\left(u_{h}^{1}\right),u_{h}^{1}\right)\geq \left(\|u_{h}^{1}\|-\|u_{h}^{0}\|-\alpha_{0}\|f_{h}^{1}\|-\alpha_{0}k\|u_{h}^{0}\|\right)\|u_{h}^{1}\|.
\end{aligned}
\end{equation*}
Thus for $\|u_{h}^{1}\|> \|u_{h}^{0}\|+\alpha_{0}\|f_{h}^{1}\|+k\alpha_{0}\|u_{h}^{0}\|$, we have $\left(G\left(u_{h}^{1}\right),u_{h}^{1}\right)>0$, and the map $G$ defined by \eqref{S1.19} is continuous by the continuity of $M$. Hence existence follows by Theorem \ref{T1.3}.\\
\noindent \textbf{(A priori bound)}
Putting $v_{h}=u_{h}^{1}$ for $n=1$ in $(E_{\alpha})$ and using (H2), (H3) with the Cauchy-Schwarz and Young inequalities, we get
\begin{equation*}
\begin{aligned}
(1-k^{\alpha}\Gamma(2-\alpha))\|u_{h}^{1}\|^{2}+k^{\alpha}\|\nabla u_{h}^{1}\|^{2}&\lesssim k^{\alpha}
\left(\|f_{h}^{1}\|^{2}+k^{2}\|\nabla u_{h}^{0}\|^{2}\right)+\|u_{h}^{0}\|^{2},
\end{aligned}
\end{equation*}
then for sufficiently small $k$ such that $k^{\alpha}< \frac{1}{\Gamma(2-\alpha)}$, we conclude \eqref{S1.21}.
\noindent \textbf{(Uniqueness)} Suppose that $X_{h}^{1}$ and $Y_{h}^{1}$ are the solutions of $(E_{\alpha})$ for $n=1$, then $Z_{h}^{1}=X_{h}^{1}-Y_{h}^{1}$ satisfies the following equation for all $v_{h}$ in $X_{h}$
\begin{equation}\label{S1.22}
\begin{aligned}
\left(\mathbb{D}^{\alpha}_{t}Z_{h}^{1},v_{h}\right)&+\left(M\left(x_{h},t_{1},\|\nabla X_{h}^{1}\|^{2}\right)\nabla Z_{h}^{1},\nabla v_{h}\right)\\
&=\left(\left[M\left(x_{h},t_{1},\|\nabla Y_{h}^{1}\|^{2}\right)-M\left(x_{h},t_{1},\|\nabla X_{h}^{1}\|^{2}\right)\right]\nabla Y_{h}^{1},\nabla v_{h}\right).
\end{aligned}
\end{equation}
Putting $v_{h}=Z_{h}^{1}$ in equation \eqref{S1.22} and using (H2), we get
\begin{equation*}
\frac{1}{2}\mathbb{D}^{\alpha}_{t}\|Z_{h}^{1}\|^{2}+m_{0}\|\nabla Z_{h}^{1}\|^{2}\leq L_{M}\|\nabla Z_{h}^{1}\|\left(\|\nabla X_{h}^{1}\|+\|\nabla Y_{h}^{1}\|\right)\left(\nabla Y_{h}^{1},\nabla Z_{h}^{1}\right).
\end{equation*}
Now the Cauchy-Schwarz inequality and \eqref{S1.21} yield
\begin{equation*}
\mathbb{D}^{\alpha}_{t}\|Z_{h}^{1}\|^{2}+\left(2m_{0}-4L_{M}\tilde{K}_{1}^{2}\right)\|\nabla Z_{h}^{1}\|^{2}\leq 0.
\end{equation*}
Then (H2) concludes the proof.
\end{proof}
\begin{thm}\label{5.3}
Under the assumptions (H1), (H2), and (H3), the solutions $u_{h}^{n}~(n\geq 2)$ of $(E_{\alpha})$ satisfy the following a priori bound
\begin{equation}\label{S1.24}
\begin{aligned}
\max_{2\leq m \leq N}\|u_{h}^{m}\|^{2}+k^{\alpha}\sum_{n=2}^{N}p_{N-n}\|\nabla u_{h}^{n}\|^{2}
\lesssim \left(k^{\alpha}\sum_{n=2}^{N}p_{N-n}\|f_{h}^{n}\|^{2}+\|\nabla u_{h}^{0}\|^{2}\right)\approx \tilde{K_{2}}.
\end{aligned}
\end{equation}
\end{thm}
\begin{proof}
Putting $v_{h}=u_{h}^{n}$ for $n\geq 2$ in $(E_{\alpha})$ and using (H2), (H3) along with the Cauchy-Schwarz and Young inequalities, we get
\begin{equation}\label{S1.23}
\frac{k^{-\alpha}}{\Gamma(2-\alpha)}\sum_{j=1}^{n}a_{n-j}\left(\|u_{h}^{j}\|^{2}-\|u_{h}^{j-1}\|^{2}\right)+\|\nabla u_{h}^{n}\|^{2}\lesssim \left(\|f_{h}^{n}\|^{2}+\|u_{h}^{n}\|^{2}+\sum_{j=1}^{n-1}w_{nj}\|\nabla u_{h}^{j}\|^{2}\right).
\end{equation}
Now multiply equation \eqref{S1.23} by the discrete kernel $p_{m-n}$ defined in Lemma \ref{L1.2} and sum from $n=1$ to $m$ to obtain
\begin{equation*}
\begin{aligned}
&\sum_{n=1}^{m}p_{m-n}\sum_{j=1}^{n}a_{n-j}\left(\|u_{h}^{j}\|^{2}-\|u_{h}^{j-1}\|^{2}\right)+k^{\alpha}\Gamma(2-\alpha)\sum_{n=1}^{m}p_{m-n}\|\nabla u_{h}^{n}\|^{2}\\
&\lesssim k^{\alpha}\Gamma(2-\alpha)\left(\sum_{n=1}^{m}p_{m-n}\|f_{h}^{n}\|^{2}+\sum_{n=1}^{m}p_{m-n}\|u_{h}^{n}\|^{2}+\sum_{n=1}^{m}p_{m-n}\sum_{j=1}^{n-1}w_{nj}\|\nabla u_{h}^{j}\|^{2}\right).
\end{aligned}
\end{equation*}
Interchanging the order of summation and using \eqref{2.81} yields
\begin{equation*}
\begin{aligned}
(1-\alpha_{0}) \|u_{h}^{m}\|^{2}+k^{\alpha}\sum_{n=1}^{m}p_{m-n}\|\nabla u_{h}^{n}\|^{2}
&\lesssim \sum_{n=1}^{m-1}\left(k^{\alpha}p_{m-n}\|u_{h}^{n}\|^{2}+k_{1}k^{\alpha}\sum _{j=1}^{n}p_{n-j}\|\nabla u_{h}^{j}\|^{2}\right)\\
&+k^{\alpha}\sum_{n=1}^{m}p_{m-n}\|f_{h}^{n}\|^{2}+\|u_{h}^{0}\|^{2}.
\end{aligned}
\end{equation*}
Now for sufficiently small $k$ such that $k^{\alpha} <\frac{1}{\Gamma(2-\alpha)}$, we have
\begin{equation*}
\begin{aligned}
\|u_{h}^{m}\|^{2}+k^{\alpha}\sum_{n=1}^{m}p_{m-n}\|\nabla u_{h}^{n}\|^{2}
&\lesssim \sum_{n=1}^{m-1}\left(k^{\alpha}p_{m-n}+k_{1}\right)\left(\|u_{h}^{n}\|^{2}+k^{\alpha}\sum _{j=1}^{n}p_{n-j}\|\nabla u_{h}^{j}\|^{2}\right)\\
&+k^{\alpha}\sum_{n=1}^{m}p_{m-n}\|f_{h}^{n}\|^{2}+\|u_{h}^{0}\|^{2}.
\end{aligned}
\end{equation*}
Now the discrete Gr\"{o}nwall inequality and \eqref{2.91} conclude the proof.
\end{proof}
\begin{subsection}{Proof of the Theorem \ref{J1}}
\begin{proof} (Error Estimate for $e^{1}=u(t_{1})-u_{h}^{1}$)
Substituting $u_{h}^{1}=W^{1}-\theta^{1}$ for $n=1$ in $(E_{\alpha})$ and using the weak formulation $(W_{\alpha})$ along with the Ritz-Volterra projection $W$ at $t_{1}$, we get
\begin{equation}\label{S1.271}
\begin{aligned}
\left(\mathbb{D}^{\alpha}_{t}\theta^{1},v_{h}\right)&+\left(M\left(x_{h},t_{1},\|\nabla u_{h}^{1}\|^{2}\right)\nabla \theta^{1},\nabla v_{h}\right)-kB\left(t_{1},t_{0},\theta^{0},v_{h}\right)\\
&=\left(\mathbb{D}^{\alpha}_{t}W^{1}-~^{C}D^{\alpha}_{t_{1}}u,v_{h}\right)-kB\left(t_{1},t_{0},W^{0},v_{h}\right)+\int_{0}^{t_{1}}B(t_{1},s,W,v_{h})ds\\
&+\left(\left[M\left(x_{h},t_{1},\|\nabla u_{h}^{1}\|^{2}\right)-M\left(x_{h},t_{1},\|\nabla u^{1}\|^{2}\right)\right]\nabla W^{1},\nabla v_{h}\right).
\end{aligned}
\end{equation}
Set $v_{h}=\theta^{1}$ in \eqref{S1.271}; using (H2), (H3) with the Cauchy-Schwarz and Young inequalities, we get
\begin{equation}\label{S1}
\begin{aligned}
\left(\mathbb{D}^{\alpha}_{t}\theta^{1},\theta^{1}\right)+\|\nabla \theta^{1}\|^{2}&\lesssim \|\mathbb{Q}^{1}\|^{2}+\|~^{C}{D}^{\alpha}_{t_{1}}\rho\|^{2}+\|q^{1}(W)\|^{2}+
\|\nabla \rho^{1}\|^{2}.
\end{aligned}
\end{equation}
Here $\mathbb{Q}^{1}=\mathbb{D}^{\alpha}_{t}W^{1}-~^{C}D^{\alpha}_{t_{1}}W$ is the truncation error defined in \eqref{2.9}, which satisfies $\|\mathbb{Q}^{1}\|\lesssim k^{2-\alpha}$ \cite{lin2007finite}, and $q^{1}(W)$ is the quadrature error of the rectangle rule on a single interval of length $k$, which satisfies $\|q^{1}(W)\|\lesssim k^{2}$. Thus using Theorem \ref{thm2.8}, we conclude
\begin{equation}
\|\theta^{1}\|^{2}+k^{\alpha}\|\nabla \theta^{1}\|^{2}\lesssim \left(k^{2}+h\right)^{2}.
\end{equation}
\noindent (Error Estimate for $e^{n}=u(t_{n})-u_{h}^{n},~~n\geq 2$) Putting $u_{h}^{n}=W^{n}-\theta^{n}$ in $(E_{\alpha})$ and following the same lines as for $n=1$, we arrive at
\begin{equation}\label{S1.33}
\begin{aligned}
\sum_{j=1}^{n}a_{n-j}\left[\|\theta^{j}\|^{2}-\|\theta^{j-1}\|^{2}\right]+k^{\alpha}\|\nabla \theta^{n}\|^{2}&\lesssim k^{\alpha}\left(\|\mathbb{Q}^{n}\|^{2}+\|~^{C}D_{t_{n}}^{\alpha}\rho\|^{2}+\sum_{j=1}^{n-1}w_{nj}\|\nabla \theta^{j}\|^{2}\right)\\
&+k^{\alpha}\left(\|\nabla \bar{u}_{h}^{n-1}-\nabla u^{n}\|^{2} +\|q^{n}(W)\|^{2}\right),
\end{aligned}
\end{equation}
\noindent where $q^{n}(W)$ is the quadrature error operator on $H^{1}_{0}(\Omega)$, defined by
\begin{equation}\label{S1.32}
q^{n}(W)(v)=\int_{0}^{t_{n}}B(t_{n},s,W,v)~ds-\sum_{j=1}^{n-1}w_{nj}B\left(t_{n},t_{j},W^{j},v\right),
\end{equation}
which satisfies $\|q^{n}(W)\|\lesssim k^{2}$ \cite{pani1992numerical}, and $\mathbb{Q}^{n}=\mathbb{D}^{\alpha}_{t}W^{n}-~^{C}D^{\alpha}_{t_{n}}W$ is the truncation error, which satisfies $\|\mathbb{Q}^{n}\|\lesssim k^{2-\alpha}$ \cite{lin2007finite}. Thus using Theorem \ref{thm2.8}, we obtain
\begin{equation}\label{S1.37}
\begin{aligned}
\sum_{j=1}^{n}a_{n-j}\left[\|\theta^{j}\|^{2}-\|\theta^{j-1}\|^{2}\right]+k^{\alpha}\|\nabla \theta^{n}\|^{2}&\lesssim k^{\alpha}\left(k^{2-\alpha}+h^{2}+h+k^{2}+k^{2}\right)^{2}+k^{\alpha}\|\theta^{n}\|^{2}\\
&+k^{\alpha}\left(\sum_{j=1}^{n-1}k_{1}\|\nabla \theta^{j}\|^{2}+\|\nabla \theta^{n-1} \|^{2}+\|\nabla \theta^{n-2}\|^{2}\right).
\end{aligned}
\end{equation}
Now following the same arguments as in the proof of estimate \eqref{S1.24}, we conclude the result.
\end{proof}
\end{subsection}
\end{section}
\section{Fractional Crank-Nicolson-Galerkin FEM based on L2-1$_{\sigma}$ scheme}
In this section, we show that our proposed scheme $(F_{\alpha})$ achieves the optimal rate of convergence in time. The existence and uniqueness of solutions for the fully discrete formulation $(F_{\alpha})$ are proved along the same lines as in the proof of Theorem \ref{T1.4} and are therefore omitted here.
\begin{thm}\label{6.1} (\textbf{A priori bound}) Under the assumptions of (H1), (H2), and (H3) the solutions $u_{h}^{n}$ of $(F_{\alpha})$ satisfy the following a priori bounds
\begin{equation}\label{S2.24}
\begin{aligned}
\max_{1\leq m \leq N}\|u_{h}^{m}\|^{2}+k^{\alpha}\sum_{n=1}^{N}\tilde{p}_{N-n}^{(N)}\|\nabla u_{h}^{n}\|^{2}
\lesssim \left(k^{\alpha}\sum_{n=1}^{N}\tilde{p}_{N-n}^{(N)}\|f_{h}^{n-\frac{\alpha}{2}}\|^{2}+\|\nabla u_{h}^{0}\|^{2}\right).
\end{aligned}
\end{equation}
\end{thm}
\begin{proof}
The a priori bound for $u_{h}^{1}$ is obtained as in Theorem \ref{T1.4}. To obtain a priori bounds for $u_{h}^{n}~(n\geq 2)$,
we put $v_{h}=u_{h}^{n}$ in $(F_{\alpha})$ and use (H2), (H3) along with the identity
\begin{equation}
\|\nabla u_{h}^{n}+\nabla u_{h}^{n-1}\|^{2}=\|\nabla u_{h}^{n}\|^{2}+\|\nabla u_{h}^{n-1}\|^{2}+2\left(\nabla u_{h}^{n},\nabla u_{h}^{n-1}\right),
\end{equation}
we arrive at
\begin{equation}\label{S2.23}
\begin{aligned}
\frac{k^{-\alpha}}{\Gamma(2-\alpha)}\sum_{j=1}^{n}\tilde{c}_{n-j}^{(n)}\left(\|u_{h}^{j}\|^{2}-\|u_{h}^{j-1}\|^{2}\right)+\|\nabla u_{h}^{n}\|^{2}&\lesssim \|f_{h}^{n-\frac{\alpha}{2}}\|^{2}+\sum_{j=1}^{n-1}\tilde{w}_{nj}\|\nabla u_{h}^{j}\|^{2}\\
&+\|\nabla u_{h}^{n-1}\|^{2}.
\end{aligned}
\end{equation}
Now following the same arguments as in the proof of Theorem \ref{5.3}, we obtain \eqref{S2.24}.
\end{proof}
\begin{subsection}{Proof of the Theorem \ref{J2}}
\begin{proof}
Substituting $u_{h}^{n}=W^{n}-\theta^{n}$ in $(F_{\alpha})$, evaluating the weak formulation $(W_{\alpha})$ and the Ritz-Volterra projection $W$ at $t_{n-\frac{\alpha}{2}}$, and then putting $v_{h}=\theta^{n}$, we get
\begin{equation}\label{2.21}
\begin{aligned}
\tilde{\mathbb{D}}^{\alpha}_{t_{n-\frac{\alpha}{2}}}\|\theta^{n}\|^{2}+\|\nabla \theta^{n}\|^{2}&\lesssim \|\tilde{\mathbb{D}}^{\alpha}_{t_{n-\frac{\alpha}{2}}}W^{n}-~^{C}D^{\alpha}_{t_{n-\frac{\alpha}{2}}}u\|^{2}+\sum_{j=1}^{n-1}\tilde{w}_{nj}\|\nabla \theta^{j}\|^{2}\\
&+\|\nabla \bar{u}_{h}^{n-1,\alpha}-\nabla u^{n-\frac{\alpha}{2}}\|^{2}+\|\nabla \hat{W}^{n,\alpha}-\nabla W^{n-\frac{\alpha}{2}}\|^{2}\\
& +\|\tilde{q}^{n-\frac{\alpha}{2}}(W)\|^{2}+\|\nabla \theta^{n-1}\|^{2},
\end{aligned}
\end{equation}
where $\tilde{q}^{n-\frac{\alpha}{2}}(W)$ is the quadrature error operator on $H^{1}_{0}(\Omega)$, defined by
\begin{equation}\label{S2.32}
\tilde{q}^{n-\frac{\alpha}{2}}(W)(v)=\int_{0}^{t_{n-\frac{\alpha}{2}}}B(t_{n-\frac{\alpha}{2}},s,W,v)~ds-\sum_{j=1}^{n-1}\tilde{w}_{nj}B\left(t_{n-\frac{\alpha}{2}},t_{j},W^{j},v\right),
\end{equation}
which satisfies $\|\tilde{q}^{n-\frac{\alpha}{2}}(W)\|\lesssim k^{2}$ \cite{pani1992numerical},
and $\tilde{\mathbb{Q}}^{n-\frac{\alpha}{2}}=\tilde{\mathbb{D}}^{\alpha}_{t_{n-\frac{\alpha}{2}}}W^{n}-~^{C}D^{\alpha}_{t_{n-\frac{\alpha}{2}}}W$ is the truncation error, which satisfies $\|\tilde{\mathbb{Q}}^{n-\frac{\alpha}{2}}\|\lesssim k^{3-\alpha}$ \cite{alikhanov2015new}. Thus we get
\begin{equation}\label{S2.37}
\begin{aligned}
\sum_{j=1}^{n}\tilde{c}_{n-j}^{(n)}\left[\|\theta^{j}\|^{2}-\|\theta^{j-1}\|^{2}\right]+k^{\alpha}\|\nabla \theta^{n}\|^{2}&\lesssim k^{\alpha}\left(h^{2}+k^{3-\alpha}+h+k^{2}+k^{2}\right)^{2}\\
&+k^{\alpha}\left(\sum_{j=1}^{n-1}k_{1}\|\nabla \theta^{j}\|^{2}+\|\nabla \theta^{n-1} \|^{2}+\|\nabla \theta^{n-2}\|^{2}\right).
\end{aligned}
\end{equation}
\noindent Now following the same arguments as in the proofs of Theorems \ref{J1} and \ref{6.1}, we obtain
\begin{equation*}
\|\theta^{m}\|^{2}+k^{\alpha}\sum_{n=1}^{m}\tilde{p}_{m-n}^{(m)}\|\nabla \theta^{n}\|^{2}
\lesssim \left(h+k^{\min\{3-\alpha,2\}}\right)^{2}.
\end{equation*}
Using a similar approach for the case $n=1$, the triangle inequality, and the fact that $\min\{3-\alpha,2\}=2$ for $0<\alpha<1$, we obtain the required result.
\end{proof}
\end{subsection}
\section{Numerical experiments}
In this section, we implement the theoretical results obtained from the fully discrete formulations for the model $(D_\alpha)$. For the space discretization, linear hat basis functions $\{\psi_{1}, \psi_{2}, \dots, \psi_{J}\}$ spanning the $J$-dimensional subspace $X_{h}$ of $H_{0}^{1}(\Omega)$ are used; the numerical solution $u_{h}^{n}$ of $(D_{\alpha})$ at any time $t_{n}$ in $[0,T]$ is then written as
\begin{equation}\label{7.1}
u_{h}^{n}=\sum_{i=1}^{J}\alpha_{i}^{n}\psi_{i},
\end{equation}
where $\alpha^{n}=\left(\alpha_{1}^{n}, \alpha_{2}^{n}, \alpha_{3}^{n}, \dots, \alpha_{J}^{n} \right)$ is to be determined.
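\noindent Substituting \eqref{7.1} into $(E_{\alpha})$ and testing with $v_{h}=\psi_{i}$ yields one linear solve per time step, since the Kirchhoff coefficient is lagged at $\bar{u}_{h}^{n-1}$. In the (purely illustrative) matrix notation $\mathbf{G}_{ij}=(\psi_{j},\psi_{i})$, $\mathbf{A}^{n}_{ij}=\left(M\left(x,t_{n},\|\nabla \bar{u}_{h}^{n-1}\|^{2}\right)\nabla \psi_{j},\nabla \psi_{i}\right)$, $\mathbf{F}^{n}_{i}=(f_{h}^{n},\psi_{i})$, and $\mathbf{B}^{nj}_{il}=B(t_{n},t_{j},\psi_{l},\psi_{i})$, a sketch of the system for $n\geq 2$ reads
\begin{equation*}
\left(\frac{k^{-\alpha}}{\Gamma(2-\alpha)}\mathbf{G}+\mathbf{A}^{n}\right)\alpha^{n}
=\mathbf{F}^{n}+\sum_{j=1}^{n-1}w_{nj}\mathbf{B}^{nj}\alpha^{j}
+\frac{k^{-\alpha}}{\Gamma(2-\alpha)}\left[\mathbf{G}\alpha^{n-1}-\sum_{j=1}^{n-1}a_{n-j}\mathbf{G}\left(\alpha^{j}-\alpha^{j-1}\right)\right],
\end{equation*}
using $a_{0}=1$; the scheme $(F_{\alpha})$ leads to an analogous system with the weights $\tilde{c}_{j}^{(n)}$ and $\tilde{w}_{nj}$.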
Let us denote the error estimates that we have proved in Theorems \ref{J1} and \ref{J2} by
\begin{equation*}
\begin{aligned}
\text{\bf Error-1}&=\max_{1\leq n \leq N}\|u(t_{n})-u_{h}^{n}\|+\left(k^{\alpha} \sum_{n=1}^{N}p_{N-n}\|\nabla u(t_{n})-\nabla u_{h}^{n}\|^{2}\right)^{1/2},
\end{aligned}
\end{equation*}
and
\begin{equation*}
\begin{aligned}
\text{\bf Error-2}&=\max_{1\leq n \leq N}\|u(t_{n})-u_{h}^{n}\|+\left(k^{\alpha} \sum_{n=1}^{N}\tilde{p}_{N-n}^{(N)}\|\nabla u(t_{n})-\nabla u_{h}^{n}\|^{2}\right)^{1/2},
\end{aligned}
\end{equation*}
respectively.
\begin{exam} We consider the problem $(D_{\alpha})$ on $\Omega \times [0,T]$ where $\Omega = [0,1]\times[0,1]$ and $T=1$ with the following data
\begin{enumerate}
\item $M\left(x,y,t,\|\nabla u\|^{2}\right)=a(x,y)+b(x,y)\|\nabla u\|^{2}$ with $a(x,y)=x^{2}+y^{2}+1$ and $b(x,y)=xy$. This type of diffusion coefficient $M$ was studied numerically by Medeiros et al. in \cite{medeiros2012perturbation}.
\item $b_{2}(t,s,x,y)=e^{t-s}$
$\begin{bmatrix}
-1 & 0\\
0 &-1
\end{bmatrix}; ~ b_{1}(t,s,x,y)=b_{0}(t,s,x,y)=0.$
\item Source term
$f(x,t)=f_{1}(x,t)+f_{2}(x,t)+f_{3}(x,t)$ with
\begin{equation*}
\begin{aligned}
f_{1}(x,t)&=\frac{2}{\Gamma(3-\alpha)}t^{2-\alpha}(x-x^{2})(y-y^{2}),\\
f_{2}(x,t)&=2(x+y-x^{2}-y^{2})\left(t^{2}x^{2}+t^{2}y^{2}+\frac{xyt^{6}}{45}-2t-2+2e^{t}\right),\\
f_{3}(x,t)&=t^{2}(2x-1)(y-y^{2})\left(2x+\frac{yt^{4}}{45}\right)+t^{2}(2y-1)(x-x^{2})\left(2y+\frac{xt^{4}}{45}\right).
\end{aligned}
\end{equation*}
Then the exact solution of $(D_{\alpha})$ is
$u=t^{2}\left(x-x^{2}\right)\left(y-y^{2}\right).$
\end{enumerate}
\end{exam}
\noindent This example can be used to model the diffusion of a substance in a domain $\Omega$, see \cite{barbeiro2011h1}. \\
\noindent We obtain errors and convergence rates in the space and time directions for different values of the mesh parameters $h$ and $k$ and different values of $\alpha$. The convergence rate is calculated through the following $\log$-$\log$ formula
\begin{equation*}
\text{ Convergence rate} =\begin{cases}\frac{\log(E(\tau,h_{1})/E(\tau,h_{2}))}{\log(h_{1}/h_{2})};& \text{In space direction}\\
\frac{\log(E(\tau_{1},h)/E(\tau_{2},h))}{\log(\tau_{1}/\tau_{2})};& \text{In time direction }
\end{cases}
\end{equation*}
where $E(\tau,h)$ is the error in respective norms at different mesh points.
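\noindent For instance, the following Python sketch (post-processing only; the error values are those of Table \ref{table3}, and the mesh parameter is normalised so that it halves at each iteration, a hypothetical normalisation for illustration) reproduces the reported rates:
\begin{verbatim}
import numpy as np

E = np.array([3.04e-2, 1.50e-2, 7.60e-3, 3.82e-3, 1.89e-3])  # Error-1
h = 0.5 ** np.arange(5)              # successive halvings (normalised)
rate_space = np.log(E[:-1] / E[1:]) / np.log(h[:-1] / h[1:])
rate_time = (2 - 0.25) * rate_space  # since h ~ k^{2-alpha}, alpha = 0.25
print(rate_space)                    # approx. 1, cf. Table 1
print(rate_time)                     # approx. 1.75, cf. Table 1
\end{verbatim}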
\noindent \textbf{Backward Euler-Galerkin FEM scheme:} The numerical scheme $(E_{\alpha})$ provides a convergence rate of $O(h+k^{2-\alpha})$; see Theorem \ref{J1}. To observe this order of convergence numerically, we run the MATLAB code over several iterations, setting $h \simeq k^{2-\alpha}$. Here $h$ is taken as the area of a triangle in the triangulation of the domain $\Omega=[0,1]\times [0,1]$. For the next iteration, we join the midpoints of each edge and obtain a finer triangulation, as presented in Figures 1-3. In this way we collect the numerical results up to five iterations to support our theoretical estimates established in Theorem \ref{J1}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.2]{mesh1.eps}
\caption{Iteration no 1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.2]{mesh2.eps}
\caption{Iteration no 2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.2]{mesh3.eps}
\caption{Iteration no 3}
\end{figure}
\noindent From Tables \ref{table3}, \ref{table31}, and \ref{table361}, we conclude that the convergence rate is linear in space and of order $(2-\alpha)$ in the time direction, as proved in Theorem \ref{J1}. We also observe that as $\alpha$ tends to 1, the convergence rate in the time direction approaches 1, which matches what we established in \cite{kumar2020finite} for the classical diffusion case.
\begin{table}[htbp]
\centering
\caption{\emph{Error and Convergence Rates in space-time direction for $\alpha=0.25$ }}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Iteration No.} & \textbf{Error-1} & \textbf{Rate in Space} & \textbf{Rate in Time} \\
\hline
1 & 3.04e-02 & - & - \\
\hline
2 & 1.50e-02& 1.0175 & 1.7807 \\
\hline
3 & 7.60e-03 & 0.9824 & 1.7192 \\
\hline
4 & 3.82e-03 & 0.9909 & 1.7342\\
\hline
5 & 1.89e-03 & 1.0166 & 1.7790\\
\hline
\end{tabular}%
\label{table3}%
\end{table}
\begin{table}[htbp]
\centering
\caption{\emph{Error and Convergence Rates in space-time direction for $\alpha=0.5$ }}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Iteration No.} & \textbf{Error-1} & \textbf{Rate in Space} & \textbf{Rate in Time} \\
\hline
1 & 2.63e-02 & - & - \\
\hline
2 & 1.35e-02& 0.9568 & 1.4352 \\
\hline
3 & 6.53e-03 & 1.0521 & 1.5781 \\
\hline
4 & 3.20e-03 & 1.0285 & 1.5428\\
\hline
5 & 1.58e-03 & 1.0141 & 1.5212\\
\hline
\end{tabular}%
\label{table31}%
\end{table}
\begin{table}[htbp]
\centering
\caption{\emph{Error and Convergence Rates in space-time direction for $\alpha=0.75$ }}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Iteration No.} & \textbf{Error-1} & \textbf{Rate in Space} & \textbf{Rate in Time} \\
\hline
1 & 2.23e-02 & - & - \\
\hline
2 & 1.09e-02& 1.0340 & 1.2925 \\
\hline
3 & 5.33e-03 & 1.0345 & 1.2931 \\
\hline
4 & 2.65e-03 & 1.0087 & 1.2960\\
\hline
5 & 1.29e-03 & 1.0331 & 1.2914\\
\hline
\end{tabular}%
\label{table361}%
\end{table}
\noindent \textbf{Fractional Crank-Nicolson-Galerkin FEM scheme:} This numerical scheme converges to the solution with accuracy $O(h+k^{2})$; see Theorem \ref{J2}. Here we set $h\simeq k^{2}$ to obtain the convergence rates in the space-time direction, and the iteration numbers have the same meaning as in the previous case $(E_{\alpha})$. From Tables \ref{table32}, \ref{table322}, and \ref{table323}, we deduce that the convergence rate is linear in space and quadratic in the time direction, as predicted in Theorem \ref{J2}.\\
\begin{table}[htbp]
\centering
\caption{\emph{Error and Convergence Rates in space-time direction for $\alpha=0.25$ }}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Iteration No.} & \textbf{Error-2} & \textbf{Rate in Space} & \textbf{Rate in Time} \\
\hline
1 & 2.32e-02 & - & - \\
\hline
2 & 1.10e-02& 1.0679 & 2.1359 \\
\hline
3 & 5.38e-03 & 1.0393 & 2.0787 \\
\hline
4 & 2.52e-03 & 1.0904 & 2.1809\\
\hline
5 & 1.24e-03 & 1.0225 & 2.0451\\
\hline
\end{tabular}%
\label{table32}%
\end{table}
\begin{table}[htbp]
\centering
\caption{\emph{Error and Convergence Rates in space-time direction for $\alpha=0.5$ }}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Iteration No.} & \textbf{Error-2} & \textbf{Rate in Space} & \textbf{Rate in Time} \\
\hline
1 & 2.42e-02 & - & - \\
\hline
2 & 1.16e-02& 1.0573 & 2.1147 \\
\hline
3 & 5.70e-03 & 1.0329 & 2.0658 \\
\hline
4 & 2.68e-03 & 1.0873 & 2.1746\\
\hline
5 & 1.32e-03 & 1.0207 & 2.0414\\
\hline
\end{tabular}%
\label{table322}%
\end{table}
\begin{table}[htbp]
\centering
\caption{\emph{Error and Convergence Rates in space-time direction for $\alpha=0.75$ }}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Iteration No.} & \textbf{Error-2} & \textbf{Rate in Space} & \textbf{Rate in Time} \\
\hline
1 & 1.97e-02 & - & - \\
\hline
2 & 9.05e-03& 1.1284 & 2.2569 \\
\hline
3 & 4.25e-03 & 1.0873 & 2.1746 \\
\hline
4 & 1.94e-03 & 1.1335 & 2.2671\\
\hline
5 & 9.30e-04 & 1.0614 & 2.1228\\
\hline
\end{tabular}%
\label{table323}%
\end{table}
\noindent Finally, we plot the approximate solution together with the exact solution in Figure 4, computed using the backward Euler-Galerkin FEM based on the L1 type scheme.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{123.eps}
\caption{Approximate solution (left) and exact solution (right) at $T=1$ and $\alpha =0.5$.}
\end{figure}
\section{Conclusions}
In this work, we have established the well-posedness of the weak formulation for time fractional integro-differential equations of Kirchhoff type for non-homogeneous materials. We have derived error estimates for the semi-discrete formulation by discretizing the domain in the space direction using a conforming FEM and obtained a convergence rate of $O(h)$ in the energy norm. Further, to obtain numerical solutions for this class of equations, we have developed and analyzed two different efficient numerical schemes. First, for the backward Euler-Galerkin FEM based on the L1 scheme, we derived a convergence rate of order $(2-\alpha)$ in the time direction, which is better than first order but not optimal. Next, to obtain the optimal rate of convergence, we proposed a new fractional Crank-Nicolson-Galerkin FEM based on the L2-1$_{\sigma}$ scheme and proved that its numerical solutions converge with $O(h+k^{2})$ accuracy. Finally, numerical experiments reveal that the theoretical error estimates are sharp.
\section{Problem Setting}
In this article we seek a numerical algorithm that iteratively approximates the solution $w^*$ of the following strongly convex optimization problem:
\begin{equation}\label{eq1.0}
w^*=\arg \min_{\Gamma_f} f(.)
\end{equation}
\noindent where $f(.): \Gamma_f \rightarrow \mathcal{R}$ is an unknown, not necessarily smooth, multivariate and $\lambda$-strongly convex function, with $\Gamma_f$ its convex definition domain. The algorithm is not allowed to sample $f(.)$ exactly by any means, since $f(.)$ itself is unknown. Instead, the algorithm can call stochastic oracles $\tilde{\omega}(.)$ at chosen points $x_1,\ldots,x_n$, which are unbiased and independent probabilistic estimators of the first-order local information of $f(.)$ in the vicinity of each $x_i$:
\begin{equation}\label{eq1.1}
\tilde{\omega}(x_i) = \{\tilde{f}_i(x_i), \bigtriangledown \tilde{f}_i(x_i)\}
\end{equation}
\noindent where $\bigtriangledown$ denotes a random subgradient operator and $\tilde{f}_i(.): \Gamma_f \rightarrow \mathcal{R}$ are independent and identically distributed (i.i.d.) functions that satisfy:
\begin{subequations}\label{eq1.2}
\begin{align}
\text{(unbiased) } \mathbb{E}[\tilde{f}_i(.)] &= f(.) \quad \forall i \label{eq1.2a} \\
\text{(i.i.d) }\mathrm{Cov} \left( \tilde{f}_i(.), \tilde{f}_j(.) \right) &= 0 \quad \forall i \neq j \label{eq1.2b}
\end{align}
\end{subequations}
Solvers for this kind of problem are in high demand in large scale computational learning, where the first-order stochastic oracle is the only measurable information about $f(.)$ that scales well with both the dimensionality and the size of the learning problem. For example, a stochastic first-order oracle in structural risk minimization (a.k.a. training a support vector machine) can be readily obtained in $O(1)$ time \cite{bottou2008SGD}.
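As a concrete illustration, the following Python sketch (a hypothetical helper, not taken from any cited implementation) realises such an oracle for the $\lambda$-strongly convex SVM objective $f(w)=\frac{\lambda}{2}\|w\|^{2}+\frac{1}{m}\sum_{i=1}^{m}\max\left(0,1-y_{i}\langle w,x_{i}\rangle\right)$: sampling one training point uniformly at random yields an unbiased estimate of $f(w)$ and of a subgradient, at a cost independent of the sample size $m$.
\begin{verbatim}
import numpy as np

def svm_oracle(w, X, y, lam, rng):
    # Stochastic first-order oracle: E[f_tilde] = f(w), and g_tilde is
    # an unbiased random subgradient, both from one random data point.
    i = rng.integers(len(y))
    margin = y[i] * X[i].dot(w)
    f_tilde = 0.5 * lam * w.dot(w) + max(0.0, 1.0 - margin)
    g_tilde = lam * w - (margin < 1.0) * y[i] * X[i]
    return f_tilde, g_tilde
\end{verbatim}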
\section{Algorithm}
The proposed algorithm itself is quite simple, although its proof of convergence is involved. The only improvement compared to SGD is the selection of the step size in each iteration, which, however, results in a substantial boost of performance, as will be shown in the next section.
\begin{figure*}
\noindent \rule{\linewidth}{0.5mm}\\
\textbf{Algorithm 1}\\
\rule{\linewidth}{0.3mm}
\begin{algorithmic}
\STATE Receive $x_1, \Gamma_f, \lambda$
\STATE $u_1 \gets 1$, $y_1 \gets x_1$
\STATE Receive $\tilde{f}_1(x_1), \bigtriangledown \tilde{f}_1(x_1)$
\STATE $\tilde{P}_1(.) \gets \Gamma_f \left\{ \tilde{f}_1(x_1)+\langle \bigtriangledown \tilde{f}_1(x_1), .-x_i \rangle+\frac{\lambda}{2} ||.-x_1||^2 \right\}$
\FOR {$i=2,\ldots,n$}
\STATE $x_i \gets \arg\min \tilde{P}_{i-1}(.)$
\STATE Receive $\tilde{f}_i(x_i), \bigtriangledown \tilde{f}_i(x_i)$
\STATE $\tilde{p}_i(.) \gets \Gamma_f \left\{ \tilde{f}_i(x_i)+\langle \bigtriangledown \tilde{f}_i(x_i), .-x_i \rangle+\frac{\lambda}{2} ||.-x_i||^2 \right\}$
\STATE $\tilde{P}_i(.) \gets (1-\frac{u_{i-1}}{2}) \tilde{P}_{i-1}(.) + \frac{u_{i-1}}{2} \tilde{p}_i(.)$
\STATE $y_i \gets (1-\frac{u_{i-1}}{2}) y_{i-1} + \frac{u_{i-1}}{2} x_{i}$
\STATE $u_i \gets u_{i-1}-\frac{u_{i-1}^2}{4}$
\ENDFOR
\STATE Output $y_n$
\end{algorithmic}
\rule{\linewidth}{0.5mm}
\end{figure*}
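\noindent Since every $\tilde{p}_i(.)$ is a quadratic with Hessian $\lambda I$, each aggregated model $\tilde{P}_i(.)$ is of the form $\frac{\lambda}{2}\|.\|^2+\langle b,.\rangle+c$ and can be represented by the coefficient $b$ of its linear part alone. The following Python sketch exploits this to implement Algorithm 1 in the unconstrained case $\Gamma_f=\mathcal{R}^d$ (the projection onto a general $\Gamma_f$ is omitted); it is a minimal illustration rather than a tuned implementation.
\begin{verbatim}
import numpy as np

def algorithm1(oracle, x1, lam, n):
    # P_i(x) = lam/2 ||x||^2 + <b, x> + const, minimised at x = -b/lam.
    u, y = 1.0, x1.copy()
    _, g = oracle(x1)
    b = g - lam * x1                     # linear part of P_1
    for _ in range(2, n + 1):
        x = -b / lam                     # x_i = argmin P_{i-1}
        _, g = oracle(x)
        b = (1 - u / 2) * b + (u / 2) * (g - lam * x)   # P_i update
        y = (1 - u / 2) * y + (u / 2) * x               # y_i update
        u = u - u * u / 4                               # u_i update
    return y
\end{verbatim}
Combined with the oracle sketch from the previous section, a call such as \texttt{algorithm1(lambda w: svm\_oracle(w, X, y, lam, rng), x1, lam, n)} returns the averaged iterate $y_n$.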
\section{Analysis}
The proposed algorithm is designed to generate an output $y$ that reduces the suboptimality:
\begin{equation}\label{eq3.1}
S(y)=f(y)-\min f(.)
\end{equation}
\noindent as fast as possible after a given number of operations. We derive the algorithm from several intuitive observations that generalize existing first-order methods. First, we start from worst-case upper bounds of $S(y)$ in deterministic programming:
\begin{lemma}\label{state3.1}
(Cutting-plane bound \cite{joachims2006cuttingplane}): Given $n$ deterministic oracles $\Omega_n=\{\omega(x_1),\ldots,\omega(x_n)\}$ defined by:
\begin{equation}\label{eq3.2}
\omega(x_i)=\{f(x_i), \bigtriangledown f(x_i)\}
\end{equation}
If $f(.)$ is a $\lambda$-strongly convex function, then $\min f(.)$ is unimprovably lower bounded by:
\begin{equation}\label{eq3.1.5}
\min f(.) \geq \max_{i=1 \ldots n} p_i(w^*) \geq \min \max_{i=1 \ldots n} p_i(.)
\end{equation}
\noindent where $w^*$ is the unknown minimizer defined by \eqref{eq1.0}, and $p_i(.): \Gamma_f \rightarrow \mathcal{R}$ are proximity control functions (or simply prox-functions) defined by $p_i(.)=f(x_i)+\langle \bigtriangledown f(x_i), .-x_i \rangle + \frac{\lambda}{2} ||.-x_i||^2$.
\end{lemma}
\begin{proof}
By strong convexity of $f(.)$ we have:
\begin{flalign}
& &\mathbb{B}_f(.||x_i) &\geq \frac{\lambda}{2} ||.-x_i||^2 && \nonumber \\
&\Longrightarrow& f(.) &\geq p_i(.) && \nonumber \\
&\Longrightarrow& f(.) &\geq \max_{i=1,\ldots,n} p_i(.) && \label{eq3.1.6c}\\
&\Longrightarrow& \min f(.) = f(w^*) &\geq \max_{i=1,\ldots,n} p_i(w^*) \geq \min \max_{i=1,\ldots,n} p_i(.) && \label{eq3.1.6d}
\end{flalign}
\noindent where $\mathbb{B}_f(x_1||x_2)=f(x_1)-f(x_2)-\langle \bigtriangledown f(x_2),x_1-x_2 \rangle$ denotes the Bregman divergence between two points $x_1,x_2 \in \Gamma_f$. Both sides of \eqref{eq3.1.6c} and \eqref{eq3.1.6d} become equal if $f(.) = \max_{i=1,\ldots,n} p_i(.)$, so this bound cannot be improved without any extra condition.
\end{proof}
\begin{lemma}\label{state3.1.2}
(Jensen's inequality for strongly convex function) Given $n$ deterministic oracles $\Omega_n=\{ \omega(x_1), \ldots, \omega(x_n) \}$ defined by \eqref{eq3.2}. If $f(.)$ is a $\lambda$-strongly convex function, then for all $\alpha_1,\ldots,\alpha_n$ that satisfy $\sum_{i=1}^n \alpha_i = 1, \alpha_i \geq 0 \quad \forall i$, $f(y)$ is unimprovably upper bounded by:
\begin{equation}\label{eq3.2.5}
f(y) \leq \sum_{i=1}^n \alpha_i f(x_i) - \frac{\lambda}{2} \sum_{i=1}^{n} \alpha_i ||x_i-y||^2
\end{equation}
\noindent where $y=\sum_{i=1}^{n} \alpha_i x_i$.
\end{lemma}
\begin{proof}
By strong convexity of $f(.)$ we have:
\begin{flalign*}
&& \mathbb{B}_f(x_i||y) &\geq \frac{\lambda}{2} ||x_i-y||^2 &&\\
&\Longrightarrow& f(y) &\leq f(x_i) -\langle \bigtriangledown f(y) , x_i-y \rangle - \frac{\lambda}{2} ||x_i-y||^2 &&\\
&\Longrightarrow& f(y) &\leq \sum_{i=1}^n \alpha_i f(x_i) - \langle \bigtriangledown f(y), \sum_{i=1}^n \alpha_i x_i-y \rangle - \frac{\lambda}{2} \sum_{i=1}^{n} \alpha_i || x_i-y ||^2 &&\\
&& &\leq \sum_{i=1}^n \alpha_i f(x_i) - \frac{\lambda}{2} \sum_{i=1}^{n} \alpha_i ||x_i-y||^2 &&
\end{flalign*}
All of the above inequalities become equalities if $f(.)=\frac{\lambda}{2} ||.-y||^2 + \langle c_1,. \rangle + c_2$, where $c_1$ and $c_2$ are constants, so this bound cannot be improved without extra conditions.
\end{proof}
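As a quick numeric sanity check of Lemma \ref{state3.1.2}, the sketch below (our own illustration) evaluates both sides of \eqref{eq3.2.5} for a $\lambda$-strongly convex test function and random convex weights:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
lam, n, d = 1.0, 6, 3
f = lambda x: 0.5 * lam * np.dot(x, x) + np.sum(x ** 4)

xs = rng.normal(size=(n, d))
alpha = rng.random(n)
alpha /= alpha.sum()                      # convex weights
y = alpha @ xs                            # y = sum_i alpha_i x_i

lhs = f(y)
rhs = sum(a * f(x) for a, x in zip(alpha, xs)) \
      - 0.5 * lam * sum(a * np.sum((x - y) ** 2)
                        for a, x in zip(alpha, xs))
assert lhs <= rhs + 1e-12                 # the bound of the lemma holds
\end{verbatim}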
Immediately, the optimal weight vector $A=[\alpha_1,\ldots,\alpha_n]^T$ that yields the lowest upper bound of $f(y)$ is given by:
\begin{equation}\label{eq3.2.6}
A = \arg\min_{\substack{\sum_{i=1}^n \alpha_i = 1 \\ \alpha_i \geq 0 \forall i}} \left\{ \sum_{i=1}^n \alpha_i f(x_i) - \frac{\lambda}{2} \sum_{i=1}^{n} \alpha_i ||x_i-\sum_{j=1}^{n} \alpha_j x_j||^2 \right\}
\end{equation}
Combining \eqref{eq3.1} and \eqref{eq3.1.5}, we have a deterministic upper bound of $S(y)$:
\begin{equation}\label{eq3.2.7}
S(y) \leq \min_{\substack{\sum_{i=1}^n \alpha_i = 1 \\ \alpha_i \geq 0 \forall i}} \left\{ \sum_{i=1}^n \alpha_i f(x_i) - \frac{\lambda}{2} \sum_{i=1}^{n} \alpha_i ||x_i-\sum_{j=1}^{n} \alpha_j x_j||^2 \right\} - \max_{i=1 \ldots n} p_i(w^*)
\end{equation}
This bound is not directly useful yet, as we are only interested in bounds in stochastic programming. The next lemma shows how it can be generalized to the latter case.
\begin{lemma}\label{state3.2}
Given $n$ stochastic oracles $\tilde{\Omega}_n=\{\tilde{\omega}(x_1),\ldots,\tilde{\omega}(x_n)\}$ defined by \eqref{eq1.1}, if $y(.,\ldots,.): \mathcal{H}^n \times \Gamma_f^n \rightarrow \Gamma_f$ and $U(.,\ldots,.): \mathcal{H}^n \times \Gamma_f^n \rightarrow \mathcal{R}$ are functionals of $\tilde{f}_i(.)$ and $x_i$ that satisfy:
\begin{subequations}
\begin{align}
&U(f,\ldots,f,x_1,\ldots,x_n) \geq S(y(f,\ldots,f,x_1,\ldots,x_n)) \label{eq3.3}\\
&U(\tilde{f}_1,\ldots,\tilde{f}_n,x_1,\ldots,x_n) \text{ is convex w.r.t. } \tilde{f}_1,\ldots,\tilde{f}_n \label{eq3.4} \\
&\mathbb{E}[\langle \bigtriangledown_{f,\ldots,f} U(f,\ldots,f,x_1,\ldots,x_n),[\tilde{f}_1-f,\ldots,\tilde{f}_n-f]^T \rangle] \geq 0 \label{eq3.4c}
\end{align}
\end{subequations}
\noindent then $\mathbb{E}[S(y(f,\ldots,f,x_1,\ldots,x_n))]$ is upper bounded by $\mathbb{E}[U(\tilde{f}_1,\ldots,\tilde{f}_n,x_1,\ldots,x_n)]$.
\end{lemma}
\begin{proof}
Assuming that $\delta_i(.): \Gamma_f \rightarrow \mathcal{R}$ are perturbation functions defined by
\begin{equation}\label{eq3.3.2}
\delta_i(.)=\tilde{f}_i(.)-f(.)
\end{equation}
\noindent we have:
\begin{align}
U(\tilde{f}_{1,\ldots,n},x_{1,\ldots,n}) &= U(f+\delta_1,\ldots,f + \delta_n,x_{1,\ldots,n}) \nonumber \\
\text{(by \eqref{eq3.4})} &\geq U(f,\ldots,f,x_{1,\ldots,n}) + \langle \bigtriangledown_{f,\ldots,f} U(f,\ldots,f,x_{1,\ldots,n}),[\delta_{1,\ldots,n}]^T \rangle \nonumber \\
\text{(by \eqref{eq3.3})} &\geq S(y(f,\ldots,f,x_{1,\ldots,n})) + \langle \bigtriangledown_{f,\ldots,f} U(f,\ldots,f,x_{1,\ldots,n}),[\delta_{1,\ldots,n}]^T \rangle \label{eq3.3.3}
\end{align}
Taking expectations on both sides of \eqref{eq3.3.3} and rearranging:
\begin{align*}
\mathbb{E}[S(y(f,\ldots,f,x_{1,\ldots,n}))] &\leq \mathbb{E}[U(\tilde{f}_{1,\ldots,n},x_{1,\ldots,n})] - \mathbb{E}[\langle \bigtriangledown_{f,\ldots,f} U(f,\ldots,f,x_{1,\ldots,n}),[\delta_{1,\ldots,n}]^T \rangle] \\
\text{(by \eqref{eq3.4c})} &\leq \mathbb{E}[U(\tilde{f}_{1,\ldots,n},x_{1,\ldots,n})]
\end{align*}
\end{proof}
Clearly, according to \eqref{eq3.4}, setting:
\begin{equation}\label{eq3.3.1}
U(\tilde{f}_{1,\ldots,n},x_{1,\ldots,n}) = \min_{\substack{\sum_{i=1}^n \alpha_i = 1 \\ \alpha_i \geq 0 \forall i}} \left\{ \sum_{i=1}^n \alpha_i \tilde{f}_i(x_i) - \frac{\lambda}{2} \sum_{i=1}^{n} \alpha_i ||x_i-\sum_{j=1}^{n} \alpha_j x_j||^2 \right\} - \max_{i=1 \ldots n} \tilde{p}_i(w^*)
\end{equation}
\noindent by substituting $f(.)$ and $p_i(.)$ in \eqref{eq3.2.7} respectively with $\tilde{f}_i(.)$ defined by \eqref{eq1.2} and $\tilde{p}_i(.): \Gamma_f \rightarrow \mathcal{R}$ defined by:
\begin{equation}\label{eq3.3.5}
\tilde{p}_i(.)=\tilde{f}_i(x_i)+\langle \bigtriangledown \tilde{f}_i(x_i), .-x_i \rangle+\frac{\lambda}{2} ||.-x_i||^2
\end{equation}
\noindent is not an option, because $\min_{\substack{\sum_{i=1}^n \alpha_i = 1 \\ \alpha_i \geq 0 \forall i}} \{.\}$ and $- \max_{i=1 \ldots n} \{.\}$ are both concave, $\sum_{i=1}^n \alpha_i \tilde{f}_i(x_i)$ and $\tilde{p}_i(w^*)$ are both linear in $\tilde{f}_i(.)$, and $\frac{\lambda}{2} \sum_{i=1}^{n} \alpha_i ||x_i-\sum_{j=1}^{n} \alpha_j x_j||^2$ does not depend on $\tilde{f}_i(.)$. This prevents asymptotically fast cutting-plane/bundle methods \cite{joachims2006cuttingplane,teo2009bundle,franc2009optimized} from being directly applied to stochastic oracles without any loss of performance. As a result, to decrease \eqref{eq3.1} and satisfy \eqref{eq3.4}, our options boil down to replacing $\min_{\substack{\sum_{i=1}^n \alpha_i = 1 \\ \alpha_i \geq 0 \forall i}} \{.\}$ and $- \max_{i=1 \ldots n} \{.\}$ in \eqref{eq3.3.1} with their respective lowest convex upper bounds:
\begin{align}
U_{(A,B)} (\tilde{f}_{1,\ldots,n},x_{1,\ldots,n}) &= \sum_{i=1}^n \alpha_i \tilde{f}_i(x_i) - \frac{\lambda}{2} \sum_{i=1}^{n} \alpha_i ||x_i-\sum_{j=1}^{n} \alpha_j x_j||^2 - \sum_{i=1}^{n} \beta_i \tilde{p}_i(w^*) \nonumber \\
&= \sum_{i=1}^n \alpha_i \tilde{f}_i(x_i) - \frac{\lambda}{2} \sum_{i=1}^{n} \alpha_i ||x_i-\sum_{j=1}^{n} \alpha_j x_j||^2 - \tilde{P}_n(w^*) \label{eq3.8}
\end{align}
\noindent where $\tilde{P}_n(.): \Gamma_f \rightarrow \mathcal{R}$ is defined by:
\begin{equation}\label{eq3.5.5}
\tilde{P}_n(.) = \sum_{i=1}^{n} \beta_i \tilde{p}_i(.)
\end{equation}
\noindent and $A=[\alpha_1,\ldots,\alpha_n]^T$, $B=[\beta_1,\ldots,\beta_n]^T$ are constant $n$-dimensional vectors, with each $\alpha_i$, $\beta_i$ satisfying:
\begin{subequations}\label{eq3.6}
\begin{align}
\sum_{i=1}^n \alpha_i &= 1 & \alpha_i &\geq 0 \quad \forall i\\
\sum_{i=1}^n \beta_i &= 1 & \beta_i &\geq 0 \quad \forall i
\end{align}
\end{subequations}
\noindent accordingly $y(\tilde{f}_1,\ldots,\tilde{f}_n,x_1,\ldots,x_n)$ can be set to:
\begin{equation}\label{eq3.8.5}
y_{(A,B)}(x_1,\ldots,x_n) = \sum_{i=1}^n \alpha_i x_i
\end{equation}
\noindent such that \eqref{eq3.3} is guaranteed by Lemma \ref{state3.1.2}. It should be noted that $A$ and $B$ must both be constant vectors that are independent of all stochastic variables, otherwise the convexity condition \eqref{eq3.4} may be lost. For example, if we always set $\beta_i$ as the solution of the following problem:
\begin{equation*}
\beta_i=\arg \max_{\substack{\sum_{i=1}^n \beta_i = 1 \\ \beta_i \geq 0 \forall i}} \left\{ \sum_{i=1}^{n} \beta_i \tilde{p}_i(w^*) \right\}
\end{equation*}
\noindent then $\tilde{P}_n(w^*)$ will be no different from the cutting-plane bound \eqref{eq3.1.5}. Finally, \eqref{eq3.4c} can be validated directly by substituting \eqref{eq3.8} back into \eqref{eq3.4c}:
\begin{align}\label{eq3.9}
\langle \bigtriangledown_{f,\ldots,f} U_{(A,B)}(f,\ldots,f,x_{1,\ldots,n}),[\delta_{1,\ldots,n}]^T \rangle = \sum_{i=1}^{n} \left[ (\alpha_i-\beta_i) \delta_i(x_i) - \langle \bigtriangledown \delta_i(x_i), \beta_i (w^*-x_i) \rangle \right]
\end{align}
Clearly $\mathbb{E}[(\alpha_i-\beta_i) \delta_i(x_i)] = 0$ and $\mathbb{E}[\langle \bigtriangledown \delta_i(x_i), w^* \rangle] = 0$ are easily satisfied: $\alpha_i$ and $\beta_i$ are already set to constants to enforce \eqref{eq3.4}, $w^*=\arg\min f(.)$ is by definition a deterministic (yet unknown) variable in our problem setting, and both $\delta_i(x_i)$ and $\bigtriangledown \delta_i(x_i)$ have zero mean according to \eqref{eq1.2a}. Bounding $\mathbb{E}[\langle \bigtriangledown \delta_i(x_i), x_i \rangle]$ is a bit harder but still possible: in any optimization algorithm, each $x_i$ is either a constant, or chosen from $\Gamma_f$ based on the previous $\tilde{f}_1(.),\ldots,\tilde{f}_{i-1}(.)$ ($x_i$ cannot be based on $\tilde{f}_i(.)$, which is still unknown by the time $x_i$ is chosen). By the i.i.d. condition \eqref{eq1.2b}, these are all independent of $\tilde{f}_i(.)$, which implies that $x_i$ is also independent of $\tilde{f}_i(x_i)$:
\begin{equation}\label{eq3.9.1}
\mathbb{E}[ \langle \bigtriangledown \delta_i(x_i), x_i \rangle ] = 0
\end{equation}
As a result, we conclude that \eqref{eq3.9} satisfies $\mathbb{E}[\langle \bigtriangledown_{f,\ldots,f} U_{(A,B)}(f,\ldots,f,x_{1,\ldots,n}),[\delta_{1,\ldots,n}]^T \rangle] = 0$, and subsequently $U_{(A,B)}$ defined by \eqref{eq3.8} satisfies all three conditions of Lemma \ref{state3.2}. At this point we may construct an algorithm that uniformly reduces $\max_{w^*} U_{(A,B)}$ by iteratively calling new stochastic oracles and updating $A$ and $B$. Our main result is summarized in the following theorem:
\begin{theorem}\label{state3.4}
For all $\lambda$-strongly convex function $F(.)$, assuming that at some stage of an algorithm, $n$ stochastic oracles $\tilde{\omega}(x_1),\ldots,\tilde{\omega}(x_n)$ have been called to yield a point $y_{(A^n,B^n)}$ defined by \eqref{eq3.8.5} and an upper bound $\hat{U}_{(A^n,B^n)}$ defined by:
\begin{align}
\hat{U}_{(A^n,B^n)} (\tilde{f}_{1,\ldots,n},x_{1,\ldots,n}) &= U_{(A^n,B^n)} (\tilde{f}_{1,\ldots,n},x_{1,\ldots,n}) + \frac{\lambda}{2} \sum_{i=1}^{n} \alpha_i ||x_i-\sum_{j=1}^{n} \alpha_j x_j||^2 +\left( \tilde{P}_n(w^*) - \min \tilde{P}_n(.) \right) \nonumber \\
&= \sum_{i=1}^n \alpha_i \tilde{f}_i(x_i) - \min \tilde{P}_n(.) \label{eq3.5}
\end{align}
\noindent where $A^n$ and $B^n$ are constant vectors satisfying \eqref{eq3.6} (here $^n$ in $A^n$ and $B^n$ denotes an iteration superscript and should not be confused with exponentiation). If the algorithm calls another stochastic oracle $\tilde{\omega}(x_{n+1})$ at a new point $x_{n+1}$ given by:
\begin{align*}
x_{n+1} &=\arg\min \tilde{P}_n(.)
\end{align*}
\noindent and updates $A$ and $B$ by:
\begin{subequations}\label{eq3.10}
\begin{align}
A^{n+1} &=\left[ \left( 1-\frac{\lambda}{G^2} \hat{U}_{(A^n,B^n)} \right) (A^n)^T , \frac{\lambda}{G^2} \hat{U}_{(A^n,B^n)} \right]^T \\
B^{n+1} &=\left[ \left( 1-\frac{\lambda}{G^2} \hat{U}_{(A^n,B^n)} \right) (B^n)^T , \frac{\lambda}{G^2} \hat{U}_{(A^n,B^n)} \right]^T
\end{align}
\end{subequations}
\noindent where $G=\max ||\bigtriangledown \tilde{f}_i(.)||$, then $\hat{U}_{(A^{n+1},B^{n+1})}$ is bounded by:
\begin{equation}\label{eq3.11}
\hat{U}_{(A^{n+1},B^{n+1})} \leq \hat{U}_{(A^n,B^n)}-\frac{\lambda}{2 G^2} \hat{U}_{(A^n,B^n)}^2
\end{equation}
\end{theorem}
\begin{proof}
First, optimizing and caching all elements of $A^n$ or $B^n$ takes at least $O(n)$ time and space, which is impractical in large-scale problems. So we confine our choices of $A^{n+1}$ and $B^{n+1}$ to the setting:
\begin{subequations}\label{eq3.11.5}
\begin{align}
[\alpha^{n+1}_1,\ldots,\alpha^{n+1}_n]^T &=(1-\alpha^{n+1}_{n+1}) [\alpha^n_1,\ldots,\alpha^n_n]^T\\
[\beta^{n+1}_1,\ldots,\beta^{n+1}_n]^T &=(1-\beta^{n+1}_{n+1}) [\beta^n_1,\ldots,\beta^n_n]^T
\end{align}
\end{subequations}
\noindent such that the previous $\sum_{i=1}^n \alpha^n_i \tilde{f}_i(x_i)$ and $\sum_{i=1}^n \beta^n_i \tilde{p}_i(.)$ can be accumulated across iterations, producing a $1$-memory algorithm instead of an $\infty$-memory one, without violating \eqref{eq3.6}. Consequently $\hat{U}_{(A^{n+1},B^{n+1})}$ can be decomposed into:
\begin{align*}
\hat{U}_{(A^{n+1},B^{n+1})} &= \sum_{i=1}^{n+1} \alpha^{n+1}_i \tilde{f}_i(x_i) - \min \tilde{P}_{n+1}(.) \\
\text{(by \eqref{eq3.3.5}, \eqref{eq3.5.5}, \eqref{eq3.6})} &\leq \sum_{i=1}^{n+1} \alpha^{n+1}_i \tilde{f}_i(x_i) - \left[ \sum_{i=1}^{n+1} \beta^{n+1}_i \tilde{p}_i(\tilde{x}^*_n) - \frac{1}{2 \lambda} ||\bigtriangledown \sum_{i=1}^{n+1} \beta^{n+1}_i \tilde{p}_i(\tilde{x}^*_n)||^2 \right] \\
\text{(by \eqref{eq3.11.5})} &= \left[ (1-\alpha^{n+1}_{n+1}) \sum_{i=1}^{n} \alpha^{n}_i \tilde{f}_i(x_i) - (1-\beta^{n+1}_{n+1}) \sum_{i=1}^{n} \beta^{n}_i \tilde{p}_i(\tilde{x}^*_n) \right]\\
&\quad + \left[ \alpha^{n+1}_{n+1} \tilde{f}_{n+1}(x_{n+1}) - \beta^{n+1}_{n+1} \tilde{p}_{n+1}(\tilde{x}^*_n) \right] + \frac{(\beta^{n+1}_{n+1})^2}{2 \lambda} ||\bigtriangledown \tilde{p}_{n+1}(\tilde{x}^*_n)||^2
\end{align*}
\noindent where $\tilde{x}^*_n=\arg\min \tilde{P}_n(.)$. Setting $\alpha^{n+1}_{n+1}=\beta^{n+1}_{n+1}$ and $x_{n+1}=\tilde{x}^*_n$ eliminates the second term:
\begin{align}
\hat{U}_{(A^{n+1},B^{n+1})} &= (1-\alpha^{n+1}_{n+1}) \left( \sum_{i=1}^{n} \alpha^{n}_i \tilde{f}_i(x_i) - \tilde{P}_n(\tilde{x}^*_n) \right) + \frac{(\alpha^{n+1}_{n+1})^2}{2 \lambda} ||\bigtriangledown \tilde{p}_{n+1}(x_{n+1})||^2 \nonumber \\
\text{(by \eqref{eq3.3.5})} &= (1-\alpha^{n+1}_{n+1}) \hat{U}_{(A^n,B^n)} + \frac{(\alpha^{n+1}_{n+1})^2}{2 \lambda} ||\bigtriangledown \tilde{f}_{n+1}(x_{n+1})||^2 \nonumber \\
\text{($G \geq ||\bigtriangledown \tilde{f}_i(.)||$)} &\leq (1-\alpha^{n+1}_{n+1}) \hat{U}_{(A^n,B^n)} + \frac{(\alpha^{n+1}_{n+1})^2 G^2}{2 \lambda} \label{eq3.12}
\end{align}
Let $u_i=\frac{2 \lambda}{G^2} \hat{U}_{(A^i,B^i)}$; minimizing the right side of \eqref{eq3.12} over $\alpha^{n+1}_{n+1}$ yields:
\begin{equation}\label{eq3.12.5}
\alpha^{n+1}_{n+1} = \arg\min_{\alpha} \{ (1-\alpha) u_n + \alpha^2 \} = \frac{u_n}{2} =\frac{\lambda}{G^2} \hat{U}_{(A^n,B^n)}
\end{equation}
In this case:
\begin{flalign}\label{eq3.13}
&& u_{n+1} &\leq u_n-\frac{u_n^2}{4} &&\\
&\Longrightarrow& \hat{U}_{(A^{n+1},B^{n+1})} &\leq \hat{U}_{(A^n,B^n)}-\frac{\lambda}{2 G^2} \hat{U}^2_{(A^n,B^n)} && \nonumber
\end{flalign}
\end{proof}
Given an arbitrary initial oracle $\tilde{\omega}(x_1)$, applying the updating rule of Theorem \ref{state3.4} recursively results in Algorithm 1; accordingly, we can prove its asymptotic behavior by induction:
\begin{corollary}\label{state3.5}
The final point $y_n$ obtained by applying Algorithm 1 to an arbitrary $\lambda$-strongly convex function $f(.)$ has the following worst-case rate of convergence:
\begin{equation*}
\mathbb{E}[f(y_n)]-\min f(.) \leq \frac{2 G^2}{\lambda (n+3)}
\end{equation*}
\end{corollary}
\begin{proof}
First, by \eqref{eq3.13} we have:
\begin{equation}\label{eq3.14}
\frac{1}{u_{n+1}} \geq \frac{1}{u_n \left( 1-\frac{u_n}{4} \right) } = \frac{1}{u_n} + \frac{1}{4-u_n} \geq \frac{1}{u_n} + \frac{1}{4}
\end{equation}
On the other hand, by strong convexity, for all $x_1 \in \Gamma_f$ we have:
\begin{equation}\label{eq3.15}
f(x_1)-\min f(.) \leq \frac{1}{2 \lambda} ||\bigtriangledown f(x_1)||^2 \leq \frac{G^2}{2 \lambda}
\end{equation}
Setting $\hat{U}_{(A^1,B^1)}=\frac{G^2}{2 \lambda}$ (i.e. $u_1 = 1$) as the initial condition and applying \eqref{eq3.14} recursively yields:
\begin{flalign*}
&& \frac{1}{u_n} &\geq 1+\frac{n-1}{4}=\frac{n+3}{4} && \\
&\Longrightarrow& u_n &\leq \frac{4}{n+3} &&\\
&\Longrightarrow& \hat{U}_{(A^n,B^n)} &\leq \frac{2 G^2}{\lambda(n+3)}\\
&\Longrightarrow& \mathbb{E}[f(y_n)]-\min f(.) &\leq \frac{2 G^2}{\lambda(n+3)} - \mathbb{E} \left[ \frac{\lambda}{2} \sum_{i=1}^{n} \alpha^{n}_i ||x_i-y_n||^2 + \tilde{P}_n(w^*) - \min \tilde{P}_n(.) \right]
\end{flalign*}
\end{proof}
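The induction above can also be checked numerically; the following sketch iterates the worst case of \eqref{eq3.13} and confirms both the $\frac{4}{n+3}$ bound and its asymptotic tightness:
\begin{verbatim}
import numpy as np

u = 1.0                       # u_1 = (2*lam/G^2) * U_1 <= 1
for n in range(1, 10001):
    assert u <= 4.0 / (n + 3)          # claim of Corollary 3.5
    u = u - u ** 2 / 4                 # worst case of (3.13)

print(u * (10001 + 3) / 4)   # -> close to 1: the rate is tight
\end{verbatim}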
This worst-case rate of convergence is four times faster than Epoch-GD ($\frac{8 G^2}{\lambda n}$) \cite{hazan2011epochGD,Juditski2011uniform} or the Cutting-plane/Bundle Method ($\frac{8 G^2}{ \lambda \left[ n+2-\log_{2} \left( \frac{\lambda f(x_1)}{4 G^2} \right) \right] }$) \cite{joachims2006cuttingplane,teo2009bundle,franc2009optimized}, and is faster than SGD ($\frac{\ln(n) G^2}{2 \lambda n}$) \cite{bottou2008SGD,shalev2007pegasos} by a factor that grows without bound.
\section{High Probability Bound}
An immediate result of Corollary \ref{state3.5} is the following high probability bound yielded by Markov inequality:
\begin{equation}\label{eq5.0}
\mathrm{Pr} \left( S(y_n) \geq \frac{2 G^2}{\lambda (n+3) \eta} \right) \leq \eta
\end{equation}
where $1-\eta \in [0,1]$ denotes the confidence that the result $y_n$ reaches the desired suboptimality. In most cases (particularly when $\eta \approx 0$, as demanded by most applications) this bound is very loose and cannot demonstrate the true performance of the proposed algorithm. In this section we derive several high probability bounds that are much less sensitive to small $\eta$ compared to \eqref{eq5.0}.
\begin{corollary}
The final point $y_n$ obtained by applying Algorithm 1 to an arbitrary $\lambda$-strongly convex function $f(.)$ has the following high probability bounds:
\begin{subequations}\label{eq5.0.1}
\begin{align}
\mathrm{Pr} \left( S(y_n) \geq t+\frac{2 G^2}{\lambda (n+3)} \right) &\leq \exp \left\{ -\frac{t^2 (n+2)}{16 D^2 \sigma^2} \right\} \\
\mathrm{Pr} \left( S(y_n) \geq t+\frac{2 G^2}{\lambda (n+3)} \right) &\leq \frac{1}{2} \exp \left\{ - \frac{t^2 (n+2)}{8 \tilde{G}^2 D^2} \right\} \label{eq5.0.1b}\\
\mathrm{Pr} \left( S(y_n) \geq t+\frac{2 G^2}{\lambda (n+3)} \right) &\leq \exp \left\{ -\frac{t (n+2)}{4 \tilde{G} D} \mathrm{ln} \left( 1+ \frac{t \tilde{G}}{2 D \sigma^2} \right) \right\} \label{eq5.0.1c}
\end{align}
\end{subequations}
where the constants $\tilde{G}=\max ||\bigtriangledown \delta_i(.)||$ and $\sigma^2=\max \mathrm{Var} ( \bigtriangledown \tilde{f}_i(.) )$ are the maximal range and variance of each stochastic subgradient, respectively, and $D=\max_{x_1,x_2 \in \Gamma_f} ||x_1-x_2||$ is the diameter of $\Gamma_f$.
\end{corollary}
\begin{proof}
We start by expanding the right side of \eqref{eq3.3.3}: setting $A^n=B^n$ and substituting \eqref{eq3.9} back into \eqref{eq3.3.3} yields:
\begin{align}
S(y_n) &\leq U_{(A^n,A^n)}(\tilde{f}_{1,\ldots,n},x_{1,\ldots,n}) - \sum_{i=1}^{n} \alpha^n_i \langle \bigtriangledown \delta_i(x_i), x_i-w^* \rangle \nonumber \\
\text{(by Corollary \ref{state3.5})} &\leq \frac{2 G^2}{\lambda (n+3)} + \sum_{i=1}^{n} \alpha^n_i r_i \label{eq5.1}
\end{align}
\noindent with each $r_i = -\langle \bigtriangledown \delta_i(x_i), x_i-w^* \rangle$ satisfying:
\begin{align}
\text{(Cauchy's inequality) } -||\bigtriangledown \delta_i(x_i)|| ||x_i-w^*|| &\leq r_i \leq ||\bigtriangledown \delta_i(x_i)|| ||x_i-w^*|| \nonumber \\
-\tilde{G} D &\leq r_i \leq \tilde{G} D \label{eq5.2}
\end{align}
\begin{align}
\mathrm{Var} ( r_i ) &= \mathbb{E}[( \langle \bigtriangledown \delta_i(x_i),x_i-w^* \rangle -\mathbb{E}[\langle \bigtriangledown \delta_i(x_i),x_i-w^* \rangle] )^2] \nonumber \\
\text{(by \eqref{eq3.9.1})} &= \mathbb{E}[( \langle \bigtriangledown \delta_i(x_i),x_i-w^* \rangle )^2] \nonumber \\
\text{(Cauchy's inequality)} &\leq \mathbb{E}[||\bigtriangledown \delta_i(x_i)||^2 ||x_i-w^*||^2] \nonumber \\
&\leq D^2 \mathbb{E}[||\bigtriangledown \delta_i(x_i)||^2] = D^2 \mathrm{Var} \left( \bigtriangledown \tilde{f}_i(x_i) \right) \leq D^2 \sigma^2 \label{eq5.3}
\end{align}
This immediately exposes $S(y_{(A^n,A^n)}(x_{1,\ldots,n}))$ to several inequalities in non-parametric statistics that bound sums of independent random variables:
\begin{subequations}\label{eq5.4}
\begin{align}
\text{(generalized Chernoff bound) } \mathrm{Pr} \left( \sum_{i=1}^{n} \alpha^n_i r_i \geq t \right) &\leq \exp \left\{ -\frac{t^2}{4 \mathrm{Var} \left( \sum_{i=1}^{n} \alpha^n_i r_i \right) } \right\} \nonumber \\
\text{(by \eqref{eq5.3})} &\leq \exp \left\{ -\frac{t^2}{4 D^2 \sigma^2 \sum_{i=1}^{n} (\alpha^n_i)^2} \right\} \\
\text{(Azuma-Hoeffding inequality) } \mathrm{Pr} \left( \sum_{i=1}^{n} \alpha^n_i r_i \geq t \right) &\leq \frac{1}{2} \exp \left\{ - \frac{2 t^2}{\sum_{i=1}^{n} (\alpha^n_i)^2 (\max r_i - \min r_i)^2 } \right\} \nonumber \\
\text{(by \eqref{eq5.2})} &\leq \frac{1}{2} \exp \left\{ - \frac{2 t^2}{4 \tilde{G}^2 D^2 \sum_{i=1}^{n} (\alpha^n_i)^2} \right\} \\
\text{(Bennett inequality) } \mathrm{Pr} \left( \sum_{i=1}^{n} \alpha^n_i r_i \geq t \right) &\leq \exp \left\{ -\frac{t}{2 \max_i |\alpha^n_i r_i|} \mathrm{ln} \left( 1+ \frac{t \max_i |\alpha^n_i r_i|}{\mathrm{Var} \left( \sum_{i=1}^{n} \alpha^n_i r_i \right) } \right) \right\} \nonumber \\
\text{(by \eqref{eq5.2}, \eqref{eq5.3})} &\leq \exp \left\{ -\frac{t}{2 \tilde{G} D \max_i \alpha^n_i} \mathrm{ln} \left( 1+ \frac{t \tilde{G} \max_i \alpha^n_i}{D \sigma^2 \sum_{i=1}^{n} (\alpha^n_i)^2} \right) \right\}
\end{align}
\end{subequations}
In the case of Algorithm 1, if $A^n$ is recursively updated by \eqref{eq3.10}, then every two consecutive $\alpha^n_i$ have the following property:
\begin{flalign}
&& \text{(by \eqref{eq3.10}) } \frac{\alpha^n_{i+1}}{\alpha^n_i} &= \frac{\alpha^{i+1}_{i+1}}{\alpha^{i+1}_i} = \frac{\alpha^{i+1}_{i+1}}{\alpha^i_i (1-\alpha^{i+1}_{i+1})} && \nonumber \\
&& \text{(by \eqref{eq3.12.5}, \eqref{eq3.13})} &=\frac{u_{i-1}-\frac{u_{i-1}^2}{4}}{u_{i-1} \left( 1-\frac{u_{i-1}}{2}+\frac{u_{i-1}^2}{8} \right)} && \nonumber \\
&& \text{($u_{i-1} \leq 1$)} &> 1 && \nonumber \\
&\Longrightarrow& \alpha^n_{i+1} &> \alpha^n_{i} && \nonumber\\
&\Longrightarrow& \max \alpha^n_i &= \alpha^n_n = \frac{u_{n-1}}{2} \leq \frac{2}{n+2} && \label{eq5.5}
\end{flalign}
Accordingly $\sum_{i=1}^{n} (\alpha^n_i)^2$ can be bounded by
\begin{equation}\label{eq5.6}
\sum_{i=1}^{n} (\alpha^n_i)^2 \leq n (\alpha^n_n)^2 \leq \frac{4 n}{(n+2)^2} \leq \frac{4}{n+2}
\end{equation}
Finally, combining \eqref{eq5.1}, \eqref{eq5.4}, \eqref{eq5.5} and \eqref{eq5.6} yields the proposed high probability bounds \eqref{eq5.0.1}.
\end{proof}
By definition $\tilde{G}$ and $\sigma$ are both upper bounded by $G$. And if $\Gamma_f$ is not explicitly specified, by combining the strong convexity condition $\mathbb{B}_{f} (x_1||\arg\min f(.)) = f(x_1)-\min f(.) \geq \frac{\lambda}{2} ||x_1-\arg\min f(.)||^2$ with \eqref{eq3.15}, we can still set
\begin{equation*}
\Gamma_f=\left\{ ||. - x_1||^2 \leq \frac{G^2}{\lambda^2} \right\}
\end{equation*}
\noindent such that $D = \frac{2 G}{\lambda}$, while $\arg\min f(.)$ is always included in $\Gamma_f$. Consequently, even in worst cases \eqref{eq5.0} can be easily superseded by any of \eqref{eq5.0.1}, in which $\eta$ decreases exponentially with $t$ instead of inversely proportionally. In most applications both $\tilde{G}$ and $\sigma$ can be much smaller than $G$, and $\sigma$ can be further reduced if each $\tilde{\omega}(x_i)$ is estimated by averaging over several stochastic oracles provided simultaneously by a parallel/distributed system.
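To see the difference concretely, the following sketch (using the worst-case constants $\tilde{G}=G$ and $D=\frac{2G}{\lambda}$ discussed above; the specific numbers are illustrative) evaluates, at a few values of $t$, the confidence parameter $\eta$ implied by the Markov bound \eqref{eq5.0} and by the Azuma-Hoeffding bound \eqref{eq5.0.1b}; the Markov bound can still be tighter for very small $t$, but the exponential decay dominates as $t$ grows:
\begin{verbatim}
import numpy as np

lam, G, n = 1.0, 1.0, 10_000
G_t, D = G, 2 * G / lam          # worst-case constants
base = 2 * G ** 2 / (lam * (n + 3))

for t in (0.05, 0.1, 0.3, 1.0):
    eta_markov = base / (t + base)   # rearranged form of (5.0)
    eta_azuma = 0.5 * np.exp(-t ** 2 * (n + 2)
                             / (8 * G_t ** 2 * D ** 2))
    print(t, eta_markov, eta_azuma)  # exponential decay wins for large t
\end{verbatim}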
\section{Discussion}
In this article we proposed Algorithm 1, a first-order algorithm for stochastic strongly convex optimization that asymptotically outperforms state-of-the-art algorithms by a factor of four, achieving suboptimality below $S$ using only $\frac{2 G^2}{\lambda S}-3$ iterations and stochastic oracles on average. Theoretically, Algorithm 1 can be generalized to functions that are strongly convex w.r.t. arbitrary norms using the technique proposed in \cite{Juditski2011uniform}, and a slightly different analysis can be used to find optimal methods for strongly smooth (a.k.a. gradient Lipschitz continuous) functions, but we leave these to further investigation. We do not know if this algorithm is optimal and unimprovable, nor do we know if higher-order algorithms can be discovered using a similar analysis. There are several loose ends we may have failed to scrutinize. The most likely one is that we assume:
\begin{equation*}
\max_f S(y) = \max_f \{ f(y)-\min f(.) \} \leq \max_f f(y) - \min_f \min f(.)
\end{equation*}
In fact, no single $f$ attains $\arg\max_f f(y) = \arg\min_f \min f(.)$ for all $y \in \Gamma_f$, so this bound is still far from unimprovable. Another possible one is that we do not know how to bound $\frac{\lambda}{2} \sum_{i=1}^{n} \alpha^n_i ||x_i-y_n||^2$ by optimizing $x_n$ and $\alpha_{n}^{n}$, so it is isolated from \eqref{eq3.5} and never participates in the parameter optimization of \eqref{eq3.12}; it can, however, be decomposed into:
\begin{align*}
\sum_{i=1}^{n+1} \alpha^{n+1}_i ||x_i-y_{n+1}||^2 &= \min \sum_{i=1}^{n+1} \alpha^{n+1}_i ||x_i-.||^2 \\
&= \sum_{i=1}^{n+1} \alpha^{n+1}_i ||x_i-y_n||^2 - \frac{1}{2 \lambda} || \bigtriangledown_{y_n} \left\{ \sum_{i=1}^{n+1} \alpha_i ||x_i-y_n||^2 \right\} ||^2 \\
&= \sum_{i=1}^{n} \alpha^{n+1}_i ||x_i-y_n||^2 + \alpha^{n+1}_{n+1} ||x_{n+1}-y_n||^2 - \frac{(\alpha^{n+1}_{n+1})^2}{2 \lambda} || y_n-x_{n+1} ||^2 \\
&= (1-\alpha^{n+1}_{n+1}) \left[ \sum_{i=1}^{n} \alpha^n_i ||x_i-y_n||^2 \right] + \left[ \alpha^{n+1}_{n+1}-\frac{(\alpha^{n+1}_{n+1})^2}{2 \lambda} \right] || y_n-x_{n+1} ||^2
\end{align*}
\noindent such that $\left[ \alpha^{n+1}_{n+1}-\frac{(\alpha^{n+1}_{n+1})^2}{2 \lambda} \right] || y_n-x_{n+1} ||^2$ can be added to the right side of \eqref{eq3.12}. Unfortunately, we still do not know how to bound it, but it may prove useful in some alternative problem settings (e.g. in optimization of strongly smooth functions).
Most importantly, if $f(.)$ is $\lambda$-strongly convex and each $\tilde{f}_i(.)$ can be revealed completely by each oracle (instead of only its first-order information), then the principle of empirical risk minimization (ERM):
\begin{equation*}
y_n = \arg\min \sum_{i=1}^n \tilde{f}_i(.)
\end{equation*}
\noindent easily outperforms all state-of-the-art stochastic methods by yielding the best known rate of convergence $\frac{\sigma^2}{2 \lambda n}$ \cite{sridharan2008fast}, and is still more than four times faster than Algorithm 1 (though this is already very close for a first-order method). This immediately raises the questions: how do we close this gap? And if first-order methods are not able to do so, how much extra information about each $\tilde{f}_i(.)$ is required to reduce it? We believe that solutions to these long-term problems are vital to the construction of very large scale predictors in computational learning, but we are still far from obtaining any of them.
\section*{Acknowledgements}
We want to thank Dr Francesco Orabona and Professor Nathan Srebro for their critical comments on this manuscript. We also want to thank Maren Sievers, Stefan Rothschuh, Raouf Rusmaully and Li He for their proofreading and/or technical comments on a C++ implementation of this method.
\bibliographystyle{plain}
\section{\uppercase{Introduction}}
\noindent Three-dimensional (3D) face modelling and 3D face reconstruction from monocular images have drawn increasing attention over the last few years. Especially with deep learning methods, 3D face reconstruction models have gained more capacity and better feature extraction ability. However, although the human ear is an important part of the human head, significantly less related literature has been published. Being able to reconstruct 3D ears implies establishing the dense correspondence between 3D ear vertices and 2D ear image pixels, thus enabling further applications such as 3D/2D ear landmark localisation and ear recognition \cite{zhou2017deformable,emervsivc2017ear,emervsivc2019unconstrained}. Furthermore, it can be a vital part of constructing a 3D model of the full human head \cite{dai20203d,ploumpis2020towards}.
\begin{figure}[ht]
\centering
\resizebox{\columnwidth}{!}{
\begin{tikzpicture}
\node (lm_semantics) [inner sep=0pt, anchor=south east, label=below:(1)] at (0,0)
{\includegraphics[width=4cm]{intro_fig/lm_semantics.png}};
\node (render) [right = 0.2cm of lm_semantics, inner sep=0pt, label=below:(2)]
{\includegraphics[width=4cm]{intro_fig/pred_mesh.png}};
\node (landmarks) [right = 0.2cm of render, inner sep=0pt, label=below:(3)]
{\includegraphics[width=4cm]{intro_fig/pred_lm.png}};
\end{tikzpicture}
}
\caption{(1) $55$ landmarks and their semantics from ITWE-A dataset \cite{zhou2017deformable} (2) Rendered densely corresponded coloured 3D ear mesh projected onto the original image (3) Original image marked with predicted landmarks.}
\label{fig:intro}
\end{figure}
Most modern approaches for 3D face or 3D ear reconstruction from monocular images fall into three categories: generation based, regression based and the combination of both \cite{tewari2017mofa}. Generation-based methods require a parametric model of the 3D object together with landmarks, and optimise a set of parameters for the best alignment between the projected 3D model and the 2D landmarks. For 3D ear reconstruction, two such approaches can be found in the literature \cite{dai2018data,zhou2017deformable}. Regression-based methods usually utilise neural networks to regress a parametric model's parameters directly, as proposed by \cite{richardson20163d} for 3D face reconstruction. Generation-based methods are often more computationally costly, due to their non-convex optimisation criteria and the requirement for landmarks. Regression-based methods require ground truth parameters to be provided, which are only accessible when using synthetic data \cite{richardson20163d}; otherwise, other 3D reconstruction algorithms are required to obtain ground truth parameters beforehand \cite{zhu2017face}. Therefore, Tewari \textit{et al}. proposed a self-supervised 3D face reconstruction method named \emph{Model-based Face Autoencoder} (MoFA) that combines both generation and regression based methods. This aims to mitigate the negative aspects of the two categories of method, by using an autoencoder composed of a regression-based encoder and a generation-based decoder \cite{tewari2017mofa}. However, there are no regression-based or autoencoder-structured approaches for 3D ear reconstruction in the literature. Whether this self-supervised autoencoder approach can tackle the complexity of the ear structure remains an open question that we address here.
The core idea of the self-supervised learning approach is to synthesise similar colour images from original colour input images in a differentiable manner. For such an approach, a parametric ear model is needed. Dai \textit{et al}. propose a 3D Morphable Model (3DMM) of the ear, named the York Ear Model (YEM). Its 3D ear mesh has $7111$ vertices, i.e. $21333$ coordinate parameters, reduced to $499$ shape parameters using PCA. However, to enable self-supervised learning, the 3D ear meshes require colour/texture, which is not included in the YEM model.
In this context, we present a Human Ear Reconstruction Autoencoder (HERA) system, with the following contributions:
\begin{itemize}
\item A 3D ear reconstruction method that is completely trained unsupervised using in-the-wild monocular ear colour 2D images.
\item An in-the-wild ear colour model that colours the 3D ear mesh to minimise its difference with the 2D ear image in appearance.
\item Evaluations that demonstrate that the proposed model is able to predict a densely corresponded coloured 3D ear mesh (\textit{e}.\textit{g}. Figure \ref{fig:intro} (2)) and 2D landmarks (\textit{e}.\textit{g}. Figure \ref{fig:intro} (3)).
\end{itemize}
\begin{figure*}[t]
\centering
\resizebox{\textwidth}{!}{
\subimport{}{arch.tex}
}
\caption{Overview of the autoencoder architecture}
\label{fig:arch}
\end{figure*}
\section{\uppercase{Related Work}}
\noindent In this section, we discuss a range of 3D face reconstruction methods that utilise an autoencoder structure to achieve self-supervised learning. The method this paper proposes obtains 3D ear shapes by employing a strong prior provided by an ear 3DMM; thus, the two existing parametric ear models are also discussed. Finally, two methods that report normalised landmark error are discussed, since we evaluate landmark prediction accuracy on the same dataset, using the same metric.
\subsection{Self-supervised Learning for 3D Dense Face Reconstruction}
\noindent The self-supervised learning approach to 3D face reconstruction builds an end-to-end differentiable pipeline that takes the original colour images as input, predicts and reconstructs the 3D face mesh, then uses a differentiable renderer to reconstruct colour images as output. The goal of such a self-supervised learning approach is to minimise the difference between input colour images and output colour images. Several novel 3D face reconstruction approaches have recently been proposed. Improvements include using a face recognition network to contribute to a loss function, using a Generative Adversarial Network (GAN) for texture generation \cite{gecer2019ganfit} and replacing the linear 3DMM structure with a non-linear 3DMM \cite{tran2018nonlinear}. The aim of all of those approaches is to achieve better performance more intuitively, particularly in terms of minimising the appearance difference between generated output images and real input images.
\subsection{In-the-wild Ear Image Dataset}
\noindent There are numerous in-the-wild ear image datasets built for various purposes; here we focus on Collection A from the \emph{In-the-wild Ear Database} (ITWE-A) since it has $55$ manually-marked landmarks. All the landmarks have semantic meaning, as shown in Figure \ref{fig:intro} (1). This dataset contains $500$ images in its training set and $105$ images in its test set, where each image is captured in-the-wild and contains a clear ear. The dataset has a large variation in ear colours, as is the nature of in-the-wild images, and it even contains several grayscale images. Traditional 3DMM colour models, such as that of the Basel Face Model 09 (BFM09) \cite{blanz1999morphable}, often fail to generate a highly-similar appearance to the input. However, the in-the-wild ear colour model proposed here can cover such colour variance, since it models directly from the in-the-wild images themselves.
\subsection{Parametric Ear Models}
\noindent Zhou and Zaferiou build their parametric ear model using an Active Appearance Model (AAM), which is a linear model that aims to model the 2D ear's shape and colour simultaneously \cite{cootes1998active}. A 3D Morphable Model (3DMM) is a closely-related model that models objects' shapes and colours in 3D instead of 2D. Blanz and Vetter first proposed a 3D Morphable Model (3DMM) for human faces \cite{blanz1999morphable}, which builds a linear system that allows different 3D face meshes to be described by $199$ shape parameters. Similarly, Dai \textit{et al}. \cite{dai2018data} proposed a 3D morphable model for the human ear named the York Ear Model (YEM), also based on a linear system, but with $499$ parameters. Here, we utilise this ear 3DMM for its strong 3D ear shape prior. Meanwhile, the reduced dimension of the parameters allows the neural network to perform a much easier regression task using $499$ shape parameters rather than $21333$ raw vertex parameters.
\subsection{2D Ear Detection}
\noindent Ear detection or localisation in 2D images aims to find the region of interest bounding the ear, from images of the human head that contain ears; for example, profile-view portraits. It is a vital preprocessing step in the 3D ear reconstruction pipeline. Object detection has been studied for decades and there exists a number of algorithms that specifically perform the 2D ear detection task. Zhou and Zaferiou \cite{zhou2017deformable} use the histogram of oriented gradients with a support vector machine (HoG+SVM) to predict a rectangular region of interest. Emer\v{s}i\v{c} \textit{et al}. \cite{emervsivc2017pixel} and Bizjak \textit{et al}. \cite{bizjak2019mask} propose deep learning methods to tackle the 2D ear detection task by predicting a pixel-level segmentation of the 2D ear image directly.
\subsection{2D Ear Landmark Localisation}
\noindent 2D ear landmark localisation is a task for finding specific key points on 2D ear images. It is an intuitive method of quantitative evaluation for this work, since the shape and alignment of the reconstructed 3D ear mesh can be evaluated precisely. In 2D face landmark localisation, numerous approaches obtain 2D landmarks by reconstructing 3D models first \cite{zhu2017face,liu2016joint,mcdonagh2016joint}. Being able to achieve competitive results against a specialised 2D landmark predictor is necessary for the success of a 3D dense ear reconstruction algorithm. Zhou and Zaferiou's approach comes with the ITWE-A dataset and is considered as a baseline. They use Scale Invariant Feature Transform (SIFT) features and an AAM model to predict 2D landmarks \cite{zhou2017deformable}. Hansley and Segundo \cite{hansley2018employing} propose a CNN-based approach to regress 2D landmarks directly, and they also evaluate on the ITWE-A dataset. Their approach proposes two CNNs that both predict the same set of landmarks but with different strengths. The first CNN has better generalisation ability across different ear poses, and its resulting landmarks are used to normalise the ear image. The second CNN then predicts improved landmarks on the normalised ear images.
\section{\uppercase{The HERA system}}
\noindent Our proposed Human Ear Reconstruction Autoencoder (HERA) system employs an autoencoder structure that takes ear images as input and generates synthetic images. It is trained by minimising the difference between the input images and the final synthesised images. An illustration of our end-to-end architecture is shown in Figure \ref{fig:arch}. The encoder is a CNN predicting intermediate code vectors that are then fed to the decoder, where coloured 3D ear meshes are reconstructed and rendered into 2D images.
The decoder comprises: \emph{(1)} the YEM ear shape model and our in-the-wild ear colour model, which reconstruct ear shapes and ear colours respectively; \emph{(2)} PyTorch3D \cite{ravi2020pytorch3d}, which renders images with ear shapes and colours in a differentiable way. The comparison of the input and synthesised images is implemented by a combination of loss functions and regularisers. The essential loss function is a photometric loss, with an additional landmark loss that can be included for both faster convergence and better accuracy. The whole autoencoder structure is designed to be differentiable and so can be trained in an end-to-end manner. Each part of the architecture (\textit{i}.\textit{e}. encoder CNN, ear 3DMM, scaled orthogonal projection and loss functions) is differentiable by default; thus, using a differentiable renderer to render 3D meshes to 2D images makes the whole architecture differentiable. The core part of the decoder is described in Section \ref{sec:ear-3dmm}. The whole end-to-end trainable architecture and the necessary training methods are then described in Section \ref{sec:ear-ae}.
\subsection{Ear 3D Morphable Model Preliminaries}
\label{sec:ear-3dmm}
\noindent This section describes the 3DMM part of the decoder, which comprises an ear shape model derived from the YEM, an ear colour model, and the projection model. With this 3DMM, the shape parameters $\bm{\beta}_{s}$ can be reconstructed into a 3D ear vertex coordinate vector $\bm{S} \in \mathbb{R}^{N\times3}$, where $N$ is the number of vertices in a single 3D ear mesh. The colour parameters $\bm{\alpha}_{c}$ are then reconstructed into a vertex colour vector $\bm{C} \in \mathbb{R}^{N\times3}$ to colour each vertex. The pose parameters $\bm{p}$ are used in the projection model that aligns 3D ear meshes with 2D ears' pixels.
\subsubsection{Ear Shape Model}
\noindent We employ the YEM model \cite{dai2018data}, which supplies the geometric information necessary for reconstruction. It is constructed using PCA from $500$ 3D ear meshes and thus provides a strong statistical prior. The 3D ear vertex coordinate vector (\textit{i}.\textit{e}. 3D ear shape) $\bm{S}$ is reconstructed from the shape parameter vector $\bm{\beta}_{s}$ by:
\begin{equation}
\label{eq:shape-3dmm}
\bm{S} = \hat{S}\left(\bm{\beta}_{s}\right) = \Bar{\bm{S}} + \bm{U}_s\bm{\beta}_{s},
\end{equation}
where $\Bar{\bm{S}} \in \mathbb{R}^{3N}$ is the mean ear shape and $\bm{U}_s \in \mathbb{R}^{3N \times 499}$ is the matrix of ear shape variation components; the resulting vector is rearranged into an $N \times 3$ matrix, where each row represents a vertex coordinate in 3D space.
The projection model employed is the scaled orthogonal projection ($SOP$) projecting 3D shape to 2D. Given the 3D ear shape $\bm{S}$ from Equation \ref{eq:shape-3dmm}, the projection function, $\hat{V}$, is defined as:
\begin{equation}
\label{eq:sop}
\bm{V} = \hat{V}\left( \bm{S}, \bm{p} \right) = f \bm{P}_{o} \hat{R}\left(\bm{r}\right) \bm{S} + \bm{T},
\end{equation}
where $\bm{P}_{o} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$ is the orthogonal projection matrix, $\bm{V} \in \mathbb{R}^{N \times 2}$ are the projected 2D ear vertices and $\hat{R}\left(\bm{r}\right)$ is the function that returns the rotation matrix. Since scaled-orthogonal projection is used, $\bm{V}$ provides sufficient geometric information for the differentiable renderer and no additional camera parameters are needed.
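A minimal NumPy sketch of this projection model is given below; the Euler-angle convention for $\hat{R}(\bm{r})$ is our assumption, as only the azimuth/elevation/roll parameterisation is specified:
\begin{verbatim}
import numpy as np

def rotation_matrix(r):
    # r = (azimuth, elevation, roll); ZYX composition assumed
    az, el, ro = r
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0,           0,          1]])
    Ry = np.array([[ np.cos(el), 0, np.sin(el)],
                   [ 0,          1, 0         ],
                   [-np.sin(el), 0, np.cos(el)]])
    Rx = np.array([[1, 0,           0          ],
                   [0, np.cos(ro), -np.sin(ro)],
                   [0, np.sin(ro),  np.cos(ro)]])
    return Rz @ Ry @ Rx

def sop(S, f, r, T):
    # V = f * P_o * R(r) * S + T for row-stacked vertices S: (N, 3)
    P_o = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
    return f * (S @ rotation_matrix(r).T) @ P_o.T + np.asarray(T)
\end{verbatim}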
In addition, 2D landmarks can be extracted from the projected vertices $\bm{V}$ by manually selecting $55$ semantically corresponding vertices. Thus we can define a vector of 2D landmarks of a projected ear shape $\bm{V}$ as:
\begin{equation}
\label{eq:lm-selecting}
\bm{X}_{i} = \bm{V}\left( \bm{L} \right),
\end{equation}
where $\bm{X}_{i} \in \mathbb{R}^{55 \times 2}$ are the landmark's x and y coordinates indexed by $\bm{L}$ in the projected ear vertices $\bm{V}$.
\subsubsection{In-the-wild Ear Colour Model}
\noindent The YEM model contains an ear shape model only. However, the decoder in our architecture requires the 3D ear meshes to be coloured to generate plausible synthetic ear images. To solve this problem, we build an in-the-wild ear colour model using PCA whitening.
Firstly, for each of the $500$ images from the training set of the ITWE-A dataset, a set of whitened ear shape model parameters $\bm{\alpha}_{s}$ and an ear pose $\bm{p}$ are fitted using a non-linear optimiser to minimise 2D landmark distances. Using the reconstruction Equations \ref{eq:pca-whitening} $\sim$ \ref{eq:lm-selecting}, the optimisation criterion $E_{0}$ can be formed as follows:
\begin{gather}
\hat{X}\left( \bm{\alpha}_{s}, \bm{p} \right) = \hat{V}\left( \hat{S}\left( \hat{\alpha}\left(\bm{\alpha}_{s} \right) \right), \bm{p} \right), \\
E_{0}\left( \bm{\alpha}_{s}, \bm{p}, \bm{X}_{gt} \right) = \frac{1}{N_{L}}\left\| \left( \hat{X}\left( \bm{\alpha}_{s}, \bm{p} \right) \right)\left( \bm{L} \right) - \bm{X}_{gt}\right\|_{2},
\end{gather}
where $\hat{X}$ is the whole reconstruction and projection function, $N_{L} = 55$ is a constant representing the number of landmarks and $\bm{X}_{gt} \in \mathbb{R}^{55 \times 2}$ is the ground truth 2D landmarks provided by the ITWE-A dataset.
After the shapes are fitted, the colour for each vertex is obtained by selecting the corresponding 2D pixel colour. This process results in $500$ vertex colour vectors, which are then used to build the in-the-wild ear colour model using PCA whitening. The vertex colour vectors are parameterised by $40$ parameters, covering $86.6\%$ of the colour variation. The reconstruction coverage rate is not proportional to the quality of the model, since setting a moderate coverage rate can implicitly ignore some occlusions (\textit{e}.\textit{g}. hair and ear piercings). This colour model is shown in Figure \ref{fig:colour_model}.
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{
\subimport{}{colour_model.tex}
}
\caption{In-the-wild Ear Colour Model. The mean colour and first 5 parameters $\pm$ standard deviations (SD) are shown. The mean 3D ear mesh is used.}
\label{fig:colour_model}
\end{figure}
The reconstruction of the vertex colour vector $\bm{C}$ is:
\begin{equation}
\bm{C} = \hat{C}\left( \bm{\alpha}_{c} \right) = \Bar{\bm{C}} + \bm{U}_{c}\bm{\alpha}_c,
\end{equation}
where $\bm{\alpha}_{c} \in \mathbb{R}^{40 \times 1}$ is the colour parameter vector, $\Bar{\bm{C}}$ is the average vertex colour vector and $\bm{U}_{c}$ is the vertex colour variation component matrix; both are calculated by the PCA whitening algorithm.
\subsection{Intermediate Code Vector}
\noindent The intermediate code vector
\begin{equation}
\bm{v} = \left\{ \bm{p}, \bm{\alpha}_{s}, \bm{\alpha}_{c} \right\}
\end{equation}
connects the encoder and the decoder and has semantic meaning, where
\begin{equation}
\bm{p} = \left\{ \bm{r}, \bm{T}, f \right\}
\end{equation}
defines the pose of the 3D ear mesh. $\bm{r} \in \mathbb{R}^{3}$ contains the azimuth, elevation and roll, which map to the rotation matrix through the function $\hat{R}\left(\bm{r}\right) : \mathbb{R}^{3} \rightarrow \mathbb{R}^{3 \times 3}$. $\bm{T} \in \mathbb{R}^{2\times1}$ defines the translation along the X-axis and Y-axis. Translation along the Z-axis is not necessary since scaled orthogonal projection is used. $f$ is a scalar that defines the 3D mesh's scale. $\bm{\alpha}_{s} \in \mathbb{R}^{40 \times 1}$ are the PCA whitened shape parameters, which are recovered to the shape parameters $\bm{\beta}_{s} \in \mathbb{R}^{499 \times 1}$ and then processed by the YEM 3DMM. $\bm{\alpha}_{c} \in \mathbb{R}^{40 \times 1}$ are the colour parameters for the in-the-wild ear colour model built in this paper.
\subsection{PCA Whitening}
\noindent To ease the optimisation process during training, we use PCA whitening to transform the YEM ear model parameters into a format that is more favourable for deep learning frameworks. Firstly, the variances of the parameters differ over a very large scale, from $8 \times 10^{3}$ for the most significant parameter to $5 \times 10^{-7}$ for the least important parameter; it is difficult to train a neural network to effectively regress data with such variance. Secondly, the large number of parameters slows the network's training and worsens the optimisation process. This could be mitigated by trimming out a portion of the less important parameters, but this risks losing the shape and colour information carried by the trimmed part. To overcome this, we perform PCA whitening \cite{kessy2018optimal} over the full set of parameters. PCA whitening generates zero-mean, unit-variance parameters with reduced dimensions. In our experiment, YEM's original parameters $\bm{\beta}_{s}$ of $499$ dimensions are transformed to $\bm{\alpha}_{s}$ of $40$ dimensions while covering $98.1\%$ of the variance associated with the original parameters. Each original parameter vector $\bm{\beta}_{s}$ can be recovered from $\bm{\alpha}_{s}$ by:
\begin{equation}
\label{eq:pca-whitening}
\bm{\beta}_{s} = \hat{\alpha}\left( \bm{\alpha}_{s} \right) = \bm{U}_{w}\bm{\alpha}_{s},
\end{equation}
where $\bm{U}_{w} \in \mathbb{R}^{499 \times 40}$ is a constant matrix of variation components calculated by the PCA whitening procedure. The original parameters' mean is not added back since the original parameters are zero-mean already.
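A sketch of this whitening step is shown below, assuming the zero-mean training parameter vectors are stacked row-wise in a matrix $B$ (the variable names are ours, and the top-$k$ eigenvalues are assumed positive):
\begin{verbatim}
import numpy as np

def fit_pca_whitening(B, k=40):
    # B: (n_samples, 499) zero-mean parameter vectors.
    # Returns U_w (499, k) with beta ~= U_w @ alpha, and the
    # whitening map W (499, k) such that alpha = beta @ W has
    # (approximately) unit variance per dimension.
    cov = B.T @ B / B.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:k]     # top-k components
    V, s = eigvec[:, order], eigval[order]
    U_w = V * np.sqrt(s)                     # de-whitening matrix
    W = V / np.sqrt(s)                       # whitening matrix
    return U_w, W
\end{verbatim}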
\subsection{Ear Autoencoder}
\label{sec:ear-ae}
\noindent We now combine the intermediate code vector and the decoder components described in previous sections with the encoder, the differentiable renderer and the loss functions to build the end-to-end autoencoder. As illustrated in Figure \ref{fig:arch}, the resulting self-supervised architecture consists of an encoder, an intermediate code vector, the decoder components, the differentiable renderer and the loss for back-propagation.
The encoder is an 18-layer residual network (ResNet-18), a CNN that performs well on regression from image data \cite{he2016deep}. We use PyTorch3D \cite{ravi2020pytorch3d}, a differentiable image renderer developed using PyTorch \cite{paszke2019pytorch}; it is a differentiable function that maps a vertex coordinate vector and a vertex colour vector to a 2D image. The encoder $Q$ and decoder $W$ can be formed as follows:
\begin{gather}
\bm{v}_{pred} = Q\left(\bm{I}_{in}, \bm{\theta}\right),\\
\bm{S}_{pred}^{T}, \bm{C}_{pred} = W\left(\bm{v}_{pred}\right),\\
\bm{I}_{pred} = Render\left(\bm{S}_{pred}^{T}, \bm{C}_{pred}\right),\\
\bm{X}_{pred} = \bm{S}_{pred}^{T}\left(\bm{L}\right),
\end{gather}
where $\bm{I}_{in}$ is the input image and $\bm{\theta}$ are the weights of the encoder network $Q$. In the decoder $W$, the predicted 3D mesh (\textit{i}.\textit{e}. the posed shape $\bm{S}_{pred}^{T}$ and colour $\bm{C}_{pred}$) is reconstructed from the predicted intermediate code vector $\bm{v}_{pred}$. The reconstructed 3D mesh is then fed to the differentiable image renderer to obtain the rendered image $\bm{I}_{pred}$. $\bm{L}$ indexes the x and y coordinates of the 55 ear landmarks in the ear shape, so the predicted landmarks $\bm{X}_{pred} \in \mathbb{R}^{55 \times 2}$ are derived by indexing the predicted 3D ear shape with $\bm{L}$. The encoder ResNet-18 is initialised with weights pre-trained on ImageNet \cite{deng2009imagenet}. The trained encoder network can then be used for shape and colour parameter regression.
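A plausible PyTorch sketch of the encoder head is shown below; the paper fixes ResNet-18 and the $86$-dimensional code vector $\{\bm{r}, \bm{T}, f, \bm{\alpha}_{s}, \bm{\alpha}_{c}\}$, while the exact output-splitting layout (and the torchvision weight-loading API, which varies by version) is our assumption:
\begin{verbatim}
import torch
import torchvision

class EarEncoder(torch.nn.Module):
    # ResNet-18 regressing the intermediate code vector:
    # r (3) + T (2) + f (1) + alpha_s (40) + alpha_c (40) = 86
    def __init__(self):
        super().__init__()
        self.backbone = torchvision.models.resnet18(pretrained=True)
        self.backbone.fc = torch.nn.Linear(512, 86)

    def forward(self, image):
        v = self.backbone(image)
        r, T, f = v[:, 0:3], v[:, 3:5], v[:, 5:6]
        alpha_s, alpha_c = v[:, 6:46], v[:, 46:86]
        return r, T, f, alpha_s, alpha_c
\end{verbatim}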
\subsubsection{Loss Function}
\label{sec:loss-functions}
\noindent Our loss function follows the common design of loss functions in differentiable-renderer-based self-supervised 3D reconstruction approaches. The proposed loss function is a combination of four weighted terms:
\begin{multline}
E_{loss} = \lambda_{pix}E_{pix}\left( \bm{I}_{in} \right) + \lambda_{lm}E_{lm}\left( \bm{I}_{in}, \bm{X}_{gt}\right)
\\ + \lambda_{reg1}E_{reg1}\left( \bm{I}_{in} \right) + \lambda_{reg2}E_{reg2}\left( \bm{I}_{in} \right),
\end{multline}
where $\lambda_{i}$ are the weights for the losses $E_{i}$.
\paragraph{Pixel Loss}
The core idea of the self-supervised architecture is that the model generates synthetic images from input images, which are then compared with the input images. To form this comparison, the Mean Square Error (MSE) is used over all pixels:
\begin{equation}
E_{pix}\left( \bm{I}_{in} \right) = L_{MSE}\left( Render\left(W \left( Q\left( \bm{I}_{in}, \bm{\theta} \right) \right) \right), \bm{I}_{in} \right),
\end{equation}
where $L_{MSE}$ is a function that calculates the mean square error. A pixel mask is used so that only the rendered ear region is compared, since the rendered ear images have no background.
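A minimal sketch of this masked photometric loss is given below, assuming the renderer also returns a foreground mask (\textit{e}.\textit{g}. the alpha channel produced by PyTorch3D):
\begin{verbatim}
import torch

def pixel_loss(rendered, target, mask):
    # rendered, target: (B, 3, H, W); mask: (B, 1, H, W), 1 where
    # the rendered mesh covers a pixel. MSE over the ear region only.
    diff = (rendered - target) ** 2 * mask
    return diff.sum() / (3 * mask.sum().clamp(min=1))
\end{verbatim}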
\paragraph{Landmark Loss}
The optional landmark loss is used to speed up the training process and help the network learn 3D ears with better accuracy. Zhou and Zaferiou \cite{zhou2017deformable} propose the mean normalised landmark distance error as their shape model evaluation metric. Here, we employ it as a part of the loss function.
It can be formed as:
\begin{multline}
\label{eq:landmark-loss}
E_{lm}\left( \bm{I}_{in}, \bm{X}_{gt}\right) = \frac{\left\| \left(W \left( Q\left( \bm{I}_{in}, \bm{\theta} \right) \right) \right) \left( \bm{L} \right) - \bm{X}_{gt} \right\|_{2}}
{D_N\left( \bm{X}_{gt} \right)N_{L}}
\end{multline}
where $\bm{X}_{gt}$ are the ground truth landmarks and $D_N\left( \bm{X}_{gt} \right)$ is a function that returns the diagonal pixel length of the ground truth landmarks' bounding box. Since this loss is optional, setting $\lambda_{lm} = 0$ enables the whole model to be trained on 2D image data $\bm{I}_{in}$ only, making the use of very large-scale unlabelled training data possible.
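Equation \eqref{eq:landmark-loss} translates directly into a few lines of PyTorch; the batched formulation below is a sketch with our own variable names:
\begin{verbatim}
import torch

def landmark_loss(pred_lm, gt_lm):
    # pred_lm, gt_lm: (B, 55, 2); mean landmark distance normalised
    # by the diagonal of the ground-truth landmark bounding box.
    box = gt_lm.max(dim=1).values - gt_lm.min(dim=1).values
    diag = box.norm(dim=1)                       # (B,)
    dist = (pred_lm - gt_lm).norm(dim=2).mean(dim=1)
    return (dist / diag).mean()
\end{verbatim}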
\paragraph{Regularisers}
Two regularisers are used to constrain the learning process and are weighted separately. The first regulariser is the statistical plausibility regulariser.
The regulariser is formed by:
\begin{equation}
E_{reg1}\left( \bm{I}_{in} \right) = \sum_{j=1}^{40}\bm{\alpha}_{sj}^2 + \sum_{j=1}^{40}\bm{\alpha}_{cj}^2,
\end{equation}
where $\bm{\alpha}_{s}$ and $\bm{\alpha}_{c}$ are the ear shape and colour parameters predicted by the encoder network. Since the parameters are whitened, this penalises the squared Mahalanobis distance from the mean shape and colour.
During our experiments, we found that an additional restriction on the scale parameter $f$ has to be applied for the model to be successfully trained without landmarks. The restriction is formed by:
\begin{equation}
E_{reg2}\left( \bm{I}_{in} \right) =
\begin{cases}
\left(0.5 - f\right)^{2} & \text{if } f < 0.5\\
\left(f - 1.5\right)^{2} & \text{if } f > 1.5\\
0 & \text{otherwise}
\end{cases},
\end{equation}
We employed two sets of weights, $\lambda$, depending on whether or not the landmark loss is used during training.
\begin{itemize}
\item Training with landmarks: $\lambda_{pix} = 10$, $\lambda_{lm} = 1$, $\lambda_{reg1} = 5\times10^{-2}$ and $\lambda_{reg2} = 0$
\item Training without landmarks: $\lambda_{pix} = 2$, $\lambda_{lm} = 0$, $\lambda_{reg1} = 5\times10^{-2}$ and $\lambda_{reg2} = 100$
\end{itemize}
\subsubsection{Dataset Augmentation}
\noindent Since the ITWE-A dataset used to train our model contains only $500$ landmarked ear images, with limited variation in ear rotation, we perform data augmentation on the original dataset. The ear direction of a 2D ear image is defined by a 2D vector from one of the ear lobe landmark points to one of the ear helix landmark points. For each 2D ear image, $12$ random rotations around its central point are applied, such that the angles between the ear directions and the Y-axis of the original image are uniformly distributed between $-60^{\circ}$ and $60^{\circ}$. The augmented ear image dataset contains $6,000$ images in total. With this augmentation, we find that test set landmark error drops significantly.
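A sketch of this augmentation step, rotating an image and its landmarks about the image centre, is given below (the resampling and cropping details are our assumptions):
\begin{verbatim}
import numpy as np
from PIL import Image

def rotate_sample(image, landmarks, angle_deg):
    # image: PIL.Image; landmarks: (55, 2) pixel coordinates.
    w, h = image.size
    c = np.array([w / 2.0, h / 2.0])
    a = np.deg2rad(angle_deg)
    # PIL rotates counter-clockwise on screen; with y pointing down
    # the matching landmark map is p' = c + (p - c) @ R with:
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    rotated = image.rotate(angle_deg, center=(w / 2, h / 2))
    return rotated, (landmarks - c) @ R + c
\end{verbatim}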
\section{\uppercase{Results}}
\noindent In this section, both quantitative and qualitative evaluation results are discussed. The quantitative evaluation focuses on comparing landmark fitting accuracy across different approaches, while the qualitative evaluation focuses on the visual results of the 3D ear reconstruction algorithm. Furthermore, an ablation study is conducted to analyse the improvements contributed by the various optimisations proposed in this work, including the PCA whitening of the YEM model parameters, the statistical plausibility regulariser and the dataset augmentation. The abbreviation Human Ear Reconstruction Autoencoder (HERA) is used to represent the final version of this work.
\subsection{Quantitative Evaluations}
\noindent The main quantitative evaluation metric applied is the mean normalised landmark distance error proposed by \cite{zhou2017deformable}, formed in Equation \ref{eq:landmark-loss}, which also forms the landmark loss that trains our system. Projecting the 3D ear meshes' key points to 2D and comparing them with the ground truth assesses the accuracy of the 3D reconstruction. There are two approaches in the literature that predict the same set of landmarks using the same dataset, so comparisons can be formed. Zhou \& Zaferiou's work \cite{zhou2017deformable} is considered a baseline solution, and Hansley \& Segundo's work \cite{hansley2018employing} is a specifically designed 2D landmark localisation algorithm with the lowest landmark error in the literature. To interpret the landmark error, it is stated that for an acceptable prediction of landmarks, the mean normalised landmark distance error has to be below $0.1$ \cite{zhou2017deformable}. This is a dimensionless metric: the ratio of the mean Euclidean pixel error to the diagonal length of the ear bounding box.
As stated in Section \ref{sec:loss-functions}, HERA can be trained without landmarks or data augmentation in a self-supervised manner. The HERA variant that uses no landmark loss during training and trains on the original $500$ ear images is named HERA-W/O-AUG-LM.
Our HERA system is now compared with Zhou \& Zaferiou's and Hansley \& Segundo's work regarding the normalised landmark error's mean, standard deviation, median and cumulative error distribution (CED) curve, evaluated on the test set of ITWE-A, which contains $105$ ear images. The numerical results are shown in Table \ref{tab:quant-res} and the CED curves are shown in Figure \ref{fig:quant-ced_curve}. Additionally, the percentages of predictions with error below $0.1$ and $0.06$ are given in Table \ref{tab:quant-res}.
\begin{table}
\caption{Normalised landmark distance error statistics on ITWE-A.}
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|c|}
\hline
Method & mean $\pm$ std & median & $\leq 0.1$ & $\leq 0.06$\\
\hline\hline
Zhou \& Zaferiou & $0.0522 \pm 0.024$ & $0.0453$ & $95\%$ & $78\%$ \\
HERA & $\bm{0.0398 \pm 0.009}$ & $\bm{0.0391}$ & $\bm{100\%}$ & $\bm{96.2\%}$ \\
HERA-W/O-AUG-LM & $0.0591 \pm 0.014$ & $0.0567$ & $99\%$ & $64.7\%$ \\
\hline
\end{tabular}
}
\end{center}
\label{tab:quant-res}
\end{table}
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{
\begin{tikzpicture}
\begin{axis}[
xlabel={Landmark error},
ylabel={Percentage of images},
xmin=0, xmax=0.1,
ymin=0, ymax=1.0,
xtick={0,0.01,0.02,0.03,0.04,0.05,0.06,0.07,0.08,0.09,0.10},
ytick={0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0},
x tick label style={/pgf/number format/fixed, font=\footnotesize},
ymajorgrids=true,
grid style={dashed,gray},
grid=both,
legend style={font=\tiny},
legend pos=south east,
]
\addplot[
color=blue,
mark=square,
]
coordinates {
(0.0,0)(0.01,0)(0.02,0)(0.03,0.133)(0.04,0.543)(0.05,0.83)(0.06,0.962)(0.07,1.0)(0.08,1.0)(0.09,1.0)(0.10,1.0)
};
\addplot[
color=black,
mark=triangle,
]
coordinates {
(0.0,0)(0.01,0)(0.02,0.03)(0.03,0.19)(0.04,0.51)(0.05,0.82)(0.06,0.93)(0.07,0.98)(0.08,1.0)(0.09,1.0)(0.10,1.0)
};
\addplot[
color=red,
mark=o,
]
coordinates {
(0.0,0)(0.01,0)(0.02,0.02)(0.03,0.07)(0.04,0.33)(0.05,0.59)(0.06,0.78)(0.07,0.85)(0.08,0.89)(0.09,0.94)(0.10,0.95)
};
\addplot[
color=green,
mark=square,
]
coordinates {
(0.0,0)(0.01,0)(0.02,0.00)(0.03,0.00)(0.04,0.04)(0.05,0.29)(0.06,0.65)(0.07,0.81)(0.08,0.91)(0.09,0.95)(0.10,0.99)
};
\legend{HERA, Hansley \& Segundo, Zhou \& Zaferiou, HERA-W/O-AUG-LM}
\end{axis}
\end{tikzpicture}
}
\caption{Cumulative error distribution curve comparison among different landmark detection algorithms and our work.}
\label{fig:quant-ced_curve}
\end{figure}
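The CED curves are obtained from the per-image errors by simple thresholding; a minimal sketch of this bookkeeping (our own) reads:
\begin{verbatim}
import numpy as np

def ced(errors, thresholds=np.arange(0.0, 0.101, 0.01)):
    errors = np.asarray(errors)
    # fraction of test images with error at most t, per threshold t
    return [(errors <= t).mean() for t in thresholds]
\end{verbatim}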
From Table \ref{tab:quant-res} and Figure \ref{fig:quant-ced_curve}, it can be concluded that HERA outperforms Zhou \& Zaferiou's work by a large margin on the 2D landmark localisation task. When compared with Hansley \& Segundo's 2D landmark localisation work, similar results are obtained. This is considered acceptable when comparing a 3D reconstruction algorithm with a dedicated 2D landmark localisation algorithm: Hansley \& Segundo's landmark localiser comprises two specifically designed CNNs for landmark regression, while HERA uses only one CNN to regress a richer set of information (\textit{i}.\textit{e}. pose, 3D model parameters and colour parameters). Regarding the threshold of $0.1$ proposed by \cite{zhou2017deformable}, both HERA and Hansley \& Segundo's work are $100\%$ below $0.1$, and HERA trained without landmarks achieves $99\%$ below $0.1$. The CED curves show that, although HERA-W/O-AUG-LM performs worse than Zhou \& Zaferiou's work in the error region below roughly $0.077$, its performance is better at the $0.1$ error point. In other words, HERA-W/O-AUG-LM predicts landmarks with less than $0.1$ error more consistently than the baseline.
\subsection{Qualitative Evaluations}
\noindent The qualitative evaluations of this work focus on visually showing the 3D reconstruction results on ITWE-A's test set. In Figure \ref{fig:qual-example-render-and-lm}, predictions for three images with large colour variation are shown; the top row shows that the 2D landmark predictions look reasonable. The comparison between the top row and the bottom row shows that the reconstructed 3D meshes are geometrically reasonable, while the in-the-wild colour model can reconstruct a large variety of in-the-wild ear colours, even from grayscale images.
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{
\begin{tikzpicture}
\node (0_lm) [inner sep=0pt, anchor=south east] at (0,0)
{\includegraphics[width=4cm]{qual_predictions/best_lm.png}};
\node (0_render) [below = 0.2cm of 0_lm, inner sep=0pt]
{\includegraphics[width=4cm]{qual_predictions/best_render.png}};
\node (1_lm) [right = 0.2cm of 0_lm, inner sep=0pt]
{\includegraphics[width=4cm]{qual_predictions/96_lm.png}};
\node (1_render) [below = 0.2cm of 1_lm, inner sep=0pt]
{\includegraphics[width=4cm]{qual_predictions/96_render.png}};
\node (2_lm) [right = 0.2cm of 1_lm, inner sep=0pt]
{\includegraphics[width=4cm]{qual_predictions/82_lm.png}};
\node (2_render) [below = 0.2cm of 2_lm, inner sep=0pt]
{\includegraphics[width=4cm]{qual_predictions/82_render.png}};
\end{tikzpicture}
}
\caption{Test set prediction results with different ear colours. Top row: original ear images marked with predicted 2D landmarks. Bottom row: predicted 3D ear meshes projected onto original ear images.}
\label{fig:qual-example-render-and-lm}
\end{figure}
In Figure \ref{fig:qual-poses}, two images with different head poses are selected for 3D ear reconstruction. The top row shows the results for a near-ideal head pose (\textit{i}.\textit{e}. a near-profile face) and the bottom row shows the results for a head pose deviating strongly from the ideal (\textit{i}.\textit{e}. front facing, tilted head). The figure shows that HERA works well across different head poses. For the front-facing image, the model predicts the correct horizontal rotation rather than narrowing the 3D ear mesh's width to match the 2D image.
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{
\begin{tikzpicture}
\node (small_pose_orig) [inner sep=0pt, anchor=south east] at (0,0)
{\includegraphics[width=4cm]{poses/test_0078.png}};
\node (small_pose_mesh) [right = 0.2cm of small_pose_orig, inner sep=0pt]
{\includegraphics[width=4cm]{poses/78_mesh.png}};
\node (small_pose_lm) [right = 0.2cm of small_pose_mesh, inner sep=0pt]
{\includegraphics[width=4cm]{poses/78_lm.png}};
\node (large_pose_orig) [below = 0.2cm of small_pose_orig, inner sep=0pt]
{\includegraphics[width=4cm]{poses/test_0028.png}};
\node (large_pose_mesh) [right = 0.2cm of large_pose_orig, inner sep=0pt]
{\includegraphics[width=4cm]{poses/28_mesh.png}};
\node (large_pose_lm) [right = 0.2cm of large_pose_mesh, inner sep=0pt]
{\includegraphics[width=4cm]{poses/28_lm.png}};
\end{tikzpicture}
}
\caption{Test set prediction results with different head poses. Each row represents a distinct subject. \nth{1} column: Original uncropped images. \nth{2} column: Predicted 3D ear meshes. \nth{3} column: Predicted 2D landmarks. Ear pose is successfully predicted even when a difficult head pose is involved.}
\label{fig:qual-poses}
\end{figure}
\subsection{Ablation Study}
\begin{table}
\caption{Normalised landmark distance error statistics on ITWE-A for ablation study.}
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|c|}
\hline
Method & mean $\pm$ std & median & $\leq 0.1$ & $\leq 0.06$\\
\hline\hline
HERA & $0.0398 \pm 0.009$ & $0.0391$ & $100\%$ & $96.2\%$ \\
HERA-W/O-WTN & $0.0401 \pm 0.009$ & $0.0384$ & $100\%$ & $96.2\%$ \\
HERA-W/O-PIX & $0.0392 \pm 0.009$ & $0.0387$ & $100\%$ & $96.2\%$ \\
HERA-W/O-AUG & $0.0446 \pm 0.011$ & $0.0437$ & $100\%$ & $92.4\%$ \\
HERA-W/O-AUG-LM & $0.0591 \pm 0.014$ & $0.0567$ & $99\%$ & $64.7\%$ \\
\hline
\end{tabular}
}
\end{center}
\label{tab:abla-res}
\end{table}
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{
\begin{tikzpicture}
\node (ablation_3dera) [inner sep=0pt, anchor=south east, label=below:(1)] at (0,0)
{\includegraphics[width=4cm]{ablation_figure/3dera.png}};
\node (ablation_wo_pix) [right = 0.2cm of ablation_3dera, inner sep=0pt, label=below:(2)]
{\includegraphics[width=4cm]{ablation_figure/wo_wtn.png}};
\node (ablation_wo_whitening) [right = 0.2cm of ablation_wo_pix, inner sep=0pt, label=below:(3)]
{\includegraphics[width=4cm]{ablation_figure/wo_pix.png}};
\end{tikzpicture}
}
\caption{Appearance comparison between the reconstructed 3D ear meshes of (1) HERA, (2) HERA-W/O-WTN and (3) HERA-W/O-PIX. Only HERA reconstructs the external auditory canal part correctly.}
\label{fig:abla-wtn-pix}
\end{figure}
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{
\begin{tikzpicture}
\node (ablation_3dera_2) [inner sep=0pt, anchor=south east, label=below:(1)] at (0,0)
{\includegraphics[width=4cm]{ablation_figure/3dera_2.png}};
\node (ablation_wo_aug) [right = 0.2cm of ablation_3dera_2, inner sep=0pt, label=below:(2)]
{\includegraphics[width=4cm]{ablation_figure/wo_aug.png}};
\node (ablation_wo_aug_lm) [right = 0.2cm of ablation_wo_aug, inner sep=0pt, label=below:(3)]
{\includegraphics[width=4cm]{ablation_figure/wo_aug_lm.png}};
\end{tikzpicture}
}
\caption{2D landmark localisation comparison between the prediction results of (1) HERA, (2) HERA-W/O-AUG and (3) HERA-W/O-AUG-LM. Data augmentation enables better ear rotation prediction, and the landmark loss is vital to accurate alignment, especially for the ear contour.}
\label{fig:abla-aug-lm}
\end{figure}
\noindent We now study how each component affects HERA's performance by evaluating several system variants: HERA-W/O-WTN (without PCA whitening of the 3D ear shape parameters $\bm{\beta}_{s}$), HERA-W/O-PIX (without pixel loss), HERA-W/O-AUG (without data augmentation) and HERA-W/O-AUG-LM (without landmark loss). Table \ref{tab:abla-res} shows the statistics for all variants of HERA. When training without PCA whitening of the 3D ear shape parameters or without pixel loss, the performance on 2D landmark localisation is similar to the final proposed method. However, as shown in Figure \ref{fig:abla-wtn-pix}, both variants fail to predict the projection of the external auditory canal part of the ear correctly. An imbalanced intermediate code vector is one potential reason why the variant without PCA whitening of the shape parameters fails on the external auditory canal; in general, a balanced intermediate code vector with similar variance for each parameter benefits the performance of the neural network. The variant without pixel loss focuses on lowering the landmark alignment error regardless of the overall appearance of the ear. It is therefore necessary to utilise the pixel loss.
When training without data augmentation, the 2D landmark localisation performance drops slightly, mainly due to the lack of variety in ear rotation, as shown in Figure \ref{fig:abla-aug-lm}. When training without landmark loss, the predicted landmarks are not accurate enough, as also shown in Figure \ref{fig:abla-aug-lm}. As a result, the reconstructed 3D ears are not accurately aligned with the 2D ears, especially along the ear contours.
\section{\uppercase{Conclusion}}
\noindent As a large proportion of human-related 3D reconstruction approaches focus on the human face, 3D ear reconstruction, despite being an important human-related task, has received much less attention.
In this paper, we propose a self-supervised deep 3D ear reconstruction autoencoder operating on a single image. Our model reconstructs a 3D ear mesh with a plausible appearance and accurate dense alignment, as witnessed by the accurate alignment with ground truth landmarks. The comprehensive evaluation shows that our method achieves state-of-the-art performance in 3D ear reconstruction and 3D ear alignment.
\bibliographystyle{apalike}
In two-dimensional condensed matter physics, the search for non-Abelian Majorana fermions (MFs) \cite{majorana}, which are their own antiparticles and may have important applications in fault-tolerant topological quantum computation \cite{sarma}, has attracted much recent attention to topological superfluids (TS) and superconductors (TSC) \cite{qi, hasan}, which have a full pairing gap in the bulk and topologically protected gapless states on the boundary. Inspired by the fact that MFs exist at the vortex cores of a two-dimensional (2D) $p_x+ip_y$ superconductor \cite{read}, several practical systems have been proposed theoretically, such as the fractional quantum Hall effect at filling $\nu=5/2$ \cite{read}, superfluid He-3 \cite{tsut}, non-centrosymmetric superconductors \cite{sato1, lee}, the surface of a three-dimensional topological insulator in proximity to an s-wave superconductor \cite{fu, linder}, and a spin-orbit-coupled (SOC) semiconductor quantum well coupled to an s-wave superconductor and a ferromagnetic insulator \cite{jau}. The latter proposal in particular has been improved by applying an in-plane magnetic field to a (110)-grown semiconductor coupled only to an s-wave superconductor. This improvement not only enhances the ability to tune into the topological superconducting state but also avoids orbital effects of the external magnetic field \cite{anj1}.
It is widely known that cold Fermi gases can be used to simulate many other systems owing to their high degree of controllability. Simulations of TS are certainly also possible and, to the best of our knowledge, three main routes have been suggested. The first is direct and based on p-wave superfluidity of degenerate Fermi gases by means of a p-wave Feshbach resonance \cite{gurarie}. Although this method is conceptually simple, it is challenging due to the short lifetimes of the p-wave pairs and molecules. Subsequently, Zhang et al. \cite{zhang} proposed to create an effective $p_x + ip_y$ TS from an s-wave interaction by making use of an artificially generated SOC. In fact, SOC has been realized in a neutral atomic Bose-Einstein condensate (BEC), and the same technique is also feasible for cold Fermi gases \cite{e1, e2}. Realizing that under a dual transformation SOC is formally equivalent to a p-wave superfluid gap, Sato et al. \cite{sato} suggested to artificially generate vortices of the SOC by using lasers carrying orbital angular momentum. In the latter two approaches, a Zeeman magnetic field larger than the superfluid pairing gap is essential to enter the non-Abelian TS.
In this paper we propose a model to realize non-Abelian TS in cold Fermi gases by using an anisotropic and spin-dependent optical lattice (ASDOL) to engineer mismatched Fermi surfaces for the two hyperfine species. Both requirements on the optical lattice are experimentally accessible: the anisotropy is determined by the intensities of the corresponding pairs of laser beams in the different directions, while spin dependence is available because the strength of the optical potential crucially depends on the atomic dipole moment between the internal states involved \cite{mandle1, mandle2, liu, jaksch}. In contrast to an isotropic and spin-independent optical lattice (ISIOL) \cite{zhang, sato}, our model realizes a new non-Abelian TS phase in which the Zeeman magnetic field can be smaller than the superfluid pairing gap.
The paper is organized as follows. In Section 2 we first present our model and analyze the gap-closing conditions. We then calculate the TKNN number $I_{TKNN}$ \cite{TKNN} of the occupied bands, which characterizes the topological properties of the model, and obtain the topological phase diagram; a new non-Abelian TS is discovered. In addition, we investigate the properties of the edge states to confirm the bulk-boundary correspondence. In Section 3, taking the self-consistency constraint on the s-wave pairing gap into account, we discuss the stability of the TS and the effects of a harmonic trap potential, and find that the new non-Abelian TS is stable. A brief conclusion is given in Section 4.
\section{Hamiltonian and Topological Phase Diagram}
In this paper we consider an s-wave superfluid of cold Fermi gases with Rashba SOC in a two-dimensional square optical lattice which is anisotropic and spin-dependent. For convenience, we set the lattice constant to unity. The suggested Hamiltonian is
\begin{eqnarray}
H=\sum_{k\sigma}[\epsilon_{k\sigma}-\mu-\sigma\Gamma]a_{k\sigma}^{\dag}a_{k\sigma}+\sum_k[
J_k a_{k\uparrow}^{\dag}a_{k\downarrow}-\Delta
a_{-k\downarrow}a_{k\uparrow}+H.C.],\label{1}
\end{eqnarray}
where $a_{k\sigma}^{\dag}$ creates a spin
$\sigma=\uparrow,\downarrow$ fermion at momentum $\vec{k}=(k_x,
k_y)$. $J_k=2J[\sin{(k_y)}+i \sin{(k_x)}]$ with $J$ ($J>0$) denoting
the strength of Rashba SOC. $\mu$, $\Gamma$, $\Delta$ are chemical
potential, effective Zeeman magnetic field and s-wave superfluid
pairing gap, respectively.
The kinetic-energy terms $\epsilon_{k\sigma}$ come from anisotropic and spin-dependent optical lattices \cite{fisher}. Imagine tuning the laser intensities so that one spin state prefers to hop along the $x$ axis and the other along the $y$ axis. In this paper we concentrate on the specific situation where the Fermi surfaces of the two spin states are rotated by $90^{\circ}$ with respect to one another, and we only consider a nearest-neighbor hopping Hamiltonian with single-particle dispersions
\begin{eqnarray}
\epsilon_{k\uparrow}=-2t_a\cos{(k_x)}-2t_b\cos{(k_y)}, \nonumber\\
\epsilon_{k\downarrow}=-2t_b\cos{(k_x)}-2t_a\cos{(k_y)}. \label{2}
\end{eqnarray}
In the absence of Zeeman magnetic field and SOC ($\Gamma=J=0$), this model realizes a stable paired superfluid state with coexisting pockets of momentum space with gapless unpaired fermions \cite{fisher}, similar to the Sarma state in polarized mixtures \cite{sar, liuw, forbes}. Moreover, in the strong coupling limit, a d-wave pairing superfluid as well as a d-wave density wave state are proposed to be achievable in this system \cite{caizi}.
Generally speaking, the closing of the bulk gap is a signature of a topological phase transition, although it is not a sufficient condition \cite{law}. Thus, to obtain the topological phase diagram of Hamiltonian (\ref{1}), we first calculate the bulk spectrum of the system to identify the parameter regions in which different topological phases are possible. Then, for each region, we calculate topological invariants to label its topological properties. To obtain the bulk spectrum, we rewrite the Hamiltonian as
\begin{eqnarray}
H=\frac{1}{2}\sum_k\psi_{k}^{\dag}M_{4\times4}\psi_{k},\label{3}
\end{eqnarray}
where $\psi_{k}^{\dag}=(a_{k\uparrow}^{\dag},a_{k\downarrow}^{\dag},
a_{-k\uparrow},a_{-k\downarrow})$ is a row vector and $M_{4\times4}$
is a matrix with $M_{11}=-M_{33}=\epsilon_{k\uparrow}-\mu-\Gamma$,
$M_{22}=-M_{44}=\epsilon_{k\downarrow}-\mu+\Gamma$,
$M_{12}=M_{43}=M_{21}^{\ast}=M_{34}^{\ast}=J_k$,
$M_{23}=M_{32}=-M_{14}=-M_{41}=\Delta$,
$M_{13}=M_{31}=M_{24}=M_{42}=0$.
Diagonalizing the matrix $M_{4\times4}$, we find the energy spectrum
\begin{eqnarray}
E_k^{\pm}=\sqrt{\xi_{k+}^2+\xi_{k-}^2+|J_k|^2+\Delta^2 \pm 2E_0}
\label{4}
\end{eqnarray}
with $\xi_{k+}=-(t_a+t_b)[\cos{(k_x)}+\cos{(k_y)}]-\mu$,
$\xi_{k-}=(-t_a+t_b)[\cos{(k_x)}-\cos{(k_y)}]-\Gamma$ and $E_0=
\sqrt{\xi_{k+}^2\xi_{k-}^2+\xi_{k+}^2|J_k|^2+\xi_{k-}^2\Delta^2}$.
The bulk energy gap can close only if $E_k^-=0$, in other words
\begin{eqnarray}
\xi_{k+}^2+\xi_{k-}^2+|J_k|^2+\Delta^2=2E_0, \label{5}
\end{eqnarray}
which is equivalent to
\begin{eqnarray}
\xi_{k+}^2-\xi_{k-}^2-|J_k|^2+\Delta^2=0,\quad\quad
|J_k|^2\Delta^2=0.\label{6}
\end{eqnarray}
For the s-wave pairing, $\Delta\neq 0$ and the second equation in
(\ref{6}) is satisfied only when $k = (0, 0)$, $(0, \pi)$, $(\pi,
0)$, $(\pi, \pi)$. Substituting these values into the first equation
in (\ref{6}), four different gap closing conditions are obtained
\begin{eqnarray}
\mu^2+\Delta^2=[2(t_b-t_a)+\Gamma]^2,\quad
\mu^2+\Delta^2=[2(t_b-t_a)-\Gamma]^2,\quad
\nonumber\\\Gamma^2=\Delta^2+ [2(t_a+t_b)+\mu]^2,\quad
\Gamma^2=\Delta^2+ [2(t_a+t_b)-\mu]^2.\label{7}
\end{eqnarray}
By means of these conditions, Fig.1(a) shows that there are at least $12$ regions which may be topologically distinct. This completes the first step in determining the topological phase diagram. Below we evaluate topological numbers to classify the topological phases of the model (\ref{1}).
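For orientation, the boundaries in (\ref{7}) can be checked numerically against the closed-form spectrum (\ref{4}). The following sketch (our own, plain Python/NumPy) evaluates the minimal bulk gap on a momentum grid; it vanishes precisely when one of the four conditions holds:
\begin{verbatim}
import numpy as np

def min_bulk_gap(mu, Gamma, ta, tb, J, Delta, n=201):
    ks = np.linspace(-np.pi, np.pi, n)   # includes 0 and +-pi
    kx, ky = np.meshgrid(ks, ks)
    xp = -(ta + tb) * (np.cos(kx) + np.cos(ky)) - mu
    xm = (-ta + tb) * (np.cos(kx) - np.cos(ky)) - Gamma
    J2 = (2 * J)**2 * (np.sin(kx)**2 + np.sin(ky)**2)
    E0 = np.sqrt(xp**2 * xm**2 + xp**2 * J2 + xm**2 * Delta**2)
    arg = xp**2 + xm**2 + J2 + Delta**2 - 2 * E0
    Em = np.sqrt(np.clip(arg, 0.0, None))   # E_k^- of Eq. (4)
    return Em.min()
\end{verbatim}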
Our system explicitly breaks time-reversal symmetry even when the Zeeman magnetic field is zero. Thus the TKNN number $I_{TKNN}$ plays a central role in the topological characterization of the system \cite{TKNN}, consistent with the conclusion from the periodic table of TSC that a BdG Hamiltonian with only particle-hole symmetry in two dimensions is classified by the integer group $Z$ \cite{ludwig}. The topological properties of the system are then identified as follows \cite{satom}: if the TKNN number is nonzero and even (odd), the system is an Abelian (non-Abelian) TS without (with) non-Abelian anyons; if the TKNN number is zero, the system is topologically trivial and is a normal superfluid.
TKNN number is defined by
\begin{eqnarray}
I_{TKNN}=\frac{1}{2\pi i}\int d^2k F_{12}(k),\label{8}
\end{eqnarray}
where Berry connection $A_{i}(k)$ ($i=1,2$) and associated field
strength $F_{12}(k)$ are given by
\begin{eqnarray}
A_i(k)&=&\sum_{E_n<0}<\phi_n(k) |\partial_{k_i}\phi_n(k)>\nonumber\\
F_{12}(k)&=&\partial_1 A_2(k) - \partial_2 A_1(k) \label{9}
\end{eqnarray}
with $|\phi_n(k)>$ being a normalized wave function of the $n$th
band such that $M_{4\times4}|\phi_n(k)>=E_n(k)|\phi_n(k)>$. It should be noted that the sum in (\ref{9}) is restricted to the occupied bands.
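In practice we evaluate $I_{TKNN}$ on a discretised Brillouin zone following the method of \cite{fukui}. The sketch below (plain Python/NumPy, our own outline, assuming a gapped spectrum so that exactly two of the four bands are occupied at every $k$) computes the field strength from $U(1)$ link variables of the occupied eigenspaces:
\begin{verbatim}
import numpy as np

def bdg_matrix(kx, ky, mu, Gamma, ta, tb, J, Delta):
    # the 4x4 matrix of Eq. (3)
    eu = -2*ta*np.cos(kx) - 2*tb*np.cos(ky)
    ed = -2*tb*np.cos(kx) - 2*ta*np.cos(ky)
    Jk = 2*J*(np.sin(ky) + 1j*np.sin(kx))
    M = np.zeros((4, 4), dtype=complex)
    M[0, 0] = eu - mu - Gamma; M[2, 2] = -M[0, 0]
    M[1, 1] = ed - mu + Gamma; M[3, 3] = -M[1, 1]
    M[0, 1] = M[3, 2] = Jk;    M[1, 0] = M[2, 3] = Jk.conjugate()
    M[1, 2] = M[2, 1] = Delta; M[0, 3] = M[3, 0] = -Delta
    return M

def tknn_number(params, n=40):
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    occ = [[None] * n for _ in range(n)]
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            w, v = np.linalg.eigh(bdg_matrix(kx, ky, **params))
            occ[i][j] = v[:, w < 0]          # occupied bands
    def link(a, b):                          # U(1) link variable
        d = np.linalg.det(a.conj().T @ b)
        return d / abs(d)
    c = 0.0
    for i in range(n):
        for j in range(n):
            u = (link(occ[i][j], occ[(i+1) % n][j])
                 * link(occ[(i+1) % n][j], occ[(i+1) % n][(j+1) % n])
                 * link(occ[(i+1) % n][(j+1) % n], occ[i][(j+1) % n])
                 * link(occ[i][(j+1) % n], occ[i][j]))
            c += np.angle(u)                 # plaquette field strength
    return int(round(c / (2 * np.pi)))
\end{verbatim}
A typical call would be, e.g., \texttt{tknn\_number(dict(mu=0.0, Gamma=0.5, ta=1.0, tb=0.5, J=1.0, Delta=0.3))}; the overall sign of the result depends on the orientation of the plaquette loop.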
The TKNN numbers of the $12$ regions have been calculated numerically \cite{fukui} and are given in Fig.1(a). In total, $5$ non-Abelian TS phases and one Abelian TS phase are found. For contrast, Fig.1(b) shows the topological phase diagram of an ISIOL, with 4 non-Abelian TS phases and 1 Abelian TS phase. By comparison, the effects of the ASDOL are mainly twofold. On the one hand, it creates another non-Abelian TS phase around the chemical potential $\mu=0$, which can be realized for a Zeeman magnetic field $\Gamma$ smaller than the superfluid pairing gap $\Delta$. This is the main result of this paper and can be understood from the fact that an ASDOL ($t_a\neq t_b$) supplies an effective Zeeman magnetic field $\pm2(t_a-t_b)$, as seen from (\ref{7}). On the other hand, in the direction of increasing Zeeman magnetic field, it separates two successive non-Abelian TS phases of Fig.1(b).
According to the bulk-boundary correspondence, a topologically
nontrivial bulk guarantees the existence of topologically stable
gapless edge states on the boundary. Cold Fermi gases with sharp
edges may be realized along the lines proposed in \cite{goldman}. In
our case, the gapless edge states is a chiral Majorana fermion mode.
It should also be remembered that the core of a vortex is
topologically equivalent to an edge which has been closed on itself.
The topologically protected edge modes we describe here are
therefore equivalent to the Majorana fermions known to exist in the
vortex cores of a p-wave superfluid \cite{lewenstein}.
To study edge states, we transform Hamiltonian (\ref{1}) into
lattice representation
\begin{eqnarray}
H&=&-t_a\sum_i [ a_{i\uparrow}^{\dag}a_{i+x\uparrow}+
a_{i\downarrow}^{\dag}a_{i+y\downarrow}+H.C.] -t_b\sum_i [
a_{i\uparrow}^{\dag}a_{i+y\uparrow}+
a_{i\downarrow}^{\dag}a_{i+x\downarrow}+H.C.]\nonumber\\
&&+J\sum_i[(a_{i\uparrow}^{\dag}a_{i+x\downarrow}-a_{i\downarrow}^{\dag}a_{i+x\uparrow})
-i(a_{i\uparrow}^{\dag}a_{i+y\downarrow}+a_{i\downarrow}^{\dag}a_{i+y\uparrow})+H.C.]\nonumber\\
&&-\sum_{i\sigma}(\mu+\sigma\Gamma)a_{i\sigma}^{\dag}a_{i\sigma}-
\Delta\sum_i(a_{i\uparrow}^{\dag}a_{i\downarrow}^{\dag}+H.C.)
\label{10}
\end{eqnarray}
where $a_{i\sigma}^{\dag}$ is the creation operator of a fermion
with spin $\sigma$ at lattice site $i= (i_x, i_y)$. Without loss of
generality we suppose that the system has two open boundary in the
$x$-direction and is periodic in the $y$-direction. After performing
a Fourier transformation along the $y$-direction only, we
numerically diagonalize the Hamiltonian (\ref{10}) and
correspondingly obtain the excitation spectrum $E_{n,k_y}$ with
subscript $n$ labeling different energy levels.
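This construction can be sketched compactly. The following is our own outline in Python/NumPy, in the Nambu basis $(a_{i\uparrow}, a_{i\downarrow}, a_{i\uparrow}^{\dag}, a_{i\downarrow}^{\dag})$ per site; the hole block $-h(-k_y)^{T}$ is a standard BdG convention, chosen here to be consistent with the matrix $M_{4\times4}$ of Eq. (3) in the bulk:
\begin{verbatim}
import numpy as np

def ribbon_spectrum(ky, N, mu, Gamma, ta, tb, J, Delta):
    def h(ky):  # particle block, 2N x 2N, open boundaries in x
        on = np.array(
            [[-2*tb*np.cos(ky) - mu - Gamma, 2*J*np.sin(ky)],
             [2*J*np.sin(ky), -2*ta*np.cos(ky) - mu + Gamma]],
            dtype=complex)
        hop = np.array([[-ta, J], [-J, -tb]], dtype=complex)
        H = np.kron(np.eye(N), on)
        for i in range(N - 1):               # site i -> site i+1
            H[2*i:2*i+2, 2*i+2:2*i+4] = hop
            H[2*i+2:2*i+4, 2*i:2*i+2] = hop.conj().T
        return H
    # on-site singlet pairing block, cf. Eq. (3)
    P = np.kron(np.eye(N), np.array([[0.0, -Delta], [Delta, 0.0]]))
    H = np.block([[h(ky), P], [P.conj().T, -h(-ky).T]])
    return np.linalg.eigvalsh(H)             # E_{n, ky}
\end{verbatim}
Scanning $k_y$ over $[-\pi,\pi]$ and plotting all eigenvalues reproduces spectra of the type shown in Fig.2.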
In Fig.2 we show the energy spectra for an ASDOL with edges at $i_x = 0$ and $i_x = 30$. The $12$ panels correspond to the $12$ regions in Fig.1(a), respectively. Fig.2(b), (d), (f), (h), (j) are for non-Abelian TS phases, (k) for the Abelian TS phase, and the others for non-TS phases. As stated in \cite{satom, lewenstein}, we also find that in the non-Abelian TS phases, where $I_{TKNN}=\pm 1$ and $(-1)^{I_{TKNN}}=-1$, a single pair of gapless edge modes appears. Since on a given boundary there is no state available for spin-conserving backscattering, these states are topologically protected \cite{lewenstein}. In this case the edge zero mode is a chiral Majorana fermion. In the Abelian TS phase with $I_{TKNN}=2$ and the non-TS phases with $I_{TKNN}=0$, where $(-1)^{I_{TKNN}}=1$, the system contains either zero or two pairs of edge states. These edge states are topologically trivial, for either edge mode can perform spin-conserving backscattering. In addition, we find that the edge excitations cross at zero energy with a linear dispersion only at $k_y = 0$ or $\pi$. Their origin is topological and can be identified by a topological winding number $I(k_y)$ defined only for $k_y = 0$ and $\pi$ \cite{satom}: when $I(k_y)$ is non-zero for $k_y = 0$ or $\pi$, the energy of the gapless edge mode becomes zero at this value of $k_y$.
\section{Phase Diagram with Self-Consistent Pairing Gap}
In Section 2 we determined the topological phase diagram for a fixed s-wave superfluid pairing gap. However, for a realistic physical system the pairing gap generally varies when the chemical potential and the Zeeman magnetic field change. In particular, the Zeeman magnetic field breaks time-reversal symmetry and weakens the stability of superfluidity; a well-known example is the so-called Chandrasekhar-Clogston (CC) limit \cite{cc, cc1} in superconducting systems without SOC. Hence it is possible that not all phases in Fig.1(a) are accessible. In this section, within BCS mean-field theory, we determine the s-wave superfluid pairing gap self-consistently and consider the competition with the normal phase and with phase separation to investigate the stability of the TS.
Let $-U$ ($U>0$) denote the effective attraction strength between fermions. The pairing gap $\Delta=U\sum_k\langle a_{-k\downarrow}a_{k\uparrow}\rangle$ can then be obtained from the minimization of the thermodynamic potential $\Omega_s=\sum_k \left[\xi_{k+}-\frac{1}{2}(E_k^{+}+E_k^{-})\right]+N \Delta^2/U$. The instability of superfluidity against phase separation is signalled by the conditions $\Delta\neq 0$ and $\partial^2 \Omega_s/\partial \Delta^2<0$, while the instability against the normal state is signalled by $\Omega_n<\Omega_s$, where $\Omega_n$ is the thermodynamic potential of the normal state. For the parameters chosen, superfluidity is robust as long as $\Delta\ne0$. The resulting zero-temperature pairing gap and phase diagram are shown in Fig.3. Fig.3(b) shows a structure similar to Fig.1(a), except that the TS phases at large Zeeman magnetic field are not available. Comparing Fig.3(a) with (b), it is easily seen that for small chemical potential and Zeeman magnetic field, although the pairing gap has a magnitude ($\Delta\sim t_a$) larger than the Zeeman magnetic field, the system can be a non-Abelian TS with $I_{TKNN}=-1$. This finding is very important and shows that the new non-Abelian TS found in Section 2 is stable.
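Numerically, the zero-temperature minimization reduces to a one-dimensional scan over $\Delta$. The following sketch (our own, using the closed-form spectrum (\ref{4}) and taking $\Omega_n=\Omega_s|_{\Delta=0}$) outlines it:
\begin{verbatim}
import numpy as np

def omega_s(Delta, mu, Gamma, ta, tb, J, U, n=100):
    # thermodynamic potential per site at T = 0
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    kx, ky = np.meshgrid(ks, ks)
    xp = -(ta + tb)*(np.cos(kx) + np.cos(ky)) - mu
    xm = (-ta + tb)*(np.cos(kx) - np.cos(ky)) - Gamma
    J2 = (2*J)**2 * (np.sin(kx)**2 + np.sin(ky)**2)
    E0 = np.sqrt(xp**2*xm**2 + xp**2*J2 + xm**2*Delta**2)
    s = xp**2 + xm**2 + J2 + Delta**2
    Ep = np.sqrt(np.clip(s + 2*E0, 0.0, None))
    Em = np.sqrt(np.clip(s - 2*E0, 0.0, None))
    return np.mean(xp - 0.5*(Ep + Em)) + Delta**2 / U

def gap(mu, Gamma, ta, tb, J, U,
        grid=np.linspace(0.0, 2.0, 201)):
    # pick the pairing amplitude minimising Omega_s on a Delta grid;
    # a minimum at Delta = 0 means the normal state wins
    vals = [omega_s(D, mu, Gamma, ta, tb, J, U) for D in grid]
    return grid[int(np.argmin(vals))]
\end{verbatim}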
Finally, we consider the effects of a harmonic trap potential within the local density approximation (LDA). Under this approximation the system is locally uniform and the trap potential effectively provides a scan over the chemical potential; in other words, the local chemical potential is $\mu_R=\mu-\Delta_t R^2$ with $\Delta_t$ denoting the strength of an isotropic potential. From Fig.3(b), depending on the Zeeman magnetic field, different topological phases can coexist in concentric shells. In Fig.4 we show the spatial distributions of the pairing gap, the particle number and the TKNN number. It should be noted that on the boundary between different topological phases there always exist zero-energy states due to the change of the topological invariant.
\section{Conclusions}
In conclusion, we have investigated the effects of an anisotropic and spin-dependent optical lattice (ASDOL) on non-Abelian topological superfluidity and found that, in contrast to an isotropic and spin-independent optical lattice, a new stable non-Abelian topological superfluid phase exists. Moreover, in this new non-Abelian topological superfluid phase the Zeeman magnetic field can be smaller than the superfluid pairing gap. Thus an ASDOL supplies a convenient route to realize TS. In addition, we calculated the chiral Majorana edge states and investigated the effects of a harmonic trap potential.
\section*{Acknowledgement}
The work was supported by National Natural Science Foundation of
China under Grant No. 10675108 and Foundation of Yancheng Institute
of Technology under Grant No. XKR2010007.
In \cite{MR1272539}, Andersen, Jantzen and Soergel introduced a combinatorial model to describe certain representations of the two following situations:
\begin{enumerate}
\item $U_p$ is the quantized enveloping algebra of a complex Lie algebra at a $p$-th root of unity, where $p>1$ is an odd integer (and prime to 3 if the corresponding root system is of type $G_2$),
\item ${\operatorname{Lie}}\, G_k$, where $G_k$ is a connected, simply connected semisimple algebraic group over an algebraically closed field $k$ of char $p>h$, where $h$ is the Coxeter number of the corresponding root system.
\end{enumerate}
A relevant consequence of this work is the following. Lusztig's Modular Conjecture (cf. \cite{lusztig1980some}) provides a character formula for representations of connected reductive algebraic groups over an algebraically closed field $k$ with ${\operatorname{char}\, } k =p>h$. In this context, \cite{MR1272539} constitutes an essential step in proving this conjecture for almost all $p$ by deducing it from the validity of the characteristic 0 analogue, Lusztig's Quantum Conjecture (resulting from \cite{zbMATH00755403} and \cite{KL93}).
This model is realized by a combinatorial category ${\mathcal K}_k$ and its full subcategory of so-called \emph{special objects} ${\mathcal M}_k'$. ${\mathcal M}_k'$ is then equivalent to ${\mathcal R}_k$, the category of \emph{$X$-graded special deformed modular representations} of ${\operatorname{Lie}} \, G_k$ where $X$ is the corresponding weight lattice. The feature of ${\mathcal R}_k$ is to contain the objects that are of primary interest regarding a character formula, namely the deformed projective objects of the principal block of the category of \emph{$X$-graded restricted representations}.
The only necessary data derived from $G_k$ to construct a certain $X$-graded ${\mathcal M}_k'$ is the corresponding root datum. Starting with a unit object, we obtain this category ${\mathcal M}_k'$ by repeatedly applying a set of translation functors ${\mathcal T}_s'$ indexed by the simple affine reflections, taking direct sums and direct summands. However, the main actor of this paper is the category ${\mathcal M}_k$ that is constructed in the same manner with the only difference lying in the definition of the translation functors ${\mathcal T}_s$. This definition is related to moment graph theory and due to Fiebig who showed in \cite{MR2726602} that ${\mathcal M}_k$ and ${\mathcal M}_k'$ are equivalent categories. The methods in \cite{MR2726602} lead to an improvement in the matter of Lusztig's Modular Conjecture by giving an explicit bound on exceptional primes. Furthermore, a proof of Lusztig's Quantum Conjecture independent from \cite{zbMATH00755403}, \cite{KL93} could be given.
It is very difficult to describe these special objects intrinsically. A first step towards such a description is -- and that is what is done in this paper -- to study the behaviour of an intuitively defined duality in the category ${\mathcal M}_k$ and its parent ${\mathcal K}_k$. The main result of this paper is
\begin{ThM}[Theorem \ref{selfdualanti}]
The indecomposable special objects in ${\mathcal M}_k$ that correspond to alcoves in the anti-fundamental box are tilting self-dual up to a shift.
\end{ThM}
\section{Preliminaries}
\subsection{Notation}\label{sec:not}
Let $R$ be a reduced and irreducible root system in a finite dimensional ${\mathbb Q}$-vector space $V$. Furthermore, the coroot system of $R$ is denoted by $R^{\vee} \subset V^*={\operatorname{Hom}}_{{\mathbb Q}}(V,{\mathbb Q})$, $R^{+}\subset R$ is a subset of chosen positive roots and this determines a set of simple roots $\Delta \subset R^{+} \subset R$.
For each $\alpha \in R$ and $n \in {\mathbb Z}$, an affine transformation in ${\operatorname{Aff}} (V^*)$ is defined by
\begin{equation*}
s_{\alpha, n}(v)=v-(\langle\alpha, v\rangle -n)\alpha^{\vee},
\end{equation*}
where $\langle\cdot,\cdot\rangle\colon V \times V^* \to {\mathbb Q}$ is the canonical pairing. This transformation is a reflection through the hyperplane
\begin{equation*}
H_{\alpha, n}=\{v\in V^*\mid \langle\alpha, v\rangle=n\},
\end{equation*}
and therefore, it partitions the dual space into three parts
\begin{equation}\label{hyperpartition}
V^*=H_{\alpha,n}^-\dot\cup H_{\alpha,n}\dot\cup H_{\alpha,n}^+
\end{equation}
with
\begin{equation*}
H_{\alpha,n}^-=\{v\in V^*\mid \langle\alpha, v\rangle<n\}
\end{equation*}
and
\begin{equation*}
H_{\alpha,n}^+=\{v\in V^*\mid \langle\alpha, v\rangle>n\}.
\end{equation*}
Let $\tilde{\alpha}$ be the highest root in $R$, i.e. the unique root with the property that for every $\alpha \in R^{+}$,
\begin{equation*}
\tilde{\alpha} -\alpha \in\sum_{\beta \in \Delta}{\mathbb Z}_{\geq 0} \beta.
\end{equation*}
Then
\begin{equation*}
\widehat{{\mathcal S}}:=\{s_{\alpha,0}\mid \alpha \in \Delta\}\cup \{s_{\tilde{\alpha},1}\}
\end{equation*}
is called the set of \emph{simple affine reflections}. We will distinguish between the \emph{finite Weyl group} ${\mathcal W} \subset {\operatorname{GL}} (V^*)$ generated by ${\mathcal S}:=\{s_{\alpha,0}\mid \alpha \in \Delta\}$ and the \emph{affine Weyl group} $\widehat{{\mathcal W}}\subset{\operatorname{Aff}}(V^*)$ generated by $\widehat{{\mathcal S}}$. As $(\widehat{{\mathcal W}},\widehat{{\mathcal S}})$ is a Coxeter system, one is able to associate a length function
\begin{equation*}
l\colon \widehat{{\mathcal W}} \to {\mathbb N}.
\end{equation*}
The element of maximal length of the finite Coxeter group ${\mathcal W}$ is denoted by $w_0$.
There is a common way to identify elements in $\widehat{{\mathcal W}}$ with subsets of $V^{*}$. One considers the coarsest partition ${\mathscr F}$ of $V^{*}$ that refines all partitions of the form \eqref{hyperpartition} for $\alpha \in R^+, n \in {\mathbb Z}$. The open components of this partition are known as \emph{alcoves} with the designated \emph{fundamental alcove}
\begin{equation*}
A_e=\{v\in V^*\mid 0<\langle \alpha, v\rangle <1 \quad\forall \alpha \in R^+\}.
\end{equation*}
$\widehat{{\mathcal W}}$ acts simply transitively on the \emph{set of alcoves} ${\mathscr A}$, and hence there is the bijection
\begin{align*}
\widehat{{\mathcal W}} &\to {\mathscr A}\\
w&\mapsto A_w=w.A_e.
\end{align*}
A component $B$ of ${\mathscr F}$ which is included in exactly one hyperplane $H_{\alpha, n}$ is called a \emph{wall}. In this situation, the \emph{type $\alpha(B)$ of} $B$ is the positive root $\alpha$. A wall $B$ is a \emph{wall to} an alcove $A\in {\mathscr A}$ if $B$ is a subset of the closure of $A$. $B_-$ (resp. $B_+$) denotes the unique alcove whose wall is $B$ and which is contained in the negative (resp. positive) half-space corresponding to $B$.
For every $\beta \in R^{+}$ there is a bijection $\beta \uparrow\cdot$ on ${\mathscr F}$ mapping alcoves to alcoves and walls to walls. For $F\in {\mathscr F}$ the component $\beta\uparrow F$ is given by $s_{\beta,m}.F$ where $m$ is the smallest integer such that $F\subset H^-_{\beta,m}\cup H_{\beta,m}$. The inverse of $\beta\uparrow\cdot$ is denoted by $\beta \downarrow\cdot$.
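For explicit computations it is convenient to represent an alcove by any point of its interior; the bijection $\beta\uparrow\cdot$ then becomes a concrete affine reflection. The following sketch, which we include only as an illustration, realizes this for a Euclidean model of the root system $A_2$ (the normalization and all names are our choices):
\begin{verbatim}
import numpy as np

# simple roots of A_2 in an orthonormal basis (an assumed model);
# here the pairing <beta, v> is the Euclidean scalar product
alpha1 = np.array([1.0, 0.0])
alpha2 = np.array([-0.5, np.sqrt(3.0) / 2.0])

def coroot(beta):
    return 2.0 * beta / beta.dot(beta)

def beta_up(v, beta):
    # v is an interior point of an alcove A; <beta, v> is never an
    # integer there, so the smallest m with A inside
    # H^-_{beta,m} cup H_{beta,m} is floor(<beta, v>) + 1,
    # and beta-up acts by the affine reflection s_{beta,m}
    p = beta.dot(v)
    m = int(np.floor(p)) + 1
    return v - (p - m) * coroot(beta)

def beta_down(v, beta):
    # the inverse: reflect at the largest admissible hyperplane
    p = beta.dot(v)
    m = int(np.floor(p))
    return v - (p - m) * coroot(beta)

v0 = np.array([0.3, 0.3])   # a point of the fundamental alcove A_e
v1 = beta_up(v0, alpha1)    # a point of alpha1-up A_e
assert np.allclose(beta_down(v1, alpha1), v0)
\end{verbatim}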
To each simple affine reflection $s\in \widehat{{\mathcal S}}$ we associate the wall $A_{s,e}$ of the fundamental alcove $A_e$ which is contained in the closure of $A_s$, too. Thereby, we obtain a mapping
\begin{align*}
{\mathscr A} \to \{\text{walls}\}\\
A\mapsto \w{A}{s}
\end{align*}
where $\w{A}{s}$ is the unique wall which lies in the $\widehat{{\mathcal W}}$-orbit of $A_{s,e}$ and intersects the closure of $A$. In case we need the negative alcove to a wall of an alcove, namely $(\w{A}{s})_-$, we will simply denote it by $\wm{A}{s}$, and analogously $(\w{A}{s})_+=\wp{A}{s}$. This should not cause any confusion. For a more detailed view on alcoves, \cite{humphreys1990reflection} is recommended.
Take an algebraically closed field $k$ with ${\operatorname{char}\, } k\neq 2$ and, if $R$ is of type $G_2$, ${\operatorname{char}\, } k \neq 3$. Let
\begin{equation*}
X=\{v\in V\mid \langle v, \alpha^{\vee}\rangle \in {\mathbb Z} \quad \forall \alpha \in R \}
\end{equation*}
and let $S_k$ be the symmetric algebra of $X_k:=X \otimes_{{\mathbb Z}}k$. We define the following $S_k$-subalgebras of the quotient field of $S_k$,
\begin{equation*}
S_k^{\emptyset}=S_k[\alpha^{-1}\mid\alpha\in R^+]
\end{equation*}
and
\begin{equation*}
S_k^{\beta}=S_k[\alpha^{-1}\mid\alpha \in R^+, \alpha\neq \beta]
\end{equation*}
for $\beta \in R^+$.
All those algebras carry a compatible ${\mathbb Z}$-grading which is induced by setting the degree of $X$ to $2$.
\subsection{Alcoves}
In the course of this article we will need to be familiar with the behaviour of alcoves and their walls in connection with the action of the affine Weyl group. The necessary properties are collected in this paragraph.
\begin{lemma}[\cite{MR2726602}, Lemma 5.1]\label{wallcomb} Let $s\in\widehat{{\mathcal S}}$ and $\beta\in R^+$. Choose $A\in{\mathscr A}$ and $B\in \widehat{{\mathcal W}}.A_{s,e}$.
\begin{enumerate}
\item[a)] If $\beta\uparrow B=B$, then $\beta\uparrow B_-=B_+$.
\item[b)] If $\beta\uparrow B\ne B$, then $\{\beta\uparrow B_-, \beta\uparrow B_+\}=\{(\beta\uparrow B)_-, (\beta\uparrow B)_+\}$.
\item[c)] If $\beta\uparrow \w{A}{s}=\w{A}{s}$ and $A=\wm{A}{s}$, then $\w{(\beta\uparrow A)}{s}=\w{A}{s}$ and $\beta\uparrow A=\wp{A}{s}$.
\item[d)] If $\beta\uparrow \w{A}{s}\ne \w{A}{s}$, then $\w{(\beta\uparrow A)}{s}=\beta\uparrow \w{A}{s}$.
\end{enumerate}
\end{lemma}
\begin{remark}\label{updown}
If $\w{A}{s}$ is of type $\beta$ with $\beta \uparrow A=s_{\beta,n}.A$, then $\w{(\beta \uparrow A)}{s}=(s_{\beta, n}.A)^{(s)}=s_{\beta, n}.\w{A}{s}$ is of type $\beta$, too, and vice versa. In formula this means
\begin{equation*}
\beta \uparrow \w{A}{s}=\w{A}{s} \iff \beta\uparrow\w{(\beta \uparrow A)}{s}=\w{(\beta \uparrow A)}{s}.
\end{equation*}
Let $\beta \uparrow \w{A}{s}=\w{A}{s}$ and $A=\wp{A}{s}$. In particular, it follows from Lemma \ref{wallcomb}.a) that
\begin{equation*}
\beta \downarrow A= \wm{A}{s}
\end{equation*}
and
\begin{equation*}
\beta \uparrow A =\wm{(\beta \uparrow A)}{s}.
\end{equation*}
\end{remark}
Below are two more lemmas on alcove calculations involving a finite Weyl group element. Define $w(\beta)^+$ to be the unique element in $\{w(\beta), -w(\beta)\}\cap R^+$.
\begin{lemma}\label{kipp:wb}
Let $\beta \in R^{+}, A \in {\mathscr A}$ and $w \in {\mathcal W}$. Then
\begin{equation*}
w.(\beta\uparrow A) =
\begin{cases}
w(\beta)^+\uparrow w.A & \text{if }w(\beta) \in R^{+},\\
w(\beta)^+\downarrow w.A & \text{if }w(\beta) \in R^{-}.
\end{cases}
\end{equation*}
\end{lemma}
\begin{proof}
Let $n \in {\mathbb Z}$ be such that
\begin{equation}\label{prel:betaup}
\beta \uparrow A =s_{\beta,n}.A
\end{equation}
and hence
\begin{equation*}
w.(\beta \uparrow A) = (w s_{\beta, n}).A= s_{w(\beta),n}.(w.A) = s_{-w(\beta),-n}.(w.A).
\end{equation*}
Equation \eqref{prel:betaup} is equivalent to
\begin{equation*}
\forall N\geq n:\ A \subset H_{\beta,N}^{-} \quad\text{ and }
\quad \forall M < n: \, A\subset H_{\beta,M}^{-}.
\end{equation*}
But
\begin{equation*}
A \subset H_{\beta,N}^{-} \iff w.A \subset w(H_{\beta,N}^{-})= H_{w(\beta),N}^{-}=H_{-w(\beta),-N}^{+}.
\end{equation*}
If $w(\beta)\in R^+$, this leads to $w.(\beta \uparrow A)= s_{w(\beta),n}.(w.A)=w(\beta)^+ \uparrow w.A$.
Furthermore,
\begin{align*}
A \subset H_{\beta,N}^{-}& \\
\iff\quad &w.(\beta \uparrow A) \subset w s_{\beta, n} (H_{\beta,N}^{-})\\
&= w(H_{\beta,2n-N}^{+})=H_{-w(\beta),N-2n}^{-}.
\end{align*}
Hence for $w(\beta)\in R^-$, we obtain
\begin{equation*}
w(\beta)^+ \uparrow w.(\beta \uparrow A)=s_{-w(\beta), -n}w.(\beta \uparrow A)= w.A.
\end{equation*}
This yields the claim.
\end{proof}
\begin{lemma}\label{kipp:arithmetik}
Let $w\in {\mathcal W}$ then
\begin{enumerate}
\item[a)] $\{\wm{(w.A)}{s}, \wp{(w.A)}{s}\}=\{w.(\wm{A}{s}),w.(\wp{A}{s})\}$,
\item[b)] $\beta \uparrow \w{A}{s} =\w{A}{s}$ if and only if $w(\beta)^+\uparrow
\w{(w.A)}{s}=\w{(w.A)}{s}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item[a)] The common wall of the left-hand set is $\w{(w.A)}{s}$ and $w.\w{A}{s}$ is the common wall of the right-hand set. But $w.\w{A}{s}=\w{(w.A)}{s}$ and hence, both sets coincide. (This is even true for $w\in \widehat{{\mathcal W}}$.)
\item[b)] Let $\beta \uparrow \w{A}{s} =\w{A}{s}$ with $\w{A}{s} \subset H_{\beta, n}$. Applying $w$ yields $w.\w{A}{s}=\w{(w.A)}{s}\subset H_{w(\beta), n}$ and therefore $w(\beta)^+\uparrow \w{(w.A)}{s}=\w{(w.A)}{s}$.
For the if-part we simply set $w^{-1}$ into the role of $w$.
\end{enumerate}
\end{proof}
\section{The Category of Andersen-Jantzen-Soergel}
\subsection{The Category ${\mathcal K}_k$}
As a first step, we will define the category ${\mathcal K}_k$. Objects in ${\mathcal K}_k$ are of the form $M=(\{M(A)\}_{A\in {\mathscr A}}, \{M(A,\beta)\}_{A\in {\mathscr A}, \beta \in R^{+}})$, where
\begin{enumerate}
\item for all $A\in {\mathscr A}$: $M(A)$ is a ${\mathbb Z}$-graded $S^{\emptyset}_k$-module,
\item for all $A\in {\mathscr A}, \beta \in R^{+}$: $M(A,\beta)$ is an $S^{\beta}_k$-submodule of $M(A)\oplus M(\beta \uparrow A)$ with compatible grading.
\end{enumerate}
A morphism $f\colon M \to N$ in ${\mathcal K}_k$ is given by $f= (f_A)_{A\in {\mathscr A}}$, where
\begin{enumerate}
\item each $f_A\colon M(A) \to N(A)$ is a graded $S^{\emptyset}_k$-module homomorphism of degree $0$,
\item for all $A \in {\mathscr A}$ and $\beta \in R^{+}$ the homomorphism $f_A\oplus f_{\beta\uparrow A}$ maps the $S^{\beta}_k$-submodule $M(A,\beta)$ of \mbox{$M(A)\oplus M(\beta \uparrow A)$} to $N(A, \beta)\subseteq N(A) \oplus N(\beta \uparrow A)$.
\end{enumerate}
\subsubsection*{Examples}
We want to introduce two examples of objects in ${\mathcal K}_k$. The first, called the \emph{unit object}, is defined by
\begin{align*}
P_0(A)& :=
\begin{cases} S^{\emptyset}_k & \text{if }A=A_e, \\
0 & \text{else},
\end{cases}\\
P_0(A,\beta)& := \begin{cases} S^{\beta}_k & \text{if }A\in \{A_e,\beta \downarrow A_e\}, \\
0 & \text{else}.
\end{cases}\\
\end{align*}
To describe the second example, we need to define a set ${\mathscr A}^{-}_{\beta}\subset {\mathscr A}$ for any positive root $\beta$. This is the set of those alcoves inside the negative half-space of $\beta$ that additionally correspond to elements of the finite Weyl group. In formula this means
\begin{equation}\label{abminus}
{\mathscr A}^{-}_{\beta}=\{A \in {\mathscr A}\mid A=A_w \text{ with } w\in {\mathcal W}, A \subset H_{\beta}^{-}\}.
\end{equation}
For a graphical description, see Figure \ref{fig:cases2}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{article2.pdf}
\caption{The following subsets of ${\mathscr A}$, here of type $\tilde{A}_2$:}
Color key \quad
\includegraphics{artcp2.pdf} \quad $A\in {\mathscr A}_+^{\beta}$ \qquad
\includegraphics{artcp1.pdf} \quad $A\in {\mathscr A}_-^{\beta}$ \qquad
\includegraphics{artcp4.pdf} \quad $A\in \beta\downarrow {\mathscr A}_-^{\beta}$
\label{fig:cases2}
\end{figure}
Analogously, we define the set ${\mathscr A}^{+}_{\beta}$. Please notice that
\begin{equation*}
{\mathscr A}^{+}_{\beta}=\beta \uparrow {\mathscr A}^{-}_{\beta}.
\end{equation*}
We define the object $Q_0 \in {\mathcal K}_k$ as follows.
\begin{align*}
Q_0(A) & :=
\begin{cases} S^{\emptyset}_k & \text{if $A=A_w$ with } w \in {\mathcal W}, \\
0 & \text{else},
\end{cases}\\
Q_0(A,\beta) & :=
\begin{cases} \{(\beta x+y, y)\mid x,y \in S^{\beta}_k\} & \text{ for } A \in {\mathscr A}^{-}_{\beta},\\
\beta S^{\beta}_k & \text{ for } A \in \beta\uparrow {\mathscr A}^{-}_{\beta},\\
S^{\beta}_k & \text{ for } A \in \beta \downarrow {\mathscr A}^{-}_{\beta}, \\
0 & \text{ else}.
\end{cases}
\end{align*}
$Q_0$ is indeed an object in ${\mathcal K}_k$ as there are the following embeddings:
\begin{eqnarray*}
\forall A \in {\mathscr A}^{-}_{\beta} \colon &\{(\beta x+y, y)\mid x,y \in S^{\beta}_k\}&\hookrightarrow S^{\emptyset}_k \oplus S^{\emptyset}_k,\\
\forall A \in \beta \uparrow{\mathscr A}^{-}_{\beta} \colon &\beta S^{\beta}_k\oplus 0&\hookrightarrow S^{\emptyset}_k \oplus 0,\\
\forall A \in \beta \downarrow {\mathscr A}^{-}_{\beta} \colon &0\oplus S^{\beta}_k &\hookrightarrow 0\oplus S^{\emptyset}_k.
\end{eqnarray*}
\subsection{The Categories ${\mathcal M}_k$ and ${\mathcal M}_k^{\circ}$}
\subsubsection{Translation Functors}
In the work \cite{MR1272539} by Andersen, Jantzen and Soergel, a set of translation functors indexed by the affine simple roots was introduced in order to span a subcategory of ${\mathcal K}_k$. In \cite{MR2726602} it was then shown that one is able to replace those original translation functors by different translation functors that yield an equivalent subcategory. Fiebig's translation functors are motivated by moment graph theory. We will work with the translation functors by Fiebig.
The \emph{translation functor ${\mathcal T}_s\colon {\mathcal K}_k \to {\mathcal K}_k$ associated with} a simple affine reflection $s\in \widehat{{\mathcal S}}$ is given by
\begin{align*}
{\mathcal T}_s M (A) &= M(\wm{A}{s}) \oplus M(\wp{A}{s}),\\
{\mathcal T}_s M (A,\beta) & =
\begin{cases}
\{(\beta x + y,y)\mid x,y \in M(A,\beta)\} & \text{if $\beta \uparrow \w{A}{s}=\w{A}{s}$}\\
&\quad \text{and $A=\wm{A}{s}$},\\
\beta M(\beta \downarrow A,\beta) \oplus M(\beta \uparrow A,\beta) & \text{if $\beta \uparrow \w{A}{s}=\w{A}{s}$}\\
&\quad \text{and $A=\wp{A}{s}$},\\
M(\wm{A}{s},\beta) \oplus M(\wp{A}{s},\beta)&\text{if }\beta \uparrow \w{A}{s}\neq\w{A}{s},
\end{cases}
\end{align*}
where $A\in {\mathscr A}$ and $\beta \in R^+$. See Figure \ref{fig:cases} for a graphical description of the three different cases appearing in the definition of ${\mathcal T}_s M(A,\beta)$. The translation functors are well-defined due to Lemma \ref{wallcomb} and Remark \ref{updown}. For a complete description of ${\mathcal T}_s$, we must specify how the $S^{\beta}_k$-submodules ${\mathcal T}_s M (A,\beta)$ embed into ${\mathcal T}_s M (A)\oplus {\mathcal T}_s M (\beta \uparrow A)$. If $\beta \uparrow \w{A}{s}=\w{A}{s}$ and $A= \wm{A}{s}$, the first entry of a pair in ${\mathcal T}_s M (A,\beta)$ maps to ${\mathcal T}_s M (A)$ and the second to ${\mathcal T}_s M(\beta \uparrow A)$. If $\beta \uparrow \w{A}{s}=\w{A}{s}$ and $A= \wp{A}{s}$, the first direct summand of ${\mathcal T}_s M (A, \beta)$ lives inside ${\mathcal T}_s M (A)$ and the second inside ${\mathcal T}_s M (\beta \uparrow A)$. For $\beta \uparrow \w{A}{s}\neq \w{A}{s}$, the submodule $M(\wm{A}{s},\beta)$ lies crosswise inside both ${\mathcal T}_s M (A)$ and ${\mathcal T}_s M (\beta \uparrow A)$, and analogously so does $M(\wp{A}{s},\beta)$.
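As a small example, which we include only for orientation, one can apply ${\mathcal T}_s$ to the unit object. For $A\in{\mathscr A}$ the module ${\mathcal T}_s P_0(A)=P_0(\wm{A}{s})\oplus P_0(\wp{A}{s})$ is non-zero if and only if $A_e\in\{\wm{A}{s},\wp{A}{s}\}$, i.e. if and only if $\w{A}{s}=A_{s,e}$, which happens exactly for $A\in\{A_e, A_s\}$. In both cases one summand vanishes, so
\begin{equation*}
{\mathcal T}_s P_0(A_e)\cong{\mathcal T}_s P_0(A_s)\cong S^{\emptyset}_k.
\end{equation*}
Hence ${\mathcal T}_s P_0$ is supported precisely on the two alcoves adjacent to the wall $A_{s,e}$, and the submodules ${\mathcal T}_s P_0(A,\beta)$ are obtained from the case distinction above.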
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{article1.pdf}
\caption{Example of an alcove pattern, here of type $\tilde{A}_2$}
\label{fig:cases}
\flushleft The color key of the three different cases:
\begin{enumerate}
\item \includegraphics{artcp1.pdf}\qquad $\beta \uparrow \w{A}{s}=\w{A}{s}, A= \wm{A}{s}$,
\item \includegraphics{artcp2.pdf}\qquad $\beta \uparrow \w{A}{s}=\w{A}{s}, A= \wp{A}{s}$,
\item \includegraphics{artcp3.pdf}\qquad $\beta \uparrow \w{A}{s}\neq \w{A}{s}$ .
\end{enumerate}
\end{figure}
For an integer $l$ we define the \emph{shift functor}
\begin{equation*}
\{l\}\colon {\mathcal K}_k \to {\mathcal K}_k
\end{equation*}
to be the functor that shifts an object $M$ in all components $M(A)$ and $M(A,\beta)$ by $l$. Here, the homogeneous elements of degree $n$ of an $\{l\}$-shifted graded $S^{\emptyset}_k$- resp. $S^{\beta}_k$-module are given by
\begin{equation*}
M\{l\}_{n}=M_{n-l}.
\end{equation*}
The shift functor is certainly not superfluous. Indeed, each $A$-component separately is isomorphic to the $\{-2\}$-shifted component itself, where the isomorphism is given by multiplication with an $\alpha \in R^+$, which is a unit in $S^{\emptyset}_k$. But this does not yield a global isomorphism on the whole object in general, as $\alpha$ is not invertible in $S^{\alpha}_k$.
\subsubsection{Special objects}
An object $M \in {\mathcal K}_k$ is called \emph{special} if it is isomorphic to a direct summand of a direct sum of objects of the form
\begin{equation}\label{eq:bottsamelson}
{\mathcal T}_{s_r}\circ \ldots\circ {\mathcal T}_{s_1}(P_0)\{n\},
\end{equation}
where $s_1,\ldots, s_r$ is a sequence in $\widehat{{\mathcal S}}$ and $n\in {\mathbb Z}$. \emph{Bott-Samelson objects} are objects of the form as in \eqref{eq:bottsamelson}.
\begin{definition}
The full subcategory of ${\mathcal K}_k$ given by special objects is denoted by ${\mathcal M}_k$.
\end{definition}
\begin{remark}
We will verify later on that $Q_0$ appears as a direct summand of ${\mathcal T}_{s_j}\circ\ldots\circ{\mathcal T}_{s_1}(P_0)$ where $w_0=s_1\ldots s_j$ is a reduced expression. Since the proof needs further results, we postpone it (see below, Proposition \ref{q0summand}).
Hence we can define a full subcategory ${\mathcal M}_k^{\circ}\subset{\mathcal M}_k$ consisting of objects isomorphic to a direct summand of a direct sum of objects of the form
\begin{equation}\label{bottsamlike}
{\mathcal T}_{s_r}\circ\ldots\circ{\mathcal T}_{s_1}(Q_0)\{n\},
\end{equation}
where $s_1,\ldots, s_r$ is a sequence in $\widehat{{\mathcal S}}$ and $n\in {\mathbb Z}$. Objects of the form as in \eqref{bottsamlike} are called \emph{Bott-Samelson like objects}.
\end{remark}
\subsection{Elementary properties of special objects}\label{presprop}
A common approach to prove a property of special objects in ${\mathcal K}_k$ is to first show the validity of this property for the unit object $P_0$ and then to verify that it is preserved by applying the translation functors. If the property is moreover unaffected by taking direct sums and direct summands, we are done.
This is how one obtains
\begin{lemma}\label{fingen:torfre}
Let $M$ be a special object. For all $A\in{\mathscr A}$ and $\beta \in R^{+}$:
\begin{enumerate}
\item[a)] $M(A)$ and $M(A,\beta)$ are finitely generated $S^{\emptyset}_k$- resp. $S^{\beta}_k$-modules.
\item[b)] $M(A)$ and $M(A,\beta)$ are free $S^{\emptyset}_k$- resp. $S^{\beta}_k$-modules.
\end{enumerate}
\end{lemma}
\begin{lemma} \label{basics:dense}
Let $M \in {\mathcal M}_k$, $A \in {\mathscr A}$ and $\beta \in R^{+}$. The inclusion
\begin{equation*}
\iota_{M(A,\beta)}\colon M(A,\beta) \hookrightarrow M(A) \oplus M (\beta \uparrow A)
\end{equation*}
induces the following isomorphism of $S_k^{\emptyset}$-modules \begin{align}\label{basics:transdense}
\widehat{\iota}_{M(A,\beta)} \colon M(A,\beta) \otimes_{S_k^{\beta}} S^{\emptyset}_k &\to M(A) \oplus M (\beta \uparrow A)\\
\nonumber \sum m\otimes \lambda &\mapsto \sum \iota_{M(A,\beta)} (m)\cdot \lambda.
\end{align}
It is of degree $0$.
\end{lemma}
\begin{proof}
This is true for the unit object $P_0$. For all $\beta \in R^+$
\begin{align*}
&\widehat{\iota}_{P_0(A_e,\beta)}\colon S^{\beta}_k\otimes_{S^{\beta}_k}S^{\emptyset}_k \to S^{\emptyset}_k \text{ resp. }\\
&\widehat{\iota}_{P_0(\beta \downarrow A_e,\beta)}\colon S^{\beta}_k\otimes_{S^{\beta}_k}S^{\emptyset}_k \to S^{\emptyset}_k
\end{align*}
is surjective, since $\widehat{\iota}(1\otimes s)=s$ for all $s\in S_k^{\emptyset}$.
Applying ${\mathcal T}_s$ inductively yields only objects fulfilling \eqref{basics:transdense}. Assume $M\in {\mathcal M}_k$ to fulfill \eqref{basics:transdense} and let $s\in \widehat{{\mathcal S}}$. Then the object ${\mathcal T}_s M$ fulfills \eqref{basics:transdense}, too:
Let $\beta \in R^{+}$ and $A \in {\mathscr A}$ (cf. Figure \ref{fig:cases}). In case $\beta \uparrow \w{A}{s}\neq \w{A}{s}$, the property is obvious with
\begin{equation*}
\iota_{{\mathcal T}_s M (A,\beta)} =\iota_{M(\wm{A}{s},\beta)}\oplus\iota_{M(\wp{A}{s},\beta)}.
\end{equation*}
Let $\beta \uparrow \w{A}{s}= \w{A}{s}$, then
\begin{equation*}
\iota_{{\mathcal T}_s M (A,\beta)} = \begin{cases}
\iota_{M( A,\beta)}\oplus \iota_{M( A,\beta)}& \text{if }A=\wm{A}{s}, \\
\iota_{M(\beta\downarrow A,\beta)} \oplus \iota_{M(\beta\uparrow A,\beta)} & \text{if }A=\wp{A}{s}.
\end{cases}\\
\end{equation*}
And furthermore if $\widehat{\iota}_{M(A,\beta)}$ is surjective then
\begin{equation*}
\widehat{\iota}_{M(A,\beta)}(\beta M(A,\beta) \otimes_{S^{\beta}_k} S^{\emptyset}_k) = M(A)\oplus M(A,\beta),
\end{equation*}
too, since $m\otimes \lambda=\beta m \otimes \beta^{-1}\lambda$ in $M(A,\beta) \otimes_{S^{\beta}_k} S^{\emptyset}_k$.
\end{proof}
The $S^{\beta}_k$-modules $M(A,\beta)$ and $M(\beta \downarrow A, \beta)$ have in common that both partially map into the $S_k^{\emptyset}$-module $M(A)$. We would like to understand this relation better. Let $M\in {\mathcal K}_k$, $A\in{\mathscr A}$ and $\beta \in R^+$. By abuse of notation we will write
\begin{equation*}
M(A,\beta) \cap M(A)= M(A,\beta) \cap [M(A) \oplus 0].
\end{equation*}
For a direct sum $M(A) \oplus M(A')$ we denote the projection to $M(A)$ by ${\operatorname{pr}}_A$.
\begin{lemma}\label{verma:imagel}
Let $M \in {\mathcal M}_k$ be a special object, $A\in {\mathscr A}$ and $\beta \in R^+$. Then
\begin{equation}\label{verma:imagedown}
{\operatorname{pr}}_A [M(A,\beta)]
=M(\beta \downarrow A, \beta)\cap M(A)
\end{equation}
and
\begin{equation}\label{verma:imageup}
\beta \, {\operatorname{pr}}_A [M(\beta \downarrow A,\beta)]
\subseteq M(A, \beta)\cap M(A).
\end{equation}
If $M\in {\mathcal M}_k^{\circ}$, then the above inclusion is an equality.
\end{lemma}
\begin{proof}
As usual, we firstly examine the object $P_0$. We only need to consider the case when $A=A_e$ because otherwise $P_0(A)$ is the trivial module. One immediately gets
\begin{equation*}
{\operatorname{pr}}_{A_e}[P_0(A_e,\beta)]=S^{\beta}_k
=P_0(\beta \downarrow A_e, \beta)\cap P_0(A_e)
\end{equation*}
and
\begin{equation*}
\beta \, {\operatorname{pr}}_{A_e} [P_0(\beta \downarrow A_e,\beta)]=\beta S^{\beta}_k
\subset S^{\beta}_k = P_0(A_e, \beta)\cap P_0(A_e).
\end{equation*}
Let now $A$ correspond to an element in the finite Weyl group ${\mathcal W}$. Then
\begin{align}\label{verma:q0}
\beta\, {\operatorname{pr}}_A &[Q_0(\beta\downarrow A,\beta)]=\beta S^{\beta}_k\nonumber\\
&=
\begin{cases} \beta S^{\beta}_k \cap S^{\emptyset}_k & \text{if }A\in \beta\uparrow {\mathscr A}_{\beta}^-, \\
\{(\beta x+y,y)\mid x,y\in S_k^{\beta}\}\cap [S_k^{\emptyset}\oplus 0] & \text{if }A\in {\mathscr A}_{\beta}^-,
\end{cases}\\
&= Q_0(A,\beta)\cap Q_0(A)\nonumber.
\end{align}
Assume now that an object $M\in {\mathcal K}_k$ fulfills \eqref{verma:imagedown} for all $\beta \in R^+$ and all $A\in {\mathscr A}$, and let $s\in \widehat{{\mathcal S}}$ be an arbitrary simple affine reflection. Then ${\mathcal T}_s M$ fulfills \eqref{verma:imagedown} as well. We distinguish two cases. In the first case, $\w{A}{s}$ is of type $\beta_1\neq\beta$ and consequently $\w{(\beta \downarrow A)}{s}$ is of a type $\beta_2 \neq \beta$. Then
\begin{equation}\label{verma:case3}
{\operatorname{pr}}_{A} [{\mathcal T}_s M(A,\beta)]={\operatorname{pr}}_{\wm{A}{s}} [M(\wm{A}{s},\beta)] \oplus {\operatorname{pr}}_{\wp{A}{s}} [M(\wp{A}{s},\beta)].
\end{equation}
It is known by induction hypothesis that
\begin{equation*}
{\operatorname{pr}}_{\wm{A}{s}} [M(\wm{A}{s},\beta)] = M(\beta\downarrow\wm{A}{s},\beta)\cap M(\wm{A}{s})
\end{equation*}
and
\begin{equation*}
{\operatorname{pr}}_{\wp{A}{s}} [M(\wp{A}{s},\beta)]= M(\beta\downarrow\wp{A}{s},\beta)\cap M(\wp{A}{s})
\end{equation*}
and Lemma \ref{wallcomb}.b) provides
\begin{equation*}
\{\beta\downarrow\wm{A}{s},\beta\downarrow\wp{A}{s}\}=\{\wm{(\beta \downarrow A)}{s}, \wp{(\beta \downarrow A)}{s}\}.
\end{equation*}
Hence, \eqref{verma:case3} equals
\begin{equation*}
{\mathcal T}_s M(\beta \downarrow A,\beta)\cap {\mathcal T}_s M(A).
\end{equation*}
In the second case, both $\w{A}{s}$ and $\w{(\beta \downarrow A)}{s}$ are of type $\beta$. If $A$ is a \emph{negative alcove with respect to $s$}, i.e. $A=\wm{A}{s}$, then $\beta \downarrow A$ is positive with respect to $s$ and
\begin{equation*}
{\operatorname{pr}}_{A} [{\mathcal T}_s M (A, \beta)] = M(A,\beta)
\end{equation*}
is equal to
\begin{equation*}
{\mathcal T}_s M (\beta \downarrow A, \beta)\cap {\mathcal T}_s M(A) =M(A,\beta).
\end{equation*}
If $A$ is a positive alcove, i.e. $A=\wp{A}{s}$, then $\beta \downarrow A$ is negative and thereby
\begin{equation*}
{\operatorname{pr}}_A[{\mathcal T}_s M (A, \beta)]=\beta M(\beta \downarrow A,\beta)
\end{equation*}
while
\begin{equation*}
\{(\beta x+y,y)\mid x,y \in M(\beta \downarrow A,\beta)\}\cap [0\oplus ({\mathcal T}_s M)(A)]=\beta M(\beta \downarrow A,\beta).
\end{equation*}
If $M\in {\mathcal K}_k$ fulfills the condition in \eqref{verma:imageup} for all $\beta \in R^+$ and all $A\in {\mathscr A}$, then (\ref{verma:imageup}) is also true for the object ${\mathcal T}_s M$ where $s\in \widehat{{\mathcal S}}$ is an arbitrary simple affine reflection. To prove this statement one calculates
\begin{align*}
\beta \, {\operatorname{pr}}_A &[{\mathcal T}_s M(\beta \downarrow A,\beta)]\\
&=\beta
\begin{cases}
M(\beta\uparrow(\beta\downarrow A),\beta)=M(A,\beta)&\text{if }\beta \uparrow \w{A}{s}=\w{A}{s}\\
&\quad\text{and }A=\wm{A}{s}\\
M(\beta\downarrow A, \beta)&\text{if }\beta \uparrow\w{A}{s}=\w{A}{s}\\
&\quad\text{and }A=\wp{A}{s}.\\
\end{cases}
\end{align*}
This immediately coincides with
\begin{equation*}
{\mathcal T}_s M(A,\beta) \cap {\mathcal T}_s M(A)
\end{equation*}
for $\beta \uparrow \w{A}{s}=\w{A}{s}$.
For $\beta \uparrow \w{A}{s}\neq \w{A}{s}$ it is
\begin{align*}
\beta \, &{\operatorname{pr}}_A [{\mathcal T}_s M(\beta \downarrow A,\beta)]\\
&=
\beta \, {\operatorname{pr}}_{\wm{A}{s}}[M(\beta \downarrow\wm{A}{s},\beta)] \oplus \beta \, {\operatorname{pr}}_{\wp{A}{s}}[M(\beta \downarrow\wp{A}{s},\beta)].
\end{align*}
But by induction hypothesis this is included in
\begin{equation*}
M(\wm{A}{s},\beta)\cap M(\wm{A}{s})\oplus M(\wp{A}{s},\beta)\cap M(\wp{A}{s})={\mathcal T}_s M(A,\beta) \cap {\mathcal T}_s M(A).
\end{equation*}
If the induction base is $Q_0$, we even obtain equality as in \eqref{verma:q0}.
Both properties are also preserved by taking direct sums, direct summands and by shifting.
\end{proof}
\begin{corollary}\label{projtimesbeta}
Let $(m_1,m_2) \in M(A)\oplus M(\beta \uparrow A)$ such that $(m_1, m_2)\in M(A,\beta)$. Then $(\beta m_1, 0) \in M(A,\beta) \text{ and } (0, \beta m_2) \in M(A,\beta)$.
\end{corollary}
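A sketch of the argument, using both parts of Lemma \ref{verma:imagel} for a special object $M$ as in that lemma: by \eqref{verma:imagedown}, the first component $m_1={\operatorname{pr}}_A(m_1,m_2)$ lies in $M(\beta \downarrow A,\beta)\cap M(A)\subseteq {\operatorname{pr}}_A[M(\beta \downarrow A,\beta)]$, so \eqref{verma:imageup} yields $\beta m_1 \in M(A,\beta)\cap M(A)$, i.e. $(\beta m_1,0)\in M(A,\beta)$. But then also
\begin{equation*}
(0,\beta m_2)=\beta (m_1,m_2)-(\beta m_1,0) \in M(A,\beta).
\end{equation*}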
\section{The Duality}
\begin{definition}
Let $M \in {\mathcal M}_k$ with
\begin{equation*}
M=(\{M(A)\}_{A\in {\mathscr A}}, \{M(A,\beta)\}_{A\in {\mathscr A}, \beta \in R^{+}}).
\end{equation*}
We define the \emph{duality $\cD$} in the following way:
\begin{align*}
\cD M (A) &= {\operatorname{Hom}}^{\bullet}_{S^{\emptyset}_k}(M(A),S^{\emptyset}_k),\\
\cD M (A,\beta) &= {\operatorname{Hom}}^{\bullet}_{S^{\beta}_k}(M(A,\beta),S^{\beta}_k).
\end{align*}
\end{definition}
Let $B$ and $C$ be two graded $S^{\emptyset}_k$-modules. A graded homomorphism $\varphi\colon B \to C$ is of degree $l$ if
\begin{equation*}
\varphi(B_n) \subseteq C_{l+n}
\end{equation*}
for all $n\in {\mathbb Z}$.
The module of all homomorphisms of degree $l$ is denoted by
\begin{equation*}
{\operatorname{Hom}}_{S^{\emptyset}_k}^l (B,C).
\end{equation*}
Following Lemma \ref{fingen:torfre} we only consider finitely generated $S^{\emptyset}_k$-modules, and hence each module $\cD M(A)$ carries a natural ${\mathbb Z}$-grading, namely
\begin{equation*}
\cD M(A)_l= {\operatorname{Hom}}_{S^{\emptyset}_k}^l (M(A),S^{\emptyset}_k).
\end{equation*}
In the following, we want to describe how $\cD M(A,\beta)$ embeds into $\cD M(A)\oplus \cD M(\beta \uparrow A)$. As each $M(A,\beta)$ is torsion free as seen in Lemma \ref{fingen:torfre}, there is the embedding
\begin{equation*}
-\otimes_{S^{\beta}_k}1 \colon {\operatorname{Hom}}_{S^{\beta}_k}(M(A,\beta),S^{\beta}_k) \hookrightarrow {\operatorname{Hom}}_{S^{\beta}_k}(M(A,\beta),S^{\beta}_k)\otimes_{S^{\beta}_k} S^{\emptyset}_k
\end{equation*}
where the right-hand side is isomorphic to
\begin{equation*}
{\operatorname{Hom}}_{S^{\beta}_k}(M(A,\beta), S^{\emptyset}_k)
\end{equation*}
as $M(A,\beta)$ is finitely generated as an $S_k^{\beta}$-module. This yields an isomorphism to
\begin{equation}\label{rightside}
{\operatorname{Hom}}_{S^{\emptyset}_k}(M(A,\beta)\otimes_{S^{\beta}_k}S^{\emptyset}_k, S^{\emptyset}_k).
\end{equation}
But $M \in {\mathcal M}_k$ and hence by Lemma \ref{basics:dense}
\begin{equation*}
M(A,\beta)\otimes_{S^{\beta}_k}S^{\emptyset}_k \cong M(A) \oplus M(\beta \uparrow A).
\end{equation*}
So the module in \eqref{rightside} in turn is isomorphic to
\begin{equation*}
{\operatorname{Hom}}_{S^{\emptyset}_k}(M(A) \oplus M (\beta \uparrow A),S^{\emptyset}_k).
\end{equation*}
Note that $\varphi \in \cD M(A,\beta)$ and $\varphi \otimes 1 \in \cD M(A)\oplus \cD M(\beta \uparrow A)$ coincide on $M(A,\beta)$.
All homomorphisms above are of degree $0$, so $\cD M(A,\beta)$ embeds into $\cD M(A) \oplus \cD M(\beta\uparrow A)$ as a graded submodule. This verifies that for a special object $M \in {\mathcal M}_k$ the dual object $\cD M$ is indeed an object in ${\mathcal K}_k$.
In addition the following equations hold for graded $S^{\emptyset}_k$-modules $B$ and $C$:
\begin{equation*}
{\operatorname{Hom}}_{S^{\emptyset}_k}^l (B,C)={\operatorname{Hom}}_{S^{\emptyset}_k}^0 (B\{l\},C)={\operatorname{Hom}}_{S^{\emptyset}_k}^0 (B,C\{-l\}),
\end{equation*}
and the same holds for $S^{\beta}_k$-modules.
Hence, the functors
\begin{equation}\label{dualshift}
\cD \circ \{n\} \simeq \{-n\} \circ \cD
\end{equation}
between ${\mathcal M}_k$ and ${\mathcal K}_k$ are isomorphic.
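As a simple illustration of \eqref{dualshift} (under the shift convention $(M\{n\})_i = M_{n+i}$; the opposite convention merely interchanges the roles of $n$ and $-n$), consider the free module $S^{\emptyset}_k$:
\begin{equation*}
\cD (S^{\emptyset}_k\{n\})_l={\operatorname{Hom}}^{l}_{S^{\emptyset}_k}(S^{\emptyset}_k\{n\},S^{\emptyset}_k)\cong (S^{\emptyset}_k)_{l-n}=(S^{\emptyset}_k\{-n\})_l,
\end{equation*}
since a homomorphism of degree $l$ is determined by the image of the generator, which lives in degree $-n$ and is sent to an element of degree $l-n$. Hence $\cD(S^{\emptyset}_k\{n\})\cong (\cD S^{\emptyset}_k)\{-n\}$.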
\subsection{The relation between $\cD$ and ${\mathcal T}_s$}
To understand the duality $\cD$ in ${\mathcal K}_k$ better, we need to know the relation between the translation functors ${\mathcal T}_s$ and $\cD$ besides the already seen relation of $\cD$ to $\{n\}$ in \eqref{dualshift}.
To do so, we need to introduce a further set of \emph{dual} translation functors $\{{\mathcal T}^{\vee}_{s}\mid s\in \widehat{{\mathcal S}}\}$, which differs from $\{{\mathcal T}_{s}\mid s\in \widehat{{\mathcal S}}\}$ only by interchanging the roles of up- and down-arrows. More precisely, this affects the case when $\beta \uparrow \w{A}{s}=\w{A}{s}$ and $A=\wp{A}{s}$, where now $M(\beta\uparrow A,\beta)$ is shifted in degree instead of $M(\beta \downarrow A,\beta)$. Concretely:
\begin{align*}
{\mathcal T}^{\vee}_s M (A) &= M(\wm{A}{s}) \oplus M(\wp{A}{s}) \text{ and }\\
{\mathcal T}^{\vee}_s M (A,\beta) & =
\begin{cases}
\{(\beta x + y,y)\mid x,y \in M(A,\beta)\} & \text{if $\beta \uparrow \w{A}{s}=\w{A}{s}$}\\
&\text{ and $A=\wm{A}{s}$,}\\
M(\beta \downarrow A,\beta) \oplus \beta M(\beta \uparrow A,\beta) & \text{if $\beta \uparrow \w{A}{s}=\w{A}{s}$}\\
&\text{ and $A=\wp{A}{s}$,}\\
M(\wm{A}{s},\beta) \oplus M(\wp{A}{s},\beta)&\text{if }\beta \uparrow \w{A}{s}\neq\w{A}{s}.
\end{cases}
\end{align*}
Many of the properties preserved by ${\mathcal T}_s$ are preserved by ${\mathcal T}^{\vee}_s$ as well. This is in particular true for the properties in Lemmas \ref{fingen:torfre} and \ref{basics:dense}.
\begin{proposition}\label{theo:dualtrans}
Let $s \in \widehat{{\mathcal S}}$. Then the functor
\begin{equation*}
\cD \circ {\mathcal T}_{s}\colon {\mathcal M}_k \to {\mathcal K}_k
\end{equation*}
is isomorphic to
\begin{equation*}
\{-2\}\circ {\mathcal T}^{\vee}_{s} \circ \cD \colon {\mathcal M}_k \to {\mathcal K}_k.
\end{equation*}
\end{proposition}
\begin{proof}
We will give the natural transformations between
\begin{equation*}
\cD \circ {\mathcal T}_{s}\overset{\tau}{\underset{\sigma}{\rightleftarrows}} \{-2\}\circ{\mathcal T}^{\vee}_{s} \circ \cD
\end{equation*}
explicitly. For $A\in {\mathscr A}$ define $\alpha_s (A):={\operatorname{sign}}(A) \alpha(\w{A}{s})$ where $\alpha(\w{A}{s})\in R^+$ is the type of $\w{A}{s}$ (i.e. $\alpha(\w{A}{s}) \uparrow \w{A}{s}=\w{A}{s}$) and
\begin{equation*}
{\operatorname{sign}}(A)=
\begin{cases}
1 &\text{ if }A=\wp{A}{s},\\
-1 & \text{ if }A=\wm{A}{s}.
\end{cases}
\end{equation*}
Keep in mind that
\begin{align*}
[\cD \circ {\mathcal T}_{s}] M(A) &= {\operatorname{Hom}}_{S_k^{\emptyset}}({\mathcal T}_s M(A), S_k^{\emptyset})\\
&= {\operatorname{Hom}}_{S_k^{\emptyset}}(M(\wm{A}{s}) \oplus M(\wp{A}{s}), S_k^{\emptyset})
\end{align*}
and
\begin{align*}
[\cD \circ {\mathcal T}_{s}] M(A,\beta) &= {\operatorname{Hom}}_{S_k^{\beta}}({\mathcal T}_s M(A,\beta), S_k^{\beta})\\
&= \begin{cases}
&{\operatorname{Hom}}_{S_k^{\beta}}(\{(\beta x+y,y)\mid x,y \in M(A,\beta)\}, S_k^{\beta})\\
&\qquad\qquad\text{if $\beta \uparrow \w{A}{s}=\w{A}{s}$ and $A=\wm{A}{s}$,}\\
&{\operatorname{Hom}}_{S_k^{\beta}}(\beta M(\beta \downarrow A,\beta)\oplus M(\beta \uparrow A, \beta), S_k^{\beta}) \\
&\qquad\qquad\text{if $\beta \uparrow \w{A}{s}=\w{A}{s}$ and $A=\wp{A}{s}$,}\\
&{\operatorname{Hom}}_{S_k^{\beta}}(M(\wm{A}{s},\beta) \oplus M(\wp{A}{s},\beta), S_k^{\beta})\\
&\qquad\qquad\text{if } \beta \uparrow \w{A}{s}\neq\w{A}{s}.
\end{cases}
\end{align*}
Similarly, we obtain
\begin{align*}
[\{-2\} \circ {\mathcal T}_{s}^{\vee}\circ \cD] M(A) &=\cD M(\wm{A}{s})\{-2\}\oplus \cD M(\wp{A}{s})\{-2\}\\
&= {\operatorname{Hom}}_{S_k^{\emptyset}}(M(\wm{A}{s}), S_k^{\emptyset})\{-2\}\oplus {\operatorname{Hom}}_{S_k^{\emptyset}}(M(\wp{A}{s}), S_k^{\emptyset})\{-2\}
\end{align*}
and
\begin{align*}
&[\{-2\} \circ {\mathcal T}_{s}^{\vee}\circ \cD] M(A,\beta) \\&= \begin{cases}
&\{(\beta \varphi+\psi,\psi)\mid \varphi, \psi \in \cD M(A,\beta)\}\{-2\}\\
&\qquad\qquad\text{if } \beta \uparrow \w{A}{s}=\w{A}{s}, A=\wm{A}{s},\\
&\cD M(\beta \downarrow A,\beta)\{-2\}\oplus \beta \cD M(\beta \uparrow A, \beta)\{-2\}\\
&\qquad\qquad\text{if } \beta \uparrow \w{A}{s}=\w{A}{s}, A=\wp{A}{s},\\
&\cD M(\wm{A}{s},\beta)\{-2\} \oplus \cD M(\wp{A}{s},\beta)\{-2\}\\
&\qquad\qquad\text{if } \beta \uparrow \w{A}{s}\neq\w{A}{s}.
\end{cases}
\end{align*}
For any special object $M$ define the natural transformation $\tau^M=(\tau^M_A)_{A\in{\mathscr A}}$ resp. $\sigma^M=(\sigma^M_A)_{A\in{\mathscr A}}$ as follows.
We set
\begin{align*}
&\tau^M_A\colon &[\cD \circ {\mathcal T}_{s}] M(A) &\to [\{-2\}\circ{\mathcal T}^{\vee}_{s} \circ \cD ]M(A),\\
&&\varphi &\mapsto \alpha_s(A)(\varphi|_{M(\wm{A}{s})}\oplus \varphi|_{M(\wp{A}{s})}),\\
&\text{ and }&&\\
&\sigma^M_A\colon &[\{-2\}\circ{\mathcal T}^{\vee}_{s} \circ \cD ]M(A)&\to [\cD \circ {\mathcal T}_{s}]M(A),\\
&&(\varphi_{M(\wm{A}{s})},\varphi_{M(\wp{A}{s})}) &\mapsto \alpha_s(A)^{-1}(\varphi_{M(\wm{A}{s})} + \varphi_{M(\wp{A}{s})}).
\end{align*}
Each $\tau^M$ is of total degree $2$ and each $\sigma^M$ is of total degree $-2$ and hence after the $\{-2\}$-shift of ${\mathcal T}_s^{\vee}\circ \cD$ both are of degree $0$. Under the assumption that $\tau$ and $\sigma$ indeed are natural transformations and by the fact that $\tau^M_A\circ \sigma^M_A={\operatorname{id}}_{[\{-2\}\circ{\mathcal T}_s^{\vee}\circ \cD]M(A)}$ and $\sigma^M_A\circ \tau^M_A={\operatorname{id}}_{[\cD\circ {\mathcal T}_s]M(A)}$ for each $A\in {\mathscr A}$ and $M \in {\mathcal M}_k$, we get the claim.
In order to complete the proof the following must be verified:
\begin{align*}
&\tau^M_A\oplus \tau^M_{\beta \uparrow A} ([\cD \circ {\mathcal T}_{s}]M(A,\beta))\subseteq [\{-2\}\circ{\mathcal T}^{\vee}_{s} \circ \cD]M(A,\beta) \quad\text{ and }\\
&\sigma^M_A\oplus \sigma^M_{\beta \uparrow A} ([\{-2\}\circ{\mathcal T}^{\vee}_{s} \circ \cD ]M(A,\beta))\subseteq [\cD \circ {\mathcal T}_{s} ]M(A,\beta)
\end{align*}
for any $\beta \in R^{+}$ and $A \in {\mathscr A}$.
Consider the case when $\beta \uparrow \w{A}{s} =\w{A}{s}$ and $A=\wm{A}{s}$ (cf. Figure \ref{fig:cases}). In this case, we get $\alpha_s(A)=-\beta$ and $\alpha_s(\beta \uparrow A)=\beta$. Since $M(A,\beta)$ is free over $S_k^{\beta}$, it is finitely generated by $g_1, \ldots, g_j$, with dual homomorphisms $g_1^*,\ldots, g_j^*$. Then the generators of ${\mathcal T}_s M (A,\beta)$ are
\begin{equation*}
(g_1,g_1),(\beta g_1,0),\ldots,(g_j,g_j),(\beta g_j,0).
\end{equation*}
Hence
\begin{equation*}
(0,g_1^*),(\beta^{-1}g_1^*,-\beta^{-1}g_1^*),\ldots,(0,g_j^*),(\beta^{-1}g_j^*,-\beta^{-1}g_j^*)
\end{equation*}
generate $[\cD \circ {\mathcal T}_s] M(A,\beta)$ which are mapped in turn by $\tau^M_A\oplus\tau^M_{\beta \uparrow A}$ to
\begin{equation*}
(0,\beta g_1^*),(-g_1^*,-g_1^*),\ldots,(0,\beta g_j^*), (-g_j^*,-g_j^*).
\end{equation*}
But this is a generating set of $[\{-2\}\circ{\mathcal T}_s^{\vee}\circ \cD]M(A,\beta)$.
Next, we study the case $\beta \uparrow \w{A}{s}=\w{A}{s},A=\wp{A}{s}$. Here, $\alpha_s(A)=\beta$ and $\alpha_s(\beta \uparrow A)=-\beta$. In this case $[\cD \circ {\mathcal T}_{s}]M (A,\beta)$ is
\begin{equation*}
{\operatorname{Hom}}_{S^{\beta}_k}(\beta M(\beta \downarrow A,\beta), S^{\beta}_k)\oplus{\operatorname{Hom}}_{S^{\beta}_k}(M(\beta \uparrow A,\beta), S^{\beta}_k).
\end{equation*}
For an element $\varphi \in [\cD\circ {\mathcal T}_{s}]M (A,\beta)$ we write $\varphi=(\varphi_1,\varphi_2)$ with $\varphi_1\in \cD(\beta M(\beta \downarrow A,\beta))$ and $\varphi_2 \in \cD(M(\beta \uparrow A,\beta))$. By $\tilde{\varphi}_1\colon M(\beta \downarrow A,\beta)\to S^{\beta}_k$ we denote the homomorphism $\tilde{\varphi}_1(m)=\varphi_1(\beta m)$. Thus
\begin{equation*}
\varphi=(\varphi_1,\varphi_2)\hookrightarrow (\tilde{\varphi}_1\otimes \beta^{-1},\varphi_2 \otimes 1)\overset{\tau}{\mapsto}(\tilde{\varphi}_1\otimes 1,-\varphi_2\otimes \beta),
\end{equation*}
but this corresponds to the element $(\tilde{\varphi}_1,\beta \varphi_2)$ in $[\{-2\}\circ{\mathcal T}^{\vee}_s \circ \cD ]M(A,\beta)$. In both cases,
the opposite direction via $\sigma$ works analogously.
The remaining case is $\beta \uparrow \w{A}{s}\neq \w{A}{s}$. In this case, $\alpha_s(A)\notin \{-\beta, \beta\}$, but at least $\alpha_s(\beta \uparrow A) =\alpha_s(A) + z\beta$ for some $z \in {\mathbb Z}$. Let $\varphi \in \cD(M(\w{A}{s}_{\pm},\beta))$ and $(m_1,m_2) \in M(\w{A}{s}_{\pm},\beta)$. With Corollary \ref{projtimesbeta} we get $(0,\beta m_2) \in M(\w{A}{s}_{\pm},\beta)$ and then
\begin{align*}
&\tau_A\oplus \tau_{\beta \uparrow A}(\varphi \otimes 1)(m_1, m_2)\\
&\qquad = \alpha_s(A) (\varphi \otimes 1)(m_1,m_2) + z (\varphi \otimes 1)(0, \beta m_2) \in S^{\beta}_k.
\end{align*}
Hence $\tau_A\oplus \tau_{\beta \uparrow A}(\varphi \otimes 1)\in \cD(M(\w{A}{s}_{\pm},\beta))$.
Because of $\alpha_s(A)\neq \pm \beta \neq \alpha_s(\beta \uparrow A)$, the elements $\alpha_s(A)$ and $\alpha_s(\beta \uparrow A)$ are not only units in $S^{\emptyset}_k$ but also in $S^{\beta}_k$; therefore, multiplication by $\alpha_s(A)^{-1}$ resp. $\alpha_s(\beta \uparrow A)^{-1}$ is multiplication by an element of $S^{\beta}_k$. Hence,
\begin{align*}
&\sigma_A\oplus \sigma_{\beta\uparrow A}(\varphi \otimes 1)(m_1, m_2)\\
&\qquad= \alpha_s(A)^{-1}(\varphi \otimes 1)(m_1,m_2) - z(\alpha_s(\beta \uparrow A))^{-1} (\varphi \otimes 1)(0, \beta m_2) \in S^{\beta}_k
\end{align*}
and $\sigma_A\oplus \sigma_{\beta \uparrow A}(\varphi \otimes 1)\in \cD(M(\w{A}{s}_{\pm},\beta))$.
\end{proof}
\subsection{The tilting functor ${\mathcal C}_{w_0}$}
So far we have the following situation. $\{{\mathcal T}_{s}\mid s\in \widehat{{\mathcal S}}\}$ is the set of translation functors on ${\mathcal K}_k$, which spans a subcategory ${\mathcal M}_k$. Equally, the set $\{{\mathcal T}^{\vee}_{s}\mid s\in \widehat{{\mathcal S}}\}$ of so-called dual translation functors spans a subcategory ${\mathcal M}^{\vee}_k$ of similar type. The duality interchanges objects of those two subcategories.
Our goal is to relate those two categories and to find a concept of self-duality of special objects.
${\mathcal T}_s$ and ${\mathcal T}^{\vee}_s$ differ essentially in the order arising from $\uparrow$ resp. $\downarrow$. This gives reason to define a covariant functor that tilts this order.
Let $w\in {\mathcal W}$. We want to distinguish the two situations where $w(\beta)\in R^{+}$ and $w(\beta) \in R^{-}$ for a positive root $\beta$. Therefore, recall the definition of
\begin{equation*}
w(\beta)^+=\begin{cases}
w(\beta) & \text{if }w(\beta) \in R^{+}, \\
-w(\beta) & \text{if }w(\beta) \in R^{-}.
\end{cases}
\end{equation*}
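For instance, for a simple reflection $w=s_\alpha$ one has $s_\alpha(\alpha)=-\alpha\in R^{-}$ and hence $s_\alpha(\alpha)^+=\alpha$, while $s_\alpha$ permutes the remaining positive roots, so $s_\alpha(\beta)^+=s_\alpha(\beta)$ for all $\beta\in R^{+}\setminus\{\alpha\}$. For the longest element $w_0$ we have $w_0(\beta)\in R^{-}$ and thus $w_0(\beta)^+=-w_0(\beta)$ for every $\beta\in R^{+}$.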
Furthermore, we will need to move from $S^{w(\beta)^+}_k$-modules to $S^{\beta}_k$-modules. This is done by the twist functor
\begin{equation*}
t_w \colon S^{w(\beta)^+}_k{\operatorname{-mod}} \to S^{\beta}_k{\operatorname{-mod}},
\end{equation*}
where the $S^{\beta}_k$-action on a module $t_w(M)$ is given by
\begin{equation*}
\forall s\in S^{\beta}_k,\, m\in t_w(M)=M: \ s.m=w^{-1}(s).m.
\end{equation*}
Here $w^{-1}$ is extended to $S^{\emptyset}_k$ with
\begin{align*}
w^{-1}\colon S^{\emptyset}_k &\to S^{\emptyset}_k\\
1 &\mapsto 1, \\
\alpha &\mapsto w^{-1}( \alpha).
\end{align*}
Let $N$ be an object in ${\mathcal K}_k$. The following defines a functor ${\mathcal C}_{w}\colon {\mathcal K}_k \to {\mathcal K}_k$ depending on an element $w\in {\mathcal W}$:
\begin{align*}
{\mathcal C}_{w} N (A) &=N(w.A),\\
{\mathcal C}_{w} N (A,\beta)&=
\begin{cases}
t_{w} N(w.(\beta \uparrow A), w(\beta)^+) & \text{if }w(\beta) \in R^{-}\\
t_{w} N(w.A, w(\beta)^+)& \text{if }w(\beta) \in R^{+}.
\end{cases}
\end{align*}
We will call ${\mathcal C}_{w}$ the \emph{tilting functor on $w$}.
The next step is to verify that ${\mathcal C}_{w}$ is well-defined. If $N$ is an object in ${\mathcal K}_k$, we need to make sure that ${\mathcal C}_w N (A,\beta)$ is an $S^{\beta}_k$-submodule of ${\mathcal C}_w N (A) \oplus {\mathcal C}_w N (\beta\uparrow A)$ for each $\beta \in R^{+}, A \in {\mathscr A}$. Let $w(\beta)\in R^{+}$. By definition
\begin{equation*}
{\mathcal C}_w N (A,\beta)=t_w N(w.A,w(\beta)^+).
\end{equation*}
As $N\in {\mathcal K}_k$, $N(w.A,w(\beta)^+)$ embeds into $N(w.A)\oplus N(w(\beta)^+\uparrow w.A)$.
By Lemma \ref{kipp:wb} the previous term equals
\begin{equation*}
N(w.A)\oplus N(w.(\beta\uparrow A)) = {\mathcal C}_w N (A) \oplus {\mathcal C}_w N(\beta \uparrow A).
\end{equation*}
The case where $w(\beta) \in R^{-}$ works similarly.
\begin{equation*}
{\mathcal C}_w N (A,\beta)=t_w N(w.(\beta \uparrow A),w(\beta)^+)
\end{equation*}
embeds into
\begin{equation*}
N(w.(\beta \uparrow A))\oplus N(w(\beta)^+\uparrow w.(\beta \uparrow A))
\end{equation*}
and by Lemma \ref{kipp:wb} this equals
\begin{equation*}
{\mathcal C}_w N(\beta \uparrow A) \oplus {\mathcal C}_w N(A).
\end{equation*}
Please notice that in this case the roles of $A$ and $\beta \uparrow A$ are exchanged.
The tilting functor would not be a functor in general if it were defined in the same way for the case $w(\beta) \in R^{-}$ as for the case $w(\beta) \in R^{+}$. As we have seen, this would violate the condition
\begin{equation*}
{\mathcal C}_w N (A,\beta)\hookrightarrow{\mathcal C}_w N (A) \oplus {\mathcal C}_w N
(\beta\uparrow A)
\end{equation*}
for some objects in ${\mathcal K}_k$.
From now on, we will restrict to the case $w=w_0$ where $w_0$ is the longest element of the finite Weyl group. We will denote ${\mathcal C}_{w_0}$ simply by ${\mathcal C}$. Please keep in mind that this implies $w_0(\beta) \in R^{-}$ for all $\beta \in R^+$.
\begin{proposition}\label{theo:kipptrans}
Let $s \in \widehat{{\mathcal S}}$. Then the functors
\begin{equation*}
{\mathcal C} \circ {\mathcal T}^{\vee}_s \colon {\mathcal K}_k \to {\mathcal K}_k
\end{equation*}
and
\begin{equation*}
{\mathcal T}_s \circ {\mathcal C} \colon {\mathcal K}_k \to {\mathcal K}_k
\end{equation*}
are isomorphic.
\end{proposition}
\begin{proof}
Let $N$ be an object in ${\mathcal K}_k$; for convenience we write $w=w_0$. By definition
\begin{align*}
[{\mathcal C} \circ {\mathcal T}^{\vee}_s] N (A)&= N(\wm{(w.A)}{s})\oplus
N(\wp{(w.A)}{s})\text{ and }\\
[{\mathcal T}_s\circ {\mathcal C} ] N (A)&=N(w.(\wm{A}{s}))\oplus N(w.(\wp{A}{s})).
\end{align*}
By Lemma \ref{kipp:arithmetik}.a) these modules coincide. Let us now consider
\begin{equation*}
[{\mathcal C} \circ{\mathcal T}_s^{\vee}]N(A,\beta)=t_w {\mathcal T}_s^{\vee} N(w.(\beta\uparrow A),w(\beta)^+).
\end{equation*}
As in Remark \ref{updown} and by Lemma \ref{kipp:arithmetik}.b), the following conditions are equivalent:
\begin{align*}
\beta \uparrow \w{A}{s}=\w{A}{s} &\iff \beta \uparrow\w{(\beta \uparrow A)}{s}=\w{(\beta \uparrow A)}{s}\\
& \iff w(\beta)^+\uparrow \w{(w.(\beta\uparrow A))}{s}=\w{(w.(\beta\uparrow A))}{s}.
\end{align*}
Furthermore, let $\beta \uparrow \w{A}{s}=\w{A}{s}$ and $A=\wm{A}{s}$. We know that
\begin{equation*}
\w{(w.(\beta\uparrow A))}{s}=w.(\w{(\beta\uparrow A)}{s})=w.(\w{A}{s}),
\end{equation*}
where the second equation follows from Lemma \ref{wallcomb}. Hence
\begin{align*}
\{\wm{(w.(\beta\uparrow A))}{s},\wp{(w.(\beta\uparrow A))}{s}\}&=\{w.(\beta\uparrow A),w.A\}\\
&=\{w(\beta)^+\downarrow w.A, w.A\},
\end{align*}
which finally leads to the necessary statement
\begin{equation*}
A=\wm{A}{s}\iff w.(\beta \uparrow A)=\wm{(w.(\beta\uparrow A))}{s}.
\end{equation*}
\underline{Case 1} ($\beta \uparrow \w{A}{s}=\w{A}{s}, A=\wm{A}{s}$, cf. Figure \ref{fig:cases}):
By the preceding we obtain
\begin{equation*}
[{\mathcal C}\circ {\mathcal T}_s^{\vee}] N (A,\beta)=t_w \{(w(\beta)^+x + y, y)\mid x,y \in N(w.(\beta \uparrow A),w(\beta)^+)\}
\end{equation*}
which coincides with
\begin{equation*}
[{\mathcal T}_s \circ{\mathcal C}] N(A,\beta)=\{(\beta x + y,y)\mid x,y\in t_w N(w.(\beta \uparrow A),w(\beta)^+)\}.
\end{equation*}
\underline{Case 2} ($\beta \uparrow \w{A}{s}=\w{A}{s}, A=\wp{A}{s}$):
By definition
\begin{align*}
&[{\mathcal C}\circ {\mathcal T}^{\vee}_s] N (A,\beta)\\
&=t_w [ N(w(\beta)^+\downarrow w.(\beta \uparrow A),w(\beta)^+)\oplus w(\beta)^+ N(w(\beta)^+ \uparrow w.(\beta \uparrow A), w(\beta)^+)],
\end{align*}
while
\begin{equation*}
[{\mathcal T}_s \circ{\mathcal C}] N (A,\beta)=\beta t_w N(w.A,w(\beta)^+)\oplus t_w N(w.(\beta \uparrow^2 A),w(\beta)^+).
\end{equation*}
But this coincides because with Lemma \ref{kipp:wb}
\begin{align*}
w(\beta)^+\downarrow w.(\beta \uparrow A)&=w.(\beta \uparrow^2 A) \text{ and }\\
w(\beta)^+\uparrow w.(\beta\uparrow A)&=w.A.
\end{align*}
\underline{Case 3} ($\beta \uparrow \w{A}{s}\neq\w{A}{s}$):
\begin{equation*}
[{\mathcal C} \circ{\mathcal T}^{\vee}_s] N (A,\beta)=t_w [ N(\wm{(w.(\beta \uparrow A))}{s},w(\beta)^+)\oplus N(\wp{(w.(\beta \uparrow A))}{s}, w(\beta)^+)],
\end{equation*}
while
\begin{equation*}
[{\mathcal T}_s \circ{\mathcal C}] N (A,\beta)= t_w N(w.(\beta \uparrow \wm{A}{s}),w(\beta)^+)\oplus t_w N(w.(\beta \uparrow \wp{A}{s}),w(\beta)^+).
\end{equation*}
Because of Lemma \ref{kipp:arithmetik}.a)
\begin{equation*}
\{\wm{(w.(\beta\uparrow A))}{s}, \wp{(w.(\beta\uparrow A))}{s}\}=\{w.(\wm{(\beta\uparrow A)}{s}),w.(\wp{(\beta\uparrow A)}{s})\}
\end{equation*}
and it is enough to see that
\begin{equation*}
\{\wm{(\beta\uparrow A)}{s},\wp{(\beta\uparrow A)}{s}\}=\{\beta \uparrow\wm{A}{s},\beta \uparrow\wp{A}{s}\}.
\end{equation*}
This is true as it was observed in Lemma \ref{wallcomb}.
\end{proof}
Proposition \ref{theo:kipptrans} does not hold for general $w\in {\mathcal W}$. For $w\neq w_0$ we cannot exclude the case $w(\beta)\in R^{+}$. But then the two functors differ in the case where $\beta \uparrow \w{A}{s}=\w{A}{s}$, $A=\wp{A}{s}$, which means in formulas
\begin{equation*}
[{\mathcal C} \circ{\mathcal T}^{\vee}_s] N (A,\beta)= t_w N(w.(\beta\downarrow A),w(\beta)^+)\oplus \beta t_wN(w.(\beta \uparrow A),w(\beta)^+)
\end{equation*}
and on the other side
\begin{equation*}
[{\mathcal T}_s\circ{\mathcal C}] N (A,\beta)=\beta t_w N(w.(\beta\downarrow A),w(\beta)^+)\oplus t_w N(w.(\beta \uparrow A),w(\beta)^+).
\end{equation*}
\section{Some tilting self-dual objects in ${\mathcal M}_k$}
\subsection{The object $Q_0$}
Properties of objects in ${\mathcal M}_k^{\circ}$ certainly depend on the object $Q_0$ from which the translation functors span ${\mathcal M}_k^{\circ}$. This motivates a closer study of $Q_0$.
\begin{lemma}
$Q_0$ is indecomposable.
\end{lemma}
\begin{proof}
Suppose that $Q_0$ were decomposable, i.e. $Q_0=V\oplus W$ with $V\neq 0\neq W$.
We define the support of an object $M$ as follows:
\begin{equation*}
{\operatorname{supp}}\,M=\{A \in {\mathscr A} \mid M(A) \neq 0\}.
\end{equation*}
In our case we obtain
\begin{equation*}
{\mathcal W} = {\operatorname{supp}}\, Q_0= {\operatorname{supp}}\, V\,\dot\cup\, {\operatorname{supp}}\, W,
\end{equation*}
as the rank of each $Q_0 (A)$ is $\leq 1$. For both $V$ and $W$ the support is non-empty. We say that an object $M$ \emph{links} the alcoves $A$ and $\beta\uparrow A$ if $M(A,\beta)\hookrightarrow M(A)\oplus M(\beta \uparrow A)$ does not decompose into $S_k^{\beta}$-submodules of $M(A)\oplus 0$ and $0\oplus M(\beta\uparrow A)$. Since either $Q_0 (A,\beta)$ or $Q_0(\beta \uparrow A, \beta)$ equals $\{(\beta x+y,y)\mid x,y \in S^{\beta}_k\}$, the object $Q_0$ links $A$ and $\beta \uparrow A$. As ${{\operatorname{rk}}}\, Q_0 (A)={{\operatorname{rk}}}\, Q_0(\beta \uparrow A)=1$, this implies that $A$ and $\beta \uparrow A$ lie in the support of the same indecomposable summand. In particular, for any $w \in {\operatorname{supp}}\, V$ (resp. $W$) it follows that $s_{\beta}w \in {\operatorname{supp}}\, V$ (resp. $W$).
Now take, without loss of generality, $w_0 \in {\operatorname{supp}}\, V$; by induction over all $\beta \in R^{+}$ we obtain
\begin{equation*}
{\mathcal W}={\operatorname{supp}}\, V.
\end{equation*}
This contradicts the decomposition of $Q_0$.
\end{proof}
The next step is to compute $[{\mathcal C}\circ \cD]Q_0$. In the $S^{\emptyset}_k$-components it is easily seen to be
\begin{align*}
[{\mathcal C}\circ \cD] Q_0(A)&=
\begin{cases}
{\operatorname{Hom}}_{S^{\emptyset}_k}(S^{\emptyset}_k,S^{\emptyset}_k)& \text{if }A = A_w \text{ for } w\in {\mathcal W},\\
0&\text{else}
\end{cases}
\end{align*}
because $w_0 w \in {\mathcal W} \iff w \in {\mathcal W}$.
For any $\beta \in R^{+}$ and $A \in {\mathscr A}$
\begin{align*}
[{\mathcal C}\circ \cD] Q_0(A,\beta)&=
\begin{cases}
{\operatorname{Hom}}_{S^{\beta}_k}(\{(\beta x+y,y)\mid x,y \in S^{\beta}_k\},S^{\beta}_k)\\
\qquad\qquad\qquad\text{if }w_0.(\beta\uparrow A)\in {\mathscr A}_{-}^{w(\beta)^+},\\
{\operatorname{Hom}}_{S^{\beta}_k}(\beta S^{\beta}_k,S^{\beta}_k)\\
\qquad\qquad\qquad \text{if }w_0.(\beta\uparrow A)\in w(\beta)^+ \uparrow {\mathscr A}_{-}^{w(\beta)^+},\\
{\operatorname{Hom}}_{S^{\beta}_k}(S^{\beta}_k,S^{\beta}_k)\\
\qquad\qquad\qquad\text{if }w_0.(\beta\uparrow A)\in w(\beta)^+ \downarrow {\mathscr A}_{-}^{w(\beta)^+},\\
0\qquad\qquad\qquad\qquad\qquad\text{else}.
\end{cases}
\end{align*}
\begin{remark}
\begin{enumerate}
\item[a)] $w_0.(\beta\uparrow A) \in {\mathscr A}_{-}^{w(\beta)^+}$ is equivalent to $A \in {\mathscr A}_{-}^{\beta}$. This follows from the chain of equivalences
\begin{align*}
w_0.(\beta \uparrow A)&= w(\beta)^+ \downarrow w_0 .A \in {\mathscr A}_{-}^{w(\beta)^+} \\
&\iff w_0 .A \in w(\beta)^+ \uparrow {\mathscr A}_{-}^{w(\beta)^+} \\
&\iff A \in \beta \downarrow {\mathscr A}_{+}^{\beta} \\
&\iff A\in {\mathscr A}_{-}^{\beta},
\end{align*}
where we used Lemma \ref{kipp:wb}.
\item[b)] The equivalence of $w_0.(\beta\uparrow A) \in w(\beta)^+ \uparrow {\mathscr A}_{-}^{w(\beta)^+}$ and $A \in \beta \downarrow{\mathscr A}_{-}^{\beta}$ is obtained by using part a) and replacing $A$ by $\beta \uparrow A$.
\end{enumerate}
\end{remark}
By this remark, the $\beta$-components of $[{\mathcal C}\circ \cD] Q_0$ transform into
\begin{align*}
[{\mathcal C}\circ \cD]Q_0(A,\beta)&=
\begin{cases}
{\operatorname{Hom}}_{S^{\beta}_k}(\{(\beta x+y,y)\mid x,y \in S^{\beta}_k\},S^{\beta}_k)\\
\qquad\qquad\qquad\qquad\qquad
\text{if }A\in {\mathscr A}_{-}^{\beta},\\
{\operatorname{Hom}}_{S^{\beta}_k}(S^{\beta}_k,S^{\beta}_k)\\
\qquad\qquad\qquad\qquad\qquad \text{if }A\in \beta\uparrow {\mathscr A}_{-}^{\beta},\\
{\operatorname{Hom}}_{S^{\beta}_k}(\beta S^{\beta}_k,S^{\beta}_k)\\
\qquad\qquad\qquad\qquad\qquad \text{if }A\in \beta \downarrow {\mathscr A}_{-}^{\beta},\\
0 \qquad\qquad\qquad\qquad\text{else}.
\end{cases}
\end{align*}
\begin{lemma}\label{q0dual}
$Q_0$ is tilting self-dual up to a shift by $-2l(w_0)$; in formulas,
\begin{equation*}
[{\mathcal C} \circ \cD] Q_0\cong Q_0\{-2l(w_0)\}.
\end{equation*}
\end{lemma}
\begin{proof}
We will explicitly define the isomorphism between those two objects. Let $\delta= \prod_{\alpha \in R^{+}} \alpha$ and let the morphisms
\begin{equation*}
[{\mathcal C} \circ \cD] Q_0 \overset{g}{\underset{f}{\rightleftarrows}} Q_0\{-2l(w_0)\}
\end{equation*}
be given by
\begin{align*}
f_{A_w} \colon Q_0\{-2l(w_0)\} (A_w) &\to [{\mathcal C} \circ \cD] Q_0 (A_w),\\
q &\mapsto (-1)^{l(w)} \delta^{-1} q \cdot{\operatorname{id}}_{S_k^{\emptyset}}
\end{align*}
and
\begin{align*}
g_{A_w} \colon [{\mathcal C} \circ \cD] Q_0 (A_w) &\to Q_0\{-2l(w_0)\} (A_w),\\
\varphi &\mapsto (-1)^{l(w)} \delta \varphi(1).
\end{align*}
Indeed, both compositions yield the identity. The remaining question is whether they are well-defined morphisms. We will study this question case by case.
\underline{Case 1} ($A \in {\mathscr A}_{-}^{\beta}$):
This is the case where the signs in $f$ and $g$ really matter. Let $\varphi$ lie in
\begin{equation*}
{\operatorname{Hom}}_{S^{\beta}_k}(\{(\beta x + y,y)\mid x,y\in S^{\beta}_k\}, S^{\beta}_k)\hookrightarrow [{\mathcal C} \circ \cD]Q_0 (A) \oplus [{\mathcal C} \circ \cD]Q_0 (\beta \uparrow A)
\end{equation*}
where it is embedded to
\begin{equation*}
(\varphi_1,\varphi_2)=(\varphi(\beta,0)\beta^{-1}\cdot {\operatorname{id}}_{S_k^{\emptyset}}, (\varphi(1,1)-\varphi(\beta,0)\beta^{-1})\cdot{\operatorname{id}}_{S_k^{\emptyset}}).
\end{equation*}
If $A=A_w$ and $\beta \uparrow A=A_v$ then
\begin{equation*}
\{(-1)^{l(w)}, (-1)^{l(v)}\}=\{1,-1\}.
\end{equation*}
By this fact, applying $g_A\oplus g_{\beta \uparrow A}$ yields
\begin{equation*}
(x,y)=(\pm \delta \varphi(\beta,0)\beta^{-1}, \mp \delta (\varphi(1,1)- \varphi (\beta,0) \beta^{-1}))\in S^{\beta}_k\oplus S^{\beta}_k
\end{equation*}
which additionally satisfy $x-y=\pm \delta \varphi(1,1) \in \beta S^{\beta}_k$.
Now, let $(\beta x + y, y)\in Q_0 (A,\beta)$. Then this is mapped by $f_A\oplus f_{\beta\uparrow A}$ to
\begin{equation*}
(\pm \delta^{-1}(\beta x+y)\cdot{\operatorname{id}}_{S^{\emptyset}_k}, \mp \delta^{-1}y\cdot{\operatorname{id}}_{S^{\emptyset}_k}).
\end{equation*}
Applying this element of $[{\mathcal C} \circ \cD]Q_0 (A)\oplus [{\mathcal C} \circ \cD]Q_0 (\beta\uparrow A)$ to $(t,t+\beta s)$ yields
\begin{equation*}
\pm \delta^{-1} \beta (xt-ys) \in S^{\beta}_k.
\end{equation*}
\underline{Case 2} ($A \in \beta \uparrow{\mathscr A}_{-}^{\beta}$ and ${\operatorname{Hom}}_{S^{\beta}_k}(S^{\beta}_k,S^{\beta}_k)\hookrightarrow [{\mathcal C} \circ \cD] Q_0 (A)$):
Let $\varphi \in {\operatorname{Hom}}_{S^{\beta}_k}(S^{\beta}_k, S^{\beta}_k)$ and map this by $g_A$ to
\begin{equation*}
(-1)^{l(w)}\delta \varphi(1)\in \beta S^{\beta}_k
\end{equation*}
because $\varphi(1) \in S^{\beta}_k$. Consider now the map
\begin{equation*}
f_A(\beta s)=(-1)^{l(w)}\delta^{-1} \beta s \cdot {\operatorname{id}}_{S^{\emptyset}_k}.
\end{equation*}
Restricting this to $S^{\beta}_k$ yields an element in ${\operatorname{Hom}}_{S^{\beta}_k}(S^{\beta}_k,S^{\beta}_k)$.
\underline{Case 3} ($A \in \beta \downarrow {\mathscr A}_{-}^{\beta}$ and ${\operatorname{Hom}}_{S^{\beta}_k}(\beta S^{\beta}_k,S^{\beta}_k)\hookrightarrow [{\mathcal C} \circ \cD]Q_0 (\beta \uparrow A)$):
Let $\varphi \in {\operatorname{Hom}}_{S^{\beta}_k}(\beta S^{\beta}_k, S^{\beta}_k)$ and map this by $g_{\beta \uparrow A}$ to
\begin{equation*}
(-1)^{l(w)}\delta \varphi(\beta)\beta^{-1}\in S^{\beta}_k
\end{equation*}
because $\varphi(\beta) \in S^{\beta}_k$ and $\delta \beta^{-1} \in S^{\beta}_k$. Consider now the map
\begin{equation*}
f_{\beta \uparrow A}(s)=(-1)^{l(w)}\delta^{-1} s \cdot {\operatorname{id}}_{S^{\emptyset}_k}.
\end{equation*}
Restricting this to $\beta S^{\beta}_k$ yields an element in ${\operatorname{Hom}}_{S^{\beta}_k}(\beta S^{\beta}_k,S^{\beta}_k)$.
\end{proof}
\subsection{${\mathcal M}_k^{\circ}$ is a subcategory of ${\mathcal M}_k$}
Let $s \in {\mathcal S} \subset {\mathcal W}$ be a simple reflection. Then $Q_0$ is special in the sense that for each alcove $A\in {\mathscr A}$ the $S^{\emptyset}_k$-modules $Q_0(\wm{A}{s})$ and $Q_0(\wp{A}{s})$ coincide (this is definitely false for the remaining affine reflection $\widehat{s}\in \widehat{{\mathcal S}}\setminus {\mathcal S}$). Hence, there is the diagonal embedding
\begin{align*}
\Delta_A \colon Q_0(A)&\to Q_0(\wm{A}{s})\oplus Q_0(\wp{A}{s})={\mathcal T}_s Q_0(A)\\
t&\mapsto (t,t)
\end{align*}
and this even defines a morphism $\Delta$ from $Q_0$ to ${\mathcal T}_s Q_0$ as we will see in
\begin{lemma}\label{homsvonq}
Let $K\in {\mathcal K}_k$, $f\in {\operatorname{Hom}}_{{\mathcal K}_k}(Q_0, K)$ and $s\in {\mathcal S}$. Then $\w{f}{s} = ({\mathcal T}_s f) \circ \Delta \in {\operatorname{Hom}}_{{\mathcal K}_k}(Q_0, {\mathcal T}_s K)$.
\end{lemma}
\begin{proof}
We need to verify that $\Delta \in {\operatorname{Hom}}_{{\mathcal K}_k}(Q_0, {\mathcal T}_s Q_0)$. If the wall of $A\in {\mathscr A}$ is not of type $\beta$ then there is no hyperplane $H_{\beta,n}$ separating $\wm{A}{s}$ and $\wp{A}{s}$. This means
\begin{equation*}
[\beta \uparrow \w{A}{s}\neq \w{A}{s}\text{ and } A\in {\mathscr A}_-^{\beta}]\Rightarrow \{\wm{A}{s},\wp{A}{s}\}\subset {\mathscr A}_-^{\beta}
\end{equation*}
where ${\mathscr A}_-^{\beta}=\{A \in {\mathscr A}\mid A=A_w \text{ with } w\in {\mathcal W}, A \subset H_{\beta}^{-}\}$ as in \eqref{abminus}
and equivalent statements for $A\in {\mathscr A}_+^{\beta}$ and $A\in \beta \downarrow{\mathscr A}_-^{\beta}$. As $\wm{A}{s}$ and $\wp{A}{s}$ share the same \emph{$\beta$-strip}, i.e. there is a $n\in {\mathbb Z}$ such that $\wm{A}{s},\wp{A}{s}\subset H^{-}_{\beta,n+1}\cap H^{+}_{\beta,n}$, the $S_{k}^{\beta}$-modules $Q_0(\wm{A}{s},\beta)$ and $Q_0(\wp{A}{s},\beta)$ can be identified canonically. Hence, for $(p,q)\in Q_0(A,\beta)$ the element $(\Delta_A \oplus \Delta_{\beta \uparrow A})(p,q)\in {\mathcal T}_s Q_0(A,\beta)$.
Furthermore, observe that for $\beta \uparrow \w{A}{s}=\w{A}{s}$ by definition of ${\mathscr A}_-^{\beta}$ there are the implications
\begin{align*}
A\in {\mathscr A}_-^{\beta} &\Rightarrow A=\wm{A}{s} \text{ and }\\
[A\in {\mathscr A}_+^{\beta} \text{ or } A\in \beta \downarrow {\mathscr A}_-^{\beta}] &\Rightarrow A=\wp{A}{s}.
\end{align*}
With this in mind we will verify the remaining cases. If $\beta\uparrow \w{A}{s}=\w{A}{s}$, $A=\wm{A}{s}$ and $(m,n) \in Q_0(A,\beta)$, then by definition of $Q_0$ we know that $(m,m), (n,n)\in Q_0(A,\beta)$ and $(m-n,m-n) \in \beta\, Q_0(A,\beta)$. Hence, $(m,m,n,n)\in [{\mathcal T}_s Q_0](A,\beta)$.
As $(1,1)S_k^{\beta}\subset \{(\beta x+y,y)\mid x,y\in S_k^{\beta}\}$ we obtain
\begin{equation*}
(\Delta_A \oplus \Delta_{\beta\uparrow A}) (Q_0(A,\beta))\subseteq [{\mathcal T}_s Q_0](A,\beta)
\end{equation*}
even in the case when $\beta \uparrow \w{A}{s}=\w{A}{s}, A=\wp{A}{s}$. Therefore, $\Delta$ is indeed a morphism.
But if $\Delta\in {\operatorname{Hom}}_{{\mathcal K}_k}(Q_0,{\mathcal T}_s Q_0)$, then
$({\mathcal T}_s f) \circ \Delta \in {\operatorname{Hom}}_{{\mathcal K}_k}(Q_0, {\mathcal T}_s K)$ since $({\mathcal T}_s f)\in {\operatorname{Hom}}_{{\mathcal K}_k}({\mathcal T}_s Q_0, {\mathcal T}_s K)$.
\end{proof}
This lemma is very helpful for defining various morphisms with domain $Q_0$. One application can be seen in
\begin{proposition}\label{q0summand}
$Q_0$ is an object in ${\mathcal M}_k$. In particular, ${\mathcal M}_k^{\circ}$ is a full subcategory of ${\mathcal M}_k$.
\end{proposition}
\begin{proof}
Let $(s_1,\ldots, s_l)$ be a reduced expression for $w_0$, the longest element of the finite Weyl group. We want to show that $Q_0$ is a direct summand of ${\mathcal T}_{s_l}\circ \ldots \circ{\mathcal T}_{s_1}(P_0)$.
In particular, for all $i \in \{1,\ldots, l\}$ the reflection $s_i$ is in ${\mathcal S}$, so that we can make use of Lemma \ref{homsvonq}. We start with two morphisms with domain $Q_0$, namely $f \in {\operatorname{Hom}}_{{\mathcal K}_k}(Q_0, P_0)$ given by $f_{A_e}={\operatorname{id}}_{S_k^{\emptyset}}$ and $g \in {\operatorname{Hom}}_{{\mathcal K}_k}(Q_0, {\mathcal C} P_0)$ given by $g_{w_0.A_{e}}={\operatorname{id}}_{S_k^{\emptyset}}$. We then apply Lemma \ref{homsvonq} repeatedly until we obtain the morphisms
\begin{align}
\tilde{f}\colon Q_0 &\to {\mathcal T}_{s_l}\circ \ldots \circ{\mathcal T}_{s_1}(P_0),\label{homnachp}\\
\tilde{g}\colon Q_0 &\to [{\mathcal T}_{s_l}\circ \ldots \circ{\mathcal T}_{s_1}]({\mathcal C} P_0).\label{homnachgekippt}
\end{align}
In a next step, we dualize the morphism in \eqref{homnachgekippt} to
\begin{equation*}
[{\mathcal C}\circ\cD] (\tilde{g})\colon {\mathcal C}\circ\cD\circ{\mathcal T}_{s_l}\circ \ldots \circ{\mathcal T}_{s_1}\circ{\mathcal C} (P_0)\to {\mathcal C}\circ\cD (Q_0).
\end{equation*}
With $\cD P_0 \simeq P_0$ and following Proposition \ref{theo:dualtrans} and Proposition \ref{theo:kipptrans}, there is an isomorphism
\begin{equation*}
\eta \colon {\mathcal T}_{s_l}\circ \ldots \circ{\mathcal T}_{s_1} (P_0) \to {\mathcal C}\circ\cD\circ{\mathcal T}_{s_l}\circ \ldots \circ{\mathcal T}_{s_1}\circ{\mathcal C} (P_0)\{2l\}
\end{equation*}
and by Lemma \ref{q0dual} there is an isomorphism
\begin{equation*}
\xi \colon {\mathcal C}\circ \cD (Q_0)\{2l\}\to Q_0.
\end{equation*}
Then
\begin{equation*}
\xi_{A_{w_0}}\circ [{\mathcal C}\circ\cD] (\tilde{g})_{A_{w_0}} \circ \eta_{A_{w_0}}=\xi_{A_{w_0}}\circ {\operatorname{id}}_{S_k^{\emptyset}}\circ \eta_{A_{w_0}}
\end{equation*}
is an isomorphism on the $A_{w_0}$-component, too. (An explicit calculation even shows that this is the identity.)
An endomorphism $h$ of $Q_0$ which is an isomorphism of degree $0$ at the $A_{w_0}$-component cannot be nilpotent. As $Q_0$ is indecomposable, $h$ must be an automorphism.
We obtain the sequence
\begin{equation*}
Q_0 \to {\mathcal T}_{s_l}\circ \ldots \circ{\mathcal T}_{s_1}(P_0) \to Q_0
\end{equation*}
whose composition is an automorphism of $Q_0$. So $Q_0$ is indeed a direct summand of ${\mathcal T}_{s_l}\circ \ldots \circ{\mathcal T}_{s_1}(P_0)$.
\end{proof}
We can define $Q_0$ equivalently as the indecomposable direct summand of ${\mathcal T}_{s_l}\circ \ldots \circ{\mathcal T}_{s_1}(P_0)$ with $w_0 \in {\operatorname{supp}} \, Q_0$ where $(s_1,\ldots,s_l)$ is a reduced expression of $w_0$. This summand must be unique because the rank of $[{\mathcal T}_{s_l}\circ \ldots \circ{\mathcal T}_{s_1}]P_0(A_{w_0})$ is one. Then another consequence of the above proposition is that this definition does not depend on the choice of the reduced expression.
\subsection{Bott-Samelson like objects}
\begin{theorem}\label{dual:mainthm}
For a Bott-Samelson like object $M\in {\mathcal M}_k^{\circ}$ there is a $z \in {\mathbb Z}$ such that
\begin{equation*}
{\mathcal C} \circ \cD (M) \cong M \{z\}.
\end{equation*}
\end{theorem}
\begin{proof}
This proof uses the recursive definition of Bott-Samelson like objects (see \eqref{bottsamlike}) in ${\mathcal M}_k$. It consists of applying Lemma \ref{q0dual} and inductively making use of Proposition \ref{theo:dualtrans} and Proposition \ref{theo:kipptrans}. The integer $z$ depends in particular on the length of the word in \eqref{bottsamlike}; it equals
\begin{equation*}
z= -2r-2l(w_0) -2n.
\end{equation*}
\end{proof}
\subsection{Indecomposables corresponding to the anti-fundamental box}
For the next steps we need the notion of \emph{$\alpha$-strings} in the alcove pattern. Let $\alpha \in R^+$ and $A\in {\mathscr A}$. Then
\begin{equation*}
\{\alpha \uparrow^n A \mid n \in {\mathbb Z}\}\subset {\mathscr A}
\end{equation*}
is such an $\alpha$-string. Consider a Bott-Samelson like object $M$ in ${\mathcal M}_k^{\circ}$, i.e.
\begin{equation*}
M={\mathcal T}_{s_l}\circ\ldots\circ {\mathcal T}_{s_1} (Q_0).
\end{equation*}
The intersection of an $\alpha$-string with the support of this object $M$ is \emph{connected}:
\begin{lemma} Let $M$ be a Bott-Samelson like object. For all $\alpha \in R^+$ and $A\in {\operatorname{supp}}\, M$ there are $k\in {\mathbb N}$ and $B\in {\mathscr A}$ such that
\begin{equation*}
{\operatorname{supp}}\, M \cap \{\alpha \uparrow^n A \mid n \in {\mathbb Z}\} =\{B,\alpha \uparrow B,\ldots, \alpha \uparrow^k B\}.
\end{equation*}
\end{lemma}
\begin{proof}
This is true for $Q_0$ since for all $\alpha \in R^+$ we have ${\operatorname{supp}}\, Q_0 ={\mathscr A}_-^{\alpha} \,\dot\cup\,{\mathscr A}_+^{\alpha}$. If $A\in {\mathscr A}_-^{\alpha}$ then
\begin{equation*}
{\operatorname{supp}}\, Q_0 \cap \{\alpha \uparrow^n A \mid n \in {\mathbb Z}\}=\{A, \alpha \uparrow A\}.
\end{equation*}
If $A\in {\mathscr A}_+^{\alpha}$ then
\begin{equation*}
{\operatorname{supp}}\, Q_0 \cap \{\alpha \uparrow^n A \mid n \in {\mathbb Z}\}=\{\alpha \downarrow A, A\}.
\end{equation*}
Assume the claim holds for a Bott-Samelson like object $M'$. Consider now $M={\mathcal T}_s M'$ and let $C, \alpha \uparrow^k C\in {\operatorname{supp}}\,M$. Then the following two intersections are non-empty
\begin{align*}
\{\wm{C}{s},\wp{C}{s}\}\cap {\operatorname{supp}} \, M'\neq \emptyset,\\
\{\wm{(\alpha\uparrow^k C)}{s},\wp{(\alpha\uparrow^k C)}{s}\}\cap {\operatorname{supp}} \, M'\neq \emptyset.
\end{align*}
Without loss of generality let $\wp{C}{s} \in {\operatorname{supp}} \, M'$; then also $s_{\alpha}\wp{C}{s}\in {\operatorname{supp}} \, M'$, and both lie in the same $\alpha$-string. By the induction hypothesis the intersection of the $\alpha$-string through $\wp{C}{s}$ with ${\operatorname{supp}} \, M' \subset {\operatorname{supp}} \, M$ is connected. Translating this connected $\alpha$-string by $s$ gives another one, between $\wm{C}{s}$ and $\w{(s_{\alpha}\w{C}{s})}{s}_{\mp}$, in the support of $M$.
If $\alpha\uparrow^k C$ is not yet contained in the translated $\alpha$-string, we repeat the same procedure with $\w{(\alpha\uparrow^k C)}{s}_{\pm}$. We obtain
\begin{equation*}
\{C,\ldots, \alpha \uparrow^k C\}\subseteq {\operatorname{supp}}\, M \cap \{\alpha \uparrow^n A \mid n \in {\mathbb Z}\}
\end{equation*}
and with the finiteness of the support of $M$ the claim follows.
\end{proof}
We will now consider the special case when the support of a Bott-Samelson like object intersects an $\alpha$-string in exactly two alcoves.
\begin{lemma}\label{link}
Let $M \in {\mathcal M}^{\circ}_k$ be a Bott-Samelson like object and $B \in {\mathscr A}$ with
\begin{equation*}
|{\operatorname{supp}}\, M \cap \{\alpha \uparrow^n B \mid n \in {\mathbb Z}\}| =2.
\end{equation*}
Let $A\in {\mathscr A}$ be such that
\begin{equation*}
{\operatorname{supp}}\, M \cap \{\alpha \uparrow^n B \mid n \in {\mathbb Z}\}=\{A,\alpha \uparrow A\}.
\end{equation*}
If ${{\operatorname{rk}}}\, M(A)=1$, then $M$ links the alcoves $A$ and $\alpha\uparrow A$. In particular, $\{A,\alpha \uparrow A\}\subseteq {\operatorname{supp}} \, N$ where $N$ is the indecomposable summand of $M$ with $A\in {\operatorname{supp}} \, N$.
\end{lemma}
\begin{proof}
Since $\alpha \downarrow A \notin {\operatorname{supp}} \, M$, the $S^{\alpha}_k$-module $M(\alpha \downarrow A,\alpha)$ must be of the form $\alpha^l S^{\alpha}_k \hookrightarrow 0\oplus S_k^{\emptyset}$. We obtain
\begin{align*}
M(\alpha\downarrow A,\alpha) \cap M(A) =\alpha^l S^{\alpha}_k,\\
{\operatorname{pr}}_A [M(\alpha \downarrow A, \alpha)] =\alpha^l S^{\alpha}_k.
\end{align*}
But $M \in {\mathcal M}_k^{\circ}$ and we apply Lemma \ref{verma:imagel}:
\begin{align*}
{\operatorname{pr}}_A [M(A,\alpha)]&=\alpha^l S^{\alpha}_k,\\
M(A,\alpha) \cap M(A)&=\alpha^{l+1} S^{\alpha}_k.
\end{align*}
This is only true if $M$ links $A$ to $\alpha \uparrow A$.
\end{proof}
Our goal is to extend the self-duality we have seen in the context of Bott-Samelson like objects in ${\mathcal M}_k^{\circ}$ to certain indecomposable objects. The \emph{anti-fundamental box} $\Pi^-$ of ${\mathscr A}$ consists of those alcoves that live in the strips
\begin{equation*}
A\subset H^-_{\alpha,0}\cap H^+_{\alpha,-1}
\end{equation*}
for all $\alpha \in \Delta$. For each $A\in{\mathscr A}$ there is exactly one $\lambda \in X^{\vee}$ such that $\lambda + A \in \Pi^-$.
\begin{theorem}\label{selfdualanti}
Let $A_w \in \Pi^-$ and $(s_1,\ldots, s_l, s_{l+1},\ldots, s_j)$ be a reduced expression of $w\in \widehat{{\mathcal W}}$ with $w_0=s_1 \ldots s_l$. Define
\begin{equation*}
M=[{\mathcal T}_{s_j}\circ \ldots \circ {\mathcal T}_{s_{1}}]P_0.
\end{equation*}
Let $N$ be the up to isomorphism unique indecomposable direct summand of $M$ with $A_w\in {\operatorname{supp}}\, N$. Then
\begin{equation*}
{\mathcal C} \circ \cD (N)\cong N\{-2l(w)\}.
\end{equation*}
\end{theorem}
\begin{proof}
By the proof of Proposition \ref{q0summand}, the object $Q_0$ is a direct summand of $[{\mathcal T}_{s_l}\circ \ldots \circ {\mathcal T}_{s_{1}}]P_0$, and so $M':=[{\mathcal T}_{s_j}\circ \ldots \circ {\mathcal T}_{s_{l+1}}]Q_0$ is a summand of $M$. Hence, the unique indecomposable direct summand of $M'$ with $A_w$ in its support is isomorphic to $N$, too. The rank of an alcove component of a Bott-Samelson like object is invariant under the action of the finite Weyl group, which can be seen by induction. First of all, we have
\begin{equation*}
{{\operatorname{rk}}}\, Q_0(A_x) =\begin{cases}
1 & \text{if }x\in {\mathcal W},\\
0 & \text{else}
\end{cases}
\end{equation*}
and the induction step is achieved with the fact that $\{w.\wm{A}{s},w.\wp{A}{s}\}=\{\wm{(w.A)}{s}, \wp{(w.A)}{s}\}$. Hence for all $w\in {\mathcal W}$ and $A \in {\mathscr A}$ we have
\begin{equation}\label{rankinvariant}
{{\operatorname{rk}}}\, M'(A) ={{\operatorname{rk}}}\, M'(w.A).
\end{equation}
As $(s_1,\ldots,s_j)$ is a reduced expression of $w$, we have ${{\operatorname{rk}}}\, M(A_w)= {{\operatorname{rk}}}\, M'(A_w)=1$ and hence ${{\operatorname{rk}}}\, M'(w_0. A_w)=1$. For this reason, there is a unique indecomposable summand $N_{{\mathcal C}}$ of $M'$ with $w_0. A_w \in {\operatorname{supp}}\, N_{{\mathcal C}}$. Because of Theorem \ref{dual:mainthm} and ${\operatorname{supp}} \, {\mathcal C} N=w_0.\, {\operatorname{supp}}\, N$ we get
\begin{equation*}
{\mathcal C} \circ \cD (N)\cong N_{{\mathcal C}}\{-2l(w)\}.
\end{equation*}
In the remaining part of the proof we will show that $w_0.A_w \in {\operatorname{supp}} \, N$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{article3.pdf}
\caption{The path $A_w \to w_0.A_w$, the support of $M$ is framed}
Color key \quad
\includegraphics{artcp2.pdf}\- Alcoves in $\Pi^-$ \quad
\includegraphics{artcp1.pdf}\- Constructed path $A_w \to w_0.A_w$
\label{fig:path}
\end{figure}
We would like to construct a path via the $(\cdot \uparrow)$-operation through the support of $M'$ between the alcoves $A_w$ and $w_0.A_w$. For an example of type $\tilde{A}_2$, see Figure \ref{fig:path}. Recall that $(s_1,\ldots, s_l)$ was a reduced expression of $w_0$. Consider
\begin{equation}\label{path}
A_{w} \to s_1 .A_{w} \to s_1 s_{2} .A_{w} \to \ldots \to s_1 \ldots s_l .A_{w}=w_0 .A_w.
\end{equation}
As the alcove $A_w$ is in the anti-fundamental box, it lives in
\begin{equation*}
A_w \subset H_{\alpha,0}^-\cap H_{\alpha,-1}^+
\end{equation*}
for all simple roots $\alpha\in \Delta$. Hence for each $\alpha\in \Delta$ and $k\in \{1,\ldots,l\}$
\begin{equation*}
(s_1 \ldots s_{k-1})A_w \subset H_{s_1 \ldots s_{k-1}(\alpha),0}^-\cap H_{s_1 \ldots s_{k-1}(\alpha),-1}^+,
\end{equation*}
in particular when $\alpha$ is the simple root corresponding to the simple reflection $s_k$, denoted by $\alpha_k$. Furthermore, it is known that there is a positive root $\beta_k \in R^+$ such that
\begin{equation*}
s_1 \ldots s_{k-1} s_k =s_{\beta_k} s_1\ldots s_{k-1}.
\end{equation*}
From this follows
\begin{equation}\label{more:dual}
s_1 \ldots s_{k-1} (\alpha_k) \in \{\pm \beta_k\}.
\end{equation}
Since this describes at the same time the walk from $A_{w_0}$ to $A_e$, all roots in \eqref{more:dual} are positive.
We thus have $s_1 \ldots s_{k-1} (\alpha_k) = \beta_k$, and on that account
\begin{equation*}
s_1 \ldots s_{k-1}.A_w \subset H_{\beta_k,0}^-\cap H_{\beta_k,-1}^+.
\end{equation*}
Therefore
\begin{equation*}
s_1\ldots s_k.A_w=s_{\beta_k}s_1\ldots s_{k-1}.A_w=\beta_k \uparrow (s_1\ldots s_{k-1}.A_w).
\end{equation*}
So \eqref{path} indeed yields the desired path, represented by
\begin{equation*}
A_{w} \to \beta_1\uparrow A_{w} \to \ldots \to \beta_l\uparrow \ldots \beta_1 \uparrow A_{w}=w_0.A_w.
\end{equation*}
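For instance, in type $A_2$ with simple roots $\alpha_1,\alpha_2$ and the reduced expression $w_0=s_1s_2s_1$, the roots in \eqref{more:dual} are
\begin{equation*}
\beta_1=\alpha_1,\qquad \beta_2=s_1(\alpha_2)=\alpha_1+\alpha_2,\qquad \beta_3=s_1s_2(\alpha_1)=\alpha_2,
\end{equation*}
all of them positive, and the path \eqref{path} crosses hyperplanes of types $\alpha_1$, $\alpha_1+\alpha_2$ and $\alpha_2$ in this order.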
For any simple root $\alpha \in \Delta$ we would like to describe the $\alpha$-string through $A_w$ intersected with the support of $M'$. Certainly, $\alpha \uparrow A_w=s_{\alpha}.A_w \in {\operatorname{supp}} \, M'$. On the other hand, $\alpha \downarrow A_w \notin {\operatorname{supp}} \, M'$ since in the anti-fundamental chamber
\begin{equation*}
l(\alpha \downarrow A_w) \geq l(A_w).
\end{equation*}
By \eqref{rankinvariant}, the $\alpha$-string of $A_w$ in $M'$ additionally does not contain $s_{\alpha}.(\alpha \downarrow A_w)=\alpha \uparrow^2 A_w$. This leads to
\begin{equation*}
{\operatorname{supp}} \,M' \cap \{\alpha \uparrow^z A_w\mid z \in {\mathbb Z}\}=\{A_w, \alpha \uparrow A_w\}.
\end{equation*}
This statement can be extended to all alcoves in the path we would like to study. Applying \eqref{rankinvariant} with $w=s_1\ldots s_n$, $n\in\{1,\ldots, l\}$, shows that the $\beta_k$-strings through $s_1\ldots s_{k-1}.A_w$ all contain exactly two elements. Now we can apply Lemma \ref{link} inductively and obtain $w_0 .A_w \in {\operatorname{supp}} \,N$.
\end{proof}
\begin{remark}
So far, we have an intrinsic classification of the indecomposable objects neither in ${\mathcal M}_k$ nor in ${\mathcal M}_k^{\circ}$. However, on the representation-theoretic side it is enough to know the self-duality of objects as in the previous Theorem \ref{selfdualanti}. The other indecomposable objects in ${\mathcal M}_k^{\circ}$ are obtained by translating those within the alcove pattern and hence are tilting self-dual up to a translation inside the alcove pattern.
\end{remark}
\subsection*{Acknowledgements}
I would like to thank Peter Fiebig for very helpful discussions and ideas and for his useful comments on previous versions of this paper. This project is supported by the DFG priority program 1388.
\section{Introduction}
\label{sec:intro}
The collinear factorization framework allows us to calculate a number of hard exclusive amplitudes
in terms of perturbatively calculable coefficient functions and nonperturbative hadronic
matrix elements of light-cone operators. The prime example is the calculation of
the deeply virtual Compton-scattering amplitude in the handbag approximation
with generalized parton distributions,
nonforward matrix elements of a quark-antiquark nonlocal operator
$\Psi(z)\overline{\Psi}(0)$ between an incoming and an outgoing baryon state.
Strictly speaking, this description is only valid in a restricted kinematical region,
called the generalized Bjorken scaling region, for a few specific reactions,
and in the leading-twist approximation. It is, however, suggestive to extend
this framework to the description of other reactions
where the presence of a hard scale seems to justify the factorization of a short-distance dominated
partonic subprocess from long-distance hadronic matrix elements. Such an extension has,
in particular, been proposed in Ref.~\cite{Goritschnig:2009sq} for the reaction
$p\bar{p} \,\to\, \Lambda_c\bar{\Lambda}_c$ with nucleon to charmed baryon generalized parton distributions.
The extension of the collinear factorization framework to the backward region of deeply virtual
Compton-scattering and deep exclusive meson production
\cite{Pire:2004ie, Lansberg:2012ha}
leads to the definition of transition distribution amplitudes (TDAs) as nonforward matrix elements
of a three-quark nonlocal operator $\Psi(z)\Psi(y)\Psi(0)$ between an incoming and an outgoing
state carrying a different baryon number. Here, too, this description is likely to be valid in a restricted
kinematical region, for a few specific reactions, and in the leading twist approximation.
We propose here to extend the approach of Ref.~\cite{Goritschnig:2009sq} to the reaction
$p\bar{p} \,\to\, \overline{D^0} D^0$ which will be measured with the
PANDA \cite{PANDA} detector at GSI-FAIR.
For this process the baryon number exchanged in the t-channel implies
that hadronic matrix elements with $\Psi(z)\Psi(y)\Psi(0)$ operators enter the game.
Let us stress that we have no proof of the validity of this approximation but
take it as an assumption to be confronted with experimental data.
For this approach to be testable, one needs to model the occurring
nucleon to charmed meson TDAs.
In contrast to the $N\to \pi$ TDAs, which have been much discussed \cite{PSS}, we do not have any soft-meson limit to normalize these TDAs. We will rather use an overlap representation in the spirit of Ref.~\cite{Diehl:2000xz}.
\section{Hadron Kinematics}
\label{sec:kinematics}
\begin{figure}[h]
\begin{center}
\includegraphics[width=.47\textwidth, clip=true]{hadron_kin.eps}
\end{center}
\caption{Kinematics of $p\bar{p} \, \to \, \overline{D^0} D^0$ in the symmetric CMS.}
\label{fig:kinemtics}
\end{figure}
The kinematical situation for $p\bar{p} \, \to \, \overline{D^0} D^0$ scattering
is sketched in Fig.~\ref{fig:kinemtics}.
The momenta and helicities of the incoming proton and antiproton are
denoted by $p$, $\mu$ and $q$, $\nu$ and the momenta of the outgoing
$\overline{D^0}$ and $D^0$ by $p'$ and $q'$, respectively. \
The mass of the proton is denoted by $m$ and that of the $D^0$ by $M$.
We choose a symmetric center-of-momentum system (CMS) in which
the longitudinal direction is defined by the average momentum of
the incoming proton and the outgoing $\overline{D^0}$, respectively.
The transverse momentum transfer is symmetrically shared
between the incoming and outgoing hadrons.
In light-cone coordinates the hadronic momenta are parameterized as follows,
\begin{equation}
\begin{split}
p &=\left[(1+\xi)\bar{p}^{\,+},\, \frac{m^2+ {\bm \Delta}_\perp^2/4}{2(1+\xi)\bar{p}^{\,+}},\,
-\frac{{\bm \Delta}_\perp}2 \right ]\,, \quad
p^\prime=\left[(1-\xi )\bar{p}^{\,+},\, \frac{M^2+ {\bm \Delta}_\perp^2/4}{2(1-\xi)\bar{p}^{\,+}},
\,+\frac{{\bm \Delta}_\perp}2\right ]\,, \\
q &=\left[\frac{m^2+ {\bm \Delta}_\perp^2/4}{2(1+\xi)\bar{p}^{\,+}},\,(1+\xi)\bar{p}^{\,+},\,
+\frac{{\bm \Delta}_\perp}2\right]\,, \quad
q^\prime=\left[\frac{M^2+ {\bm \Delta}_\perp^2/4}{2(1-\xi)\bar{p}^{\,+}},\,(1-\xi )\bar{p}^{\,+},\,
-\frac{{\bm \Delta}_\perp}2 \right]\,,\\
\label{def-momenta-ji}
\end{split}
\end{equation}
where we have introduced sums and differences of the hadron momenta,
\begin{equation}
\bar{p} \ := \ \frac12(p+p^\prime)\,, \qquad \bar{q} \ := \ \frac12(q+q^\prime)\,
\qquad \text{and}\qquad
\Delta \ := \ p^\prime-p=q-q^\prime\,.
\label{sum-and-diff}
\end{equation}
The minus momentum components can be obtained by using the on-mass shell conditions
$p^2 = q^2 = m^2$ and $p^{\prime 2} = q^{\prime 2} = M^2$.
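Explicitly, with the light-cone product $v^2=2v^+v^- -{\bm v}_\perp^2$ the parameterization \eqref{def-momenta-ji} gives, e.g.,
\begin{equation*}
p^2 = 2\,(1+\xi)\bar{p}^{\,+}\,\frac{m^2+{\bm \Delta}_\perp^2/4}{2(1+\xi)\bar{p}^{\,+}}
-\frac{{\bm \Delta}_\perp^2}{4} = m^2 \,,
\end{equation*}
and analogously for $q$, $p^\prime$ and $q^\prime$.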
The skewness parameter $\xi$ gives the relative momentum transfer in the plus direction, i.e.,
\begin{equation}
\xi \ := \ \frac{p^{+}-p^{\prime +}}{p^{+}+p^{\prime+}}
= - \frac{\Delta^+}{2\bar{p}^{\,+}} \,.
\label{skewness}
\end{equation}
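This is consistent with the parameterization \eqref{def-momenta-ji}, where $p^{+}=(1+\xi)\bar{p}^{\,+}$ and $p^{\prime +}=(1-\xi)\bar{p}^{\,+}$, so that $p^{+}-p^{\prime +}=2\xi \bar{p}^{\,+}$, $p^{+}+p^{\prime +}=2\bar{p}^{\,+}$ and $\Delta^+=-2\xi \bar{p}^{\,+}$.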
The Mandelstam variable $s$ is given by
\begin{equation}
s = \left( p + q \right)^2
= \left( p^\prime + q^\prime \right)^2 \,.
\end{equation}
In order to produce a $\overline{D^0}\,D^0$ pair,
$s$ must be larger than $4 M^2$.
The remaining Mandelstam variables, $t$ and $u$, read
\begin{equation}
t = \Delta^2
= \left( p^\prime - p \right)^2
= \left( q - q^\prime \right)^2
\end{equation}
and
\begin{equation}
u = \left( q^\prime - p \right)^2
= \left( p^\prime - q \right)^2 \,,
\end{equation}
so that $s+t+u = 2 M^2 + 2 m^2$.
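The last relation follows directly from momentum conservation, $q-p^\prime-q^\prime=-p$:
\begin{equation*}
s+t+u = (p+q)^2 + (p-p^\prime)^2 + (p-q^\prime)^2
= 4m^2+2M^2+2\,p\cdot(q-p^\prime-q^\prime) = 2m^2+2M^2 \,.
\end{equation*}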
For later convenience we also introduce the abbreviations
\begin{equation}
\Lambda_m \ := \ \sqrt{1-4m^2/s}
\qquad\text{and}\qquad
\Lambda_M \ := \ \sqrt{1-4M^2/s} \,.
\end{equation}
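In the CMS these are just the velocities of the (anti)proton and of the $D$ mesons: e.g., the proton carries the energy $\sqrt{s}/2$ and the three-momentum $|\vec{p}\,|=\sqrt{s/4-m^2}=(\sqrt{s}/2)\,\Lambda_m$, and analogously $(\sqrt{s}/2)\,\Lambda_M$ for the $\overline{D^0}$.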
For further relations between the kinematical quantities, see Appendix~\ref{app-kinematics}.
\section{Double Handbag Mechanism}
\label{sec:DHM}
The double handbag mechanism which we use to describe
$p\bar{p} \, \to \, \overline{D^0} D^0$
is shown in Fig.\ \ref{fig:handbag}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.70\textwidth, clip=true]{ppbar_to_DDbar.eps}
\end{center}
\caption{The handbag contribution to the process $p\bar{p} \, \to \, \overline{D^0} D^0$.
The momenta and helicities of the baryons and quarks are specified.}
\label{fig:handbag}
\end{figure}
It is understood that the proton emits an $S[ud]$ diquark with momentum $k_1$
and the antiproton a $\overline{S[ud]}$-diquark with momentum $k_2$.
They undergo a scattering with each other, i.e. they annihilate in our case
into a gluon which subsequently decays into the heavy $\bar{c}c$ pair.
Those produced heavy partons, characterized by $k^\prime_1$, $\lambda^\prime_1$
and $k^\prime_2$, $\lambda^\prime_2$, are reabsorbed by the remnants
of the proton and the antiproton to form the $\overline{D^0}$ and the $D^0$, respectively.
One could, of course, also think of vector-diquark configurations in the proton
and $V\left[ud\right]\overline{V\left[ud\right]}$ annihilation to produce the $\bar{c}c$ pair.
But in common diquark models of the proton it is usually assumed that
the probability to find a $V\left[ud\right]$ diquark is smaller than the one for the $S\left[ud\right]$ diquark.
Further suppression of $V\left[ud\right]$ diquarks as compared to
$S\left[ud\right]$ diquarks occurs in hard processes via
diquark form factors at the diquark-gluon vertices \cite{Jakob:1993th}.
We thus expect that our final estimate of the $\overline{D^0}D^0$
cross section will not be drastically altered by the inclusion of
vector-diquark contributions and we stick to the simpler scalar
diquark model.
The whole hadronic four-momentum transfer $\Delta$ is also exchanged
between the active partons in the partonic subprocess
\begin{equation}
S[ud](k_1) \, \overline{S[ud]}(k_2) \,\to\,
\bar{c}(k_1^\prime , \lambda_1^\prime) \,c(k_2^\prime , \lambda_2^\prime) \,.
\label{subprocess}
\end{equation}
In Eq.~(\ref{subprocess}) we neglect the mass of the $S[ud]$ (anti)diquark,
but take into account the heavy \hbox{(anti-)} charm-quark mass $m_c$.
In order to produce the heavy $\bar{c}c$ pair,
the Mandelstam variable $\hat{s}$ of the partonic subprocess has to be
\begin{equation}
\hat{s} \,\geq\, 4 m_c^2 \,,
\label{partonic-threshold}
\end{equation}
where $4 m_c^2 \approx 6.5\,\text{GeV}^2$.
We have taken the (central) value for the charm-quark mass from the Particle Data Group \cite{PDG}, which gives $m_c\=1.275 \, \pm \, 0.025\,{\rm GeV}$.
Thus, the heavy-quark mass $m_c$ is a natural intrinsic hard scale
which demands that the intermediate gluon has to be highly virtual.
This allows us to treat the partonic subprocess perturbatively, even at small $-t$,
by evaluating the corresponding Feynman diagram.
All the other non-active partons inside the parent hadrons
are unaffected by the hard scattering and thus act as spectators.
For the double handbag mechanism the hadronic $p\bar{p} \, \to \, \overline{D^0} D^0$
amplitude can be written as
\begin{equation}
\begin{split}
M_{\mu \nu} = & \sum_{\rm a_i^{(\prime)}}
\sum_{\rm \alpha_i^{\prime}}
\int d^4 \bar{k}_1 \theta(\bar{k}_1^{+})
\int \frac{d^4 z_1}{(2\pi)^4} e^{i \bar{k}_1 z_1}
\int d^4 \bar{k}_2 \theta(\bar{k}_2^{-})
\int \frac{d^4 z_2}{(2\pi)^4} e^{i \bar{k}_2 z_2}
\\
& \times \langle \overline{D^0} : p^\prime | \mathcal{T}\,
\Psi^c_{a^\prime_1\alpha^\prime_1}(-z_1/2)
\Phi^{S[ud]}_{a_1}(+z_1/2)
|p: p , \mu\rangle\;
\Tilde{H}_{a_i^{(\prime)} \alpha^{\prime}_i}(\bar{k}_1, \bar{k}_2)
\\
& \times \langle D^0: q^\prime | \mathcal{T}\,
\Phi^{S[ud]\dagger}_{a_2}(+z_2/2)
\bar{\Psi}^c_{a^\prime_2\alpha^\prime_2}(-z_2/2)
|\bar{p}: q , \nu\rangle \,, \\
\label{ampl}
\end{split}
\end{equation}
where the assignment of momenta, helicities, etc., can be seen in Fig.~\ref{fig:handbag}.
$a_i^{(\prime)}$ and $\alpha_i^{\prime}$ denote color and spinor indices, respectively.
In analogy to the hadronic level we have introduced the average partonic momenta
$\bar{k}_i := \left(k_i + k_i^\prime\right)/2$, $i=1,2$, of the active partons.
We note once more that the full hadronic momentum transfer is also
transferred between the active partons, i.e.
$k_1 - k_1^\prime = p - p^\prime = k_2^\prime - k_2 = q^\prime - q$.
The hard scattering kernel,
denoted by $\Tilde{H}_{a_i^{(\prime)} \alpha^{\prime}_i}(\bar{k}_1, \bar{k}_2)$,
describes the hard $S[ud]\overline{S[ud]} \rightarrow \bar{c}c$ subprocess.
The soft part of the $p \to \overline{D^0}$ transition is encoded in the Fourier transform of
a hadronic matrix element which is a time-ordered, bilocal product of
a quark and a diquark field operator:
\begin{equation}
\int \frac{d^4 z_1}{(2\pi)^4} e^{i \bar{k}_1 z_1}
\langle \overline{D^0} : p^\prime | {\cal T}\,
\Psi^c_{a^\prime_1\alpha^\prime_1}(-z_1/2)
\Phi^{S[ud]}_{a_1}(+z_1/2)
|p : p , \mu\rangle \,.
\label{soft-Tordered-hadronic-matrix-element}
\end{equation}
In Eq.~(\ref{soft-Tordered-hadronic-matrix-element}) $\Phi^{S[ud]}(+z_1/2)$ takes out
an $S[ud]$ diquark from the proton state $|p : p , \mu\rangle$ at the
space-time point $z_1/2$.
The $S[ud]$ diquark then takes part in the hard partonic subprocess.
The $\Psi^c(-z_1/2)$ reinserts the $\bar{c}$ quark at $-z_1/2$
into the remnant of the proton which gives the desired final hadronic
$\overline{D^0}$ state $|\overline{D^0} : p^\prime \rangle$.
At this stage the appropriate time-ordering of the quark field operators
(denoted by the symbol ${\cal T}$) has to be taken into account.
The remnant of the proton,
which does not participate in the hard partonic subprocess,
constitutes the spectator system.
For the $\bar{p} \to D^0$ transition we have the Fourier transform
\begin{equation}
\int \frac{d^4 z_2}{(2\pi)^4} e^{i \bar{k}_2 z_2}
\langle D^0 : q^\prime | {\cal T}\,
\Phi^{S[ud]\dagger}_{a_2}(+z_2/2)
\overline{\Psi}^c_{a^\prime_2\alpha^\prime_2}(-z_2/2)
|\bar{p} : q , \nu\rangle \,,
\label{soft-Tordered-anti-hadronic-matrix-element}
\end{equation}
which can be interpreted in a way analogous to Eq.~(\ref{soft-Tordered-hadronic-matrix-element}).
The $p\bar{p} \, \to \, \overline{D^0} D^0$ amplitude (\ref{ampl}) is
thus a convolution of a hard scattering kernel with hadronic matrix elements
Fourier transformed with respect to the average momenta $\bar{k}_1$ and $\bar{k}_2$ of the active partons.
For the active partons we can now introduce the momentum fractions
\begin{equation}
x_1\ :=\ \frac{k_1^+}{p^+}\qquad\text{and}\qquad x_1^\prime\ :=\ \frac{k_1^{\prime +}}{p^{\prime +}}\,.
\end{equation}
For later convenience we also introduce the average fraction
\begin{equation}
\bar{x}_1 = \frac{k_1^+ + k_1^{\prime +}}{p^+ + p^{\prime +}} = \frac{\bar{k}_1^+}{\bar{p}^{\,+}}\,,
\label{xbar-1-def}
\end{equation}
which is related to $x_1$ and $x_1^\prime$ by
\begin{equation}
x_1\= \frac{\bar{x}_1+\xi}{1+\xi} \qquad\text{and}\qquad x_1^\prime\= \frac{\bar{x}_1-\xi}{1-\xi}\,,
\label{individual-fractions}
\end{equation}
respectively.
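For the reader's convenience we note how Eq.~(\ref{individual-fractions}) arises:
the definitions give $k_1^+ + k_1^{\prime +} = 2\bar{x}_1\bar{p}^{\,+}$, while
$k_1 - k_1^\prime = p - p^\prime$ implies $k_1^+ - k_1^{\prime +} = 2\xi\bar{p}^{\,+}$.
With $k_1^+ = x_1(1+\xi)\bar{p}^{\,+}$ and $k_1^{\prime +} = x_1^\prime(1-\xi)\bar{p}^{\,+}$
this amounts to the linear system
\begin{equation}
x_1\left(1+\xi\right) + x_1^\prime\left(1-\xi\right) \= 2\bar{x}_1 \,,
\qquad
x_1\left(1+\xi\right) - x_1^\prime\left(1-\xi\right) \= 2\xi \,,
\end{equation}
whose solution is just Eq.~(\ref{individual-fractions}).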
As for the processes in Refs.~\cite{Goritschnig:2009sq, DFJK1},
due to the large intrinsic scale given by the heavy quark mass $m_c$,
the transverse and minus (plus) components of the active \hbox{(anti)}parton momenta in the hard scattering kernel $\Tilde{H}$ are small
compared to their plus (minus) components. Thus, the parton
momenta can be replaced by vectors lying in the scattering plane
formed by the parent hadron momenta.
For this assertion one only has to make the physically plausible assumptions
that the momenta are almost on mass-shell and
that their intrinsic transverse components
[divided by the respective momentum fractions (\ref{individual-fractions})]
are smaller than a typical hadronic scale of the order of $1$ GeV.
We thus make the following replacements:
\begin{eqnarray}
k_1 &\to& \left[\;{k}_1^+\,,\, \frac{x_1^2 {\bm \Delta}_\perp^2}{8{k}_1^+}\,,
- x_1\frac{{\bm \Delta}_\perp}{2}\right]
\quad\text{with}\quad {k}_1^+ = x_1 p^+ \,, \nonumber\\[0.1em]
k^\prime_1 &\to& \left[\;{k}_1^{\prime +},
\frac{m_c^2+x_1^{\prime 2} {\bm \Delta}_\perp^2/4}
{2{k}_1^{\prime +}}, x_1^\prime\frac{{\bm \Delta}_\perp}{2} \right]
\quad\text{with}\quad {k}_1^{\prime +} = x_1^\prime p^{\prime +} \,, \nonumber\\[0.1em]
k_2 &\to& \left[\;\frac{x_2^2 {\bm \Delta}_\perp^2}{8{k}_2^-}\,,
\,{k}_2^-\,, \phantom{-} x_2\frac{{\bm \Delta}_\perp}{2} \right]
\quad\text{with}\quad {k}_2^- = x_2 q^- \,, \nonumber\\[0.1em]
k^\prime_2 &\to& \left[\frac{m_c^2+x_2^{\prime 2} {\bm \Delta}_\perp^2/4}{2{k}_2^{\prime -}},
{k}_2^{\prime -}, - x_2^\prime\frac{{\bm \Delta}_\perp}{2}\right]
\quad\text{with}\quad {k}_2^{\prime -} = x_2^\prime q^{\prime -} \,.
\label{parton-mom}
\end{eqnarray}
As a consequence of these replacements it is then possible to explicitly perform the integrations
over $\bar{k}_1^-$, $\bar{k}_2^+$, $\bar{\VEC{k}}_{\perp 1}$ and $\bar{\VEC{k}}_{\perp 2}$.
Furthermore, the relative distance between the
\hbox{(anti-)}$S[ud]$-diquark and the \hbox{(anti-)}$c$-quark
field operators in the hadronic matrix elements is forced to be lightlike,
i.e., they have to lie on the light cone and thus the
time ordering of the field operators can be dropped.
After these simplifications one arrives at the following expression
for the $p\bar{p} \, \to \, \overline{D^0} D^0$ amplitude:
\begin{equation}
\begin{split}
M_{\mu \nu} =
& \sum_{\rm a_i^{(\prime)},\, \alpha_i^{(\prime)}} \,
\int d \bar{k}_1^{\,+} \theta(\bar{k}_1^{\,+}) \int \frac{d z_1^-}{2\pi} e^{i \bar{k}_1^{\,+} z_1^-}
\int d \bar{k}_2^{\,-} \theta(\bar{k}_2^{\,-}) \int \frac{d z_2^+}{2\pi} e^{i \bar{k}_2^{\,-} z_2^+} \\
& \times \langle \overline{D^0} : p^\prime |
\Psi^c_{a^\prime_1\alpha^\prime_1}(-\bar{z}_1/2)
\Phi^{S[ud]}_{a_1}(+\bar{z}_1/2)
|p : p , \mu\rangle
\,\, \Tilde{H}_{a_i^{(\prime)} \alpha^{\prime}_i}
\left( \bar{k}_1 , \bar{k}_2 \right) \\
& \times \langle D^0 : q^\prime |
\Phi^{S[ud]\dagger}_{a_2}(+\bar{z}_2/2)
\overline{\Psi}^c_{a^\prime_2\alpha^\prime_2}(-\bar{z}_2/2)
|\bar{p} : q , \nu\rangle \,. \\
\label{eq:integration-1}
\end{split}
\end{equation}
From now on we will omit the color and spinor labels whenever this does not lead to ambiguities and
replace the field-operator arguments $\bar{z}_1$ and $\bar{z}_2$ by
their non-vanishing components $z_1^-$ and $z_2^+$, respectively.
Furthermore, if one uses $\bar{k}_1^{\,+} = \bar{x}_1 \bar{p}^{\,+}$ and
$\bar{k}_2^{\,-} = \bar{x}_2 \bar{q}^{\,-}$ to rewrite the
$\bar{k}_1^{\,+}$ and $\bar{k}_2^{\,-}$ integrations in the amplitude (\ref{eq:integration-1})
as integrations over the longitudinal momentum fractions $\bar{x}_1$ and $\bar{x}_2$, respectively,
one arrives at,
\begin{equation}
\begin{split}
M_{\mu \nu} =
&\int d \bar{x}_1 \, \bar{p}^{\,+} \, \int \frac{d z_1^-}{2\pi} e^{i \bar{x}_1 \bar{p}^{\,+} z_1^-}
\int d \bar{x}_2 \, \bar{q}^{\,-} \, \int \frac{d z_2^+}{2\pi} e^{i \bar{x}_2 \bar{q}^{\,-} z_2^+} \\
& \times \langle \overline{D^0} : p^\prime |
\Psi^c(-z_1^-/2)
\Phi^{S[ud]}(+z_1^-/2)
|p : p , \mu\rangle
\,\, \Tilde{H}\left( \bar{x}_1\bar{p}^{\,+}, \bar{x}_2\bar{q}^{\,-} \right) \\
& \times \langle D^0 : q^\prime |
\Phi^{S[ud]\dagger}(+z_2^+/2)
\overline{\Psi}^c(-z_2^+/2)
|\bar{p} : q , \nu\rangle \,.
\label{eq:integration}
\end{split}
\end{equation}
As in Ref.~\cite{Goritschnig:2009sq} for $p \,\to\, \Lambda_c^+$ ($\bar{p} \,\to\, \overline{\Lambda}_c^-$),
the $p \,\to\, \overline{D^0}$ ($\bar{p} \,\to\, D^0$) transition matrix element is expected
to exhibit a pronounced peak with respect to the momentum fraction.
The position of the peak is approximately at
\begin{equation}
x_0 \= \frac{m_c}{M} \= 0.68\,.
\end{equation}
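The peak position simply reflects the fact that the heavy $\bar{c}$ quark has to carry nearly the
full momentum of the $\overline{D^0}$, the light spectator quark only the small residual fraction;
numerically, with $m_c = 1.275\,{\rm GeV}$ and $M \approx 1.865\,{\rm GeV}$,
\begin{equation}
x_0 \= \frac{1.275\,{\rm GeV}}{1.865\,{\rm GeV}} \,\approx\, 0.68 \,.
\end{equation}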
From Eq.~(\ref{partonic-threshold}) one then infers
that the relevant average momentum fractions
$\bar{x}_1$ and $\bar{x}_2$
have to be larger than the skewness $\xi$.
This means that the convolution integrals in Eq.~(\ref{eq:integration}) have to be
performed only from $\xi$ to 1 and not from 0 to 1.
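The same bound can also be read off directly from the parton kinematics: the plus momentum
$k_1^{\prime +} = x_1^\prime p^{\prime +}$ of the produced on-shell $\bar{c}$ quark has to be positive,
and by Eq.~(\ref{individual-fractions})
\begin{equation}
x_1^\prime \= \frac{\bar{x}_1-\xi}{1-\xi} \,>\, 0
\quad\Longleftrightarrow\quad
\bar{x}_1 \,>\, \xi \,,
\end{equation}
and analogously for $\bar{x}_2$.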
In the following section we will analyze the soft hadronic matrix elements in some more detail.
\section{Hadronic Transition Matrix Elements}
\label{sec:quark-field-product}
Compared to Eq.~(\ref{ampl}), the Fourier transforms of the hadronic matrix elements
for the $p \to \overline{D^0}$ and $\bar{p} \to D^0$ transitions
have reduced to Fourier integrals solely over $z_1^-$ and $z_2^+$, respectively.
Hence we have to study the integral
\begin{equation}
\bar{p}^{\,+} \, \int \frac{d z_1^-}{2\pi} e^{i \bar{x}_1\bar{p}^{+} z_1^-}
\langle \overline{D^0} : p^\prime | \Psi^c(-z_1^-/2)
\Phi^{S[ud]}(+z_1^-/2)|p : p , \mu\rangle \,,
\label{soft-hadronic-matrix-element}
\end{equation}
over the $p \to \overline{D^0}$ transition matrix element and the integral
\begin{equation}
\bar{q}^- \, \int \frac{d z_2^+}{2\pi} e^{i \bar{x}_2\bar{q}^{-} z_2^+}
\langle D^0 : q^\prime | \Phi^{S[ud]\dagger}(+z_2^+/2)
\overline{\Psi}^c(-z_2^+/2)|\bar{p} : q , \nu\rangle \,,
\label{soft-anti-hadronic-matrix-element}
\end{equation}
over the $\bar{p} \to D^0$ transition matrix element
instead of Eqs.~(\ref{soft-Tordered-hadronic-matrix-element}) and
(\ref{soft-Tordered-anti-hadronic-matrix-element}), respectively.
We will first concentrate on the
$p \to \overline{D^0}$ transition (\ref{soft-hadronic-matrix-element})
and investigate the product of field operators
$\Psi^c(-z^-_1/2)\Phi^{S[ud]}(+z^-_1/2)$.
For this purpose we consider the $c-$quark field operator $\Psi^c$
in the hadron frame of the outgoing $\overline{D^0}$,
cf. e.g., Refs.~\cite{Kogut:1969xa, Brodsky:1989pv},
where the $\overline{D^0}$ has no transverse momentum component.
It can be reached from our symmetric CMS by a transverse boost
\cite{Dirac:1949cp, Leutwyler:1977vy} with the boost parameters
\begin{equation}
b^+ = \left(1-\xi\right) \bar{p}^{\,+}
\quad\text{and}\quad
\VEC{b}_\perp = \frac{\bm{\Delta}_\perp}{2}\,.
\end{equation}
In this hadron-out frame we write the field operator in terms of its
\lq\lq good\rq\rq\ and \lq\lq bad\rq\rq\ light cone components,
\begin{equation}
\Psi^c \= \frac12 (\gamma^-\gamma^+ + \gamma^+\gamma^-) \, \Psi^c
\,\equiv\, \Psi^{c}_+ + \Psi^{c}_-\,,
\end{equation}
by means of the good and bad projection operators
$\mathcal{P}^+ = \frac12 \gamma^- \gamma^+$ and
$\mathcal{P}^- = \frac12 \gamma^+ \gamma^-$, respectively.
After doing that we eliminate the $\gamma^-$ appearing in
$\mathcal{P}^+$ and $\mathcal{P}^-$ by
using the $\bar{c}-$quark energy projector
\begin{equation}
\sum_{\lambda^\prime_1} v({k}^\prime_1,\lambda^\prime_1)
\bar{v}({k}^\prime_1,\lambda^\prime_1)\=
{k}^\prime_1\cdot\gamma - m_c \,.
\end{equation}
In the hadron-out frame it explicitly takes the form
\begin{equation}
\sum_{\lambda^\prime_1} v(\hat{k}^\prime_1,\lambda^\prime_1)
\bar{v}(\hat{k}^\prime_1,\lambda^\prime_1)\=
{k}_1^{\prime +}\gamma^- + \frac{m_c^2}{2{k}_1^{\prime +}}\gamma^+ - m_c \,,
\label{eq:proj-mass}
\end{equation}
since there the $\bar{c}-$quark momentum is
\begin{equation}
\hat{k}_1^\prime =
\left[
{k}^{\prime +}_1,\,
\frac{m_c^2}{2{k}_1^{\prime +}},\,
\VEC{0}_\perp
\right]\,.
With those replacements the $c-$quark field operator becomes
\begin{eqnarray}
\Psi^c &=& \frac{1}{2{k}_1^{\prime +}} \sum_{\lambda^\prime_1} \Big\{
v(\hat{k}^\prime_1,\lambda^\prime_1)
\left(\bar{v}(\hat{k}^\prime_1,\lambda^\prime_1)\gamma^+\Psi^c\right) \nonumber\\
&+& \gamma^+\Big[ v(\hat{k}^\prime_1,\lambda^\prime_1)
(\bar{v}(\hat{k}^\prime_1,\lambda^\prime_1)\Psi^c)
+ 2m_c\Psi^c\Big]\Big\}\,.
\label{eq:decomp-mass}
\end{eqnarray}
As in the case of $p\bar{p} \,\to\, \Lambda_c^+\overline{\Lambda}_c^-$
in Ref.~\cite{Goritschnig:2009sq}, one can argue that the contribution coming from
$\left(\bar{v}(\hat{k}^\prime_1,\lambda^\prime_1)\gamma^+\Psi^c\right)$
dominates over the one in the square brackets, and thus the latter one can be neglected.
Since this dominant contribution can be considered as a plus component of a four-vector,
one can immediately boost back to our symmetric CMS where it then still holds that
\begin{equation}
\Psi^c(-z_1/2) \= \frac{1}{2{k}_1^{\prime +}} \sum_{\lambda^\prime_1}
v({k}^\prime_1,\lambda^\prime_1)
\left(\bar{v}({k}^\prime_1,\lambda^\prime_1)\gamma^+\Psi^c(-z_1/2)\right)\,.
\label{eq:decomp-c}
\end{equation}
Furthermore, one can even show that in
$\left( \bar{v}(k_1^\prime ,\,\lambda_1^\prime )\gamma^+\Psi^c(-z_1^-/2) \right)$
on the right-hand side of Eq.~(\ref{eq:decomp-c}) only the good component of
$\Psi^c(-z_1^-/2)$ is projected out, since
\begin{equation}
\bar{v}(k_1^\prime ,\, \lambda_1^\prime )\gamma^+\Psi^{c}(-z_1^-/2)
=\bar{v}(k_1^\prime ,\, \lambda_1^\prime )\gamma^+ \mathcal{P}^+ \Psi^{c}(-z_1^-/2) \,.
\end{equation}
Finally, we note that such manipulations are not necessary for the
scalar field operator $\Phi^{S[ud]}$ of the $S[ud]$ diquark.
Putting everything together gives for the $p \to \overline{D^0}$
transition matrix element (\ref{soft-hadronic-matrix-element})
\begin{equation}
\begin{split}
&\bar{p}^{\,+} \, \int \frac{d z_1^-}{2\pi} e^{i \bar{x}_1\bar{p}^{+} z_1^-}
\langle \bar{D^0} : p^\prime | \Psi^c(-z_1^-/2)
\Phi^{S[ud]}(+z_1^-/2)|p : p , \mu\rangle
\,=\, \\
&\frac{\bar{p}^{\,+}}{2k_1^{\prime +}} \,\sum_{\lambda_1^\prime} \,\int \frac{d z_1^-}{2\pi}
e^{i \bar{x}_1\bar{p}^{+} z_1^-}
\langle \bar{D^0} : p^\prime |
v\left(k_1^\prime ,\,\lambda_1^\prime \right)
\left( \bar{v}(k_1^\prime ,\,\lambda_1^\prime )\gamma^+\Psi^{c}_+(-z_1^-/2) \right)
\Phi^{S[ud]}(+z_1^-/2)
|p : p , \mu\rangle \,.
\label{explicit-hadronic-matrix-element}
\end{split}
\end{equation}
Proceeding in an analogous way for the $\bar{p} \to D^0$ transition matrix element,
where the role of the $+$ and $-$ components are interchanged,
we get for Eq.~(\ref{soft-anti-hadronic-matrix-element})
\begin{equation}
\begin{split}
&\bar{q}^- \, \int \frac{d z_2^+}{2\pi} e^{i \bar{x}_2\bar{q}^{-} z_2^+}
\langle D^0 : q^\prime | \Phi^{S[ud]\dagger}(+z_2^+/2)
\overline{\Psi}^c(-z_2^+/2)|\bar{p} : q , \nu\rangle
\,=\, \\
&\frac{\bar{q}^-}{2k_2^{\prime -}} \,\sum_{\lambda_2^\prime} \,\int \frac{d z_2^+}{2\pi}
e^{i \bar{x}_2\bar{q}^{-} z_2^+}
\langle D^0 : q^\prime | \Phi^{S[ud]\dagger}(+z_2^+/2)
\left( \overline{\Psi}^{c}_+(-z_2^+/2) \gamma^- u\left(k_2^\prime,\lambda_2^\prime\right) \right)
\bar{u}\left(k_2^\prime,\lambda_2^\prime\right)
|\bar{p} : q , \nu\rangle \,.
\label{explicit-anti-hadronic-matrix-element}
\end{split}
\end{equation}
Also here only the good components of the quark field are projected out on the right-hand side.
Using now Eqs.~(\ref{explicit-hadronic-matrix-element})
and (\ref{explicit-anti-hadronic-matrix-element}) and
attaching the spinors $v\left(k_1^\prime ,\,\lambda_1^\prime \right)$
and $\bar{u}\left(k_2^\prime,\lambda_2^\prime\right)$ to the hard subprocess amplitude
$\Tilde{H}$ by introducing
\begin{equation}
H_{\lambda_1^\prime\,,\lambda_2^\prime}\left( \bar{x}_1 \,, \bar{x}_2\right) :=
\bar{u}\left(k_2^\prime,\lambda_2^\prime\right)
\Tilde{H}\left( \bar{x}_1\bar{p}^{\,+}, \bar{x}_2\bar{q}^{\,-} \right)
v\left(k_1^\prime ,\,\lambda_1^\prime \right) \,,
\label{def:subprocess-amplitudes}
\end{equation}
we get for the $p\bar{p} \, \to \, \overline{D^0} D^0$ amplitude (\ref{eq:integration})
\begin{equation}
\begin{split}
M_{\mu \nu} =
&\frac{1}{4(\bar{p}^{\,+})^2}
\, \sum_{\lambda_1^\prime , \lambda_2^\prime}
\, \int d \bar{x}_1 \, \int d \bar{x}_2 \,
\, H_{\lambda_1^\prime\,,\lambda_2^\prime}\left( \bar{x}_1 \,, \bar{x}_2\right)
\, \frac{1}{\bar{x}_1-\xi} \, \frac{1}{\bar{x}_2-\xi} \\
& \times \bar{v}(k_1^\prime ,\,\lambda_1^\prime )\gamma^+ \quad
\bar{p}^{\,+} \,\int \frac{d z_1^-}{2\pi} \, e^{i \bar{x}_1\bar{p}^{+} z_1^-} \,
\langle \bar{D^0} : p^\prime |
\Psi^c_+(-z_1^-/2)
\Phi^{S[ud]}(+z_1^-/2)
|p : p , \mu\rangle \\
&\times \bar{q}^{\,-} \,\int \frac{d z_2^+}{2\pi} \, e^{i \bar{x}_2\bar{q}^{-} z_2^+} \,
\langle D^0 : q^\prime | \Phi^{S[ud]\dagger}(+z_2^+/2)
\overline{\Psi}^c_+(-z_2^+/2)
|\bar{p} : q , \nu\rangle \quad
\gamma^- u\left(k_2^\prime,\lambda_2^\prime\right)\,.
\label{Mmunu}
\end{split}
\end{equation}
Introducing the abbreviations
\begin{equation}
\mathcal{H}_{\lambda_1^\prime \mu}^{\bar{c}S} \,:=\,
\bar{v}(k_1^\prime ,\,\lambda_1^\prime )\gamma^+ \,\,
\bar{p}^{\,+} \,\int \frac{d z_1^-}{2\pi} \, e^{i \bar{x}_1\bar{p}^{+} z_1^-} \,
\langle \bar{D^0} : p^\prime |
\Psi^c_+(-z_1^-/2)
\Phi^{S[ud]}(+z_1^-/2)
|p : p , \mu\rangle
\label{eq:mathcalH}
\end{equation}
and
\begin{equation}
\mathcal{H}_{\lambda_2^\prime \nu}^{c\bar{S}} \,:=\,
\bar{q}^{\,-} \,\int \frac{d z_2^+}{2\pi} \, e^{i \bar{x}_2\bar{q}^{-} z_2^+} \,
\langle D^0 : q^\prime | \Phi^{S[ud]\dagger}(+z_2^+/2)
\overline{\Psi}^c_+(-z_2^+/2)
|\bar{p} : q , \nu\rangle \,\,
\gamma^- u\left(k_2^\prime,\lambda_2^\prime\right) \,,
\label{eq:mathcalantiH}
\end{equation}
for the pertinent projections of the hadronic transition matrix elements,
we can write the hadronic scattering amplitude in a more compact form:
\begin{equation}
M_{\mu \nu} =
\frac{1}{4(\bar{p}^{\,+})^2}
\, \sum_{\lambda_1^\prime , \lambda_2^\prime}
\, \int d \bar{x}_1 \, \int d \bar{x}_2 \,
\, H_{\lambda_1^\prime\,,\lambda_2^\prime}\left( \bar{x}_1 \,, \bar{x}_2\right)
\, \frac{1}{\bar{x}_1-\xi} \, \frac{1}{\bar{x}_2-\xi}
\,\mathcal{H}_{\lambda_1^\prime \mu}^{\bar{c}S}
\,\mathcal{H}_{\lambda_2^\prime \nu}^{c\bar{S}}\,.
\label{Mmunu-short}
\end{equation}
\section{Overlap Representation of $\mathcal{H}_{\lambda_1^\prime \mu}^{\bar{c}S}$}
\label{sec:overlap-representation}
In this section we will derive a representation for the hadronic
$p \,\to\, \overline{D^0}$ and $\bar{p} \,\to\, D^0$ transition matrix elements
as an overlap of hadronic light-cone wave functions (LCWFs) for the
valence Fock components of $p$ and $\overline{D^0}$ \cite{Diehl:2000xz}.
Since we only need them for $\bar{x} > \xi$, i.e.,
in the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi region,
the hadronic transition matrix elements admit such a representation.
For doing that we will make use of the Fock expansion of the hadron states and
the Fourier decomposition of the partonic field operators in light-cone quantum field theory.
At a given light-cone time, say $z^+ = 0$,
the good independent dynamical field components
$\Phi^{S[ud]}\left(+z_1^-/2\right)$ and $\Psi^c_+\left(-z_1^-/2\right)$
of the $S[ud]$ diquark and the $c$ quark, respectively, have the Fourier decompositions
\begin{equation}
\begin{split}
\Phi^{S[ud]}\left(+z_1^-/2\right) \=
\int\frac{dk_1^+}{k_1^+}\,\int\frac{d^2 k_{1\perp}}{16\pi^3}\,\theta\left(k_1^+\right)
&\left[
a\left(S[ud]:\,k_1^+,\,\VEC{k}_{1\perp}\right)
e^{-\imath k_1^+ \frac{z_1^-}{2}} \right. \\
&\left. +
b^\dagger\left(S[ud]:\,k_1^+,\,\VEC{k}_{1\perp}\right)
e^{+\imath k_1^+ \frac{z_1^-}{2}}
\right]
\label{Sud-field-operator}
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\Psi^{c}_+\left(-z_1^-/2\right) \=
\int\frac{dk_1^{\prime +}}{k_1^{\prime +}}\,
\int\frac{d^2 k_{1\perp}^\prime}{16\pi^3}\,
\theta\left(k_1^{\prime +}\right)
\sum_{\lambda_1^\prime}
&\left[
c\left(c:\,k_1^{\prime +},\,\VEC{k}_{1\perp}^\prime,\,\lambda_1^\prime\right)
u_+\left(k_1^\prime,\,\lambda_1^\prime\right)
e^{+\imath k_1^{\prime +} \frac{z_1^-}{2}} \right. \\
&\left. +
d^\dagger\left(c:\,k_1^{\prime +},\,\VEC{k}_{1\perp}^\prime,\,\lambda_1^\prime\right)
v_+\left(k_1^\prime,\,\lambda_1^\prime\right)
e^{-\imath k_1^{\prime +} \frac{z_1^-}{2}}
\right] \,.
\label{c-field-operator}
\end{split}
\end{equation}
The spinors $u_+$ and $v_+$ are the good components of the (anti)quark
spinors $u$ and $v$, i.e., $u_+ \= \mathcal{P}^+ u$ and $v_+ \= \mathcal{P}^+ v$.
The operators $a$ and $b^\dagger$ are the annihilator of an $S[ud]$ diquark and
the creator of an $\overline{S[ud]}$ diquark, respectively.
The operator $c$ annihilates a $c$ quark and
the operator $d^\dagger$ creates a $\bar{c}$ quark.
Their action on the vacuum gives the single-parton states
\begin{eqnarray}
a^\dagger\left(S[ud]:\,k_1^+,\,\VEC{k}_{1\perp}\right) \mid 0 \,\rangle
&=& \mid S[ud]:\,k_1^+,\,\VEC{k}_{1\perp} \rangle \,, \\
b^\dagger\left(S[ud]:\,k_1^+,\,\VEC{k}_{1\perp}\right)\mid 0 \,\rangle
&=& \mid \overline{S[ud]}:\,k_1^+,\,\VEC{k}_{1\perp} \rangle \,, \\
c^\dagger\left(c:\,k_1^{\prime +},\,\VEC{k}_{1\perp}^\prime,\,\lambda_1^\prime\right)\mid 0 \,\rangle
&=& \mid c:\,k_1^{\prime +},\,\VEC{k}_{1\perp}^\prime,\,\lambda_1^\prime \rangle \,, \\
d^\dagger\left(c:\,k_1^{\prime +},\,\VEC{k}_{1\perp}^\prime,\,\lambda_1^\prime\right)\mid 0 \,\rangle
&=& \mid \bar{c}:\,k_1^{\prime +},\,\VEC{k}_{1\perp}^\prime,\,\lambda_1^\prime \rangle \,,
\end{eqnarray}
which are normalized as follows:
\begin{equation}
\langle \cdots :\,k^{\prime +},\,\VEC{k}^\prime_\perp,\,\lambda^\prime
\mid
\cdots:\,k^{+},\,\VEC{k}_\perp,\,\lambda \rangle \=
16\pi^3 \, k^+ \,
\delta \left( k^{\prime +} - k^+ \right)
\delta^{(2)} \left( \VEC{k}^\prime_\perp - \VEC{k}_\perp \right) \,
\delta_{\lambda^\prime,\,\lambda} \,.
\end{equation}
In the case of $S[ud]$ states no $\lambda^{(\prime)}$ and no $\delta_{\lambda^\prime,\,\lambda}$ appear.
This normalization is in accordance with the \hbox{(anti)}commutation relations
\begin{equation}
\begin{split}
\left[
a\left(S[ud]:\,k^{\prime +},\,\VEC{k}^\prime_\perp\right),\,
a^\dagger\left(S[ud]:\,k^+,\,\VEC{k}_\perp\right)
\right]
& \=
\left[
b\left(S[ud]:\,k^{\prime +},\,\VEC{k}^\prime_\perp\right),\,
b^\dagger\left(S[ud]:\,k^+,\,\VEC{k}_\perp\right)
\right] \\
& \=
16\pi^3 \, k^+ \,
\delta \left( k^{\prime +} - k^+ \right)
\delta^{(2)} \left( \VEC{k}^\prime_\perp - \VEC{k}_\perp \right)
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\{
c\left(c:\,k^{\prime +},\,\VEC{k}^\prime_\perp,\,\lambda^\prime\right),\,
c^\dagger\left(c:\,k^+,\,\VEC{k}_\perp,\,\lambda\right)
\}
& \=
\{
d\left(c:\,k^{\prime +},\,\VEC{k}^\prime_\perp,\,\lambda^\prime\right),\,
d^\dagger\left(c:\,k^+,\,\VEC{k}_\perp,\,\lambda\right)
\} \\
& \=
16\pi^3 \, k^+ \,
\delta \left( k^{\prime +} - k^+ \right)
\delta^{(2)} \left( \VEC{k}^\prime_\perp - \VEC{k}_\perp \right)
\delta_{\lambda^\prime,\,\lambda} \,.
\end{split}
\end{equation}
In the Fock state decomposition hadrons on the light front are represented
by a superposition of parton states.
Taking only into account the valence Fock state,
the proton and the $\overline{D^0}$ state in our quark-diquark picture
are represented as
\begin{equation}
\begin{split}
\mid p : p,\,\mu \rangle =
&\int d\tilde{x}\,\frac{d^2\tilde{k}_\perp}{16\pi^3} \,
\psi_p\left(\tilde{x},\,\VEC{\tilde{k}}_{\perp}\right) \,
\frac{1}{\sqrt{\tilde{x}(1-\tilde{x})}} \\
&\times\mid S_{\left[ud\right]}:\,\tilde{x}p^+,\,\VEC{\tilde{k}}_{\perp}+\tilde{x}\VEC{p}_{\perp} \rangle
\mid u:\,(1-\tilde{x})p^+,\,-\VEC{\tilde{k}}_{\perp}+(1-\tilde{x})\VEC{p}_{\perp},\,\mu \rangle
\label{Eq_qD_ProtonState}
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\mid \overline{D^0} : p^\prime \rangle =
&\int d\hat{x}^\prime\,\frac{d^2 \hat{k}^\prime_\perp}{16\pi^3}\,
\psi_D\left( \hat{x}^\prime ,\,\VEC{\hat{k}^\prime}_{\perp} \right) \,
\frac{1}{\sqrt{\hat{x}^\prime(1-\hat{x}^\prime)}} \,
\frac{1}{\sqrt{2}}\sum_{\lambda^\prime} \left(2\lambda^\prime\right) \\
&\times\mid \bar{c}:\,\hat{x}^\prime p^{\prime +},\,\VEC{\hat{k}^\prime}_{\perp}+\hat{x}^\prime\VEC{p}^\prime_{\perp},\,\lambda^\prime \rangle\,
\mid u:\,(1-\hat{x}^\prime)p^{\prime +} ,\,
-\VEC{\hat{k}^\prime}_{\perp}+(1-\hat{x}^\prime)\VEC{p}^\prime_{\perp},\,-\lambda^\prime \rangle \,,
\label{Eq_qD_DState}
\end{split}
\end{equation}
respectively, with normalization
\begin{equation}
\langle \cdots :\,p^{\prime +},\,\VEC{p}^\prime_\perp\,(,\,\lambda^\prime)
\mid
\cdots :\,p^{+},\,\VEC{p}_\perp\,(,\,\lambda) \rangle \=
16\pi^3 \, p^+ \,
\delta \left( p^{\prime +} - p^+ \right)
\delta^{(2)} \left( \VEC{p}^\prime_\perp - \VEC{p}_\perp \right)
\left(\delta_{\lambda^\prime,\,\lambda}\right) \,.
\end{equation}
Also here, in the case of the pseudoscalar $D^0$ and $\overline{D^0}$ states
no $\lambda^{(\prime)}$ and no $\delta_{\lambda^\prime,\,\lambda}$ appear.
$\psi_p$ and $\psi_D$ are the LCWFs
of the proton and the $\overline{D^0}$, respectively,
which will be specified in Sec.~\ref{sec:Modelling-the-TDAs}.
The LCWFs do not depend on the total momentum of the hadron,
but only on the momentum coordinates of the partons relative to the hadron momentum.
Those relative momenta are most easily identified in the hadron frame of the parent hadron.
Here we have assumed that the partons inside the proton and the $\overline{D^0}$
have zero orbital angular momentum.
The arguments of the LCWFs are related to the
average momenta and momentum fractions by:
\begin{equation}
\tilde{x} \= \frac{\bar{x} + \xi}{1 + \xi}\,,\quad
\tilde{{\bm k}}_\perp \= \bar{{\bm k}}_\perp - \frac{1-\bar{x}}{1 + \xi}\frac{{\bm \Delta}_\perp}{2} \,,
\end{equation}
\begin{equation}
\hat{x}^\prime \= \frac{\bar{x} - \xi}{1 - \xi}\,,\quad
\hat{{\bm k}}_\perp^\prime \= \bar{{\bm k}}_\perp + \frac{1-\bar{x}}{1 - \xi}\frac{{\bm \Delta}_\perp}{2} \,.
\end{equation}
Using the expressions above we can write the hadronic matrix elements
appearing in Eq.~(\ref{Mmunu}) as
\begin{equation}
\begin{split}
\mathcal{H}_{\lambda_1^\prime \mu}^{\bar{c}S}
\=& - 2\sqrt{2}\mu\bar{p}^{\,+} \, \int\,\frac{d\bar{x}d^2\bar{k}_\perp}{16\pi^3}
\, \sqrt{\frac{\bar{x}-\xi}{\bar{x}+\xi}}
\, \psi_D
\left(\hat{x}^\prime(\bar{x},\,\xi),\,\hat{\VEC{k}}^\prime_\perp(\bar{\VEC{k}}_\perp\,,\bar{x}\,,\xi)\right) \\
&\phantom{=}\, \times \,
\psi_p
\left(\tilde{x}(\bar{x},\,\xi),\,\tilde{\VEC{k}}_\perp(\bar{\VEC{k}}_\perp\,,\bar{x}\,,\xi)\right)
\, \delta\left(\bar{x}_1 - \bar{x}\right)\delta_{-\lambda_1^\prime ,\, \mu} \,,
\label{hadronic-transition-overlap}
\end{split}
\end{equation}
for the $p \,\to\, \overline{D^0}$ transition and
\begin{equation}
\begin{split}
\mathcal{H}_{\lambda_2^\prime \nu}^{c\bar{S}}
\=& - 2\sqrt{2}\nu\bar{q}^{\,-}\, \int\,\frac{d\bar{y}d^2\bar{l}_\perp}{16\pi^3}
\, \sqrt{\frac{\bar{y}-\xi}{\bar{y}+\xi}}
\, \psi_D
\left(\hat{y}^\prime(\bar{y},\,\xi),\,\hat{\VEC{l}}^\prime_\perp(\bar{\VEC{l}}_\perp\,,\bar{y}\,,\xi)\right) \\
&\phantom{=}\, \times \,
\psi_p
\left(\tilde{y}(\bar{y},\,\xi),\,\tilde{\VEC{l}}_\perp(\bar{\VEC{l}}_\perp\,,\bar{y}\,,\xi)\right)
\, \delta\left(\bar{x}_2 - \bar{y}\right)\delta_{-\lambda_2^\prime ,\, \nu} \,,
\label{anti-hadronic-transition-overlap}
\end{split}
\end{equation}
for the $\bar{p} \,\to\, D^0$ transition.
Here we have used that
\begin{equation}
\bar{v}\left(k_1^\prime,\,\lambda_1^\prime\right) \gamma^+ v\left(k_1^\prime,\,-\mu\right)
\= 2 k_1^{\prime\,+}
\quad\text{and}\quad
\bar{u}\left(k_2^\prime,\,-\nu\right) \gamma^- u\left(k_2^\prime,\,\lambda_2^\prime\right)
\= 2 k_2^{\prime\,-} \,.
\label{eq:vbar-gammaplus-v-etc}
\end{equation}
Collecting all pieces we finally get
\begin{equation}
\begin{split}
M_{\mu \nu} \=
& 2\mu\nu
\, \int d \bar{x}_1 \, \int d \bar{x}_2 \,
\, H_{-\mu,\,-\nu}\left( \bar{x}_1 \,, \bar{x}_2\right)
\, \frac{1}{\sqrt{\bar{x}_1^2-\xi^2}} \, \frac{1}{\sqrt{\bar{x}_2^2-\xi^2}} \\
& \times \, \int\,\frac{d^2\bar{k}_\perp}{16\pi^3} \,
\, \psi_D
\left(\hat{x}^\prime(\bar{x}_1,\,\xi),\,\hat{\VEC{k}}^\prime_\perp(\bar{\VEC{k}}_\perp\,,\bar{x}_1\,,\xi)\right)
\psi_p
\left(\tilde{x}(\bar{x}_1,\,\xi),\,\tilde{\VEC{k}}_\perp(\bar{\VEC{k}}_\perp\,,\bar{x}_1\,,\xi)\right) \\
& \times \, \int\,\frac{d^2\bar{l}_\perp}{16\pi^3} \,
\, \psi_D
\left(\hat{y}^\prime(\bar{x}_2,\,\xi),\,\hat{\VEC{l}}^\prime_\perp(\bar{\VEC{l}}_\perp\,,\bar{x}_2\,,\xi)\right)
\psi_p
\left(\tilde{y}(\bar{x}_2,\,\xi),\,\tilde{\VEC{l}}_\perp(\bar{\VEC{l}}_\perp\,,\bar{x}_2\,,\xi)\right) \,.
\label{Mmunu-overlap-final}
\end{split}
\end{equation}
Furthermore, we can take advantage of the expected shape of the
$p \,\to\, \overline{D^0}$ ($\bar{p} \,\to\, D^0$) transition matrix elements.
Due to their pronounced peak around $x_0$, only those kinematical regions
of the hard scattering amplitude close to the peak position are
enhanced by the hadronic transition matrix elements.
For the hard partonic subprocess we, therefore, apply a
\lq\lq peaking approximation\rq\rq, i.e., we replace the momentum fractions
appearing in the hard-scattering amplitude by $x_0$.
Then the hard scattering amplitude can be pulled out of the convolution integral and the $p\bar{p} \, \to \, \overline{D^0} D^0$ amplitude simplifies further to
\begin{equation}
\begin{split}
M_{\mu \nu} \,=\,
& 2\mu\nu
\, H_{-\mu,\,-\nu}\left( x_0,\, x_0\right)
\, \Big[
\, \int_\xi^1 d \bar{x} \,
\, \frac{1}{\sqrt{\bar{x}^2-\xi^2}}
\, \int\,\frac{d^2\bar{k}_\perp}{16\pi^3} \\
& \,
\psi_D
\left(\hat{x}^\prime(\bar{x},\,\xi),\,\hat{\VEC{k}}^\prime_\perp(\bar{\VEC{k}}_\perp\,,\bar{x}\,,\xi)\right)
\psi_p
\left(\tilde{x}(\bar{x},\,\xi),\,\tilde{\VEC{k}}_\perp(\bar{\VEC{k}}_\perp\,,\bar{x}\,,\xi)\right)
\, \Big]^2 \,,
\label{Mmunu-overlap-final-peaking-approx}
\end{split}
\end{equation}
where the term in square brackets can be considered as a sort of generalized form factor.
\section{Hard Scattering Subprocess}
\label{sec:hard-subprocess}
Before we start to specify the LCWFs occurring in this overlap representation
of the $p \,\to\, \overline{D^0}$ ($\bar{p} \,\to\, D^0$) transition,
we will first calculate scattering amplitudes of the hard partonic
$S[ud]\,\overline{S[ud]} \,\to\, \bar{c}c$ subprocess within the
peaking approximation.
\begin{figure}[h]
\begin{center}
\includegraphics[width=.70\textwidth, clip=true]{SSbar_to_ccbar.eps}
\end{center}
\caption{The hard scattering process on the partonic level
$S[ud]\overline{S[ud]} \,\to\, c\bar{c}$.}
\label{fig:hard-subprocess}
\end{figure}
The scattering amplitude for the hard partonic subprocess,
as shown in Fig.~\ref{fig:hard-subprocess}, is given by
\begin{equation}
H_{\lambda_1^\prime,\lambda_2^\prime} \,=\,
\imath \frac{4}{9} \left(
-\imath g_s \bar{u}\left(k_2^\prime,\lambda_2^\prime\right)
\gamma^\mu v\left(k_1^\prime,\lambda_1^\prime\right)
\right)
\frac{-\imath g_{\mu\nu}}{\left( k_1 + k_2 \right)^2}
\left(
\left(-\imath g_s F_s \right) \left(k_1 - k_2\right)^\nu
\right) \,.
\label{hard-amplitude}
\end{equation}
The factor $4/9$ is the color factor, which we have attached to the hard-scattering amplitude.
$g_s=\sqrt{4\pi\alpha_s}$ is the \lq\lq usual\rq\rq\ strong coupling constant
and $F_s$ denotes the diquark form factor at the gluon-diquark vertex.
This diquark form factor takes care of the composite nature of the $S[ud]$ diquark
and the fact that for large $s$ the diquark should dissolve into quarks.
We have taken the phenomenological form factor from Ref.~\cite{Kroll:1993zx}, namely,
\begin{equation}
F_s(\hat{s}) \=
\left| \frac{ Q_0^2 }{ Q_0^2 - \hat{s} } \right| \,,\quad
Q_0^2 \= 3.22 \,{\rm GeV}^2 \,,\quad
\hat{s} > Q_0^2 \,.
\label{eq:F_s}
\end{equation}
It is just the analytic continuation of a spacelike form factor
to the timelike region. The original spacelike form factor was introduced in Ref.~\cite{Anselmino:1987vk}
(where it has been obtained from fits to the structure functions of deep inelastic lepton-hadron scattering,
to the electromagnetic proton form factor, and to elastic proton-proton data at large momentum transfer). It should be remarked here that
such a continuation is not unique; the form factor can acquire unknown phases in the continuation.
Fortunately, such phases do not affect the observables considered here, which is the reason for
taking the absolute value in Eq.~(\ref{eq:F_s}).
With the help of the peaking approximation we can express the subprocess amplitudes in terms of the kinematical variables of the full process. For the different helicity combinations we explicitly have,
\begin{eqnarray}
H_{++} &\,=\,& + 4\pi\alpha_s(x_0^2 s) \, F_s(x_0^2 s)
\,\frac{4}{9}\, \frac{2M}{\sqrt{s}} \cos\theta \,, \nonumber\\
H_{+-} &\,=\,& - 4\pi\alpha_s(x_0^2 s) \, F_s(x_0^2 s)
\,\frac{4}{9}\, \sin\theta \,, \nonumber\\
H_{-+} &\,=\,& - 4\pi\alpha_s(x_0^2 s) \, F_s(x_0^2 s)
\,\frac{4}{9}\, \sin\theta \,, \nonumber\\
H_{--} &\,=\,& - 4\pi\alpha_s(x_0^2 s) \, F_s(x_0^2 s)
\,\frac{4}{9}\, \frac{2M}{\sqrt{s}} \cos\theta \,.
\label{eq:subprocess-amplitudes}
\end{eqnarray}
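To indicate the typical size of these amplitudes, consider $s = 20\,{\rm GeV}^2$: the hard scale is then
$x_0^2 s \approx 9.2\,{\rm GeV}^2$, safely above both $4m_c^2 \approx 6.5\,{\rm GeV}^2$ and
$Q_0^2 = 3.22\,{\rm GeV}^2$, and the form factor (\ref{eq:F_s}) takes the value
$F_s(x_0^2 s) \approx 0.54$. With a representative coupling $\alpha_s(x_0^2 s) \approx 0.3$
(the precise value does not matter for this rough estimate) the common prefactor becomes
\begin{equation}
4\pi\,\alpha_s(x_0^2 s)\, F_s(x_0^2 s)\,\frac{4}{9} \,\approx\, 0.9 \,,
\end{equation}
so that the helicity-flip amplitudes $H_{+-}$, $H_{-+}$ are of order unity at $\theta = 90^\circ$,
while $H_{++}$ and $H_{--}$ carry the additional suppression factor $2M/\sqrt{s} \approx 0.83$.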
\section{Modelling the Hadronic Transition Matrix Elements}
\label{sec:Modelling-the-TDAs}
In order to make numerical predictions we have to specify the LCWFs for the proton and the $D^0$.
We will use wave functions of the form
\begin{equation}
\psi \sim e^{-a^2 \sum_i \frac{\VEC{k}_{i\perp}^2+m_i^2}{x_i}} \,,
\label{eq:oszi-LCWF}
\end{equation}
which can be traced back to a harmonic oscillator ansatz \cite{Wirbel:1985ji}
that is transformed to the light cone \cite{Guo:1991eb}.
In Refs.~\cite{Korner:1991zx, Korner:1992uw} it was adapted to the case
of baryons within a quark-diquark picture.
According to Refs.~\cite{Goritschnig:2009sq, Kroll:1988cd} we write the wave functions
of a proton in an $S[ud]u$ Fock state as
\begin{equation}
\psi_p(x,\,\VEC{k}_\perp) \=
N_p \, x \, e^{-a_p^2 \frac{\VEC{k}_\perp^2}{x(1-x)}}
\label{eq:proton-LCWF}
\end{equation}
and the one of a pseudoscalar $D^0$ meson in a $u\bar{c}$ Fock state as
\begin{equation}
\psi_D(x,\,\VEC{k}_\perp) \=
N_D \, e^{-a_D^2 M^2 \frac{(x-x_0)^2}{x(1-x)}} \, e^{-a_D^2 \frac{\VEC{k}_\perp^2}{x(1-x)}} \,.
\label{eq:D-LCWF}
\end{equation}
Here $x$ is the momentum fraction of the active constituent,
the $S[ud]$ diquark or the $\bar{c}$ quark, respectively.
The mass exponential in Eq.~(\ref{eq:D-LCWF}) generates the expected pronounced peak at $x \approx x_0$
and is a slightly modified version of the one given in Refs.~\cite{Korner:1991zx, Korner:1992uw}.
In each of the wave functions, Eqs.~(\ref{eq:proton-LCWF}) and (\ref{eq:D-LCWF}), we have two free parameters:
on the one hand the transverse size parameter $a_{p/D}$ and, on the other hand,
the normalization constant $N_{p/D}$.
The parameters can be associated with the mean intrinsic transverse momentum squared
$\langle \VEC{k}_\perp^2 \rangle_{p/D}$ of the active constituent inside its parent hadron
and with the probability to find the hadron in the specific Fock state
(or with the decay constant $f_{p/D}$ of the corresponding hadron).
The probabilities and the intrinsic transverse momenta for the valence Fock states
as given in Eqs.~(\ref{Eq_qD_ProtonState}) and (\ref{Eq_qD_DState})
can be calculated as
\begin{equation}
P_{p/D} \= \int dx \int \frac{d^2 k_\perp}{16\pi^3} \mid \psi_{p/D}(x,\,\VEC{k}_\perp) \mid^2
\label{eq:probs}
\end{equation}
and
\begin{equation}
\langle\VEC{k}_\perp^2\rangle_{p/D} \= \frac{1}{P_{p/D}}
\int dx \int \frac{d^2 k_\perp}{16\pi^3}
\VEC{k}_\perp^2 \mid \psi_{p/D}(x,\,\VEC{k}_\perp) \mid^2 \,,
\label{eq:intr-transv-mom}
\end{equation}
respectively.
Inserting the wave functions (\ref{eq:proton-LCWF}) and (\ref{eq:D-LCWF})
into Eqs.~(\ref{eq:probs}) and (\ref{eq:intr-transv-mom}), we obtain
\begin{equation}
P_p \=
\frac{N_p^2}{640\pi^2a_p^2}
\,,\qquad
\langle \VEC{k}_{\perp}^{2} \rangle_p \=
\frac{2}{21a_p^2}
\end{equation}
and
\begin{equation}
P_D \=
\frac{N_D^2}{32 \pi^2 a_D^2} I_{11}(a_D^2)
\,,\qquad
\langle \VEC{k}_{\perp}^{2} \rangle_D \=
\frac{1}{2 a_D^2}\frac{I_{22}(a_D^2)}{I_{11}(a_D^2)}\,,
\end{equation}
where we have introduced the abbreviation
\begin{equation}
I_{nm}(a_D^2)\,:=\,\int_0^1 dx \, x^n \left(1-x\right)^m \exp{\left[-2 a_D^2 M^2 \frac{(x-x_0)^2}{x(1-x)}\right]} \,.
\end{equation}
For the proton we use the same parameters as in Refs.~\cite{Goritschnig:2009sq, Kroll:1988cd}.
We choose $a_p = 1.1\,\text{GeV}^{-1}$ for the oscillator parameter
and $P_p = 0.5$ for the valence Fock state probability.
Choosing $P_p = 0.5$ for the proton may appear rather large at first sight.
However, as a bound state of two quarks a diquark also contains gluons and sea quarks
and thus effectively incorporates higher Fock states.
Therefore, a larger probability than one would expect for a 3-quark valence Fock state
and a larger transverse size of the quark-diquark state appear plausible.
Choosing the parameter values as stated above we get for the proton
\begin{equation}
\sqrt{\langle \VEC{k}_\perp^2\rangle_p} = 280\,\text{MeV}
\qquad\text{and}\qquad
N_p = 61.818\,\,{\rm GeV}^{-2} \,.
\end{equation}
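These numbers are just the inversion of the closed forms given above, i.e.,
\begin{equation}
N_p \= \sqrt{640\pi^2 a_p^2 P_p} \,,
\qquad
\sqrt{\langle \VEC{k}_\perp^2\rangle_p} \= \sqrt{\frac{2}{21}}\,\frac{1}{a_p} \,,
\end{equation}
evaluated for $a_p = 1.1\,\text{GeV}^{-1}$ and $P_p = 0.5$.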
For the $D$ meson we fix the two parameters such that we get certain values
for the valence Fock state probability $P_D$ and the decay constant $f_D$.
The decay constant $f_D$ is defined by the relation
\begin{equation}
\langle 0 \mid
\overline{\Psi}^u(0) \gamma^\mu \gamma_5 \Psi^c(0)
\mid D^0:\,p \rangle
\= \imath f_D p^\mu \,.
\end{equation}
Taking the plus component and inserting the fields as given in
Sec.~\ref{sec:overlap-representation}, we get (omitting phases)
\begin{equation}
2\sqrt{6} \int dx \frac{d^2 k_\perp}{16\pi^3} \psi_D (x,\,{\bm k}_{\perp})
\= f_D \,,
\end{equation}
such that
\begin{equation}
N_D \= \frac{16 \pi^2 a_D^2 f_D}{2 \sqrt{6} I_{11}(a_D^2/2)} \,.
\end{equation}
As value for the decay constant we take the experimental value
$f_D = 206$ MeV from Ref.~\cite{PDG};
for the valence Fock state probability we choose $P_D = 0.9$.
This amounts to $a_D = 0.864\,\text{GeV}^{-1}$.
As values for the normalization constant and for the root mean square of
the intrinsic transverse momentum of the active quark we then get
\begin{equation}
\sqrt{\langle \VEC{k}_\perp^2\rangle_D} = 383\,\text{MeV}
\qquad\text{and}\qquad
N_D = 55.2\,\,{\rm GeV}^{-2} \,,
\end{equation}
respectively.
Let us now turn to the issue of error assessment with respect to the parameters.
For the decay constant of the $D^0$ meson we take $f_D = 206 \pm 8.9$ MeV
as stated in Ref.~\cite{PDG}. The valence Fock state probability
of the $D$ meson $P_D$ is varied between $0.8$ and $1$.
We do not take into account the uncertainties of the parameters
appearing in the proton LCWF.
They are small compared to the ones of the $D$ meson LCWF since
they have been determined from detailed studies of other processes.
The influence of the parameter uncertainties on the
cross sections is indicated by grey error bands
in Figs.~\ref{fig:dsdt} and \ref{fig:sigma}.
We now turn to the wave function overlap
as derived in Sec.~\ref{sec:overlap-representation}.
When taking the model wave functions (\ref{eq:proton-LCWF})
and (\ref{eq:D-LCWF}) we explicitly get
\begin{equation}
\begin{split}
&\int \frac{d^2\overline{k}_\perp}{16\pi^3}
\psi_D
\left(\hat{x}^\prime(\bar{x},\,\xi),\,\hat{\VEC{k}}^\prime_\perp(\bar{\VEC{k}}_\perp\,,\bar{x}\,,\xi)\right)
\psi_p
\left(\tilde{x}(\bar{x},\,\xi),\,\tilde{\VEC{k}}_\perp(\bar{\VEC{k}}_\perp\,,\bar{x}\,,\xi)\right)
\= \\
& \=
\frac{N_p N_D}{16\pi^2}
\frac{\left( \bar{x} + \xi \right) \, \left( \bar{x}^2 - \xi^2 \right) \, \left( 1 - \bar{x} \right)}{\left(1+\xi\right)}
\frac{1}{a_{D}^2 \left(1-\xi\right)^2\left(\bar{x}+\xi\right) + a_p^2 \left(1+\xi\right)^2\left(\bar{x}-\xi\right)} \\
& \times \exp\left[ - a_{D}^2 M^2 \frac{\left(\bar{x} - \xi
- x_0\left(1-\xi\right)\right)^2} {\left(\bar{x}-\xi\right)
\left(1-\bar{x}\right)}\right]
\exp\left[- \bm{\Delta}_{\perp}^2 \frac{a_{D}^2 a_p^2 \left(1-\bar{x}\right)}
{a_{D}^2 \left(1-\xi\right)^2
\left(\bar{x}+\xi\right) + a_p^2
\left(1+\xi\right)^2\left(\bar{x}-\xi\right)}\right] \,.
\end{split}
\label{Eq_GPD}
\end{equation}
\begin{figure}[h]
\subfigure{
\includegraphics[width=.49\textwidth, clip=true]
{H_vs_xbar_theta_is_0.eps}
}
\hfill
\subfigure{
\includegraphics[width=.49\textwidth, clip=true]
{H_vs_xbar_theta_is_PiHalbe.eps}
}
\caption{
The wave function overlap of Eq.~\req{Eq_GPD} versus $\bar{x}$
at the CMS scattering angle $\theta = 0$ (upper figure)
and $\theta = \pi/2$ (lower figure).
We show it for Mandelstam $s = 30,\,20$ and $15\,\,{\rm GeV}^2$
(solid, dashed and dotted curves).
}
\label{fig:H_vs_xbar}
\end{figure}
In Fig.~\ref{fig:H_vs_xbar} we show the wave function overlap
of Eq.~\req{Eq_GPD} versus the momentum fraction $\bar{x}$
with the parameters chosen as stated above.
First we observe that it is centered at $\bar{x} \approx x_0$
for vanishing CMS scattering angle.
Next, let us compare the upper and the lower panel. We see that the
magnitude of the wave function overlap decreases strongly with
increasing CMS scattering angle $\theta$;
the overlap is most pronounced, in both magnitude and shape, in the forward direction.
Furthermore, when comparing the overlap for different values of Mandelstam $s$,
we observe that in the more important forward-scattering hemisphere the overlap
increases in magnitude with increasing $s$, whereas at large scattering
angles this behavior is reversed.
\section{Cross Sections}
\label{sec:Cross-Sections}
The differential cross section for $p\bar{p} \, \to \, \overline{D^0} D^0$ reads
\begin{equation}
\frac{d\sigma_{p\bar{p} \, \to \, \overline{D^0} D^0}}{d\Omega}
\=
\frac{1}{4\pi} s \Lambda_M \Lambda_m \frac{d\sigma_{p\bar{p} \, \to \, \overline{D^0} D^0}}{dt}
\=
\frac{1}{64\pi^2} \frac{1}{s} \frac{\Lambda_M}{\Lambda_m} \sigma_0 \,,
\label{eq:diff-CS}
\end{equation}
where we have introduced
\begin{equation}
\sigma_0 \,:=\, \frac14 \sum_{\mu\nu} \, \mid M_{\mu\nu} \mid^2 \,.
\label{eq:sigma0}
\end{equation}
\begin{figure}[h!]
\subfigure{
\includegraphics[width=.49\textwidth, clip=true]
{dsdt_vs_tprime_s_is_15.eps}
}
\hfill
\subfigure{
\includegraphics[width=.49\textwidth, clip=true]
{dsdt_vs_tprime_s_is_20.eps}
}
\begin{center}
\subfigure{
\includegraphics[width=.49\textwidth, clip=true]
{dsdt_vs_tprime_s_is_30.eps}
}
\end{center}
\caption{
The differential cross section $d\sigma_{p\bar{p} \, \to \, \overline{D^0} D^0} / dt$
versus $\mid t^\prime \mid$. In the upper, middle, and lower panels
we show it for Mandelstam $s = 15,\,20$ and $30\,\,{\rm GeV}^2$, respectively.
}
\label{fig:dsdt}
\end{figure}
In Fig.~\ref{fig:dsdt} the differential cross section
$d\sigma_{p\bar{p} \, \to \, \overline{D^0} D^0}/dt$ is plotted versus $\mid t^\prime \mid$,
again for Mandelstam $s=15,\,20$ and $30\,\,{\rm GeV}^2$.
The decrease of the cross section with increasing $\mid t^\prime \mid$
can mainly be attributed to the wave function overlap which
gives rise to a generalized form factor
[cf. Eq.~\req{Mmunu-overlap-final-peaking-approx}].
This form factor enters the differential cross section
to the fourth power.
The forward direction is dominated by those amplitudes in which
the helicities of the proton and antiproton (and also of the
$c$ and $\bar{c}$ quark) are equal.
They go with $\cos\theta$.
With increasing scattering angle they compete with those in which
proton and antiproton (and also $c$ and $\bar{c}$) have opposite helicities.
The latter go with $\sin\theta$ and dominate at $90^\circ$.
If one looks at the energy dependence one observes that
$M_{++}$ and $M_{--}$ are suppressed by a factor $2M/\sqrt{s}$ as
compared to $M_{+-}$ and $M_{-+}$
[cf. Eq.~\req{eq:subprocess-amplitudes}].
In the $p\bar{p} \,\to\, \Lambda_c^+ \overline{\Lambda_c}^-$ case of Ref.~\cite{Goritschnig:2009sq}
the factor $M/\sqrt{s}$ comes with those amplitudes which vanish in forward direction.
When comparing the different panels of Fig.~\ref{fig:dsdt} one sees that
the rise of the differential cross section towards small
scattering angles becomes more pronounced at higher CMS energies.
\begin{figure}[htb!]
\begin{center}
\includegraphics[clip=true]
{sigma_vs_s.eps}
\end{center}
\caption{
The integrated cross section $\sigma_{p\bar{p} \, \to \, \overline{D^0} D^0}$ versus Mandelstam $s$.
}
\label{fig:sigma}
\end{figure}
In Fig.~\ref{fig:sigma} we show the integrated cross section
$\sigma$ versus Mandelstam $s$.
It is of the order of $\text{nb}$, which is
of the same order of magnitude as the integrated cross section
for $p\bar{p} \,\to\, \Lambda_c^+ \overline{\Lambda_c}^-$ in
Ref.~\cite{Goritschnig:2009sq}.
This finding is in accordance with the diquark-model calculation
of Ref.~\cite{Kroll:1988cd}.
According to Ref.~\cite{Kroll:1988cd} larger cross sections
are to be expected for the $p\bar{p} \,\to\, D^+ D^-$ reaction.
This, however, requires extending our handbag approach by
including vector diquarks and will be the topic of
future investigations.
Our estimated cross section is about one order of magnitude
smaller than the predictions given in Refs.~\cite{Khodjamirian:2011sp, Haidenbauer:2010nx},
where hadronic interaction models have been used.
Whereas the authors of Ref.~\cite{Khodjamirian:2011sp}
determine their couplings of the initial proton
to the intermediate and final charmed hadrons by means of QCD sum rules,
the authors of Ref.~\cite{Haidenbauer:2010nx}
rather use $\text{SU}(4)$ flavor symmetry.
Though their predictions for the integrated $p\bar{p} \, \to \, \overline{D^0} D^0$ cross section
are comparable, they differ substantially in the
$p\bar{p} \,\to\, D^+ D^-$ cross section which, in
Ref.~\cite{Khodjamirian:2011sp}, is even smaller than our
$p\bar{p} \, \to \, \overline{D^0} D^0$ cross section.
Such large discrepancies underline the need for
experimental data, which would allow one to discriminate between the
different dynamical models.
Such experiments could also help to pin down the charm-quark
content of the proton sea:
a considerably higher cross section within our approach could
only be explained if the charm-quark content of the proton sea
were non-negligible.
\section{Summary}
\label{sec:Summary}
We have described the exclusive process $p\overline{p} \rightarrow \overline{D^0} D^0$ by means of a double handbag mechanism. This means that the process was assumed to factorize into a hard subprocess on the constituent level, which can be treated by means of perturbative QCD, and into soft hadronic matrix elements describing the nonperturbative $p\rightarrow \overline{D^0}$ and $\overline{p}\rightarrow D^0$ transitions. The intrinsic hard scale, justifying this approach, is given by the mass of the $c$ quark.
In order to produce the $\overline{D^0} D^0$ pair via $p\overline{p}$ annihilation a $u\overline{u}$ and a $d\overline{d}$ pair have to be annihilated on the constituent level and a $c\bar{c}$ pair must be created. We have adopted the simplifying assumption that the (dominant) valence Fock component of the proton consists of a scalar $S[ud]$ diquark and a $u$ quark, such that the flavor-changing hard process on the constituent level becomes a simple $S[ud] \overline{S[ud]}\rightarrow c\overline{c}$ annihilation via the exchange of a highly virtual gluon. When calculating this annihilation, the composite nature of the $S[ud]$ diquark has been taken into account by a form factor at the diquark-gluon vertex. The form-factor parameter has been taken from the literature, where this kind of diquark model has already been applied to other processes.
The soft part of the $p\rightarrow \overline{D^0}$ transition is encoded in a hadronic matrix element consisting of an $S[ud]$-diquark and a $c$-quark field operator, sandwiched between the incoming proton and the outgoing $\overline{D^0}$ state. We have given a parametrization of this matrix element in terms of (four) $p\rightarrow \overline{D^0}$ transition distribution amplitudes \cite{footnote1}. To model the $p\rightarrow \overline{D^0}$ transition matrix element we have employed an overlap representation in terms of light-cone wave functions of the proton and the $\overline{D^0}$. Such a representation makes sense for energies well above threshold and scattering angles in the forward hemisphere, where the momentum fractions of the active constituents have to be larger than the skewness. For the light-cone wave functions of the proton and the $\overline{D^0}$ we have taken simple oscillator-type models from the literature. The two parameters (normalization and oscillator parameter) in each case have been fixed such that the wave functions provide reasonable probabilities for the valence Fock state and reasonable values for the mean intrinsic transverse momentum (in case of the proton) and the $D^0$ decay constant.
This overlap representation provided us with a model for the transition distribution amplitudes and allowed us to predict differential and integrated cross sections for the $p\overline{p} \rightarrow \overline{D^0} D^0$ process. For this simple wave function model only the transition distribution amplitude associated with the covariant $\gamma_5 u_{\mathrm{proton}}$ survived. The maximum size of the differential and integrated cross sections was found to be of the order of nb, i.e., about one order of magnitude smaller than corresponding cross sections calculated within hadronic interaction models. Experimental data are therefore highly desirable to single out the favored approach. Higher cross sections can be expected for $p\overline{p} \rightarrow D^- D^+$ within our approach, but this would require extending the concept of transition distribution amplitudes to vector diquarks.
\begin{acknowledgements}
We acknowledge helpful discussions with L.~Szymanowski.
This work was supported by the Austrian Science Fund FWF
under Grant No. J 3163-N16 and via the Austrian-French scientific
exchange program Amadeus.
\end{acknowledgements}
\begin{appendix}
\section{Kinematics}
\label{app-kinematics}
The four-momentum transfer $\Delta$ can be written as
[cf. (\ref{def-momenta-ji}), (\ref{sum-and-diff})]
\begin{equation}
\Delta \=\ \Big[-2\xi\bar{p}^{\,+}\,,
\frac{M^2(1+\xi)-m^2(1-\xi)+\xi {\bm \Delta}_\perp^2/2}{2\bar{p}^{\,+}(1-\xi^2)}\,,
{\bm \Delta}_\perp\Big]\,.
\label{Delta-explicit}
\end{equation}
Note that $\Delta^+ = -\Delta^-$ since $p^\prime-p=q-q^\prime$.
In order to find expressions for the sine and the cosine of the CMS scattering angle $\theta$,
we write the absolute value of the three-momentum and
the momentum component into $z$ direction of the incoming proton as
\begin{equation}
|\VEC{p}| \=\ \frac{\sqrt{s}}{2} \,\Lambda_m \, , \quad
p_3 \=\ \frac{\sqrt{s}}{2} \,\sqrt{\Lambda_m^2- {\bm \Delta}_\perp^2/s}
\label{proton_three_momentum}
\end{equation}
and that of the outgoing $\bar{D^0}$ as
\begin{equation}
|\VEC{p}^\prime| \=\ \frac{\sqrt{s}}{2} \,\Lambda_M \, , \quad
|p_3^\prime| \=\ \frac{\sqrt{s}}{2} \,\sqrt{\Lambda_M^2- {\bm \Delta}_\perp^2/s} \,,
\label{lambda_three_momentum}
\end{equation}
respectively.
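The first relation in each case is just the on-shell condition in the CMS, and the longitudinal
components follow by subtracting the transverse momentum ${\bm \Delta}_\perp^2/4$ carried by each
hadron; e.g., for the proton
\begin{equation}
|\VEC{p}|^2 \= \frac{s}{4} - m^2 \= \frac{s}{4}\,\Lambda_m^2 \,,
\qquad
p_3^2 \= |\VEC{p}|^2 - \frac{{\bm \Delta}_\perp^2}{4} \,.
\end{equation}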
\noindent
Note that we have chosen the coordinate system in such a way that the $z$ component of
the incoming proton momentum is always positive,
whereas that of the outgoing $\bar{D^0}$ can become negative at large scattering angles
due to the unequal-mass kinematics.
This change of sign occurs when $\bm{\Delta}_\perp^2$ reaches its maximal value
\begin{equation}
\bm{\Delta}_{\perp\text{max}}^2 \=\ s \Lambda_M^2 \, ,
\end{equation}
which follows directly from Eq.~(\ref{lambda_three_momentum}).
Then the CMS scattering angle can be written as
\begin{equation}
\begin{split}
\theta &\=\ \arccos\left(\frac{p_3}{|\VEC{p}|}\right) + \arccos\left(\frac{p_3^\prime}{|\VEC{p}^\prime|}\right) \\
&\=\ \arcsin\left(\frac{|\Delta_\perp|}{2|\VEC{p}|}\right) +
\arcsin\left(\frac{|\Delta_\perp|}{2|\VEC{p}^\prime|}\right)\phantom{ + \pi} \,,
\quad \text{for forward scattering}\\
&\=\ \arcsin\left(\frac{|\Delta_\perp|}{2|\VEC{p}|}\right) -
\arcsin\left(\frac{|\Delta_\perp|}{2|\VEC{p}^\prime|}\right) + \pi \,,
\quad \text{for backward scattering}.
\label{def_cms_theta}
\end{split}
\end{equation}
\noindent
Using Eq.~(\ref{def_cms_theta}) the sine and cosine of the CMS scattering angle $\theta$ turn out to be
\begin{equation}
\sin\theta \=\ \sqrt{\frac{\bm{\Delta}_\perp^2}{s}} \frac{1}{\Lambda_m\Lambda_M}
\left( \sqrt{\Lambda_m^2 - \frac{\bm{\Delta}_\perp^2}{s}} +
\text{sign}\left(p_3^\prime\right) \sqrt{\Lambda_M^2 - \frac{\bm{\Delta}_\perp^2}{s}} \right)
\label{sin_theta}
\end{equation}
and
\begin{equation}
\cos\theta \=\ \frac{\text{sign}\left(p_3^\prime\right)}{\Lambda_m\Lambda_M}
\left( \sqrt{\left( \Lambda_m^2 - \frac{\bm{\Delta}_\perp^2}{s} \right)
\left( \Lambda_M^2 - \frac{\bm{\Delta}_\perp^2}{s} \right)} -
\frac{1}{\text{sign}\left(p_3^\prime\right)} \frac{\bm{\Delta}_\perp^2}{s} \right),
\label{cos_theta}
\end{equation}
respectively, where $\text{sign}\left(p_3^\prime\right)$ takes care of the kinematical situation of
forward or backward scattering.
Now we are able to express several kinematical variables in a compact form.
Starting from the definition (\ref{sum-and-diff}) of the average hadron momentum $\bar{p}$
and using Eqs.~(\ref{proton_three_momentum}), (\ref{lambda_three_momentum}),
(\ref{sin_theta}) and (\ref{cos_theta}) its plus component can be written as
\begin{equation}
\begin{split}
\bar{p}^{\,+} &\=\ \frac12 \left( p^+ + p^{\prime +} \right)
\=\ \frac{1}{2\sqrt{2}} \left( \left(p_0 + p_3\right) +
\left(p^\prime_0 + p^\prime_3\right) \right) \\
&\=\ \frac14\sqrt{\frac{s}{2}}
\Big[2+ \sqrt{\Lambda_m^2- {\bm \Delta}_\perp^2/s} +{\rm sign}(p^\prime_3)
\sqrt{\Lambda_M^2- {\bm \Delta}_\perp^2/s}\Big] \\
&\=\ \frac14 \sqrt{\frac{s}{2}} \Big[2 + \sqrt{\Lambda_m^2 + \Lambda_M^2
+ 2\Lambda_m\Lambda_M \cos\theta } \Big]\,.
\end{split}
\end{equation}
Note that in our symmetric CMS $\bar{q}^- = \bar{p}^+$.
For the skewness parameter $\xi$ we get
\begin{equation}
\begin{split}
\xi &\=\ \frac{p^+ - p^{\prime +}}{p^+ + p^{\prime +}} \\
&\=\ \frac{\sqrt{\Lambda_m^2- {\bm \Delta}_\perp^2/s} -{\rm sign}(p^\prime_3)\sqrt{\Lambda_M^2- {\bm \Delta}_\perp^2/s}}
{2+ \sqrt{\Lambda_m^2- {\bm \Delta}_\perp^2/s} +{\rm sign}(p^\prime_3)
\sqrt{\Lambda_M^2- {\bm \Delta}_\perp^2/s}} \\
&\=\ \frac{\Lambda_m^2 - \Lambda_M^2}{\sqrt{\Lambda_m^2 +
\Lambda_M^2 + 2 \Lambda_m\Lambda_M\cos\theta}} \,\,
\frac{1}{2 + \sqrt{\Lambda_{m}^2 + \Lambda_{M}^2 + 2 \Lambda_{m}\Lambda_{M}\cos\theta}} \,.
\label{skew}
\end{split}
\end{equation}
Note that, as a consequence of the unequal-mass kinematics, $\xi$ cannot become zero,
which is different from, e.g., Compton scattering
where $\xi$ would be equal to zero in such a symmetric frame.
For $p'_3\geq 0$, however, $\xi$ is fairly small in our case and tends to zero for $s\to\infty$. \\
\noindent
Now let us further investigate the Mandelstam variables and write them in a more compact form
with the help of Eqs.~(\ref{proton_three_momentum}), (\ref{lambda_three_momentum}),
(\ref{sin_theta}) and (\ref{cos_theta}).
Mandelstam $t$ can be written as
\begin{equation}
\begin{split}
t &\ = \ -\frac{ {\bm \Delta}_\perp^2}{1-\xi^2} - \frac{2\xi}{1-\xi^2}\Big[(1+\xi) M^2
-(1-\xi) m^2\Big] \\
&\ = \ -\frac{ {\bm \Delta}_\perp^2}{2}
-\frac{s}{4}\Big[\Lambda_m^2 + \Lambda_M^2 - 2\text{sign}\left(p^\prime_3\right)
\sqrt{\Lambda_m^2- {\bm \Delta}_\perp^2/s}\sqrt{\Lambda_M^2 - {\bm \Delta}_\perp^2/s}\Big] \\
&\ = \ -\frac{s}{4} \Big[\Lambda_m^2 + \Lambda_M^2 - 2 \Lambda_m\Lambda_M\cos\theta \Big]\,.
\label{def:t}
\end{split}
\end{equation}
It cannot become zero for forward scattering but acquires the value
\begin{equation}
t_{0} \ := \ t( {\bm \Delta}_\perp^2=0,p^\prime_3\geq 0) \ = \ -\frac{s}{4} \,(\Lambda_m - \Lambda_M)^2\,,
\label{min-t}
\end{equation}
and for backward scattering
\begin{equation}
t_{1} \ := \ t( {\bm \Delta}_\perp^2=0,p^\prime_3\leq 0) \ = \ -\frac{s}{4} \,(\Lambda_m + \Lambda_M)^2\,.
\label{backwd-t}
\end{equation}
It is furthermore convenient to introduce a \lq\lq reduced\rq\rq\ Mandelstam variable $t'$
that vanishes for forward scattering,
\begin{equation}
\begin{split}
t^\prime \ := \ & t-t_0 \\
\ = \ & -\frac{ {\bm \Delta}_\perp^2}{2}-\frac{s}{2}\Big[\Lambda_m\Lambda_M
- {\rm sign}(p'_3)\sqrt{\Lambda_m^2- {\bm \Delta}_\perp^2/s}\sqrt{\Lambda_M^2- {\bm \Delta}_\perp^2/s}\Big] \\
\ = \ & -\frac{s}{2} \Lambda_{m} \Lambda_{M} \left(1 - \cos\theta \right)\,.
\label{def:tprime}
\end{split}
\end{equation}
\noindent
Also the transverse component of $\Delta$ can easily be written as a function of
the sine and the cosine of the scattering angle $\theta$ using Eqs.~(\ref{sin_theta}) and (\ref{cos_theta}),
\begin{equation}
{\bm \Delta}_\perp^2 \=\ s \frac{\Lambda_m^2 \Lambda_M^2 \sin^2\theta}
{\Lambda_m^2 + \Lambda_M^2 + 2 \Lambda_m\Lambda_M\cos\theta}\,,
\label{delta_perp_vs_theta}
\end{equation}
or solving Eq.~\req{def:tprime} for $ {\bm \Delta}_\perp^2$ one finds
\begin{equation}
{\bm \Delta}_\perp^2 \=\ -t^\prime\,\frac{s\Lambda_m\Lambda_M+t^\prime}{\frac{s}{4}(\Lambda_m+\Lambda_M)^2+t^\prime}\,.
\end{equation}
If we define Mandelstam $u$ for forward scattering in an analogous way
\begin{equation}
u_0 \ := \ u( {\bm \Delta}_\perp^2=0,p^\prime_3\geq 0) \ = \ -\frac{s}4\,(\Lambda_m+\Lambda_M)^2\,
\end{equation}
and for backward scattering
\begin{equation}
u_1 \ := \ u( {\bm \Delta}_\perp^2=0,p^\prime_3\leq 0) \ = \ -\frac{s}4\,(\Lambda_m-\Lambda_M)^2\,,
\end{equation}
the sine and the cosine of half the CMS scattering angle $\theta$ can be written compactly as
\begin{equation}
\sin^2\,\Big(\frac{\theta}{2}\Big) \ = \ \frac{1-\cos{\theta}}{2} \=
\frac{t_0-t}{s\Lambda_m\Lambda_M} \,,
\label{sin_theta_compact}
\end{equation}
\begin{equation}
\cos^2\,\Big(\frac{\theta}{2}\Big) \ = \ \frac{1+\cos{\theta}}{2} \=
\frac{u_1-u}{s\Lambda_m\Lambda_M} \,,
\label{cos_theta_compact}
\end{equation}
respectively.
\section{Light Cone Spinors}
\label{app-lc-spinors}
For our purposes we use the light cone spinors \cite{Soper:1972xc, Brodsky:1997de}.
They read
\begin{equation}
u\left(p , \uparrow\right) =
\frac{1}{2^{1/4}}\frac{1}{\sqrt{p^+}}
\left(
\begin{array}{c}
p^+ + m/\sqrt{2} \\
p_\perp / \sqrt{2} \\
p^+ - m/\sqrt{2} \\
p_\perp / \sqrt{2}
\end{array}
\right)
\quad , \quad
u\left(p , \downarrow\right) =
\frac{1}{2^{1/4}}\frac{1}{\sqrt{p^+}}
\left(\begin{array}{c}
-p_\perp^* / \sqrt{2} \\
p^+ + m/\sqrt{2} \\
p_\perp^* / \sqrt{2}\\
-p^+ + m/\sqrt{2}
\end{array}\right)
\,,
\label{eq:LC-Dirac-spinors-pos-z}
\end{equation}
\begin{equation}
v\left(p , \uparrow\right) =
\frac{-1}{2^{1/4}}\frac{1}{\sqrt{p^+}}
\left(\begin{array}{c}
-p_\perp^* / \sqrt{2} \\
p^+ - m/\sqrt{2} \\
p_\perp^* / \sqrt{2}\\
-p^+ - m/\sqrt{2}
\end{array}\right)
\quad , \quad
v\left(p , \downarrow\right) =
\frac{-1}{2^{1/4}}\frac{1}{\sqrt{p^+}}
\left(\begin{array}{c}
p^+ - m/\sqrt{2} \\
p_\perp / \sqrt{2} \\
p^+ + m/\sqrt{2} \\
p_\perp / \sqrt{2}
\end{array}\right)
\,.
\label{eq:LC-Dirac-anti-spinors-pos-z}
\end{equation}
for $p^3 > 0$ and
\begin{equation}
u\left(p , \uparrow\right) =
\frac{1}{2^{1/4}}\frac{\text{sign}\left(p^1\right)}{\sqrt{p^-}}
\left(\begin{array}{c}
p_\perp^* / \sqrt{2} \\
p^- + m/\sqrt{2} \\
p_\perp^* / \sqrt{2} \\
p^- - m/\sqrt{2}
\end{array}\right)
\quad , \quad
u\left(p , \downarrow\right) =
\frac{1}{2^{1/4}}\frac{-\text{sign}\left(p^1\right)}{\sqrt{p^-}}
\left(\begin{array}{c}
p^- + m/\sqrt{2} \\
-p_\perp / \sqrt{2} \\
-p^- + m/\sqrt{2} \\
p_\perp / \sqrt{2}
\end{array}\right)
\,,
\label{eq:LC-Dirac-spinors-neg-z}
\end{equation}
\begin{equation}
v\left(p , \uparrow\right) =
\frac{1}{2^{1/4}}\frac{\text{sign}\left(p^1\right)}{\sqrt{p^-}}
\left(\begin{array}{c}
p^- - m/\sqrt{2} \\
-p_\perp / \sqrt{2} \\
-p^- - m/\sqrt{2} \\
p_\perp / \sqrt{2}
\end{array}\right)
\quad , \quad
v\left(p , \downarrow\right) =
\frac{1}{2^{1/4}}\frac{-\text{sign}\left(p^1\right)}{\sqrt{p^-}}
\left(\begin{array}{c}
p_\perp^* / \sqrt{2} \\
p^- - m/\sqrt{2} \\
p_\perp^* / \sqrt{2} \\
p^- + m/\sqrt{2}
\end{array}\right)
\label{eq:LC-Dirac-anti-spinors-neg-z}
\end{equation}
for $p^3 < 0$.
They satisfy the charge-conjugation relation
\begin{equation}
v\left(p , \lambda\right) = \imath \gamma^2 u^*\left(p , \lambda\right)
\end{equation}
and are normalized as
\begin{equation}
\bar{u}_{\lambda^{\prime}}\left(p\right) u_{\lambda}\left(p\right) = 2m\delta_{\lambda^{\prime} \lambda}
\qquad\text{and}\qquad
\bar{v}_{\lambda^{\prime}}\left(p\right) v_{\lambda}\left(p\right) = -2m\delta_{\lambda^{\prime} \lambda}.
\end{equation}
\section{TDAs}
\label{app-TDAs}
Following Ref.~\cite{PSS} the $p \to \overline{D^0}$ transition matrix element
can be decomposed at leading twist into the following covariant structures,
\begin{equation}
\begin{split}
\Tilde{\mathcal{H}}^{\bar{c}S}_\mu
\,:=\, & \bar{p}^{\,+}\,\int\frac{dz_1^-}{2\pi}\,e^{\imath \bar{x}_1 \bar{p}^+ z_1^-} \,
\langle \overline{D^0}:\, p^\prime \mid
\Psi^c_+\left(-z_1^-/2\right) \Phi^{S[ud]}\left(+z_1^-/2\right)
\mid p:\, p,\,\mu \rangle \\
\,=\, & \gamma_5 \, u(p,\,\mu) \, V_1(\bar{x}_1,\,\xi,\,t)
+ \, \frac{\Delta\hspace*{-0.18cm}/}{M+m} \gamma_5 \, u(p,\,\mu) \, V_2(\bar{x}_1,\,\xi,\,t) \\
& + \, u(p,\,\mu) \, \Tilde{V}_1(\bar{x}_1,\,\xi,\,t)
+ \, \frac{\Delta\hspace*{-0.18cm}/}{M+m} \, u(p,\,\mu) \, \Tilde{V}_2(\bar{x}_1,\,\xi,\,t) \,,
\label{eq:TDA-parameterization}
\end{split}
\end{equation}
where we have introduced the $p \to \overline{D^0}$ TDAs
$V_1,\,V_2,\,\Tilde{V}_1$ and $\Tilde{V}_2$.
When evaluating the $p\bar{p} \, \to \, \overline{D^0} D^0$ amplitude the hadronic transition matrix element
(\ref{eq:TDA-parameterization}) appears within the spinor product
\begin{equation}
\mathcal{H}^{\bar{c}S}_{\lambda_1^\prime\mu} \,=\,
\bar{v}(k_1^\prime,\,\lambda_1^\prime) \,\gamma^+\, \Tilde{\mathcal{H}}^{\bar{c}S}_\mu \,,
\label{eq:mathcalH}
\end{equation}
cf. Eq.~(\ref{Mmunu}). Expressed in terms of TDAs we thus have
\begin{equation}
\begin{split}
\mathcal{H}^{\bar{c}S}_{\lambda_1^\prime\mu}
\,=\, & \bar{v}(k_1^\prime,\,\lambda_1^\prime) \,\gamma^+ \gamma_5 \, u(p,\,\mu) \, V_1(\bar{x}_1,\,\xi,\,t)
+ \, \bar{v}(k_1^\prime,\,\lambda_1^\prime) \,\gamma^+ \frac{\Delta\hspace*{-0.18cm}/}{M+m} \gamma_5 \, u(p,\,\mu) \, V_2(\bar{x}_1,\,\xi,\,t) \\
& + \,\bar{v}(k_1^\prime,\,\lambda_1^\prime) \,\gamma^+ u(p,\,\mu) \, \Tilde{V}_1(\bar{x}_1,\,\xi,\,t)
+ \,\bar{v}(k_1^\prime,\,\lambda_1^\prime) \,\gamma^+ \frac{\Delta\hspace*{-0.18cm}/}{M+m} \, u(p,\,\mu) \, \Tilde{V}_2(\bar{x}_1,\,\xi,\,t) \\
\,=\, & \sqrt{\frac{\bar{x}_1 - \xi}{1 - \xi}} \,
\left[ \bar{v}(p^\prime,\,\lambda_1^\prime) \,\gamma^+ \gamma_5 \, u(p,\,\mu) \, V_1(\bar{x}_1,\,\xi,\,t)
+ \, \bar{v}(p^\prime,\,\lambda_1^\prime) \,\gamma^+ \frac{\Delta\hspace*{-0.18cm}/}{M+m} \gamma_5 \, u(p,\,\mu) \, V_2(\bar{x}_1,\,\xi,\,t) \right. \\
& \left. + \,\bar{v}(p^\prime,\,\lambda_1^\prime) \,\gamma^+ u(p,\,\mu) \, \Tilde{V}_1(\bar{x}_1,\,\xi,\,t)
+ \,\bar{v}(p^\prime,\,\lambda_1^\prime) \,\gamma^+ \frac{\Delta\hspace*{-0.18cm}/}{M+m} \, u(p,\,\mu) \, \Tilde{V}_2(\bar{x}_1,\,\xi,\,t) \right] \,,
\label{eq:mathcalHtilde-parameterization}
\end{split}
\end{equation}
after making the replacement $k_1^\prime = x_1^\prime p^\prime$
in the $\bar{v}-$spinor.
Evaluating the various spinor products
which appear in Eq.~(\ref{eq:mathcalHtilde-parameterization})
by using the light cone spinors of Appendix~\ref{app-lc-spinors}
gives
\begin{equation}
\mathcal{H}^{\bar{c}S}_{++} \=
\frac{4\bar{p}^{\,+}}{M+m}\sqrt{\frac{\bar{x}_1-\xi}{1+\xi}}\frac{\Delta_\perp}{2}
\left( V_2(\bar{x}_1,\,\xi,\,t)
+
\Tilde{V}_2(\bar{x}_1,\,\xi,\,t) \right) \,,
\label{eq:mathcalHupup}
\end{equation}
\begin{equation}
\mathcal{H}^{\bar{c}S}_{--} \=
\frac{4\bar{p}^{\,+}}{M+m}\sqrt{\frac{\bar{x}_1-\xi}{1+\xi}}\frac{\Delta_\perp}{2}
\left( V_2(\bar{x}_1,\,\xi,\,t)
-
\Tilde{V}_2(\bar{x}_1,\,\xi,\,t) \right)
\label{eq:mathcalHdndn}
\end{equation}
and
\begin{equation}
\begin{split}
\mathcal{H}^{\bar{c}S}_{+-} \= &
2\bar{p}^{\,+}\sqrt{\bar{x}_1-\xi}\sqrt{1+\xi}
\Big[
V_1(\bar{x}_1,\,\xi,\,t)
-
\Tilde{V}_1(\bar{x}_1,\,\xi,\,t) \\
& +
\frac{2\xi}{1+\xi} \frac{m}{M+m}
\left( V_2(\bar{x}_1,\,\xi,\,t)
+
\Tilde{V}_2(\bar{x}_1,\,\xi,\,t) \right)
\Big] \,,
\label{eq:mathcalHupdn}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\mathcal{H}^{\bar{c}S}_{-+} \= &
-2\bar{p}^{\,+}\sqrt{\bar{x}_1-\xi}\sqrt{1+\xi}
\Big[
V_1(\bar{x}_1,\,\xi,\,t)
+
\Tilde{V}_1(\bar{x}_1,\,\xi,\,t) \\
& +
\frac{2\xi}{1+\xi} \frac{m}{M+m}
\left( V_2(\bar{x}_1,\,\xi,\,t)
-
\Tilde{V}_2(\bar{x}_1,\,\xi,\,t) \right)
\Big] \,.
\label{eq:mathcalHdnup}
\end{split}
\end{equation}
The TDAs $V_1,\,\Tilde{V}_1,\,V_2$ and $\Tilde{V}_2$ can now be
expressed as linear combinations of $\mathcal{H}^{\bar{c}S}_{++},\,
\mathcal{H}^{\bar{c}S}_{--},\,\mathcal{H}^{\bar{c}S}_{+-}$ and
$\mathcal{H}^{\bar{c}S}_{-+}$.
For our overlap representation of the hadronic
transition matrix elements we have
$\mathcal{H}^{\bar{c}S}_{++} = \mathcal{H}^{\bar{c}S}_{--} = 0$
[cf. Eq.~\req{eq:vbar-gammaplus-v-etc}].
This means that $V_2 = \Tilde{V}_2 = 0$ and
\begin{equation}
V_1 \=
\frac{1}{4\bar{p}^{\,+}\sqrt{\bar{x}_1-\xi}\sqrt{1+\xi}}
\left(
\mathcal{H}^{\bar{c}S}_{+-} - \mathcal{H}^{\bar{c}S}_{-+}
\right) \,,
\end{equation}
\begin{equation}
\Tilde{V}_1 \=
- \frac{1}{4\bar{p}^{\,+}\sqrt{\bar{x}_1-\xi}\sqrt{1+\xi}}
\left(
\mathcal{H}^{\bar{c}S}_{+-} + \mathcal{H}^{\bar{c}S}_{-+}
\right) \,.
\end{equation}
We further have
$\mathcal{H}^{\bar{c}S}_{+-} = -\mathcal{H}^{\bar{c}S}_{-+}$
[cf. Eq.~\req{hadronic-transition-overlap}],
so that we finally get
\begin{equation}
V_1 \=
\frac{1}{2\bar{p}^{\,+}\sqrt{\bar{x}_1-\xi}\sqrt{1+\xi}}
\mathcal{H}^{\bar{c}S}_{+-}
\quad\text{and}\quad
\Tilde{V}_1 \= 0 \,.
\end{equation}
\end{appendix}
How should we rotate two necklaces, each with $n$ beads at different locations,
to best align the beads? More precisely, each necklace is represented
by a set of $n$ points on the unit-circumference circle,
and the goal is to find rotations of the necklaces,
and a perfect matching between the beads of the two necklaces,
that minimizes some norm of the circular distances between matched beads.
In particular, the $\ell_1$ norm minimizes the average absolute circular
distance between matched beads, the $\ell_2$ norm minimizes the average
squared circular distance between matched beads, and the $\ell_\infty$ norm
minimizes the maximum circular distance between matched beads.
The $\ell_1$ version of this necklace alignment problem was introduced by
Toussaint \cite{Toussaint-2004-JCDCG} in the context of comparing
rhythms in computational music theory, with possible applications
to rhythm phylogeny
\cite{Diaz-2004, Toussaint-2004-ISMIR}.
Toussaint \cite{Toussaint-2004-JCDCG} gave a simple $O(n^2)$-time algorithm
for $\ell_1$ necklace alignment, and highlighted as an interesting open
question whether the problem could be solved in $o(n^2)$ time.
In this paper, we solve this open problem by giving $o(n^2)$-time algorithms
for $\ell_1$, $\ell_2$, and $\ell_\infty$ necklace alignment,
in both the standard real RAM model of computation and the less realistic
nonuniform linear decision tree model of computation.
Our results for the case of the $\ell_1$ and $\ell_\infty$ distance measures in the real RAM model
also answer the questions posed by Clifford et al. in~\cite{clifford-04}
(see the \emph{shift matching problem} in Problem 5 of the tech report).
\paragraph{Necklace alignment problem.}
\begin{figure}[b]
\centering
\includegraphics[scale=0.86]{necklace}
\caption{An example of necklace alignment: the input (left) and
one possible output (right).}
\label{necklace}
\end{figure}
More formally, in the \emph{necklace alignment problem}, the input is
a number $p$ representing the $\ell_p$ norm, and
two sorted vectors of $n$ real numbers,
$\vec x = \langle x_0, x_1, \dots, x_{n-1} \rangle$ and
$\vec y = \langle y_0, y_1, \dots, y_{n-1} \rangle$,
representing the two necklaces.
See Figure \ref{necklace}.
Canonically, we assume that each number $x_i$ and $y_i$ is in the range
$[0,1)$, representing a point on the unit-circumference circle
(parameterized clockwise from some fixed point).
The distance between two beads $x_i$ and $y_j$ is the minimum between the clockwise
and counterclockwise distances along the circumference of the unit-perimeter circular necklaces.
We define this distance as:
$$d^\circ(x_i,y_j) = \min \{\left| x_i - y_j\right|,\ 1 - \left| x_i - y_j\right|\}.$$
The optimization problem involves two parameters.
The first parameter, the \emph{offset} $c \in [0, 1)$, is the
clockwise rotation angle of the first necklace relative to the second necklace.
The second parameter, the \emph{shift} $s \in \{0, 1, \dots, n-1\}$,
defines the perfect matching between beads: bead $i$ of the first necklace
matches with bead $(i + s) \bmod n$ of the second necklace.
(Here we use the property that an optimal perfect matching between the beads
does not cross itself.)
The goal of the $\ell_p$ necklace alignment problem is to find
the offset $c \in [0, 1)$ and the shift $s \in \{0, 1, \dots, n-1\}$
that minimize
\begin{equation}
\label{circ}
\sum_{i=0}^{n-1} \left(d^\circ((x_i+c) \bmod 1,y_{(i+s)\bmod n})\right)^p
\end{equation}
or, in the case $p=\infty$, that minimize
$$ \max_{i=0}^{n-1} \{d^\circ((x_i+c) \bmod 1,y_{(i+s)\bmod n})\}.$$
The $\ell_1$, $\ell_2$, and
$\ell_\infty$ necklace alignment problems all have trivial $O(n^2)$ solutions,
although this might not be obvious from the definition.
In each case, as we show, the optimal offset $c$ can be computed
in linear time for a given shift value~$s$ (sometimes even independent of~$s$).
The optimization problem is thus effectively over just
$s \in \{0, 1, \dots, n-1\}$,
and the objective costs $O(n)$ time to compute for each~$s$,
giving an $O(n^2)$-time algorithm.
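For concreteness, the following Python sketch (ours, not part of the original
formulation) spells out the trivial quadratic algorithm for the $\ell_2$ case,
using the linearized (non-wrapping) distances justified in the next section;
for each shift $s$ the optimal offset is the mean of the differences.
\begin{verbatim}
# Quadratic-time l2 alignment sketch; assumes the linearized
# objective (no wrap-around), as justified in the next section.
def l2_align_bruteforce(x, y):
    n = len(x)
    best = None
    for s in range(n):
        d = [x[i] - y[(i + s) % n] for i in range(n)]
        c = -sum(d) / n          # optimal offset for this shift
        cost = sum((di + c) ** 2 for di in d)
        if best is None or cost < best[0]:
            best = (cost, s, c)
    return best                  # (cost, shift, offset)
\end{verbatim}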
\paragraph{Related work.}
Although necklaces are studied throughout mathematics,
mainly in combinatorial settings, we are not aware of
any work on the necklace alignment problem before
Toussaint \cite{Toussaint-2004-JCDCG}.
He introduced $\ell_1$ necklace alignment, calling it the
\emph{cyclic swap-distance} or \emph{necklace swap-distance} problem,
with a restriction that the beads lie at integer coordinates.
Ardila et al.~\cite{ardila-2008} give an $O(k^2)$-time algorithm for computing the necklace swap-distance
between two binary strings, with $k$ being the number of 1-bits (beads at integer coordinates).
Colannino et
al.~\cite{Colannino-Damian-Hurtado-Iacono-Meijer-Ramaswami-Toussaint-2006}
consider some different distance measures between two sets of points
on the real line in which the matching does not have to match every point.
They do not, however, consider alignment under such distance measures.
Aloupis et al.~\cite{aloupis-2004} consider the problem of computing the similarity
of two melodies represented as closed orthogonal chains on a cylinder.
Their goal is to find the proper (rigid) translation of one of the chains in the vertical (representing pitch)
and tangential (representing time) direction so that the area between the chains is minimized.
The authors present an $O(m n \lg(n+m))$-time algorithm that solves the problem.
When the melodic chains each have a note at every time unit, the melodic similarity problem
is equivalent to the necklace alignment problem, and as our results are subquadratic,
we improve on the results of Aloupis et al.~\cite{aloupis-2004}
for this special case.
\paragraph{Convolution.}
Our approach in solving the necklace alignment problem
is based on reducing it
to another important problem, convolution, for which we also obtain improved algorithms.
The $(+,\cdot)$ convolution
of two vectors $\vec x = \langle x_0, x_1, \allowbreak\dots, x_{n-1} \rangle$ and
$\vec y = \langle y_0, y_1, \dots, y_{n-1} \rangle$, is the vector
$\vec x {\mathop{*}\nolimits} \vec y = \langle z_0, z_1, \dots, z_{n-1} \rangle$
where $z_k = \sum_{i=0}^k x_i \cdot y_{k-i}$.
One can generalize convolution to any $(\oplus,\odot)$ operators.
Algorithmically, a convolution with specified addition and
multiplication operators (here denoted $\vec x {\mathop{*}\limits_\oplus^\odot} \vec y$)
can be easily computed in $O(n^2)$ time.
However,
the $(+,\cdot)$ convolution can be computed in $O(n \lg n)$
time using the Fast Fourier Transform
\cite{Cooley-Tukey-1965, Gauss-1866-III, Heideman-Johnson-Burrus-1985},
because the Fourier transform converts convolution into elementwise
multiplication.
Indeed, fast $(+,\cdot)$ convolution was one of the early breakthroughs
in algorithms, with applications to
polynomial and integer multiplication \cite{Bernstein-survey},
batch polynomial evaluation
\cite[Problem~30-5]{Cormen-Leiserson-Rivest-Stein-2001},
3SUM \cite{Erickson-1999-satisfiability, Baran-Demaine-Patrascu-2008},
string matching
\cite{cc-07, Fischer-Paterson-1973, Indyk-1998-string, Kalai-2002-dontcare,
Cole-Hariharan-2002},
matrix multiplication \cite{Cohn-Kleinberg-Szegedy-Umans-2005},
and even juggling \cite{Cardinal-Kremer-Langerman-2006}.
In this paper we use three types of convolutions:
$(\min,+)$ convolution, whose $k$th entry is $z_k = \min_{i=0}^k \left\{ x_i + y_{k-i} \right\}$;
$(\mathop{\rm median},+)$ convolution, whose $k$th entry is $z_k = \mathop{\rm median}_{i=0}^k \left\{ x_i + y_{k-i} \right\}$;
and $(+,\cdot)$ convolution, whose $k$th entry is $z_k = \sum_{i=0}^k x_i \cdot y_{k-i}$.
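All three convolutions have obvious $O(n^2)$-time implementations; the
following Python sketch (ours, for reference) computes all $2n-1$ entries of
the full $(\oplus,\odot)$ convolution for any aggregator.
\begin{verbatim}
from statistics import median

# Naive O(n^2) (agg, mul)-convolution; agg can be sum, min,
# max, or median.  Entries 0 .. n-1 match the text's z_k.
def conv(x, y, agg=sum, mul=lambda a, b: a * b):
    n = len(x)
    return [agg(mul(x[i], y[k - i])
                for i in range(max(0, k - n + 1), min(k, n - 1) + 1))
            for k in range(2 * n - 1)]
\end{verbatim}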
As we show in
Theorems \ref{l_2}, \ref{l_infty reduction}, and \ref{l_1 reduction},
respectively,
$\ell_2$ necklace alignment reduces to standard $(+,\cdot)$ convolution,
$\ell_\infty$ necklace alignment reduces to $(\min,+)$ [and $(\max,+)$]
convolution, and
$\ell_1$ necklace alignment reduces to $(\mathop{\rm median},+)$ convolution.
The $(\min,+)$ convolution problem has appeared frequently in the literature,
already appearing in Bellman's
early work on dynamic programming in the early 1960s
\cite{Bellman-Karush-1962, Felzenszwalb-Huttenlocher-2004,
Maragos-2000, Moreau-1970, Rockafellar-1970, Stroemberg-1996}.
Its name varies among ``minimum convolution'', ``min-sum convolution'',
``inf-convolution'', ``infimal convolution'', and ``epigraphical sum''.%
\footnote{``Tropical convolution'' would also make sense,
by direct analogy with tropical geometry, but we have never
seen this terminology used in print.}
To date, however, no worst-case $o(n^2)$-time algorithms for this convolution,
or for the more complex $(\mathop{\rm median},+)$ convolution, have been obtained
(it should be noted here that the quadratic worst-case running time
for $(\mathop{\rm median},+)$ convolution follows from linear-time median finding~\cite{Blum-1973, Schonhage-1976}).
In this paper, we develop worst-case $o(n^2)$-time algorithms for
$(\min,+)$ and $(\mathop{\rm median},+)$ convolution,
in the real RAM and the nonuniform linear decision tree
models of computation.
The only subquadratic results for $(\min,+)$ convolution
concern two special cases.
First, the $(\min,+)$ convolution of two convex sequences or functions
can be trivially computed in $O(n)$ time by a simple merge,
which is the same as computing the Minkowski sum of two convex polygons
\cite{Rockafellar-1970}.
This special case is already used in image processing and computer vision
\cite{Felzenszwalb-Huttenlocher-2004, Maragos-2000}.
Second, Bussieck et al.~\cite{Bussieck-Hassler-Woeginger-Zimmermann-1994}
proved that the $(\min,+)$ convolution of two \emph{randomly permuted}
sequences can be computed in $O(n \lg n)$ expected time.
Our results are the first to improve the worst-case running time
for $(\min,+)$ convolution.
\paragraph{Connections to $X+Y$.}
The necklace alignment problems, and their corresponding convolution problems,
are also intrinsically connected to problems on $X+Y$ matrices.
Given two lists of $n$ numbers,
$X = \langle x_0, x_1, \dots, x_{n-1} \rangle$ and
$Y = \langle y_0, y_1, \dots, y_{n-1} \rangle$,
$X+Y$ is the matrix of all pairwise sums, whose $(i,j)$th entry is
$x_i + y_j$.
A classic unsolved problem \cite{TOPP-41}
is whether the entries of $X+Y$ can be sorted
in $o(n^2 \lg n)$ time.
Fredman \cite{Fredman-1976-XpY} showed that $O(n^2)$ comparisons suffice
in the nonuniform linear decision tree model, but it remains open whether
this can be converted into an $O(n^2)$-time algorithm in the real RAM model.
Steiger and Streinu \cite{Steiger-Streinu-1995} gave a simple algorithm
that takes $O(n^2 \lg n)$ time while using only $O(n^2)$ comparisons.
The $(\min,+)$ convolution is equivalent to finding the minimum
element in each antidiagonal of the $X+Y$ matrix, and similarly
the $(\max,+)$ convolution finds the maximum element in each antidiagonal.
We show that $\ell_\infty$ necklace alignment is equivalent to finding the
antidiagonal of $X+Y$ with the smallest \emph{range}
(the maximum element minus the minimum element).
The $(\mathop{\rm median},+)$ convolution is equivalent to finding the median element
in each antidiagonal of the $X+Y$ matrix.
We show that $\ell_1$ necklace alignment is equivalent to finding the
antidiagonal of $X+Y$ with the smallest \emph{median cost}
(the total distance between each element and the median of the elements).
Given the apparent difficulty in sorting $X+Y$,
it seems natural to believe that the minimum, maximum, and median elements
of every antidiagonal cannot be found, and that the corresponding objectives
cannot be minimized, any faster than $O(n^2)$ total time.
Figure~\ref{xplusy} shows a sample $X+Y$ matrix with the maximum element
in each antidiagonal marked, with no apparent structure.
Nonetheless, we show that $o(n^2)$ algorithms are possible.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{xplusy}
\caption{An $X+Y$ matrix. Each polygonal line denotes an antidiagonal
of the matrix, with a point at coordinates $(x,y)$ denoting
the value $x+y$ for $x \in X$ and $y \in Y$.
An~$\times$ denotes the maximum element in each antidiagonal.}
\label{xplusy}
\end{figure}
\paragraph{Our results.}
In the standard real RAM model, we give subquadratic algorithms
for the $\ell_1$, $\ell_2$, and $\ell_\infty$ necklace alignment problems,
and for the $(\min,+)$ and $(\mathop{\rm median},+)$ convolution problems.
We present:
\begin{enumerate}
\item an $O(n \lg n)$-time algorithm on the real RAM
for $\ell_2$ necklace alignment (Section \ref{sec:l_2}).
\item an $O(n^2/\lg n)$-time algorithm on the real RAM
for $\ell_\infty$ necklace alignment and $(\min,+)$ convolution
(Section \ref{sec:l_infty}).
This algorithm uses a technique of Chan originally developed for the
all-pairs shortest paths problem~\cite{Chan-2005-apsp}.
Despite the roughly logarithmic factor improvements for
$\ell_1$ and~$\ell_\infty$, this result does not use word-level bit tricks
of word-RAM fame.
\item a further improved $O(n^2 (\lg \lg n)^3 / \lg^2 n)$-time algorithm
for $\ell_\infty$ necklace alignment and $(\min,+)$ convolution
(Section \ref{sec:l_infty}).
We actually give a direct black-box reduction of $(\min,+)$ convolution
to all-pairs shortest paths; the result then follows from the
current best upper bound for all-pairs shortest paths~\cite{Chan-2010-apsp}.
The all-pairs shortest paths algorithm works on the real RAM with respect to the inputs, i.e., it does not use bit tricks on the inputs.
The algorithm does, however, require bit tricks on other numbers, but works in a standard model
that assumes $(\lg n)$-bit words.
\item an $O(n^2 (\lg \lg n)^2/\lg n)$-time algorithm on the real RAM
for $\ell_1$ necklace alignment and $(\mathop{\rm median},+)$ convolution
(Section \ref{sec:l_1}).
This algorithm uses an extension of the technique of Chan~\cite{Chan-2005-apsp}.
\setcounter{last}{\theenumi}
\end{enumerate}
In the nonuniform linear decision tree model, we give particularly fast
algorithms for the $\ell_1$ and $\ell_\infty$ necklace alignment problems,
using techniques of Fredman \cite{Fredman-1976-XpY, Fredman-1976-APSP}:
\begin{enumerate}
\setcounter{enumi}{\thelast}
\item $O(n \sqrt n)$-time algorithm in the nonuniform linear decision
tree model for $\ell_\infty$ necklace alignment and
$(\min,+)$ convolution (Section \ref{sec:l_infty}).
\item $O(n \sqrt {n \lg n})$-time algorithm in the nonuniform linear
decision tree model for $\ell_1$ necklace alignment and
$(\mathop{\rm median},+)$ convolution (Section \ref{sec:l_1}).
\end{enumerate}
(Although we state our results here in terms of $(\min,+)$ and
$(\mathop{\rm median},+)$ convolution, the results below use $-$ instead of $+$
because of the synergy with necklace alignment.)
We also mention connections to the venerable $X+Y$ and 3SUM problems
in Section~\ref{Conclusion}.
\section{Linear Versus Circular Alignment}
Before we proceed with proving our results, we first show that
any optimal solution to the necklace alignment problem can be transformed into an optimal solution to
the problem of linear alignment---aligning and matching beads that are on a line.
We then can use the simpler optimization function of the ``linear alignment problem'' to show our results.
Let $d^- (x_i,y_j)= |x_i - y_j|$ be the linear distance between two beads $x_i$ and $y_j$.
In the \emph{linear alignment problem} we
are given two sorted vectors of real numbers
$\vec x = \langle x_0, x_1, \dots, x_{n-1} \rangle$ and
$\vec y = \langle y_0, y_1, \dots, y_{m-1} \rangle$
with $m \geq n$,
and we want to find $s$ ($0 \leq s \leq m-n$) and $c$ that minimize
\begin{equation}
\label{lin}
\sum_{i=0}^{n-1} \left(d^-(x_i+c,y_{i+s})\right)^p
\end{equation}
or, in the case $p=\infty$, that minimize
$$ \max_{i=0}^{n-1} \{d^-(x_i+c,y_{i+s})\}.$$
The main difference between \eqref{circ} and \eqref{lin} is that instead of taking
the minimum between the clockwise and counterclockwise distances between pairs of matched beads in \eqref{circ},
we are simply summing the forward distances between beads in \eqref{lin}.
We will now show that
whether the beads are on a line (repeating $\vec y$ infinitely many times along the line) or on a circle, the optimal alignments of the beads $\vec x$ and $\vec y$
in these two cases have equal value.
Let $M^{\circ}$ and $M^-$ be alignments/matchings of the beads of $\vec x$ and $\vec y$ along the unit-circumference circle $C$ and
along the infinite line $L$, respectively.
An \emph{edge} $(x_i,y_j)$ of $M^{\circ}$ ($M^-$)
is the shortest segment that connects two matched beads $x_i$ and $y_j$ in $M^{\circ}$ ($M^-$);
thus, the length of $(x_i,y_j)$ is equal to $d^\circ(x_i,y_j)$ ($d^-(x_i,y_j)$).
We will show that the sums of the lengths of the edges of the optimal matchings $M^{\circ*}$ and $M^{-*}$ are equal.
Note that by the quadrangle inequality, we have that
the edges of both $M^{\circ*}$ and $M^{-*}$ are non-crossing.
\begin{observation}
\label{min edge length}
Consider any edge $(x_i,y_j)$ along the circular necklace.
If this edge crosses point $0$,
then the distance $$d^\circ(x_i,y_j) = 1-|x_i - y_j|;$$
otherwise, $$d^\circ(x_i,y_j) = |x_i - y_j|.$$
\end{observation}
Let
$\vec{yy}$ be the doubling of the vector~$\vec y$ such that
\begin{eqnarray*}
\vec{yy} &=& \langle y_0, \dots, y_{m-1}, y_0, \dots, y_{m-1} \rangle
= \langle yy_0, \dots, yy_{m-1}, yy_m, \dots, yy_{2m-1} \rangle.\\
\end{eqnarray*}
\begin{theorem}
If $M^{\circ *}$ is the optimal matching of two given vectors $\vec x$ and $\vec y$ (both of length $n$) along the unit-circumference circle $C$
and $M^{-*}$ is the optimal matching of $\vec{x}$ and $\vec{yy}$ along line~$L$, then $|M^{\circ*}| = |M^{-*}|$.
\end{theorem}
\begin{proof}
First we show that the value of any optimal matching of a set of beads along $L$ is at least
as large as the value of the optimal solution of the beads along $C$.
Given an optimal matching $M^{-*}=(s,c)$ of beads along $L$,
we will ``wrap'' the line $L$ around a unit circle $C$
by mapping each of the $n$ \emph{matched}
beads $x_i$ ($yy_{i+s}$) along~$L$ to $x_i^\circ$ ($y_{(i+s)\bmod n}^\circ$) along~$C$.
(Thus we have exactly $n$ pairs of beads along~$C$.)
Now, for every $i=0,1,\dots,n-1$, the length of an edge of $M^{-*}$ is equal to
\begin{eqnarray*}
&&d^-(x_i+c,yy_{i+s}) \\
&=&d^-(x_i+c,y_{(i+s) \bmod n}) \\
&=& |x_i+c - y_{(i+s)\bmod n}|\\
&\geq& \min \{\left| (x_i + c)\bmod 1 - y_{(i+s)\bmod n}\right| , 1 - \left| (x_i + c)\bmod 1 - y_{(i+s)\bmod n}\right|\}\\
&=& d^\circ((x_i^\circ + c)\bmod 1, y_{(i+s)\bmod n}^\circ).
\end{eqnarray*}
Thus, as the length of every edge of the matching $M^{-*}$ is at least as large as that of its corresponding edge along the circle $C$, we have $|M^{-*}| \geq |M^{\circ*}|$.
Next we show that the value of any optimal matching of a set of beads along $C$ is at least
as large as the value of the optimal solution of the beads along $L$.
Suppose we have an optimal matching $M^{\circ*}=(s,c)$.
We map every point $x_i+c$ and $y_i$ to the infinite line segment so that
the edges of $M^{\circ*}$ are preserved in $M^{-}$.
Thus, for all $i = 0,1,\dots,n-1$ and $k \in \mathbb{Z}$, we map
\begin{eqnarray*}
(x_i + c)\bmod 1 &\mapsto& x^-_{i} = x_i + c, \\
y_i &\mapsto& y^-_{i+kn} = y_i + k.\\
\end{eqnarray*}
With this transformation, in any valid matching $M^{-}$, the matched beads span at most two consecutive intervals $[k,k+1)$ and $[k+1,k+2)$
for any $k \in \mathbb{Z}$. In particular, the beads $x_i + c$ span the intervals $[0,1)$ and $[1,2)$.
Now we construct $M^{-}$ given $M^{\circ*}$ by matching every $x^-_{i}$ to $y^-_{i+s+kn}$ such that,
whenever $x_i+c < 1$ (see Figure~\ref{unrolling}),
\begin{itemize}
\item $k=-1$ \\ if $((x_i+c)\bmod 1, y_{(i+s)\bmod n})$ crosses point $0$ and $(x_i+c)\bmod 1 < y_{(i+s)\bmod n}$;
\item $k=1$ \\ if $((x_i+c)\bmod 1, y_{(i+s)\bmod n})$ crosses point $0$ and $(x_i+c)\bmod 1 > y_{(i+s)\bmod n}$;
\item $k=0$ \\ if $((x_i+c)\bmod 1, y_{(i+s)\bmod n})$ does not cross point $0$;
\end{itemize}
and, whenever $x_i+c \geq 1$, we increment $k$ by $1$ in each of the cases.
Thus, when $x_i+c \geq 1$,
\begin{itemize}
\item $k=0$ \\ if $((x_i+c)\bmod 1, y_{(i+s)\bmod n})$ crosses point $0$ and $(x_i+c)\bmod 1 < y_{(i+s)\bmod n}$;
\item $k=2$ \\ if $((x_i+c)\bmod 1, y_{(i+s)\bmod n})$ crosses point $0$ and $(x_i+c)\bmod 1 > y_{(i+s)\bmod n}$;
\item $k=1$ \\ if $((x_i+c)\bmod 1, y_{(i+s)\bmod n})$ does not cross point $0$.
\end{itemize}
\begin{figure}[t]
\centering
\subfigure[A matching $M^{\circ}$ of beads along a circular necklace with three types of edges:
those that do not cross point $0$,
and those that cross point $0$
with $(x_i+c) \bmod 1$ either greater or less than~$y_{i+s}$.]{
\includegraphics[scale=0.7]{Unrolling}
\label{fig:circle}
}\hskip0.35cm
\subfigure[Unrolling a necklace to a line $L$ by preserving the edges of $M^{\circ}$ in the matching $M^{-}$.
In this example, $0 \leq x_0+c < 1$.]{
\includegraphics[scale=0.8]{Line}
\label{fig:line}
}
\caption{Unrolling a circular necklace to a line.}
\label{unrolling}
\end{figure}
Here, the variable $k$ selects the interval $[-1,0)$, $[0,1)$, or $[1,2)$ in
which the bead $y_{(i+s)\bmod n}$ is located, based on the type of the edge $((x_i+c)\bmod 1, y_{(i+s)\bmod n})$.
Observe that, if an edge in $M^{\circ*}$ crosses point $0$,
then its corresponding edge in $M^{-}$ crosses $(r,r)$ for some $r \in \{0,1,2\}$.
Now, the sum of the distances of the matched beads of $M^{-}$ is equal to
\begin{align*}
&d^-(x_0+c, y_{s}+k) + d^-(x_1+c, y_{s+1}+k) + \dots + \\
&d^-(x_{n-s-1}+c, y_{n-1}+k)+ d^-(x_{n-s}+c, y_{0}+k+1) +\dots + \\
&d^-(x_{n-1}+c, y_{s-1}+k+1).
\end{align*}
We claim that the value of $M^{\circ*}$ is equal to the value of this matching $M^{-}$ (where, in each term, $k$ is the choice made above for that edge).
We show this claim by comparing the length of each edge $((x_i+c)\bmod 1, y_{i+s})$ of $M^{\circ*}$ with its corresponding edge in $M^{-}$.
\paragraph{Edges that do not cross point \boldmath{$0$}:}
If an edge of $M^{\circ*}$ does not cross point $0$, then the corresponding edge in $M^{-}$
does not cross any of the edges $(0,0)$, $(1,1)$ or $(2,2)$;
hence both endpoints (beads) of the given matching edge are within the same interval $[r, r+1)$ for some $r\in\{0,1\}$
(the purple edge $(x_q+c,y_{q+s})$ in Figure~\ref{unrolling}).
This means that, when $x_i+c = (x_i+c)\bmod 1$, we have $k=0$ and
$$
\begin{array}{ll}
d^-(x_i+c, y_{(i+s)\bmod n}+k) &= |(x_i+c) - (y_{(i+s)\bmod n}+k)| \\
&= |(x_i+c)\bmod 1 - (y_{(i+s)\bmod n})| \\
&= |x_i^- - y^-_{(i+s)\bmod n}| \\
&= d^\circ(x_i^-, y_{(i+s)\bmod n}^-) \quad\mbox{(by Observation~\ref{min edge length})}.\\
\end{array}
$$
We can similarly show that, when $x_i+c = (x_i+c)\bmod 1 + 1$, we have $k=1$ and the
edges of $M^{\circ*}$ that do not cross point $0$ have the same length as their corresponding edge in $M^{-}$.
\paragraph{Edges that cross point \boldmath{$0$}:}
If an edge of $M^{\circ*}$ crosses point $0$, then the corresponding edge
in $M^{-}$ crosses edge $(r,r)$ and hence
the two endpoints (beads) of the given edge of $M^{-}$ must be in different and consecutive intervals:
$[r-1, r)$ and $[r, r+1)$ for some $r\in \{0,1,2\}$
(the green and blue edges $(x_i+c,y_i+s)$,$(x_j+c,y_j+s)$ in Figure~\ref{unrolling}).
Then, assuming $x_i + c = (x_i+c)\bmod 1$, we have
\begin{itemize}
\item
When $(x_i+c)\bmod 1 < y_{(i+s)\bmod n}$, $k = -1$ and
$$
\begin{array}{ll}
d^-(x_i+c, y_{(i+s)\bmod n}+k) &= |(x_i+c) - (y_{(i+s)\bmod n}-1)| \\
&= |x_i^- - y^-_{((i+s)\bmod n)-n}| \\
&= 1 - \left|((x_i+c)\bmod 1) - y_{(i+s)\bmod n}\right|\\
&= d^\circ(x_i^-, y^-_{((i+s)\bmod n)-n}) \quad\mbox{(by Observation~\ref{min edge length})}.\\
\end{array}
$$
\item
When $(x_i+c)\bmod 1 > y_{(i+s)\bmod n}$, $k = 1$ and
$$
\begin{array}{ll}
d^-(x_i+c, y_{(i+s)\bmod n}+k) &= |(x_i+c) - (y_{(i+s)\bmod n}+1)| \\
&= |x_i^- - y^-_{((i+s)\bmod n)+n}| \\
&= 1 - \left|((x_i+c)\bmod 1) - y_{(i+s)\bmod n}\right|\\
&= d^\circ(x_i^-, y^-_{((i+s)\bmod n)+n}) \quad\mbox{(by Observation~\ref{min edge length})}.\\
\end{array}
$$
We can similarly show that, when $x_i+c = (x_i+c)\bmod 1 + 1$,
the edges of $M^{\circ*}$ that cross point $0$ have the same length as their corresponding edge in $M^{-}$.
\end{itemize}
Therefore, the length of every edge of $M^{\circ*}$ along the circle is equal to the length of its corresponding edge in $M^{-}$.
Thus, the value of the matching $M^{\circ*}$ is at least as large as that of $M^{-*}$, completing the proof of the theorem.
\end{proof}
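A small numerical check (ours) of the unrolling idea: the circular distance
between two beads is exactly the best line distance over the integer copies
$y+k$ of a bead, with $k$ ranging over $\{-1,0,1,2\}$ as in the case analysis
above.
\begin{verbatim}
def circ_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def line_dist(a, b):             # a in [0,2), b in [0,1)
    return min(abs(a - (b + k)) for k in (-1, 0, 1, 2))

assert all(abs(circ_dist((i / 99.0) % 1.0, j / 97.0)
               - line_dist(i / 99.0, j / 97.0)) < 1e-12
           for i in range(198) for j in range(97))
\end{verbatim}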
We now proceed to prove our results by using the objective function \eqref{lin}.
\section{$\ell_2$ Necklace Alignment and $(+,\cdot)$ Convolution}
\label{sec:l_2}
In this section, we first show how $\ell_2$ necklace alignment reduces
to standard convolution, leading to an $O(n \lg n)$-time algorithm that uses the Fast Fourier Transform.
We then show how this result generalizes to $\ell_p$ for any even $p$.
It should be noted here that the $\ell_2$ necklace alignment problem was solved independently by Clifford et al.~\cite{clifford-04} (see Problem 5)
using Fast Fourier Transforms. Our proof uses essentially the same technique of expanding the squared term and then optimizing terms separately,
but goes through the steps in more detail;
we include our proof for completeness.
More results that use the FFT to solve different flavors of matching problems may also be found in~\cite{clifford-05} and \cite{clifford-07}.
\begin{theorem} \label{l_2}
The $\ell_2$ necklace alignment problem can be solved in $O(n \lg n)$ time
on a real RAM.
\end{theorem}
\begin{proof}
The objective \eqref{lin} expands algebraically to
%
\begin{eqnarray*}
&&\sum_{i=0}^{n-1} \left( x_i - y_{(i+s) \bmod n} + c \right)^2\\
&=& \sum_{i=0}^{n-1} \left( x_i^2 + y_{(i+s) \bmod n}^2 +
2 c x_i - 2c y_{(i+s) \bmod n} + c^2 \right)
- 2 \sum_{i=0}^{n-1} x_i y_{(i+s) \bmod n} \\
&=& \sum_{i=0}^{n-1} \left( x_i^2 + y_i^2 +
2 c x_i - 2c y_i + c^2 \right)
- 2 \sum_{i=0}^{n-1} x_i y_{(i+s) \bmod n} \\
&=& \left[
\sum_{i=0}^{n-1} \left( x_i^2 + y_i^2 \right) +
2 c \sum_{i=0}^{n-1} \left( x_i - y_i \right) +
n c^2
\right]
- 2 \sum_{i=0}^{n-1} x_i y_{(i+s) \bmod n}. \\
\end{eqnarray*}
%
The first term depends solely on the inputs and the variable~$c$,
while the second term depends solely on the inputs and the variable~$s$.
Thus the two terms can be optimized separately.
The first term can be optimized in $O(n)$ time by solving for when the
derivative, which is linear in~$c$, is zero.
The second term can be computed, for each $s \in \{0, 1, \dots, n-1\}$,
in $O(n \lg n)$ time using $(+,\cdot)$ convolution
(and therefore optimized in the same time).
Specifically, define the vectors
%
\begin{eqnarray*}
\vec x' &=& \langle x_0, x_1, \dots, x_{n-1};
\underbrace{0, 0, \dots, 0}_n \rangle, \\
\vec y' &=& \langle y_{n-1}, y_{n-2}, \dots, y_0;
y_{n-1}, y_{n-2}, \dots, y_0 \rangle.
\end{eqnarray*}
%
Then, for $s' \in \{0, 1, \dots, n-1\}$, the $(n+s')$th entry of the
convolution $\vec x' {\mathop{*}\nolimits} \vec y'$ is
%
$$ \sum_{i=0}^{n+s'} x'_i y'_{n+s'-i}
= \sum_{i=0}^{n-1} x_i y_{(i-s'-1) \bmod n},
$$
%
which is the desired entry if we let $s' = n-1-s$.
We can compute the entire convolution in $O(n \lg n)$ time
using the Fast Fourier Transform.
\end{proof}
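For concreteness, the following NumPy sketch (ours) computes the cross terms
$\sum_{i} x_i y_{(i+s) \bmod n}$ for all shifts $s$ with a single FFT-based
convolution, mirroring the construction in the proof.
\begin{verbatim}
import numpy as np

# Cross terms sum_i x[i]*y[(i+s) % n] for every shift s, via one
# FFT convolution of x' = (x ; 0^n) with y' = (reverse(y) twice).
def cross_terms(x, y):
    n = len(x)
    xp = np.concatenate([x, np.zeros(n)])
    yp = np.concatenate([y[::-1], y[::-1]])
    m = 4 * n        # >= 4n - 1, the full convolution length
    z = np.fft.irfft(np.fft.rfft(xp, m) * np.fft.rfft(yp, m), m)
    # entry n + s' equals sum_i x_i y_{(i-s'-1) mod n}; s' = n-1-s.
    return np.array([z[2 * n - 1 - s] for s in range(n)])
\end{verbatim}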
The above result can be generalized to $\ell_p$ for any fixed \emph{even} integer $p$.
When $p \geq 4$, expanding the objective and rearranging the terms results in
\begin{align*}
\sum_{i=0}^{n-1} (x_i-y_{(i+s)\bmod n} + c)^p &= \sum_{i=0}^{n-1}\sum_{j=0}^{p}{p \choose j}(x_i-y_{(i+s)\bmod n})^{p-j}c^j\\
&=\sum_{j=0}^{p}\left({p \choose j}\sum_{i=0}^{n-1}(x_i-y_{(i+s)\bmod n})^{p-j}\right)c^j,\\
\end{align*}
which is a degree-$p$ polynomial in $c$, all of whose coefficients can be computed
for all values of $s$ by computing $O(p^2)$ convolutions.
\begin{theorem} \label{l_p}
The $\ell_p$ necklace alignment problem with $p$ even can be solved in $O(p^2n \lg n)$ time
on a real RAM.
\end{theorem}
\section{$\ell_\infty$ Necklace Alignment and $(\min,+)$ Convolution}
\label{sec:l_infty}
\subsection{Reducing $\ell_\infty$ Necklace Alignment to $(\min,+)$ Convolution}
First we show the relation between $\ell_\infty$ necklace alignment
and $(\min,+)$ convolution. We need the following basic fact:
\begin{fact} \label{min max fact}
For any vector $\vec z = \langle z_0, z_1, \dots, z_{n-1} \rangle$ and
$c = -\frac{1}{2} \left( \min_{i=0}^{n-1} z_i + \max_{i=0}^{n-1} z_i \right)$,
the minimum value of $\max_{i=0}^{n-1} |z_i + c|$
is $$\frac{1}{2} \left( \max_{i=0}^{n-1} z_i - \min_{i=0}^{n-1} z_i \right).$$
\end{fact}
Instead of using $(\min,+)$ convolution directly,
we use two equivalent forms, $(\min,-)$ and $(\max,-)$ convolution:
\begin{theorem} \label{l_infty reduction}
The $\ell_\infty$ necklace alignment problem can be reduced in $O(n)$ time
to one $(\min,-)$ convolution and one $(\max,-)$ convolution.
\end{theorem}
\begin{proof}
For two necklaces $\vec x$ and $\vec y$, we apply the $(\min,-)$ convolution
to the following vectors:
\begin{eqnarray*}
\vec x' &=& \langle x_0, x_1, \dots, x_{n-1};
\underbrace{\infty, \infty, \dots, \infty}_n \rangle, \\
\vec y' &=& \langle y_{n-1}, y_{n-2}, \dots, y_0;
y_{n-1}, y_{n-2}, \dots, y_0 \rangle.
\end{eqnarray*}
%
Then, for $s' \in \{0, 1, \dots, n-1\}$, the $(n+s')$th entry of
$\vec x' {\mathop{*}\limits_{\mIn}^-} \vec y'$ is
%
$$ \min_{i=0}^{n+s'} (x'_i - y'_{n+s'-i})
= \min_{i=0}^{n-1} (x_i - y_{(i-s'-1) \bmod n}),
$$
which is $\min_{i=0}^{n-1} (x_i - y_{(i+s) \bmod n})$
if we let $s' = n-1-s$.
By symmetry, we can compute the $(\max,-)$ convolution
$\vec x'' {\mathop{*}\limits_{\max}^-} \vec y'$, where $\vec x''$ has $-\infty$'s
in place of $\infty$'s, and use it to compute
$\max_{i=0}^{n-1} (x_i - y_{(i+s) \bmod n})$
for each $s \in \{0, 1, \dots, n-1\}$.
Applying Fact~\ref{min max fact}, we can therefore minimize
$\max_{i=0}^{n-1} |x_i - y_{(i+s) \bmod n} + c|$ over~$c$,
for each $s \in \{0, 1, \dots, n-1\}$.
By brute force, we can minimize over $s$ as well using $O(n)$
additional comparisons and time.
\end{proof}
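The reduction is short enough to trace in code; in the following Python sketch
(ours, with $\vec x$ and $\vec y$ as plain lists), a naive quadratic loop
stands in for any $(\min,-)$/$(\max,-)$ convolution routine, so the sketch
checks the reduction and Fact~\ref{min max fact} rather than the speed.
\begin{verbatim}
INF = float('inf')

# l_infinity alignment via one (min,-) and one (max,-) convolution,
# using the padded vectors x', x'' and y' from the proof above.
def linf_align(x, y):
    n = len(x)
    yp = y[::-1] * 2
    def conv(xs, agg):    # naive; only entries n .. 2n-1 are needed
        return [agg(xs[i] - yp[k - i]
                    for i in range(max(0, k - 2*n + 1), k + 1))
                for k in range(n, 2 * n)]
    lo = conv(x + [INF] * n, min)
    hi = conv(x + [-INF] * n, max)
    # Fact: min_c max_i |z_i + c| = (max z - min z)/2 at
    # c = -(min z + max z)/2; then minimize over the shift.
    sp = min(range(n), key=lambda t: hi[t] - lo[t])
    return (hi[sp] - lo[sp]) / 2, n - 1 - sp, -(lo[sp] + hi[sp]) / 2
\end{verbatim}
Here the returned triple is (cost, shift $s$, offset $c$), with $s = n-1-s'$
as in the proof.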
\subsection{$(\min,-)$ Convolution in Nonuniform Linear Decision Tree}
For our nonuniform linear decision tree results, we use the main theorem
of Fredman's work on sorting $X+Y$:
\begin{theorem} {\rm \cite{Fredman-1976-XpY}} \label{Fredman lemma}
For any fixed set $\Gamma$ of permutations of $N$ elements,
there is a comparison tree of depth $O(N + \lg |\Gamma|)$ that sorts
any sequence whose rank permutation belongs to $\Gamma$.
\end{theorem}
\begin{theorem} \label{min convolution nonuniform}
The $(\min,-)$ convolution of two vectors of length $n$
can be computed in $O(n \sqrt n)$ time
in the nonuniform linear decision tree model.
\end{theorem}
\begin{proof}
Let $\vec x$ and $\vec y$ denote the two vectors of length~$n$,
and let $\vec x {\mathop{*}\limits_{\mIn}^-} \vec y$ denote their $(\min,-)$ convolution,
whose $k$th entry is $\min_{i=0}^k \left( x_i - y_{k-i} \right)$.
First we sort the set $D = \{x_i - x_j, y_i - y_j : |i-j| \leq d\}$
of pairwise differences between nearby $x_i$'s and nearby $y_i$'s,
where $d \leq n$ is a value to be determined later.
This set $D$ has $N = O(n d)$ elements.
The possible sorted orders of $D$ correspond to cells in the arrangement of
hyperplanes in $\mathbb{R}^{2 n}$ induced by all $N \choose 2$ possible comparisons
between elements in the set,
and this hyperplane arrangement has $O(N^{4 n})$ cells.
By Theorem~\ref{Fredman lemma}, there is a comparison tree sorting $D$
of depth $O(N + n \lg N) = O(n d + n \lg n)$.
The comparisons we make to sort $D$ enable us to compare $x_i - y_{k-i}$
versus $x_j - y_{k-j}$ for free, provided $|i - j| \leq d$, because
$x_i - y_{k-i} < x_j - y_{k-j}$ precisely if
$x_i - x_j < y_{k-i} - y_{k-j}$.
Thus, in particular, we can compute
$$M_k(\lambda) = \min \Big\{x_i - y_{k-i} ~\Big|~
i = \lambda, \lambda+1, \dots, \min\{\lambda+d,n\}-1\Big\}$$
for free (using the outcomes of the comparisons we have already made).
We can rewrite the $k$th entry
$\min_{i=0}^k ( x_i - y_{k-i} )$ of $\vec x {\mathop{*}\limits_{\mIn}^-} \vec y$
as $\min\{M_k(0), \allowbreak M_k(d), \allowbreak M_k(2 d), \dots, M_k(\lceil k/d \rceil d)\}$,
and thus we can compute it in $O(k/d) = O(n/d)$ comparisons
between differences.
Therefore all $n$ entries can be computed in $O(n d + n \lg n + n^2/d)$
total time.
This asymptotic running time is minimized when $n d = \Theta(n^2/d)$,
i.e., when $d^2 = \Theta(n)$.
Substituting $d = \sqrt{n}$, we obtain a running time of
$O(n \sqrt n)$ in the nonuniform linear decision tree model.
\end{proof}
Combining Theorems~\ref{l_infty reduction} and \ref{min convolution nonuniform},
we obtain the following result:
\begin{corollary} \label{l_infty nonuniform}
The $\ell_\infty$ necklace alignment problem can be solved in
$O(n \sqrt n)$ time in the nonuniform linear decision tree model.
\end{corollary}
\subsection{$(\min,-)$ Convolution in Real RAM via Geometric Dominance}
Our algorithm
on the real RAM uses the following geometric lemma from
Chan's work on all-pairs shortest paths:
\begin{lemma} {\rm \cite[Lemma~2.1]{Chan-2005-apsp}} \label{chan}
Given $n$ points $p_1, p_2, \dots, p_n$ in $d$ dimensions, each colored
either red or blue, we can find the $P$ pairs $(p_i, p_j)$ for which
$p_i$ is red, $p_j$ is blue, and $p_i$ dominates~$p_j$ (i.e., for all~$k$,
the $k$th coordinate of $p_i$ is at least the $k$th coordinate of $p_j$), in
$2^{O(d)} n^{1+\epsilon} + O(P)$ time for arbitrarily small $\epsilon > 0$.
\end{lemma}
\begin{theorem} \label{min convolution RAM}
The $(\min,-)$ convolution of two vectors of length $n$
can be computed in $O(n^2/\lg n)$ time on a real RAM.
\end{theorem}
\begin{proof}
Let $\vec x$ and $\vec y$ denote the two vectors of length~$n$,
and let $\vec x {\mathop{*}\limits_{\max}^-} \vec y$ denote their $(\max,-)$ convolution.
(Symmetrically, we can compute the $(\min,-)$ convolution.)
For each $\delta \in \{0, 1, \dots, d-1\}$, for each $i \in \{0, d, 2 d, \dots, \lfloor n/d \rfloor d\}$,
and for each $j \in \{0, 1, \dots, n-1\}$,
we define the $d$-dimensional points
$$\arraycolsep=0.5\arraycolsep
\begin{array}{rcllll}
p_{\delta,i} &=& (x_{i+\delta} - x_i, & x_{i+\delta} - x_{i+1},
& \dots, & x_{i+\delta} - x_{i+d-1}), \\
q_{\delta,j} &=& (y_{j-\delta} - y_j, & y_{j-\delta} - y_{j-1},
& \dots, & y_{j-\delta} - y_{j-d+1}).
\end{array}$$
(To handle boundary cases, define $x_i=\infty$ and $y_j=-\infty$
for indices $i,j$ outside $[0,n-1]$.)
For each $\delta \in \{0,1,\dots, d-1\}$,
we apply Lemma~\ref{chan} to the set of red
points $\{p_{\delta,i} : i = 0, d, 2 d, \dots, \lfloor n/d \rfloor d\}$ and
the set of blue points $\{q_{\delta,j} : j = 0, 1, \dots, n-1\}$,
to obtain all dominating pairs $(p_{\delta,i},q_{\delta,j})$.
Point $p_{\delta,i}$ dominates $q_{\delta,j}$ precisely if
$x_{i+\delta} - x_{i+\delta'} \geq y_{j-\delta} - y_{j-\delta'}$
for all $\delta' \in \{0, 1, \dots, d-1\}$
(ignoring the indices outside $[0,n-1]$).
By re-arranging terms, this condition is equivalent to
$x_{i+\delta} - y_{j-\delta} \geq x_{i+\delta'} - y_{j-\delta'}$
for all $\delta' \in \{0, 1, \dots, d-1\}$.
If we substitute $j = k-i$, we obtain that
$(p_{\delta,i}, q_{\delta,k-i})$ is a dominating
pair precisely if
$x_{i+\delta} - y_{k-i-\delta} = \max_{\delta'=0}^{d-1} (x_{i+\delta'} - y_{k-i-\delta'})$.
Thus, the set of dominating pairs gives us the maximum
$M_k(i)=\max\{x_i - y_{k-i}, x_{i+1} - y_{k-i-1}, \dots,
x_{\min\{i+d,n\}-1} - y_{k-\min\{i+d,n\}+1}\}$
for each $i$ divisible by $d$ and for each~$k$.
Also, there can be at most $O(n^2/d)$ such pairs for all $i,j,\delta$,
because there are $O(n/d)$ choices for $i$~and
$O(n)$ choices for~$j$, and
if $(p_{\delta,i},q_{\delta,j})$
is a dominating pair, then $(p_{\delta',i},q_{\delta',j})$ cannot be a
dominating pair for any $\delta'\neq \delta$.
(Here we assume that the $\max$ is achieved uniquely, which can be arranged
by standard perturbation techniques or by breaking ties consistently
\cite{Chan-2005-apsp}.)
Hence, the running time of the $d$ executions of Lemma~\ref{chan} is
$d 2^{O(d)} n^{1+\epsilon} + O(n^2/d)$ time,
which is $O(n^2/\lg n)$ if we choose $d = \alpha \lg n$
for a sufficiently small constant $\alpha > 0$.
We can rewrite the $k$th entry
$\max_{i=0}^k ( x_i - y_{k-i} )$ of $\vec x {\mathop{*}\limits_{\max}^-} \vec y$ as
$\max\{M_k(0), M_k(d), M_k(2 d), \allowbreak \dots, M_k(\lceil k/d \rceil d)\}
$,
and thus we can compute it in $O(k/d) = O(n/d)$ time.
Therefore all $n$ entries can be computed in $O(n^2/d) = O(n^2/\lg n)$ time
on a real RAM.
\end{proof}
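The index bookkeeping can be checked with the following Python sketch (ours),
in which a brute-force search over $\delta$ stands in for the dominating-pairs
subroutine of Lemma~\ref{chan}; the sketch therefore validates the reduction
but is not itself subquadratic.
\begin{verbatim}
# (max,-) convolution assembled from the block maxima M_k(i).
# Out-of-range entries are chosen so that they never win the max
# (the proof's +/- infinity convention serves the dominance test).
def maxminus_conv_blocked(x, y, d):
    n = len(x)
    xv = lambda i: x[i] if 0 <= i < n else -float('inf')
    yv = lambda j: y[j] if 0 <= j < n else float('inf')
    M = {(i, j): max(xv(i + t) - yv(j - t) for t in range(d))
         for i in range(0, n, d) for j in range(n)}
    return [max(M[(i, k - i)] for i in range(0, k + 1, d))
            for k in range(n)]
\end{verbatim}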
Combining Theorems~\ref{l_infty reduction} and \ref{min convolution RAM},
we obtain the following result:
\begin{corollary} \label{l_infty RAM}
The $\ell_\infty$ necklace alignment problem can be solved
in $O(n^2/\lg n)$ time on a real RAM.
\end{corollary}
Although we will present a slightly faster algorithm for $(\min,-)$ convolution
in the next subsection, the approach described above will be useful
later when we discuss the $(\mathop{\rm median},-)$ convolution problem.
\subsection{$(\min,-)$ Convolution
via Matrix Multiplication}
Our next algorithm
uses Chan's $O(n^3 (\lg \lg n)^3 / \lg^2 n)$
algorithm for computing the $(\min,+)$ matrix multiplication of two
$n \times n$ matrices \cite{Chan-2010-apsp}
(to which all-pairs shortest paths also reduces).
We establish a reduction from convolution to matrix multiplication.
\begin{theorem}\label{reduction}
If we can compute the $(\min,-)$ matrix multiplication of two
$n \times n$ matrices in $T(n)$ time, then we can compute the
$(\min,-)$ convolution of two vectors of length $n$
in $O((n + T(\sqrt n)) \sqrt n)$ time.
\end{theorem}
\begin{proof}
We claim that computing the $(\min,-)$ convolution
$\vec z = \vec x {\mathop{*}\limits_{\mIn}^-} \vec y$
reduces to the following $(\min,-)$ matrix multiplication:
%
$$
\def\matrixA{
\begin{matrix}
x_0 & x_1 & \cdots & x_{\sqrt n-1} \\
x_{\sqrt n} & x_{\sqrt n+1} & \cdots & x_{2 \sqrt n-1} \\
\vdots & \vdots & \ddots & \vdots \\
x_{n-\sqrt n} & x_{n-\sqrt n+1} & \cdots & x_{n-1}
\end{matrix}
}
\def\matrixB{
\begin{matrix}
y_{\sqrt n-1} & y_{\sqrt n} & \cdots & y_{n-2} & y_{n-1} \\
y_{\sqrt n-2} & y_{\sqrt n-1} & \cdots & y_{n-3} & y_{n-2} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
y_1 & y_2 & \cdots & y_{n-\sqrt n-2} & y_{n-\sqrt n-1} \\
y_0 & y_1 & \cdots & y_{n-\sqrt n-1} & y_{n-\sqrt n}
\end{matrix}
}
P =
{\scriptstyle \sqrt n}
\lefteqn{\phantom{\Bigg\{ \Bigg(}
\underbrace{ \phantom{\matrixA} }_{\sqrt n}}
\left\{ \left( \matrixA \right) \right.
{\mathop{\cdot}\limits_{\mIn}^-}
\lefteqn{\phantom{\Bigg(} \underbrace{ \phantom{\matrixB} }_{n-\sqrt n+1}}
\left. \left( \matrixB \right) \right\} {\scriptstyle \sqrt n}.
$$
%
The $(i,j)$th entry $p_{i,j}$ of this product $P$ is
%
$$
p_{i,j} =
\min_{m=0}^{\sqrt n-1} \left( x_{i\sqrt n + m} - y_{j+\sqrt n-1-m} \right).
$$
%
Let $\bar k = \lfloor k/\sqrt n\rfloor \sqrt n$
denote the largest multiple of $\sqrt n$ not exceeding~$k$.
Now, given the product $P$ above, we can compute each element $z_k$
of the convolution $\vec z$ as follows:
%
$$
z_k = \min \left\{ \begin{array}{c}
p_{0,k+1-\sqrt n},
p_{1,k+1-2\sqrt n},
p_{2,k+1-3\sqrt n}, \dots,
p_{\lfloor k/\sqrt n\rfloor-1,\, k+1-\lfloor k/\sqrt n\rfloor \sqrt n}, \\
x_{\bar k} - y_{k-\bar k},
x_{\bar k+1} - y_{k-\bar k-1}, \dots,
x_k - y_0
\end{array} \right\}.
$$
This $\min$ has $O(\sqrt n)$
terms, and thus $z_k$ can be computed in
$O(\sqrt n)$ time. The entire vector $\vec z$ can therefore
be computed in $O(n \sqrt n)$ time, given the matrix product~$P$.
It remains to show how to compute the rectangular product $P$ efficiently,
given an efficient square-matrix $(\min,-)$ multiplication algorithm.
We simply break the product $P$ into at most $\sqrt n$ products of
$\sqrt n \times \sqrt n$ matrices: the left term is the entire left matrix,
and the right term is a block submatrix. The number of blocks is
$\lceil (n-\sqrt n+1)/\sqrt n \rceil \leq \sqrt n$.
Thus the running time for the product is $O(T(\sqrt n) \sqrt n)$.
Summing the reduction cost and the product cost, we obtain a total cost of
$O((n+T(\sqrt n)) \sqrt n)$.
\end{proof}
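A Python transcription of this reduction (ours) may clarify the indexing; a
naive $(\min,-)$ product plays the role of the fast $T(n)$ routine, and we
assume for simplicity that $n$ is a perfect square.
\begin{verbatim}
import math

def minminus_matmul(A, B):       # naive (min,-) matrix product
    return [[min(a - b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

# (min,-) convolution via the rectangular product P above;
# assumes n = r*r for simplicity.
def minminus_conv_via_matmul(x, y):
    n = len(x)
    r = math.isqrt(n)
    A = [x[i*r:(i+1)*r] for i in range(r)]
    B = [y[r-1-m : n-m] for m in range(r)]  # B[m][j] = y[j+r-1-m]
    P = minminus_matmul(A, B)
    z = []
    for k in range(n):
        kb = (k // r) * r        # largest multiple of r at most k
        cands = [P[i][k + 1 - (i + 1) * r] for i in range(k // r)]
        cands += [x[i] - y[k - i] for i in range(kb, k + 1)]
        z.append(min(cands))
    return z
\end{verbatim}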
Plugging in $T(n) = O(n^3/\lg n)$ from \cite{Chan-2005-apsp}
allows us to obtain an alternative proof of Theorem~\ref{min convolution RAM}.
Plugging in $T(n) = O(n^3 (\lg \lg n)^3/\lg^2 n)$ from \cite{Chan-2010-apsp}
immediately gives us the following improved result:
\begin{corollary} \label{min convolution word RAM} \sloppy
The $(\min,-)$ convolution of two vectors of length $n$
can be computed in $O(n^2 (\lg \lg n)^3/\lg^2 n)$ time on a real RAM.
\end{corollary}
Combining Theorem~\ref{l_infty reduction} and
Corollary~\ref{min convolution word RAM}, we obtain the following result:
\begin{corollary} \label{l_infty word RAM}
The $\ell_\infty$ necklace alignment problem can be solved
in $O(n^2 (\lg \lg n)^3/\lg^2 n)$ time on a real RAM.
\end{corollary}
We remark that by the reduction in Theorem~\ref{reduction}, any nontrivial
lower bound for $(\min,-)$ convolution would imply a
lower bound for $(\min,-)$ matrix multiplication and
the all-pairs shortest path problem.
\section{$\ell_1$ Necklace Alignment and $(\mathop{\rm median},+)$ Convolution}
\label{sec:l_1}
\subsection{Reducing $\ell_1$ Necklace Alignment to $(\mathop{\rm median},+)$ Convolution}
First we show the relation between $\ell_1$ necklace alignment
and $(\mathop{\rm median},+)$ convolution. We need the following basic fact~\cite{ardila-2008}:
\begin{fact} \label{median fact}
For any vector $\vec z = \langle z_0, z_1, \dots, z_{n-1} \rangle$,
$\displaystyle \sum_{i=0}^{n-1} |z_i + c|$ is minimized when
$c = -\mathop{\rm median}_{i=0}^{n-1} z_i$.
\end{fact}
Instead of using $(\mathop{\rm median},+)$ convolution directly,
we use the equivalent form, $(\mathop{\rm median},-)$ convolution:
\begin{theorem} \label{l_1 reduction}
The $\ell_1$ necklace alignment problem can be reduced in $O(n)$ time
to one $(\mathop{\rm median},-)$ convolution.
\end{theorem}
\begin{proof}
For two necklaces $\vec x$ and $\vec y$, we apply the $(\mathop{\rm median},-)$
convolution
to the following vectors,
as in the proof of Theorem~\ref{l_infty reduction}:
\begin{eqnarray*}
\vec x' &=& \langle x_0, x_0, x_1, x_1, \dots, x_{n-1}, x_{n-1};
\underbrace{\infty, -\infty, \infty, -\infty, \dots,
\infty, -\infty}_{2 n} \rangle, \\
\vec y' &=& \langle y_{n-1}, y_{n-1}, y_{n-2}, y_{n-2}, \dots, y_0, y_0;
y_{n-1}, y_{n-1}, y_{n-2}, y_{n-2}, \dots, y_0, y_0
\rangle.
\end{eqnarray*}
Then, for $s' \in \{0, 1, \dots, n-1\}$, the $(2(n+s')+1)$th entry of $\vec x' {\mathop{*}\limits_{\meD}^-} \vec y'$ is
$$ \mathop{\rm median}_{i=0}^{2(n+s')+1} (x'_i - y'_{2(n+s')+1-i})
= \mathop{\rm median}_{i=0}^{n-1} (x_i - y_{(i-s'-1) \bmod n}),
$$
which is $\mathop{\rm median}_{i=0}^{n-1} (x_i - y_{(i+s) \bmod n})$
if we let $s' = n-1-s$.
Applying Fact~\ref{median fact}, we can therefore minimize
$\mathop{\rm median}_{i=0}^{n-1} |x_i - y_{(i+s) \bmod n} + c|$ over~$c$,
for each $s \in \{0, 1, \dots, n-1\}$.
By brute force, we can minimize over $s$ as well using $O(n)$
additional comparisons and time.
\end{proof}
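Concretely (our sketch, via the quadratic route rather than a fast
$(\mathop{\rm median},-)$ convolution), the reduction's outcome is that for
every shift the optimal offset is minus the median of the differences, per
Fact~\ref{median fact}.
\begin{verbatim}
from statistics import median

# O(n^2 log n) l1 alignment: for each shift the best offset
# is minus the median of the differences.
def l1_align_bruteforce(x, y):
    n = len(x)
    best = None
    for s in range(n):
        d = [x[i] - y[(i + s) % n] for i in range(n)]
        c = -median(d)
        cost = sum(abs(di + c) for di in d)
        if best is None or cost < best[0]:
            best = (cost, s, c)
    return best                  # (cost, shift, offset)
\end{verbatim}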
Our results for $(\mathop{\rm median},-)$ convolution use the following result
of Frederickson and Johnson:
\begin{theorem} {\rm \cite{Frederickson-Johnson-1982}} \label{median}
The median element of the union of $k$ sorted lists, each of length~$n$,
can be computed in $O(k \lg n)$ time and comparisons.
\end{theorem}
\subsection{$(\mathop{\rm median},-)$ Convolution in Nonuniform Linear Decision Tree}
We begin with our results for the nonuniform linear decision tree model:
\begin{theorem} \label{median convolution nonuniform}
The $(\mathop{\rm median},-)$ convolution of two vectors of length $n$ can be computed
in $O(n \sqrt {n \lg n})$ time in the nonuniform linear decision tree model.
\end{theorem}
\begin{proof}
As in the proof of Theorem \ref{l_infty nonuniform},
we sort the set $D = \{x_i - x_j, y_i - y_j : |i-j| \leq d\}$
of pairwise differences between nearby $x_i$'s and nearby $y_i$'s,
where $d \leq n$ is a value to be determined later.
By Theorem \ref{Fredman lemma}, this step requires
$O(n d + n \lg n)$ comparisons between differences.
These comparisons enable us to compare $x_i - y_{k-i}$
versus $x_j - y_{k-j}$ for free, provided $|i - j| \leq d$, because
$x_i - y_{k-i} < x_j - y_{k-j}$ precisely if
$x_i - x_j < y_{k-i} - y_{k-j}$.
In particular, we can sort each list
$$L_k(\lambda) = \Big\langle x_i - y_{k-i} ~\Big|~
i = \lambda, \lambda+1, \dots, \min\{\lambda+d,n\}-1
\Big\rangle$$
for free.
By Theorem~\ref{median}, we can compute the median of
$L_k(0) \cup L_k(d) \cup L_k(2 d) \cup \cdots \cup L_k(\lceil k/d \rceil d)$,
i.e., $\mathop{\rm median}_{i=0}^k (x_i - y_{k-i})$,
in $O((k/d) \lg d) = O((n/d) \lg d)$ comparisons.
Also, in the same asymptotic number of comparisons, we can binary search to
find where the median fits in each of the $L_k(\lambda)$ lists, and therefore
which differences are smaller and which differences are larger than
the median.
This median is the $k$th entry of $\vec x {\mathop{*}\limits_{\meD}^-} \vec y$.
Therefore, we can compute all $n$ entries of $\vec x {\mathop{*}\limits_{\meD}^-} \vec y$
in $O(n d + n \lg n + (n^2/d) \lg d)$ comparisons.
This asymptotic running time is minimized
when $n d = \Theta((n^2/d) \lg d)$,
i.e., when $d^2/\lg d = \Theta(n)$.
Substituting $d = \sqrt{n \lg n}$, we obtain a running time of
$O(n \sqrt {n \lg n})$ in the nonuniform linear decision tree model.
\end{proof}
Combining Theorems~\ref{l_1 reduction} and \ref{median convolution nonuniform},
we obtain the following result:
\begin{corollary} \label{l_1 nonuniform}
The $\ell_1$ necklace alignment problem can be solved in
$O(n \sqrt {n \lg n})$ time in the nonuniform linear decision tree model.
\end{corollary}
\subsection{$(\mathop{\rm median},-)$ Convolution in Real RAM via Geometric Dominance}
Now we turn to the analogous results for the real RAM:
\begin{theorem} \label{median convolution RAM}
The $(\mathop{\rm median},-)$ convolution of two vectors of length $n$ can be computed
in $O(n^2 (\lg \lg n)^2/\lg n)$ time on a real RAM.
\end{theorem}
\begin{proof}
Let $\vec x$ and $\vec y$ denote the two vectors of length~$n$,
and let $\vec x {\mathop{*}\limits_{\meD}^-} \vec y$ denote their $(\mathop{\rm median},-)$
convolution.
For each permutation $\pi$ on the set $\{0, 1, \dots, d-1\}$,
for each $i \in \{0, d, 2 d, \dots, \lfloor n/d \rfloor d\}$,
and for each $j \in \{0, 1, \dots, n-1\}$,
we define the $(d-1)$-dimensional points
$$\arraycolsep=0.5\arraycolsep
\begin{array}{rcllll}
p_{\pi,i} &=& (x_{i+\pi(0)} - x_{i+\pi(1)}, & x_{i+\pi(1)} - x_{i+\pi(2)},
& \dots, & x_{i+\pi(d-2)} - x_{i+\pi(d-1)}), \\
q_{\pi,j} &=& (y_{j-\pi(0)} - y_{j-\pi(1)}, & y_{j-\pi(1)} - y_{j-\pi(2)},
& \dots, & y_{j-\pi(d-2)} - y_{j-\pi(d-1)}),
\end{array}$$
(To handle boundary cases, define $x_i=\infty$ and $y_j=-\infty$
for indices $i,j$ outside $[0,n-1]$.)
For each permutation $\pi$, we apply Lemma~\ref{chan} to the set of red
points $\{p_{\pi,i} : i = 0, d, 2 d, \dots, \lfloor n/d \rfloor d\}$ and
the set of blue points $\{q_{\pi,j} : j = 0, 1, \dots, n-1\}$,
to obtain all dominating pairs $(p_{\pi,i},q_{\pi,j})$.
Point $p_{\pi,i}$ dominates $q_{\pi,j}$ precisely if
$x_{i+\pi(\delta)} - x_{i+\pi(\delta+1)} \geq
y_{j-\pi(\delta)} - y_{j-\pi(\delta+1)}$
for all $\delta \in \{0, 1, \dots, d-2\}$
(ignoring the indices outside $[0, n-1]$).
By re-arranging terms, this condition is equivalent to
$x_{i+\pi(\delta)} - y_{j-\pi(\delta)} \geq
x_{i+\pi(\delta+1)} - y_{j-\pi(\delta+1)}$
for all $\delta \in \{0, 1, \dots, d-2\}$,
i.e., $\pi$ is a sorting permutation of
$\langle x_i - y_j, x_{i+1} - y_{j-1}, \dots,
x_{i+d-1} - y_{j-d+1} \rangle$.
If we substitute $j = k-i$, we obtain that $(p_{\pi,i}, q_{\pi,k-i})$ is a
dominating pair precisely if $\pi$ is a sorting permutation of the list
$L_k(i) = \langle x_i - y_{k-i}, x_{i+1} - y_{k-i-1}, \dots,
x_{\min\{i+d,n\}-1} - y_{k-\min\{i+d,n\}+1} \rangle$.
Thus, the set of dominating pairs gives us the sorted order of
$L_k(i)$ for each $i$ divisible by $d$ and for each~$k$.
Also, there can be at most $O(n^2/d)$ total dominating pairs
$(p_{\pi,i}, q_{\pi,j})$ over all $i,j,\pi$,
because there are $O(n/d)$ choices for $i$~and
$O(n)$ choices for~$j$,
and if $(p_{\pi,i},q_{\pi,j})$ is a dominating pair,
then $(p_{\pi',i},q_{\pi',j})$ cannot be a dominating pair
for any permutation $\pi' \neq \pi$.
(Here we assume that the sorted order is unique, which can be arranged
by standard perturbation techniques or by breaking ties consistently
\cite{Chan-2005-apsp}.)
Hence, the running time of the $d!$ executions of Lemma~\ref{chan} is
$d! \, 2^{O(d)} n^{1+\epsilon} + O(n^2/d)$,
which is $O(n^2 \lg \lg n/\lg n)$
if we choose $d = \alpha \lg n / \lg \lg n$
for a sufficiently small constant $\alpha > 0$.
By Theorem~\ref{median}, we can compute the median of
$L_k(0) \cup L_k(d) \cup L_k(2 d) \cup \cdots \cup L_k(\lceil k/d \rceil d)$,
i.e., $\mathop{\rm median}_{i=0}^k (x_i - y_{k-i})$,
in $O((k/d) \lg d) = O((n/d) \lg d)$ comparisons.
Also, in the same asymptotic number of comparisons, we can binary search to
find where the median fits in each of the $L_k(\lambda)$ lists, and therefore
which differences are smaller and which differences are larger than
the median.
This median is the $k$th entry of $\vec x {\mathop{*}\limits_{\meD}^-} \vec y$.
Therefore all $n$ entries can be computed in
$O(n^2 (\lg d)/d) = O(n^2 (\lg \lg n)^2/\lg n)$ time
on a real RAM.
\end{proof}
Combining Theorems~\ref{l_1 reduction} and \ref{median convolution RAM},
we obtain the following result:
\begin{corollary} \label{l_1 RAM}
The $\ell_1$ necklace alignment problem can be solved in\\
$O(n^2 (\lg \lg n)^2/\lg n)$ time on a real RAM.
\end{corollary}
As before, this approach likely cannot be improved beyond $O(n^2 / \lg n)$,
because such an improvement would require an improvement to Lemma~\ref{chan},
which would in turn improve the fastest known algorithm for all-pairs shortest
paths in dense graphs \cite{Chan-2010-apsp}.
In contrast to $(\mathop{\rm median},+)$ convolution, $(\mathop{\rm mean},+)$ convolution
is trivial to compute in linear time, because the mean of $x_i + y_{k-i}$
over $i \in \{0, 1, \dots, k\}$ separates into prefix sums of the two vectors.
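A minimal linear-time sketch (ours): the $k$th entry $\mathop{\rm mean}_{i=0}^{k}(x_i + y_{k-i})$ equals $\bigl(\sum_{i=0}^{k} x_i + \sum_{i=0}^{k} y_i\bigr)/(k+1)$, so two running prefix sums suffice.
\begin{verbatim}
def mean_plus_convolution(x, y):
    """k-th entry: mean over i of (x_i + y_{k-i}), via prefix sums."""
    px = py = 0.0
    out = []
    for k in range(len(x)):
        px += x[k]                      # prefix sum of x up to index k
        py += y[k]                      # prefix sum of y up to index k
        out.append((px + py) / (k + 1))
    return out
\end{verbatim}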
\iffalse
\section{Connections to $X+Y$}
We have shown how the necklace problem relates to various convolutions
and presented algorithms to compute these convolutions in $o(n^2)$ time
in various models. Here we show the connection between the $X+Y$ problem
and convolutions.
Recall the following definitions:
The $k$th element of $x {\mathop{*}\nolimits} y$ is $\sum_{i=0}^{k} x_i y_{k-i}$.
The $k$th element of $x {\mathop{*}\limits_{\max}^-} y$ is $\max_{i=0}^k x_i y_{k-i}$.
The $k$th element of $x {\mathop{*}\limits_{\meD}^-} y$ is $\mathop{\rm median}_{i=0}^k x_i y_{k-i}$.
Given any vectors $\vec x$ and $\vec y$, let $x'$ and $y'$ be vectors that have been obtained
from $\vec x$ and $\vec y$ by padding the vectors with 0's. Then, the $k$th element of $x' {\mathop{*}\nolimits} y'$
corresponds to the sum of the $k$th antidiagonal of the $X \cdot Y$ matrix; by
taking logs, entries of $X \cdot Y$ correspond to entries of
the $X + Y$ matrix. Similarly, $x' {\mathop{*}\limits_{\max}^-} y'$ and $x' {\mathop{*}\limits_{\meD}^-} y'$ give
the maxima and medians of all of the antidiagonals of an $X \cdot Y$ matrix, and therefore,
after taking logs, of an $X+Y$ matrix.
\fi
\section{Conclusion}
\label{Conclusion}
The convolution problems we consider here have connections to many classic
problems, and it would be interesting to explore whether the structural
information extracted by our algorithms could be used to devise faster
algorithms for these classic problems.
For example, does the antidiagonal information of the $X+Y$ matrix
lead to a $o(n^2 \lg n)$-time algorithm for sorting $X+Y$?
We believe that any further improvements to our convolution algorithms
would require progress and/or have interesting implications on
all-pairs shortest paths \cite{Chan-2005-apsp}.
Our $(\min,-)$-convolution algorithms give subquadratic algorithms
for \emph{polyhedral 3SUM}:
given three lists,
$A = \langle a_0, a_1, \dots, a_{n-1} \rangle$,
$B = \langle b_0, b_1, \dots, b_{n-1} \rangle$,
and
$C = \langle c_0, c_1, \dots, c_{2n-2} \rangle$,
such that $a_i + b_j \leq c_{i+j}$ for all $0 \leq i, j < n$,
decide whether $a_i + b_j = c_{i+j}$ for any $0 \leq i, j < n$.
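For concreteness, a naive quadratic checker (ours; names illustrative, and the promise $a_i + b_j \leq c_{i+j}$ is assumed) is:
\begin{verbatim}
def polyhedral_3sum(a, b, c):
    """True iff a_i + b_j == c_{i+j} for some 0 <= i, j < n."""
    n = len(a)
    return any(a[i] + b[j] == c[i + j]
               for i in range(n) for j in range(n))
\end{verbatim}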
This problem is a special case of 3SUM,
and this special case has an $\Omega(n^2)$ lower bound in the
3-linear decision tree model \cite{Erickson-1999-satisfiability}.
Our results solve polyhedral 3SUM in $O(n^2 / \lg n)$ time
in the 4-linear decision tree model, and in $O(n \sqrt n)$ time
in the nonuniform 4-linear decision tree model,
solving an open problem of Erickson \cite{Demaine-O'Rourke-2005-open}.
Can these algorithms be extended to solve 3SUM in subquadratic time
in the (nonuniform) decision tree model?
\section*{Acknowledgments}
This work was initiated at the 20th Bellairs Winter Workshop on
Computational Geometry held January 28--February 4, 2005.
We thank the other participants of that workshop---Greg Aloupis,
Justin Colannino, Mirela Damian-Iordache, Vida Dujmovi\'c,
Francisco Gomez-Martin, Danny Krizanc, Erin \allowbreak McLeish, Henk Meijer, Patrick Morin,
Mark Overmars, Suneeta Ramaswami, David Rappaport, Diane Souvaine,
Ileana Streinu, David Wood, Godfried Toussaint, Remco Veltkamp, and
Sue Whitesides---for helpful discussions
and contributing to a fun and creative atmosphere.
We particularly thank the organizer, Godfried Toussaint,
for posing the problem to us.
The last author would also like to thank Luc Devroye for pointing out
the easy generalization of the $\ell_2$ necklace alignment problem
to $\ell_p$ for any fixed even integer $p$.
\section{Introduction}\label{intro}
An ultrarelativistic nucleus-nucleus collision produces strongly-interacting matter which rapidly thermalizes into a hot quark-gluon plasma (QGP)~\cite{Busza:2018rrf}.
This plasma expands freely into the vacuum and eventually cools down into a gas of hadrons.
Electron-positron and muon-antimuon pairs, referred to as dileptons, are created throughout the history of the QGP by quark-antiquark annihilation.
Once produced, they reach the detector without any further interaction, so that they probe the entire space-time dynamics, including the early stages of the collision.
In particular, they carry unique information about the thermalization of the QGP~\cite{Martinez:2008di}.
Dileptons produced by the QGP can be separated from those produced later on in the hadronic phase using the invariant mass, $M$, of the pair as a selection criterion.
Specifically, the contribution of the QGP dominates for $M\gtrsim 1.2$~GeV~\cite{Rapp:2013nxa,Song:2018xca}.
This QGP dilepton production can be studied not only as a function of the invariant mass $M$, but also as a function of the momentum of the dilepton.
It has long been known~\cite{McLerran:1984ay} that at a given rapidity $y$, the spectrum $dN^{l^+l^-}/d^4K$, where $K$ is the 4-momentum of the dilepton, should depend only on the transverse mass $M_t\equiv \sqrt{M^2+k_t^2}$, where $k_t$ is the transverse momentum of the dilepton, provided that the QGP is in local equilibrium, and that the production occurs early enough that transverse flow is negligible.
We improve over this seminal work by implementing a state-of-the-art treatment of pre-equilibrium dynamics~\cite{Coquet:2021lca}, still neglecting transverse flow.
We study the effect of kinetic and chemical equilibration on the $M_t$ spectrum, and the deviations from $M_t$ scaling that they induce.
In Sec.~\ref{s:scaling}, we explain when and why $M_t$ scaling is expected, and we rederive the expression obtained by McLerran and Toimela for the $M_t$ distribution~\cite{McLerran:1984ay}.
In Sec.~\ref{s:dimensional}, we discuss qualitatively, using dimensional analysis, the effects of pre-equilibrium dynamics.
Quantitative results are presented in Sec.~\ref{s:results}.
The setup of our calculation is the same as in our previous work ~\cite{Coquet:2021lca}, in which we only calculated the mass spectrum of dileptons, integrated over momentum.
We first briefly recall this setup, and we show our results for the QGP dilepton spectrum in Pb+Pb collisions at $\sqrt{s_{\rm NN}}=5.02$~TeV.
We show how the shear viscosity over entropy ratio $\eta/s$ at early times, which governs the thermalization of the QGP, can be extracted from the spectrum.
In Sec.~\ref{s:backgrounds}, we evaluate the dilepton spectrum resulting from the Drell-Yan process, i.e, the annihilation of quarks and antiquarks belonging to incoming nuclei~\cite{Drell:1970wh}, and we compare this background with the QGP spectrum.
\section{Transverse mass scaling}
\label{s:scaling}
It has long been known that in a hadronic gas in thermal equilibrium, the transverse momentum spectra of all identified hadrons fall on the same curve when plotted as a function of their transverse mass $M_t$~\cite{Hagedorn:1965st,Aachen-Berlin-Bonn-CERN-Cracow-Heidelberg-Warsaw:1976gom,Becattini:1997rv,Altenkamper:2017qot}.
This follows from the fact that the phase-space distribution of particles within the gas is a Boltzmann factor, $dN/d^3p\, d^3x \propto \exp(-E/T)$, where $E$ is the energy of the particle and $T$ the temperature, and we have neglected the small effects of quantum statistics.
Writing $E=M_t\cosh y$, where $y$ is the rapidity, and integrating over the rapidity, the resulting distribution only depends on $M_t$.
This argument does not immediately apply to dileptons because they cease to interact as soon as they are produced.
Therefore, dileptons produced by an equilibrated QGP are not themselves in thermal equilibrium.
However, transverse mass scaling still holds, for reasons which we now explain.
Dilepton production occurs through the production of a virtual photon, which then decays into a lepton-antilepton pair.
What one calls the 4-momentum of the dilepton, $K$, is actually the 4-momentum of the virtual photon.
To leading order in perturbation theory, the production of the virtual photon occurs through a $2\to 1$ process:
The annihilation of a quark, with 4-momentum $P_1$, and an antiquark, with 4-momentum $P_2$, into a virtual photon with 4-momentum $K=P_1+P_2$.
We denote the phase space distributions of quarks and antiquarks by $f_q(x,{\bf p}_1)$ and $f_{\bar q}(x,{\bf p}_2)$, respectively, where $x$ denotes space-time coordinates.
The rate of dilepton production is obtained by integrating the transition rate over all possible values of ${\bf p}_1$ and ${\bf p}_2$, taking energy-momentum conservation into account:
\begin{strip}
\begin{equation}
\label{prodrate}
\frac{dN^{l^+l^-}}{d^4xd^4K}=\int{\frac{d^3 p_1}{(2\pi)^3 2p_1}\frac{d^3 p_2}{(2\pi)^3 2p_2}f_q(x,{\bf p}_1)f_{\bar q}(x,{\bf p}_2)|{\cal A}|^2 (2\pi)^4\delta^{(4)}(P_1+P_2-K)},
\end{equation}
where ${\cal A}$ is the Lorentz-invariant amplitude of the process, which is not modified by the thermal medium to leading order in perturbation theory.
We have neglected quark masses so that the invariant phase-space element is just $d^3p_i/p_i$, with $i=1,2$, and $p_i\equiv |{\bf p}_i|$.
In a baryonless thermal fluid at rest, $f_q(x,{\bf p})=f_{\bar q}(x,{\bf p})=\exp(-p/T(x))$ (assuming Boltzmann statistics), where $T(x)$ is the temperature at space-time point $x$.
Conservation of energy then implies $f_q(x,{\bf p}_1)f_{\bar q}(x,{\bf p}_2)=\exp(-k_0/T(x))$, which can be moved outside the integral:
\begin{equation}
\label{prodrate2}
\frac{dN^{l^+l^-}}{d^4xd^4K}=e^{-k_0/T(x)}\int{\frac{d^3 p_1}{(2\pi)^3 2p_1}\frac{d^3 p_2}{(2\pi)^3 2p_2}|{\cal A}|^2 (2\pi)^4\delta^{(4)}(P_1+P_2-K)}.
\end{equation}
\end{strip}
The pre-factor on the right-hand side is the Boltzmann factor corresponding to the dilepton, and the rest is a non-trivial kinematic integral involving the scattering amplitude, which could in principle depend on the four-momentum $K$.
We show that it is in fact independent of $K$.
First, one notes that the integrand is a Lorentz scalar.
If one neglects quark and lepton masses, the only Lorentz-invariant scale is the invariant mass $M=(K^\mu K_\mu)^{1/2}$ of the dilepton, hence the integral can only depend on $M$.
This dependence can be obtained through dimensional analysis.
The left-hand side of Eq.~(\ref{prodrate2}) is dimensionless in natural units $\hbar=c=1$, therefore, the integral in the right-hand side is also dimensionless.
This implies that it is actually independent of $M$.
For a fluid at rest, we have demonstrated that
\begin{equation}
\label{prodrateC}
\frac{dN^{l^+l^-}}{d^4xd^4K}=C\exp\left(-\frac{k_0}{T(x)}\right),
\end{equation}
where $C$ is a dimensionless constant.
Now, since $dN^{l^+l^-}/d^4xd^4K$ is a Lorentz scalar, the result for a moving fluid is identical, provided that one replaces $k_0$ with the dilepton energy in the rest frame of the fluid.
The dilepton spectrum is obtained by integrating the production rate over the space-time coordinates $x^\mu$.
We assume that the QGP is invariant under longitudinal boosts~\cite{Bjorken:1982qr}.
Then, its space-time volume can be rewritten as $d^4x=d^2{\bf x}_\perp \tau d\tau dy_f$, where ${\bf x}_\perp$ is the transverse position, $\tau\equiv\sqrt{t^2-z^2}$ the proper time and $y_f={\rm artanh}(z/t)$ the fluid rapidity.
Finally, we neglect transverse flow.
Then, the dilepton energy in the fluid rest frame is $M_t\cosh(y-y_f)$.
The dilepton spectrum is:
\begin{equation}
\label{mtscalinggeneral}
\frac{dN^{l^+l^-}}{d^4K}=C\int d{\bf x}_\perp \int_0^{\infty} \tau d\tau \int_{-\infty}^{+\infty} dy_f \exp\left(-\frac{M_t\cosh(y- y_f)}{T({\bf x}_{\perp},\tau)}\right).
\end{equation}
It is independent of $y$, as a consequence of the assumed longitudinal boost invariance.
It depends on $k_t$ and $M$ only through $M_t$, which is the property of transverse mass scaling.
As can be seen from the above argument, $M_t$ scaling is a robust property of leading-order dilepton production, which is independent of the detailed space-time dynamics: as long as the system is longitudinally boost invariant and the transverse expansion can be neglected, it simply follows from symmetry and dimensional analysis.
One expects it to be broken by next-to-leading order perturbative corrections, which are smaller than the leading-order contribution~\cite{Ghiglieri:2014kma}, and by non-perturbative dynamics~\cite{Ding:2016hua,Jackson:2019yao}.
For leading-order production, an explicit calculation gives the expression of the proportionality constant $C$ in Eq.~(\ref{prodrateC})~\cite{Coquet:2021lca,Laine:2015iia}:
\begin{equation}
\label{expressionC}
C=\frac{N_c\alpha^2}{12\pi^4}\sum_f q_f^2,
\end{equation}
where $N_c=3$ is the number of quark colors, $\alpha$ is the fine structure constant, $\sum_f$ denotes the summation over quark flavors, $q_f$ is the quark electric charge, $\frac{2}{3}$ for $u$ and $-\frac{1}{3}$ for $d$ and $s$.
We now derive the explicit form of the $M_t$ spectrum assuming that the equation of state of the QGP is conformal~\cite{Baier:2007ix}, which is approximately true at high temperatures, and implies that $\tau T({\bf x}_\perp,\tau)^3$ is independent of $\tau$~\cite{Bjorken:1982qr}.
Then, the integral over $\tau$ in Eq.~(\ref{mtscalinggeneral}) can easily be done analytically using the change of variables $\tau=x^3$.
The integral over the fluid rapidity $y_f$ can also be done analytically.
We further simplify the problem (although this simplification is not essential) by assuming that the temperature profile is uniform within a transverse area $A_\perp$, that is, $T({\bf x}_\perp,\tau)$ is independent of ${\bf x}_{\perp}$.
We obtain:
\begin{equation}
\label{ratethermal3}
\left(\frac{dN^{l^{+}l^{-}}}{d^4 K}\right)_{\rm ideal}=\frac{32N_c\alpha^2\sum_f q_f^2}{\pi^4} \frac{A_\perp (\tau T^3)^2 }{M_t^6},
\end{equation}
which is the McLerran-Toimela $M_t^{-6}$ spectrum~\cite{McLerran:1984ay}.
The subscript {\it ideal\/} refers to the fact that local equilibrium holds at all times, which in turn implies that the expansion is ruled by {\it ideal\/} hydrodynamics.
Note that the $M_t$ spectrum is a power law.
This seems to contradict the expectation from lower energies that the $M_t$ spectrum should be exponential, with the inverse slope measuring the effective temperature probed by dileptons~\cite{NA60:2008ctj,Rapp:2014hha,Tripolt:2020dac,HADES:2019auv}.
The contradiction is only apparent.
Larger values of $M_t$ are produced at earlier times, when the temperature is higher.
Therefore, the effective temperature depends on $M_t$:
\begin{equation}
\label{teffideal}
T_{\rm eff}(M_t)\equiv -\left[\frac{d}{dM_t}
\ln\left(\frac{dN^{l^{+}l^{-}}}{d^4 K}\right)\right]^{-1}=\frac{M_t}{6}.
\end{equation}
It is the integration over time which converts the exponential spectrum into a power law~\cite{McLerran:1984ay}.
The constant $\tau T^3$ in Eq.~(\ref{ratethermal3}) is proportional to the charged multiplicity per unit rapidity~\cite{Kajantie:1986cu}, and inversely proportional to $A_\perp$~\cite{Coquet:2021lca}.
Our estimates for Pb+Pb collisions at $\sqrt{s_{\rm NN}}=5.02$~TeV near mid-rapidity in the $0-5\%$ centrality range are:
\begin{eqnarray}
\label{tauT3}
A_\perp&=&104~{\rm fm}^{2}=2670~{\rm GeV}^{-2}\cr
\tau T^3&=&10.3~{\rm fm}^{-2}=0.40~{\rm GeV}^2.
\end{eqnarray}
These values will be used in Sec.~\ref{s:results}, where we show that our numerical results smoothly converge to Eq.~(\ref{ratethermal3}) when the viscosity over entropy ratio $\eta/s$, which controls the deviations from equilibrium, goes to zero.
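As a minimal numerical sketch (ours) of Eq.~(\ref{ratethermal3}), using the normalizations of Eq.~(\ref{tauT3}); the variable names are illustrative:
\begin{verbatim}
import math

ALPHA = 1 / 137.036   # fine-structure constant
NC = 3                # number of quark colors
SUM_QF2 = 2 / 3       # (2/3)^2 + (1/3)^2 + (1/3)^2 for u, d, s
A_PERP = 2670.0       # transverse area [GeV^-2]
TAU_T3 = 0.40         # tau T^3 at late times [GeV^2]

def mclerran_toimela(mt):
    """Ideal-hydro spectrum dN/d^4K of Eq. (ratethermal3); mt in GeV."""
    return (32 * NC * ALPHA**2 * SUM_QF2 / math.pi**4
            * A_PERP * TAU_T3**2 / mt**6)
\end{verbatim}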
Finally, the spectrum (\ref{ratethermal3}) can be integrated over $M_t$ for fixed $M$. The resulting invariant mass spectrum is proportional to $M^{-3}$~\cite{McLerran:1984ay}:
\begin{eqnarray}
\label{rateintegrated}
\left(\frac{dN^{l^{+}l^{-}}}{dMdy}\right)_{\rm ideal}&= &2\pi M \int_{M}^{\infty}M_tdM_t
\left(\frac{dN^{l^{+}l^{-}}}{d^4 K}\right)_{\rm ideal}\cr
&=& \frac{16N_c\alpha^2\sum_f q_f^2}{\pi^3} \frac{A_\perp (\tau T^3)^2}{M^3}.
\end{eqnarray}
\section{Pre-equilibrium dynamics: qualitative discussion}
\label{s:dimensional}
We now discuss qualitatively the effects of pre-equilibrium dynamics.
Several effects must be taken into account:
\begin{itemize}
\item The time dependence of the temperature is modified due to the anisotropy of the momentum distribution of quarks and gluons.
\item The quark momentum distribution entering the production rate (\ref{prodrate}) is anisotropic.
\item Quarks are underpopulated relative to gluons.
\end{itemize}
The first two effects correspond to kinetic equilibration, while the third corresponds to chemical equilibration.
At the end of this section, we also discuss the qualitative effects of transverse flow, which is not included in our numerical results.
For this qualitative discussion, we model the departure from local thermal equilibrium by replacing ideal hydrodynamics with Navier-Stokes viscous hydrodynamics.
The relative order of magnitude of viscous corrections can then be derived on the basis of dimensional analysis.
The largest term in the energy-momentum tensor of an ideal fluid is proportional to $\epsilon+P=Ts$~\cite{Ollitrault:2007du}.
The correction involving the shear viscosity $\eta$ is a gradient~\cite{Baier:2007ix}.
In the early stages of the collision, due to the fast longitudinal expansion, the largest gradient is the time derivative, which is of order $1/\tau$ for dimensional reasons.
Hence, the viscous term is of order $\eta/\tau$, while the ideal term is of order $Ts$.
The ratio of the two is the inverse Reynolds number, which depends on $\tau$:
\begin{equation}
\label{reynolds}
{Re}^{-1}(\tau) \equiv \frac{\eta}{s}\frac{1}{\tau T(\tau)}.
\end{equation}
Now, dileptons with a transverse mass $M_t$ are dominantly produced when the temperature $T(\tau)$ is of the order of $M_t$.
This occurs at a time $\tau$ proportional to $(\tau T^3)/M_t^3$, where we recall that $\tau T^3$ is approximately constant.
Inserting these orders of magnitude of $T(\tau)$ and $\tau$ into Eq.~(\ref{reynolds}), we obtain the order of magnitude of the relevant Reynolds number, which now depends on $M_t$:
\begin{equation}
\label{reynoldsmt}
{Re}^{-1}(M_t) \equiv \frac{\eta}{s}\frac{M_t^2}{\tau T^3}.
\end{equation}
The relative correction to the dilepton yield due to pre-equilibrium dynamics is of the order of ${Re}^{-1}(M_t)$.
As we shall see in Sec.~\ref{s:results}, this dimensional reasoning is confirmed by numerical calculations.
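A one-line sketch (ours) of this estimate, with the normalization of Eq.~(\ref{tauT3}) as default:
\begin{verbatim}
def inverse_reynolds(mt, eta_over_s, tau_t3=0.40):
    """Inverse Reynolds number of Eq. (reynoldsmt).

    mt in GeV; tau_t3 = tau T^3 in GeV^2.
    """
    return eta_over_s * mt**2 / tau_t3
\end{verbatim}
For instance, $\eta/s=0.16$ and $M_t=3$~GeV give $Re^{-1}\simeq 3.6$, signaling large pre-equilibrium corrections there.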
We now discuss, still at the qualitative level, the breaking of $M_t$ scaling which is expected when the plasma is not in local equilibrium.
The first effect is that the quark momentum distribution is no longer isotropic.
Due to the fast longitudinal expansion, longitudinal momenta in the comoving frame are typically much smaller than transverse momenta~\cite{Martinez:2008di}.
In order to evaluate the qualitative effect of this momentum anisotropy on dilepton emission, we consider the extreme case where quark distributions are purely transverse, still assuming, for simplicity, that they are Boltzmann distributions:
\begin{equation}
\label{2dboltzmann}
f_{q,\bar q}(x,{\bf p})\propto \delta(p_{z})\exp(-p/T(x)),
\end{equation}
where the proportionality factor has dimension of energy.
Inserting this expression into Eq.~(\ref{prodrate}), and dropping the constant proportionality factors, one obtains
\begin{equation}
\label{rate2d}
\frac{dN^{l^+l^-}}{d^4xd^4K}\propto e^{-k^0/T(x)} \delta(k_z)\int{\frac{d^2 p_1}{p_1}\frac{d^2 p_2}{p_2}|{\cal A}|^2 \delta^{(3)}(P_1+P_2-K)},
\end{equation}
where the integration only runs over the transverse momenta, and we have factored out $\delta(k_z)$, so that the Dirac constraint inside the integral is now in 2+1 dimensions (transverse momentum and energy).
The integrand is invariant under Lorentz transformations in 2+1 dimensions, which again implies that the integral can only depend on the invariant mass of the dilepton, $M$.
Dimensional analysis of Eq.~(\ref{prodrate}) shows that the scattering amplitude ${\cal A}$ is dimensionless, such that the integral in Eq.~(\ref{rate2d}) has the mass dimension $-1$, and is therefore proportional to $1/M$.
Explicitly, the integral evaluates to $2 \pi |{\cal A}|^2/M$.
The factor $\delta(k_z)$ can be rewritten as $(1/M_t)\delta(y)$, where $y$ is the rapidity of the dilepton.
In a reference frame where the fluid has rapidity $y_f$, this becomes $(1/M_t)\delta(y-y_f)$.
Finally, since $k_z=0$, the energy of the dilepton coincides with its transverse mass, and one obtains:
\begin{equation}
\label{rate2d2}
\frac{dN^{l^{+}l^{-}}}{d^4x d^4 K}\propto \exp\left(-\frac{M_t}{T(x)}\right)\frac{1}{M_t}\delta(y-y_f) \frac{1}{M}.
\end{equation}
The dilepton spectrum is obtained by integrating over the space-time history of the fluid, as in Eq.~(\ref{mtscalinggeneral}). One obtains
\begin{equation}
\label{2dspectrum}
\frac{dN^{l^+l^-}}{d^4K}\propto \frac{1}{M_t M}\int d{\bf x}_\perp \int_0^{\infty} \tau d\tau \exp\left(-\frac{M_t}{T({\bf x}_{\perp},\tau)}\right).
\end{equation}
$M_t$ scaling is broken by the factor $1/M$, which results from the reduced dimensionality of the phase-space integral in Eq.~(\ref{rate2d}).
For a given $M_t$, $dN^{l^+l^-}/d^4K$ is smaller for larger values of $M$.
As we shall see in Sec.~\ref{s:results}, this hierarchy is borne out by numerical calculations.
The other effect of pre-equilibrium dynamics is that the relative abundances of quarks and antiquarks are smaller than thermal abundances in the early stages of the collision.
The collision between the incoming nuclei creates mostly gluons~\cite{Shuryak:1992wc}.
Quark-antiquark pairs are then gradually produced by collisions between gluons~\cite{Kurkela:2018oqw,Du:2020dvp}.
Since dileptons are produced by quark-antiquark annihilation, quark suppression implies a suppression of dilepton production~\cite{Shuryak:1992bt}.
Studies of QCD thermalization in kinetic theory~\cite{Du:2020dvp} suggest that kinetic and chemical equilibration are governed by a single time scale, $\tilde{w}=\tau T /(4\pi\eta/s)$, which can be seen as the age of the system $\tau$ in units of the equilibrium relaxation time $\sim (4\pi\eta/s)/T$. Hence, within our effective description, the shear viscosity over entropy ratio $\eta/s$ controls not only the kinetic equilibration but also the approach to chemical equilibrium.
Therefore, the suppression of dilepton production due to quark suppression follows the above dimensional analysis, and should scale like $Re^{-1}(M_t)$ in Eq.~(\ref{reynoldsmt}).
It depends only on $M_t$, so that quark suppression by itself should not break $M_t$ scaling.
However, it occurs in the early stages where the largest breaking of $M_t$ scaling is expected.
Therefore, one expects the breaking to be {\it milder\/} when quark suppression is included.
We will check this in Sec.~\ref{s:results}.
The last effect which breaks $M_t$ scaling is transverse flow.
The transverse fluid velocity is proportional to $\tau$ at early times~\cite{Ollitrault:2007du,Vredevoogd:2008id,Kurkela:2018wud}.
Therefore, transverse flow becomes more and more important as time goes by.
Since the time of dilepton production decreases with $M_t$ like $M_t^{-3}$, one expects that effects of transverse flow become negligible if $M_t$ is large enough~\cite{Rapp:2014hha}.
The qualitative effect of transverse flow on $M_t$ scaling is the following~\cite{Deng:2010pq}:
For a given $M_t$, the transverse boost enhances dilepton production for larger $k_t$ or, equivalently, smaller values of $M$.
Note that this effect is qualitatively similar to the effect of pre-equilibrium dynamics discussed above.
It has been seen experimentally by the NA60 Collaboration~\cite{NA60:2007lzy}.
We do not model transverse flow, therefore, we cannot assess quantitatively the breaking of $M_t$ scaling resulting from it.
However, we will estimate in Sec.~\ref{s:results} the range of $M_t$ for which transverse flow is likely to be important.
\section{Pre-equilibrium dynamics: quantitative results}
\label{s:results}
We now present quantitative estimates of QGP dilepton production in Pb+Pb collisions at $\sqrt{s_{\rm NN}}=5.02$~TeV.
The calculation is essentially the same as in Ref.~\cite{Coquet:2021lca}, therefore we only recall the essential steps.
Compared to the calculation of Sec.~\ref{s:scaling}, the main difference lies in the quark momentum distribution $f_{q,\bar q}({\bf p})$ in Eq.~(\ref{prodrate}).
The momentum anisotropy is modeled by carrying out the following replacement~\cite{Martinez:2008di} in the Boltzmann\footnote{The only difference with Ref.~\cite{Coquet:2021lca} is that we use Boltzmann distributions instead of Fermi-Dirac distributions in Eq.~(\ref{prodrate}). The advantage of this simplification is that our results converge to the McLerran-Toimela spectrum (\ref{ratethermal3}) as $\eta/s\to 0$, which is a useful benchmark. We have checked that the dilepton yields decrease only by a few percent if one uses Fermi-Dirac instead of Boltzmann.} distribution:
\begin{equation}
\label{xi}
|{\bf p}|\rightarrow \sqrt{p_t^2+\xi^2p_z^2},
\end{equation}
where $\xi>1$ is the anisotropy parameter.
Note that this ansatz implicitly assumes that the tail of the momentum distribution is exponential.
Therefore, our modelization does not take into account the possibility that the falloff at large momentum is slower than exponential, corresponding to the presence of an increased number of high-momentum partons in the early stages, usually referred to as ``minijets''~\cite{Paatelainen:2013eea}.
Note that, on the other hand, some choices of initial conditions inspired by the color glass picture imply a falloff at large momentum which is faster than exponential~\cite{Churchill:2020uvk}.
We take quark suppression into account by multiplying the Boltzmann distribution by a global ``quark suppression'' factor $q_s$, which is smaller than unity.
The anisotropy parameter $\xi$ and the quark suppression factor are computed as a function of time using QCD kinetic theory~\cite{Du:2020dvp}.
More precisely, we use QCD kinetic theory to evaluate the pressure anisotropy and the fraction of energy density carried by quarks.
We then match the anisotropy parameter and the quark suppression factor to these results.
Note that we could have used QCD kinetic theory to calculate directly the quark distribution.
The reason why we choose not to do so is the following.
There is by now strong theoretical evidence that the evolution of the pressure anisotropy is fairly universal~\cite{Heller:2015dha,Heller:2016rtz} and does not depend on the details of the microscopic dynamics~\cite{Giacalone:2019ldn}.
We therefore believe that the results we obtain in this indirect way, through a minimal distortion of the Boltzmann distribution, provide an efficient, transparent and robust way to investigate the dilepton spectrum.
We recall that both out-of-equilibrium parameters, namely the anisotropy parameter $\xi$ and the quark suppression factor $q_s$, depend on the scaling variable $\tilde{w}=\tau T /(4\pi\eta/s)$, as described in detail in \cite{Coquet:2021lca}. Hence, the only free parameter in the calculation is the viscosity over entropy ratio $\eta/s$, which is assumed to be constant and controls the proper-time dependence of quark production and kinetic equilibration: for larger values of $\eta/s$, the quark distribution approaches isotropy and chemical equilibrium later ($\tau\sim 3$~fm/c for $\eta/s=0.32$) than for lower values ($\tau\sim 1$~fm/c for $\eta/s = 0.16$).
The other difference with the calculation of Sec.~\ref{s:scaling} is that the temperature decreases more slowly than $\tau^{-1/3}$ at early times, due to the smaller longitudinal pressure.
One recovers the $\tau^{-1/3}$ dependence at late times.
The temperature is determined by matching the value of $\tau T^3$ at late times to the observed multiplicity.
That is, the value of $\tau T^3$ at late times is the same as in the ideal case.
Our numerical calculations are carried out for Pb+Pb collisions at $\sqrt{s_{\rm NN}}=5.02$~TeV in the 0-5\% centrality range, and the corresponding normalizations are given by Eq.~(\ref{tauT3}).
We have carried out calculations with and without quark suppression, for four different values of $\eta/s$: $0.04$, $0.08$ (not shown), $0.16$, and $0.32$.
The expected value for QCD, in the temperature range spanned by the early evolution, typically lies between $0.16$ and $0.32$~\cite{Christiansen:2014ypa}.
Smaller values $0.04$ and $0.08\simeq\frac{1}{4\pi}$~\cite{Policastro:2001yc} have also been implemented, in order to check numerically that our results converge smoothly to the McLerran-Toimela spectrum (\ref{ratethermal3}) in the limit $\eta/s\to 0$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\linewidth]{004overidealversusmt.pdf}
\end{center}
\caption{(Color online)
Ratio of the dilepton spectrum, evaluated with $\eta/s=0.04$, to the McLerran-Toimela spectrum (\ref{ratethermal3}), as a function of the transverse mass of the dilepton $M_t$, for several values of the invariant mass $M$.
Calculations are for $0-5\%$ central Pb+Pb collisions at $\sqrt{s_{\rm NN}}=5.02$~TeV.
Closed symbols correspond to our numerical calculations with all pre-equilibrium effects taken into account.
Open symbols correspond to calculations where chemical equilibrium is assumed at all times, i.e., quark suppression is not taken into account.
The thin lines are the global fit of our results using Eq.~(\ref{fitformula}) (see text).
}
\label{fig:004results}
\end{figure}
The results with $\eta/s=0.04$ are displayed in Fig.~\ref{fig:004results} for five equally-spaced values of the invariant mass $M$.\footnote{The lowest value $M=1$~GeV is shown only for the sake of illustration, as hadronic production, which we do not consider, is significant for $M<1.2$~GeV.}
We have divided the spectrum calculated numerically by the analytic result for ideal hydrodynamics, Eq.~(\ref{ratethermal3}).
The ratio is smaller than unity, which confirms the expectation that pre-equilibrium effects inhibit dilepton emission.
It is naturally smaller when quark suppression is taken into account, as can be seen by comparing closed symbols with open symbols.
The ratio is very close to unity for small $M_t$.
The deviation from unity increases as a function of $M_t$ as expected from the larger deviations from equilibrium at the time of production in Eq.~(\ref{reynoldsmt}).
For a fixed $M_t$, the yield decreases as the invariant mass $M$ increases, in line with the expectation from Eq.~(\ref{2dspectrum}).
The dependence of the dilepton yield on the parameters $\eta/s$, $M_t$ and $M$ is well captured by the following formula:
\begin{equation}
\label{fitformula}
\frac{dN^{l^{+}l^{-}}}{d^4 K}\simeq
\left(\frac{dN^{l^{+}l^{-}}}{d^4 K}\right)_{\rm ideal}
\frac{\left(1+a\frac{\displaystyle\eta}{\displaystyle s}M_t^2/n\right)^{-n}}{\sqrt{1+b\frac{\displaystyle\eta}{\displaystyle s}M^2}}
\end{equation}
where the first term in the right-hand side is the McLerran-Toimela spectrum (\ref{ratethermal3}), and
$a$, $b$, $n$ are adjustable parameters.
The parameter $a$ quantifies the dependence of the suppression on $M_t$, according to Eq.~(\ref{reynoldsmt}).
The parameter $b$ quantifies the breaking of $M_t$ scaling due to pre-equilibrium dynamics.
The functional form (\ref{fitformula}) guarantees that the deviations from ideal hydrodynamics are linear in $\eta/s$ in the limit $\eta/s\rightarrow 0$, as implied by the dimensional analysis in Sec.~\ref{s:dimensional}.
The fact that this functional form gives a satisfactory fit of our numerical results for a wide range of values of $\eta/s$ is a clear indication that our dilepton spectrum converges smoothly to the McLerran-Toimela spectrum in the limit $\eta/s\to 0$.
The parameter $n$ specifies the dependence of pre-equilibrium effects on the Reynolds number, in the non-linear regime where these effects are large.
Note that the mass spectrum $dN^{l^{+}l^{-}}/dM$ obtained by integrating Eq.~(\ref{fitformula}) over $M_t$ is a much better approximation of our numerical results than Eq.~(28) of \cite{Coquet:2021lca}.
The parametrization (\ref{fitformula}) implies that the spectrum is proportional to $1/M$ in the limit of large $\eta/s$, in agreement with Eq.~(\ref{2dspectrum}).
However, the calculation leading to Eq.~(\ref{2dspectrum}) is not strictly equivalent to our quantitative calculation for the following reason.
The hypothesis leading to Eq.~(\ref{2dspectrum}) is Eq.~(\ref{2dboltzmann}), namely, that the momentum distribution has zero width in $p_z$ and is exponential in $p_t$.
In the quantitative calculation, the width of the $p_z$ distribution also goes to 0 in the limit of large $\eta/s$, but it is not strictly exponential in $p_t$ (it is a Bessel function $K_0(p_t/T)$).
Nevertheless, the dependence of the dilepton spectrum on $M$ ends up being essentially the same in both cases.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\linewidth]{16spectrum.pdf}
\includegraphics[width=1\linewidth]{32spectrum.pdf}
\end{center}
\caption{(Color online)
Full symbols: Expected dilepton invariant yield per event in $0-5\%$ central Pb+Pb collisions at $\sqrt{s_{\rm NN}}=5.02$~TeV, in the central rapidity window $|y|<1$, as a function of the transverse mass, for several values of the invariant mass $M$, and two values of the shear viscosity over entropy ratio $\eta/s$.
The thick line is the McLerran-Toimela spectrum (\ref{ratethermal3}).
The thin lines are the global fit of our results using Eq.~(\ref{fitformula}).
Open symbols: Dilepton yield from the Drell-Yan process. We only plot the central value of the NLO+NLL calculation (see Sec.~\ref{s:backgrounds}). Uncertainties are shown in Fig.~\ref{fig:DY}.
}
\label{fig:allresults}
\end{figure}
For each setup of our calculation, i.e., with or without quark suppression taken into account, we have carried out a global fit of all our results for $1.5<M_t<7$~GeV using Eq.~(\ref{fitformula}).
The best-fit values with quark suppression are $a=0.61$~GeV$^{-2}$, $b=1.6$~GeV$^{-2}$, $n=3.1$.
Without quark suppression, they are $a=0.07$~GeV$^{-2}$, $b=2.4$~GeV$^{-2}$, $n=1.2$.\footnote{The error on $n$ is large for the results without quark suppression.
If one fixes the value of $n$ to the same value as with quark suppression, the fit is almost as good, and the parameters $a$ and $b$ are not significantly modified, so that the smaller value of $n$ returned by the fit seems of little significance.}
As expected from the discussion of Sec.~\ref{s:dimensional}, the breaking of $M_t$ scaling is larger without quark suppression, resulting in a larger value of $b$.
The parameter $a$ is an order of magnitude smaller without quark suppression, which means that 90\% of the pre-equilibrium effects on the $M_t$ spectrum come from chemical equilibration.
Note that from mere dimensional analysis, by comparing Eq.~(\ref{fitformula}) with Eq.~(\ref{reynoldsmt}), one expects $a\sim b\sim 1/(\tau T^3)\sim 2.5$~GeV$^{-2}$, where the numerical estimate is given by Eq.~(\ref{tauT3}).
The values of $b$ returned by the fit are comparable, while those of $a$ are significantly smaller, in particular when quark suppression is not implemented.
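For convenience, here is a sketch (ours) of the parametrization~(\ref{fitformula}) relative to the ideal spectrum, with the quoted best-fit values (quark suppression included) as defaults:
\begin{verbatim}
import math

def pre_equilibrium_factor(mt, m, eta_over_s, a=0.61, b=1.6, n=3.1):
    """Ratio of Eq. (fitformula) to the ideal spectrum; mt, m in GeV.

    a, b in GeV^-2; the square root encodes the breaking of M_t scaling.
    """
    suppression = (1 + a * eta_over_s * mt**2 / n) ** (-n)
    scaling_breaking = math.sqrt(1 + b * eta_over_s * m**2)
    return suppression / scaling_breaking
\end{verbatim}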
Fig.~\ref{fig:allresults} displays the dilepton yield per event for two values of the viscosity which roughly span the expected range in QCD~\cite{Christiansen:2014ypa}.
By comparing with the McLerran-Toimela spectrum, one sees that pre-equilibrium dynamics suppresses dilepton production by at least a factor 10 for $M_t>5$~GeV.
The dilepton yield is still mostly determined by $M_t$, and the breaking of $M_t$ scaling is a modest effect.
We now evaluate the robustness of our results with respect to transverse flow, which we have neglected.
Transverse flow develops gradually over a time of the order of the nuclear radius.
It becomes important for $\tau\gtrsim 5$~fm/c.
If the fraction of the dileptons produced after $5$~fm/c is small, the dilepton yield is likely to have little sensitivity to transverse flow.
We have calculated this fraction numerically and found that it only depends on $M_t$.
It is roughly 25\% for $M_t=2$~GeV, but only 4\% for $M_t=3$~GeV.
We conclude that for $M_t\lesssim 2$~GeV, sizable corrections from transverse flow are to be expected.
At RHIC, it has been argued on the basis of simple dimensional arguments that these corrections are small in the intermediate mass region~\cite{Rapp:2014hha}.
However, hydrodynamic calculations have shown that they are visible up to $M_t=2.5$~GeV~\cite{Deng:2010pq}.
Effects of transverse flow are larger at LHC than at RHIC.
We intend to study them in a future publication.
The dilepton spectrum is often characterized by its effective temperature $T_{\rm eff}$, defined as the inverse slope of the $M_t$ spectrum (Eq.~(\ref{teffideal})).
It is interesting to note that the spectrum depends on $M$ only through a global factor in Eq.~(\ref{fitformula}), so that $T_{\rm eff}$ still solely depends on $M_t$, as long as transverse flow can be neglected.\footnote{The rise and fall of $T_{\rm eff}$ as a function of $M$ observed by NA60~\cite{NA60:2008ctj,NA60:2008dcb} in the low-mass region can be ascribed to transverse flow.}
In the limit of small $\eta/s$, Eq.~(\ref{fitformula}) gives:
\begin{equation}
\label{teffviscous}
T_{\rm eff}(M_t)\simeq \frac{M_t}{6+2a\frac{\displaystyle\eta}{\displaystyle s} M_t^2},
\end{equation}
where $a\simeq 0.61$~GeV$^{-2}$.
This equation shows that pre-equilibrium dynamics decreases the effective temperature, and that the shear viscosity over entropy ratio at early times $\eta/s$ can be extracted from the inverse slope.
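In code form (ours, using the quoted best-fit value of $a$ as default):
\begin{verbatim}
def t_eff(mt, eta_over_s, a=0.61):
    """Inverse slope of Eq. (teffviscous), in GeV.

    Reduces to the ideal-hydro value mt/6 as eta/s -> 0.
    """
    return mt / (6 + 2 * a * eta_over_s * mt**2)
\end{verbatim}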
Note that the inverse slope obtained in Ref.~\cite{Song:2018xca} using the Parton-Hadron String Dynamics (PHSD) model exceeds the McLerran-Toimela value (\ref{teffideal}) already at RHIC energies.
We believe that this is due to the presence of hard particles from jets/mini-jets in the PHSD Monte Carlo simulations.
This source of large invariant mass dileptons warrants further investigation.
We now discuss the centrality and system-size dependence of QGP dilepton production.
This dependence is encapsulated in the transverse area, $A_\perp$, and in the value of $\tau T^3$ at late times.
Both quantities satisfy simple scaling laws as a function of the charged hadron multiplicity per unit pseudorapidity, $dN_{\rm ch}/d\eta \propto A_{\perp} \tau T^3$.
The observation that the mean transverse momentum of hadrons $\langle p_t\rangle$ depends weakly on centrality and system size~\cite{ALICE:2018hza} implies that $A_\perp$ varies approximately like $(dN_{\rm ch}/d\eta)^{2/3}$~\cite{Gardim:2019brr} as a function of centrality and system size for fixed rapidity and collision energy.
This in turn implies that $\tau T^3$, which is proportional to $(dN_{\rm ch}/d\eta)/A_\perp$~\cite{Coquet:2021lca}, scales like $(dN_{\rm ch}/d\eta)^{1/3}$.
Eq.~(\ref{ratethermal3}) then shows that the McLerran-Toimela spectrum varies like $(dN_{\rm ch}/d\eta)^{4/3}$.
In other words, the dilepton yield scales like the space-time volume, while the hadron yield scales like the volume at freeze-out.
The time component explains the extra factor $(dN_{\rm ch}/d\eta)^{1/3}$.
Eq.~(\ref{reynoldsmt}) then shows that the relative modification of the dilepton yield due to pre-equilibrium dynamics varies with system size and centrality like $(dN_{\rm ch}/d\eta)^{-1/3}$.
This implies that the parameters $a$ and $b$ in Eq.~(\ref{fitformula}) are also proportional to $(dN_{\rm ch}/d\eta)^{-1/3}$.
However, note that local event-to-event fluctuations of the initial density~\cite{Aguiar:2001ac}, which we neglect, will break this simple scaling.
Similar dimensional arguments can be used to predict the dependence of QGP dilepton production on the collision energy $\sqrt{s_{\rm NN}}$.
For a given collision system, the transverse area $A_\perp$ is approximately independent of $\sqrt{s_{\rm NN}}$, while the hadron multiplicity $dN_{\rm ch}/d\eta$ increases with $\sqrt{s_{\rm NN}}$~\cite{ALICE:2015juo}.
Therefore, $\tau T^3$ scales with energy like $dN_{\rm ch}/d\eta$.
This implies that the McLerran-Toimela spectrum (\ref{ratethermal3}) is proportional to $(dN_{\rm ch}/d\eta)^2$.
On the other hand, the coefficients $a$ and $b$, which govern the modifications due to pre-equilibrium effects, are proportional to the inverse Reynolds number (\ref{reynoldsmt}), i.e., to $(dN_{\rm ch}/d\eta)^{-1}$.
Finally, the dependence on rapidity $y$ follows the same scaling rules as the dependence on collision energy, up to the replacement of $dN_{\rm ch}/d\eta$ with $dN_{\rm ch}/dy$.\footnote{Our calculation setup assumes longitudinal boost invariance, but the results can still be applied if the multiplicity depends on rapidity, since the longitudinal pressure gradient has a negligible effect at LHC energy~\cite{Ollitrault:2007du}.}
The McLerran-Toimela spectrum is proportional to $(dN_{\rm ch}/dy)^2$, while $a$ and $b$ are proportional to $(dN_{\rm ch}/dy)^{-1}$.
The dilepton spectrum is therefore maximum at mid-rapidity, where $dN_{\rm ch}/dy$ is maximum~\cite{ALICE:2016fbt}.
\section{Background from the Drell-Yan process}
\label{s:backgrounds}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\linewidth]{DY_scale_M35.pdf}
\includegraphics[width=\linewidth]{DY_scale_M45.pdf}
\end{center}
\caption{(Color online)
Comparison between the dilepton yield from production in the QGP and from the Drell-Yan process for two fixed values of the invariant mass $M=3.5$~GeV (top) and $M=4.5$~GeV (bottom).
Lines: QGP production with pre-equilibrium dynamics and quark suppression taken into account, for $\eta/s=0.16$ (upper line) and $\eta/s=0.32$ (lower line).
Dark shaded band: Drell Yan process, with uncertainty from the parton distribution function.
Light shaded band: uncertainty on Drell-Yan from the renormalization and factorization scales.
Both bands are obtained by taking the envelope of the results obtained by varying the model parameters.
}
\label{fig:DY}
\end{figure}
The main backgrounds to QGP dilepton production in the intermediate mass region, besides the large $J/\psi$ peak around $M\simeq 3.1$~GeV, are semileptonic decays of heavy quark hadrons, and Drell-Yan production in the initial state.
The number of dileptons from charm hadron decays is expected to exceed the QGP dilepton yield for $\sqrt{s_{\rm NN}}>40$~GeV~\cite{Song:2018xca}.
Charm decays are indeed observed to be the dominant source of dileptons at LHC energies~\cite{ALICE:2018ael}. This source of background can be rejected based on the finite lifetime of heavy quark hadrons, since the leptons from this source do not originate from the primary vertex. Despite the small lifetime of charm hadron ground states, $c\tau \approx 50$--$410~\mu$m, the partial rejection of these decay leptons in heavy-ion collisions is feasible and will strongly improve with the future detector projects LHCb Upgrade 2~\cite{LHCb:2018roe} and ALICE 3~\cite{Adamova:2019vkf}. In this phenomenological publication, we consider only the irreducible background from the Drell-Yan process.
We compute the production of dileptons by the Drell-Yan process using the Drell-Yan Turbo software package~\cite{Camarda:2019zyx}.
The cross section is evaluated at Next-to-Leading Order~(NLO) in the strong coupling constant $\alpha_s$.
In addition, the calculation employs a resummation at small transverse momentum at next-to-leading logarithm~(NLL).
For the non-perturbative contribution to the form factor, the same Gaussian form is chosen as in~\cite{Camarda:2019zyx}.
We evaluate the uncertainty on the Drell Yan spectrum in the following way:
We vary the renormalization and factorization scales independently by a factor of two, resulting in 8 variations with respect to the default choice, and we take into account the uncertainty on the parton distribution function in the EPPS parametrization~\cite{Eskola:2016oht}.
The central value of our calculation is plotted in Fig.~\ref{fig:allresults} for two values of the invariant mass $M$.
The slope of the Drell-Yan $M_t$ spectrum is roughly similar to that of the QGP spectrum, albeit slightly flatter.
The main difference between the two spectra is the normalization, which depends on the invariant mass $M$.
Drell-Yan production is enhanced for larger values of $M$ at a given $M_t$, contrary to QGP production.
The physical explanation is that the momenta of the incoming quark and antiquark responsible for Drell-Yan production are mostly longitudinal, so that smaller values of the transverse momentum $k_t$ (corresponding to larger values of $M$ at a given $M_t$) are preferred.
The kinematics of early QGP production is opposite, in the sense that longitudinal momenta in the QGP are typically smaller than transverse momenta.
This means that the breaking of $M_t$ scaling alone provides a handle to distinguish between QGP and Drell-Yan production.
Drell-Yan gradually takes over QGP production as $M$ increases.
In order to evaluate where the transition occurs, we compare both spectra in Fig.~\ref{fig:DY} for two values of $M$ above the $J/\psi$ peak, and for the two values of $\eta/s$ used in Fig.~\ref{fig:allresults}.
The uncertainty on the Drell-Yan spectrum from the renormalization and factorization scales is displayed as a light shaded band.
The uncertainty from parton distribution function is displayed as a dark shaded band.
One sees that the scale variation is the dominant source of uncertainty.
The top panel shows that QGP production is likely to dominate over Drell-Yan for $M$ up to $3.5$~GeV, at least for the lowest values of $M_t$.
Looking at the bottom panel, it seems unlikely that QGP dileptons can be isolated above $M=4.5$~GeV.
The precise value of $M$ above which Drell-Yan dominates over QGP production depends on the value of $\eta/s$ at early times.
Drell-Yan calculations can also be performed down to lower invariant masses. However, at low invariant masses, the pQCD calculation exhibits increasingly large scale uncertainties. Furthermore, the non-perturbative set-up of the code we use (NLO+NLL) was developed for high invariant masses (around the $Z$ mass), and we therefore do not stretch it into a regime where this approach may not be the most appropriate one. Nevertheless, based on our estimates, the production from the QGP and the pre-equilibrium phase is expected to outshine the Drell-Yan contribution at smaller invariant masses.
The calculations displayed in the figures are carried out at mid-rapidity.
The rapidity dependence of the Drell-Yan spectrum is milder than that of the QGP spectrum and goes in the opposite direction:
its minimum is at midrapidity.
On the other hand, the background from semileptonic decays of heavy quarks is easier to eliminate at larger rapidities because the secondary vertex is farther from the collision point.
\section{Conclusions}
\label{s:conclusions}
We have calculated the spectrum of dileptons produced by the quark-gluon plasma in ultrarelativistic heavy-ion collisions for invariant masses larger than $1.2$~GeV, with input from state-of-the-art QCD kinetic theory to model the kinetic and chemical equilibration of the QGP at early times.
The invariant spectrum $dN^{l^+l^-}/d^4K$ depends mostly on the transverse mass $M_t$.
The underpopulation of quarks at early times results in a steeper $M_t$ spectrum.
The viscosity over entropy ratio, which determines the equilibration time of the QGP, can be inferred by measuring the slope of the spectrum.
The anisotropy of the momentum distribution at early times breaks transverse mass scaling, by suppressing the production of higher invariant masses $M$.
Interestingly, the trend is opposite for dileptons produced by the Drell-Yan process, which is enhanced for larger $M$.
Therefore, one can distinguish experimentally QGP production from Drell-Yan production by studying the variation of the dilepton yield as a function of $M$ at fixed $M_t$.
We have introduced a simple parametrization (\ref{fitformula}), from which one can infer the dependence of QGP dilepton production on centrality, system size, rapidity, and collision energy.
Our modelization can be improved in several ways.
For the larger values of $M_t$, the contribution of minijets~\cite{Song:2018xca,Paatelainen:2013eea} warrants additional study.
For the smaller values of $M_t$, one should take transverse flow into account, as its effect on dilepton production is likely to be significant for transverse masses $M_t\lesssim 2$~GeV.
It has been studied in detail in the low-mass region~\cite{Vujanovic:2013jpa}.
In the intermediate mass region, detailed studies of transverse flow have been carried out~\cite{Ryblewski:2015hea,Kasmaei:2018oag}.
However, the effect of transverse flow on the slope on the $M_t$ spectrum has only been studied at RHIC energy~\cite{Deng:2010pq}, and deserves further studies at higher energies.
\section*{Acknowledgments}
This work is supported in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 ``Strong-interaction matter under extreme conditions'' project number 315477589 – TRR 211 and in part in the framework of the GLUODYNAMICS project funded by the ``P2IO LabEx (ANR-10-LABX-0038)'' in the framework ``Investissements d'Avenir'' (ANR-11-IDEX-0003-01) managed by the Agence Nationale de la Recherche (ANR), France.
\section{Introduction}
\subsection{Problem Statement}
\IEEEPARstart{T}{he} advent of the internet of things (IoT) era is making it possible to ``locate'' and ``connect'' anywhere, owing to the proliferation of public wireless (e.g., low-power wide-area network (LPWAN) \cite{UPINLBS}, cellular, wireless local area network (WiFi), and Bluetooth) base stations and the availability of sensors in smart IoT devices. Crowdsourcing using data from existing public infrastructures is a trend for low-cost mass-market IoT applications \cite{Xiang2017}. However, traditional indoor localization performance in mass-market applications is difficult to predict: the localization accuracy may be as fine as one meter in environments that have a strong signal geometry, or degrade to tens of meters. The unpredictability of positioning accuracy has limited the \textit{scalability} of localization in IoT applications. Although extensive research has brought significant improvements to localization under specific scenarios, it is difficult to maintain localization accuracy ubiquitously due to the complexity of daily-life scenarios, the diversity of low-cost devices, and the requirement for system deployment.
An approach for alleviating the scalability issue is to predict the localization accuracy (or uncertainty) in real time, so as to adaptively adjust the location solutions for various scenarios. To date, most existing indoor localization works focus on improving localization techniques, while there is a lack of investigation of indicator metrics for localization accuracy, especially fingerprinting accuracy. Therefore, the main objective of this paper is to provide a reference on predicting fingerprinting accuracy and using it to improve localization.
To begin with, the related state-of-the-art techniques, such as those for low-cost indoor localization, crowdsourcing-based localization, and localization accuracy prediction, are reviewed, followed by the contributions of this paper.
\subsection{Low-Cost Indoor Localization Techniques}
There are generally three types of indoor localization techniques: geometrical methods, fingerprinting, and dead-reckoning (DR) based algorithms. Typical geometrical methods include multilateration, which uses device-access point (AP) ranges, and triangulation, which is based on angle-of-arrival measurements or ranges. Ranging measurements can either be calculated from received signal strength (RSS) by using path-loss models, or be determined from time-of-arrival measurements. Time-of-arrival and angle-of-arrival measurements require specific hardware for precise time synchronization and phase synchronization, respectively, while RSS is the mainstream wireless measurement that is supported by the majority of IoT devices. For RSS-based methods, multilateration has advantages such as a small database, the capability of positioning in areas beyond the database, the extrapolation capability for uncharted areas, and, most importantly, the feasibility of obtaining real-time positioning accuracy. However, for indoor scenarios, the multilateration performance may be degraded by non-line-of-sight conditions. Thus, it is difficult for multilateration to provide accuracy that is competitive with fingerprinting in indoor areas. Therefore, fingerprinting and DR are the most widely used methods for low-cost indoor localization.
\subsubsection{Fingerprinting}
\label{sub-fp}
Fingerprinting has been proven to be effective for localization using signals such as RSS and magnetic data \cite{LiY-PhD}. Fingerprinting consists of two steps: training and testing. For training, [feature vector, location] fingerprints at multiple reference points (RPs) are used to generate a database. For testing, the RPs whose features are closest to the measured data are selected to compute the device location. To determine the closest features, the likelihood (or weight) for each fingerprint can be computed through deterministic (e.g., nearest neighbors \cite{Fu2018}), probabilistic (e.g., Gaussian distribution \cite{Haeberlen2004} and Gaussian process \cite{HeZ2018}), and machine-learning (e.g., random forest \cite{Guo2018} and neural networks \cite{FangS2008}) methods; a minimal deterministic sketch is given below. A key challenge for fingerprinting is the inconsistency between the database and real-time measurements. Examples of factors that lead to such inconsistency include a) database timeliness; b) signal fluctuations and interference such as reflection, multipath, and disturbances by the human body; c) device diversity; and d) device orientation and motion mode. For a), there are database-updating methods through techniques such as crowdsourcing \cite{ZhangP-Sensors} and simultaneous localization and mapping (SLAM) \cite{Gentner2016}. For b), possible solutions include calibration-based methods \cite{Lee2015}, and calibration-free methods such as differential RSS \cite{Hossain2013}. For c), practical methods include RSS filtering \cite{Xue2017}, the selective use of APs \cite{Liang2015}, and the use of advanced models for multipath \cite{Gentner2016}, non-line-of-sight \cite{HeZ2014}, fading \cite{Akyildiz2009}, and channel diversity \cite{Zanella2014}. Finally, for d), there are methods based on orientation aiding \cite{Sanchez2012}, database selection \cite{King2006}, and orientation compensation \cite{Fang2015}. In general, improving fingerprinting performance in specific scenarios has been well researched.
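For illustration, a minimal weighted nearest-neighbor sketch in Python is given below (ours, not a production implementation; all names and the inverse-distance weighting are illustrative choices):
\begin{verbatim}
import numpy as np

def wknn_fingerprint(rss, db_feats, db_locs, k=4, eps=1e-6):
    """Weighted k-nearest-neighbor fingerprinting.

    rss:      measured RSS feature vector, shape (n_aps,)
    db_feats: database fingerprints, shape (n_rps, n_aps)
    db_locs:  reference-point coordinates, shape (n_rps, 2)
    """
    d = np.linalg.norm(db_feats - rss, axis=1)  # signal-space distances
    idx = np.argsort(d)[:k]                     # k nearest reference points
    w = 1.0 / (d[idx] + eps)                    # inverse-distance weights
    return (w[:, None] * db_locs[idx]).sum(0) / w.sum()
\end{verbatim}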
\subsubsection{Dead-Reckoning}
\label{sub-dr}
Low-cost motion sensor (e.g., inertial sensor and magnetometer) based DR can provide autonomous indoor/outdoor localization solutions \cite{YuN2018}. However, obtaining a long-term accurate DR solution is challenging due to factors such as a) the requirement for heading and position initialization; b) the existence of sensor errors and drifts; and c) the variety of device motions, which introduces misalignment angles between the vehicle (e.g., the human body) and the device. For a), wireless position and magnetic heading are commonly used for coarse location and heading initialization, while a filter (e.g., a Kalman filter or particle filter) is used to update position and heading gradually. Thus, the accuracy of both the initial and follow-up location updates is important. For b), real-time calibration for gyros \cite{LiY2015} and magnetometers \cite{Wahdan2014} can be used. For c), methods such as principal component analysis \cite{DengZ2015} are utilized for misalignment estimation. Meanwhile, the key to using low-cost DR is to extract constraints, such as the pedestrian velocity model \cite{LiY-2018-access}, multi-device constraints \cite{LanH-2015}, and the building heading constraint \cite{Abdulrahim2010} for indoor pedestrian localization. Short-term DR solutions can bridge outages of external localization signals, provide smoother and more reliable solutions when integrated with other techniques, and aid the profile matching of wireless \cite{LiY2016} and magnetic signals \cite{LiY-Mag}.
\subsubsection{Multi-Sensor Integration}
Due to the complexity of daily-life scenarios, it is challenging to provide low-cost but high-performance localization solutions with a standalone technology. Because of the complementary characteristics among various technologies, multi-sensor integration is widely adopted. The data from motion sensors are commonly used to build the system model and provide a short-term prediction, while technologies such as wireless and magnetic fingerprinting provide the updates. The Kalman filter \cite{LiY2017} and the particle filter \cite{Pak2015} are widely used techniques for information fusion. The majority of the literature integrates multi-sensor information in a loosely-coupled manner, while others apply a tightly-coupled approach \cite{ZhuangY2018}. The key in multi-sensor integration is to take advantage of the merits of each single technique according to specific scenarios. This is commonly achieved by setting and tuning the weight for each technique in the filter, which is described in Subsection \ref{sub-review-fai}.
\subsection{Crowdsourcing-Based Localization}
With millions of users and devices connected, massive amounts of data are collected in real time. Accordingly, an ``evolution" is needed for the traditional localization modes that consist of separate training and testing steps. Under the crowdsourcing framework, location users also become location providers, as their localization solutions may be used to update the database. There are crowdsourcing-based localization approaches \cite{Zhuang-EL} that estimate AP locations and path-loss model parameters (PPs) with crowdsourced data, and SLAM-based methods \cite{Gentner2016} that update databases while localizing. For crowdsourcing or SLAM based methods, a challenge is to obtain robust path predictions, updates, and constraints for the optimization of the graph that contains device and AP parameters. DR is widely used to provide path predictions, while the fingerprinting solution is utilized to provide position updates or constraints. Therefore, the challenges for DR in Subsection \ref{sub-dr} and fingerprinting in Subsection \ref{sub-fp} are also issues to solve for crowdsourcing-based localization. Meanwhile, the selection of robust sensor data and fingerprints is key to a reliable crowdsourcing system. The research in \cite{ZhangP2018} proposes a crowdsourced sensor data quality assessment framework. In this paper, the location accuracy indicator is further introduced, which can be used to predict the quality of location updates before using them for multi-sensor integration or database updating, and in turn enhance the reliability of crowdsourcing solutions. Existing works on localization accuracy indicators are discussed in the next subsection.
\subsection{Localization Accuracy Prediction}
\label{sub-review-fai}
The multi-sensor integration performance is highly dependent on the setting of parameters, such as the initial state covariance matrix ($\textbf{P}$), the system noise covariance matrix ($\textbf{Q}$), and the measurement noise covariance matrix ($\textbf{R}$). In particular, improper setting of $\textbf{Q}$ or $\textbf{R}$ may lead to degradation \cite{Mehra1970} or even divergence \cite{Fitzgerald1971} of estimation. Since the Kalman filter system model is constructed from self-contained motion models, the elements in $\textbf{Q}$ are set according to the stochastic sensor characteristics, which can be obtained from factory or in-lab calibration through methods such as Allan variances \cite{El-Sheimy2008}. By contrast, setting $\textbf{R}$ is more challenging because its elements depend on the quality of real-time measurements. Elements in $\textbf{R}$ may be set at empirical constants, by piece-wise models that have various but limited conditions, or by using adaptive Kalman filters that are based on variables such as residuals \cite{YuM2012}, innovations \cite{Aghili2016}, and the expectation-maximization term \cite{HuangY2018}. These methods are presented from the information fusion level. On the other hand, it is worthwhile to predict the accuracy of measurement models before information fusion, so as to obtain a priori information for the models in the filter.
For geometrical localization techniques (e.g., multilateration), criteria such as the dilution of precision (DOP) values are adopted to predict system accuracy, given device-AP distances and AP locations \cite{Langley1999}. There are also theoretical analyses for accuracy of angle-of-arrival \cite{HeY2015} and time-of-arrival/angle-of-arrival \cite{LiYGQ2018} based localization systems.
For fingerprinting, most existing works evaluate its accuracy through field testing (i.e., by localizing with real data and comparing with reference values) \cite{Yiu2016} or simulation \cite{Nguyen2017}. These methods can provide statistics of location errors at various points. However, the collection of reference values for field testing is commonly time-consuming, labor-intensive, and unaffordable for low-cost applications, while simulation cannot reflect real-world error sources. Researchers have also introduced theoretical methods, such as using the Cramer-Rao lower bound (CRLB) to estimate the lower bound of the accuracy of RSS databases \cite{Nikitin2017}, and using observability analysis \cite{Lourenco2013} to analyze RSS based localization. Both CRLB and observability analysis methods can reveal details inside the system theoretically. However, the CRLB provides a lower bound instead of a real-time prediction, while theoretical observability analysis can only treat simplified models and cannot consider complex terms such as non-linearities and noises.
For fingerprinting accuracy prediction using real-time data, the research in \cite{Lemelson2009} presents initial work that predicts position errors through methods such as using the distances between one fingerprint and its closest fingerprints in the database. Meanwhile, there are fingerprinting accuracy prediction approaches based on DOP-like values \cite{Moghtadaiee2012}, Gaussian distribution based indicators \cite{Marcus2013}\cite{Beder2011}, and entropy from information theory \cite{Berkvens2018}. The fingerprinting accuracy prediction criteria in this paper follow \cite{Lemelson2009} but are extended for the crowdsourcing-based multi-sensor integration application, as discussed in the next subsection.
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{sys-diag}
\caption{Diagram of proposed crowdsourcing-based localization method }
\label{fig:sys-diag}
\end{figure}
\subsection{Main Contributions}
Compared to the existing works, the main contributions of this paper include
\begin{itemize}
\item The requirement for costly database-training process has limited the use of fingerprinting (e.g., wireless and magnetic fingerprinting) methods in mass-market location-enhanced IoT applications. Thus, this paper proposes a crowdsourcing-based localization approach, which does not involve user intervention or parameter tuning. Crowdsourced sensor data is used for updating both wireless and magnetic databases simultaneously during the localization process.
\item It is not straightforward to set and tune the uncertainty of fingerprinting solutions in multi-sensor integration. Therefore, this paper introduces the fingerprinting accuracy indicator (FAI) factors from three levels (i.e., the signal strength, geometry, and database levels) as well as their combinations. In particular, the weighted distance between similar fingerprints (weighted DSF) based FAI is proposed to provide robust predictions from the database level. Additionally, the FAI factors for magnetic fingerprinting are designed. Furthermore, the FAI-enhanced extended Kalman filter (EKF) is proposed, which is effective in enhancing both localization accuracy and reliability (i.e., the capability to resist location outliers). Based on the proposed crowdsourcing localization method, the advantages and limitations of each FAI factor for wireless and magnetic fingerprinting are investigated. It is found that the weighted DSF based FAI is effective in predicting both location errors and outliers. Meanwhile, the geometry based FAI can predict short-term errors and outliers but does not provide accurate tracking of long-term errors. In contrast, the signal strength based FAI is relatively stronger in predicting long-term location errors. Therefore, the presented FAI factors have complementary characteristics and thus are combined to provide more accurate and robust predictions.
\end{itemize}
\subsection{System Description}
Figure \ref{fig:sys-diag} demonstrates the system diagram of the proposed multi-sensor crowdsourcing-based localization method. There are mainly two parts: the crowdsourcing engine and the FAI-enhanced multi-sensor integration engine, which are highlighted by the red and blue boxes, respectively. Both engines process raw sensor (e.g., inertial, wireless, and magnetic sensor) data from IoT devices. The crowdsourcing engine updates the wireless and magnetic fingerprint databases with robust sensor data, which is selected through a quality assessment framework that was proposed in \cite{ZhangP2018} and is further improved in this paper. AP locations and PPs are also estimated and further used to calculate the geometry based FAI factor.
Within the multi-sensor integration engine, both wireless RSS and magnetic fingerprinting solutions are used as position updates, while the DR data is used to construct the system model. Data from fingerprinting and DR are fused in the FAI-enhanced EKF. Compared to traditional multi-sensor systems, the proposed EKF has an extra FAI computation module, which predicts the accuracy (or uncertainty) of fingerprinting solutions in real time and thus removes the complex parameter-tuning process.
The details of the proposed system are described in the following sections. Section \ref{sec-sen-data-crowd} illustrates the sensor-based DR that is used for generation of crowdsourced RPs. Section \ref{fai-enh-ms-loc} describes the FAI-enhanced multi-sensor integration approach. Section \ref{sec:test} shows the tests and analyses, and Section \ref{sec-conclusionsl-work} draws the conclusions.
\begin{table}
\centering
\begin{tabular}{p{1.6cm} p{5.8cm} }
\hline
\textbf{Abbreviation} & \textbf{Definition} \\ \hline
AP & access point \\
CDF & cumulative distribution function \\
CRLB & Cramer-Rao lower bound \\
CT & constant fingerprinting accuracy \\
DR & dead-reckoning \\
DSF & distance between similar fingerprints \\
EKF & extended Kalman filter \\
FAI & fingerprinting accuracy indicator \\
GPS & global positioning system \\
INS & inertial navigation systems \\
IoT & internet of things \\
LPWAN & low-power wide-area network \\
MC & multi-FAI combination through linear combination \\
MCM & multi-FAI combination through selection of maximum value \\
PDR & pedestrian dead-reckoning \\
PP & path-loss model parameter \\
RGB-D & red-green-blue-depth \\
RMS & root mean square \\
RP & reference point \\
RSS & received signal strength \\
SD & geometry based FAI \\
SLAM & simultaneous localization and mapping \\
SS & signal strength based FAI \\
STD & standard deviation \\
UTC & coordinated universal time \\
WD & weighted DSF (WDSF) based FAI \\
WiFi & wireless local area network \\ \hline
\end{tabular}
\caption{ List of abbreviations }
\label{tab:abbre}
\end{table}
\section{Sensor Data Crowdsourcing}
\label{sec-sen-data-crowd}
As described in Subsection \ref{sub-fp}, a fingerprinting database consists of a set of fingerprints (i.e., [feature vector, RP location] pairs). In this paper, wireless RSS and magnetic data are used to extract the features, while sensor-based DR generates the RP locations. Refer to \cite{ZhuangY2016} and \cite{LiY2017} for details about RSS and magnetic data preprocessing and feature extraction, respectively. This paper focuses on obtaining reliable RP locations in crowdsourcing. Compared to \cite{ZhangP2018}, this paper improves the condition for sensor data selection and avoids the complex model-training process. This section provides the methodology for sensor-based DR and sensor data selection.
\subsection{Sensor-Based Dead-Reckoning}
In contrast to the research \cite{LiY2017} that uses an attitude-determination EKF and a position-tracking EKF, this paper uses an inertial navigation systems (INS)/pedestrian dead-reckoning (PDR) integrated EKF for localization. The EKF system and measurement models are described in this subsection.
\subsubsection{EKF System Models}
The simplified motion model in \cite{Shin2005} is applied as the continuous system model
\begin{equation}
\begin{aligned}
\left[
\begin{matrix}
\delta \dot{\textbf{p}}^n \\
\delta \dot{\textbf{v}}^n \\
\delta \dot{\bm{\psi}} \\
\dot{\textbf{b}}_g \\
\dot{\textbf{b}}_a
\end{matrix}
\right]
=
\left[
\begin{matrix}
-[\bm{\omega}^n_{en} \times] \delta \textbf{p}^n + \delta \textbf{v}^n \\
-[(2\bm{\omega}^n_{e} + \bm{\omega}^n_{en}) \times] \delta\textbf{v}^n + [\textbf{f}^n \times] \bm{\psi} + \textbf{C}^n_b(\textbf{b}_a + \textbf{w}_a) \\
-[(\bm{\omega}^n_{e} + \bm{\omega}^n_{en}) \times ]\bm{\psi} - \textbf{C}^n_b(\textbf{b}_g + \textbf{w}_g) \\
- (\frac{1}{\bm{\tau}_{b_g}})\textbf{b}_g + \textbf{w}_{b_g} \\
- (\frac{1}{\bm{\tau}_{b_a}})\textbf{b}_a + \textbf{w}_{b_a}
\end{matrix}
\right]
\end{aligned}
\end{equation}
where the states $\delta \textbf{p}^n$, $\delta \textbf{v}^n$, $\bm{\psi}$, $\textbf{b}_g$, and $\textbf{b}_a$ are the vectors of position errors, velocity errors, attitude errors, gyro biases, and accelerometer biases, respectively; $\textbf{C}^n_b$ is the direction cosine matrix from the device body frame (i.e., $b$-frame) to the local level frame (i.e., $n$-frame) predicted by the INS mechanization (refer to \cite{Shin2005} for the INS mechanization and the definition of frames); $\textbf{f}^n$ is the specific force vector projected to the $n$-frame, and $\bm{\omega}^n_{e}$ and $\bm{\omega}^n_{en}$ represent the angular rate of the Earth and that of the $n$-frame with respect to the Earth frame (i.e., e-frame), respectively; $\textbf{w}_g$ and $\textbf{w}_a$ are noises in gyro and accelerometer readings, respectively; $\bm{\tau}_{b_g}$ and $\bm{\tau}_{b_a}$ denote the correlation times of the sensor biases; and $\textbf{w}_{b_g}$ and $\textbf{w}_{b_a}$ are the driving noises for $\textbf{b}_g$ and $\textbf{b}_a$. The sign $[\bm{\gamma} \times]$ denotes the cross-product (skew-symmetric) form of the three-dimensional vector $\bm{\gamma}=\left[ \begin{matrix} \gamma_1 & \gamma_2 & \gamma_3 \end{matrix} \right]^T$,
\begin{equation}
\begin{aligned}
&[\bm{\gamma} \times] =
\left[
\begin{matrix}
0 & -\gamma_3 & \gamma_2 \\
\gamma_3 & 0 & -\gamma_1 \\
-\gamma_2 & \gamma_1 & 0 \\
\end{matrix}
\right]
\end{aligned}
\end{equation}
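For illustration, a minimal Python sketch of this cross-product matrix is given below (assuming NumPy; the function name is hypothetical and the code is not part of the proposed implementation).
\begin{verbatim}
import numpy as np

def skew(gamma):
    # Cross-product (skew-symmetric) matrix [gamma x]
    # of a 3-D vector gamma = [g1, g2, g3].
    g1, g2, g3 = gamma
    return np.array([[0.0, -g3,  g2],
                     [ g3, 0.0, -g1],
                     [-g2,  g1, 0.0]])
\end{verbatim}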
\subsubsection{EKF Measurement Models}
Two types of updates are used in the EKF, including the sensor-related updates and motion-related updates. The sensor-related updates include the accelerometer and magnetometer measurement updates \cite{LiY2017}
\begin{equation}
\hat{\textbf{f}}^n-\tilde{\textbf{f}}^n = -[\textbf{g}^n \times]\bm{\psi} + \textbf{C}^n_b \bm{\iota}_f
\end{equation}
\begin{equation}
\hat{\textbf{m}}^n-\tilde{\textbf{m}}^n = -[\textbf{m}^n \times]\bm{\psi} + \textbf{C}^n_b \bm{\iota}_m
\end{equation}
where $\textbf{g}^n$ and $\textbf{m}^n$ are the local gravity and magnetic intensity vectors, respectively. The signs $\tilde{\bm{\gamma}}$ and $\hat{\bm{\gamma}}$ represent the measured value of the term $\bm{\gamma}$ and the value predicted by the system model, respectively. $\bm{\iota}_f$ and $\bm{\iota}_m$ are the accelerometer and magnetometer measurement noise vectors.
The motion-related updates include the pedestrian velocity model \cite{Syed2008} and the zero angular rate updates, which are described as
\begin{equation}
\hat{\textbf{v}}-\tilde{\textbf{v}} = (\textbf{C}^n_b)^T \delta\textbf{v}^n - (\textbf{C}^n_b)^T [\textbf{v}^n \times] \bm{\psi} + \bm{\iota}_v
\end{equation}
\begin{equation}
\delta \bm{\omega}^b_{ib} = \textbf{b}_g + \bm{\iota}_{\omega}
\end{equation}
where $\textbf{v}^n$ is the velocity in the $n$-frame and $\bm{\omega}^b_{ib}$ is the gyro measurement vector; $\bm{\iota}_v$ and $\bm{\iota}_{\omega}$ are the velocity and angular rate measurement noise vectors. When the device is static, the velocity update becomes the zero velocity update, and the zero angular rate update is activated.
The measured velocity vector is obtained from PDR by
\begin{equation}
\tilde{\textbf{v}} = [\frac{s_k}{t_k - t_{k-1}} ~ 0 ~ 0]^T
\end{equation}
where $s_k$ is the step length between time epochs $t_{k-1}$ and $t_k$. Refer to \cite{ZhangH2015} and \cite{ShinSH2007} for details about step detection and step length estimation, respectively.
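As a minimal sketch of this velocity construction (in Python with NumPy; function and variable names are hypothetical), assuming the step length and step timestamps are already available from the cited step detection and step length estimation methods:
\begin{verbatim}
import numpy as np

def pdr_velocity(step_length, t_prev, t_curr):
    # Forward speed = step length / step duration; the lateral
    # and vertical body-frame components are assumed zero for
    # a walking pedestrian.
    speed = step_length / (t_curr - t_prev)
    return np.array([speed, 0.0, 0.0])

# e.g., a 0.7 m step over 0.5 s gives [1.4, 0, 0] m/s
v_tilde = pdr_velocity(0.7, 10.0, 10.5)
\end{verbatim}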
\subsection{Crowdsourced Sensor Data Selection }
\label{sec-sdqs}
From the big data perspective, only a small proportion of crowdsourced data is reliable enough for database updating. In \cite{ZhangP2018}, three sensor data quality assessment conditions are presented to select reliable sensor data: a) using localization data that lasts shorter than a time threshold (e.g., 10 minutes); b) using trajectory segments in which both start and end points are anchor points (i.e., points that have known coordinates provided by absolute localization technologies such as satellite positioning and Bluetooth); and c) using forward-backward smoothing to improve DR accuracy. In this paper, conditions b) and c) are kept, while condition a) is replaced by a stricter one: the position errors (compared to the locations of the anchor points) at the end points of both the forward and backward DR solutions must be within a geographical distance threshold $\lambda_d$. Through this change, the condition is applied to location results instead of raw data, and thus becomes stricter. The benefit of the new condition is that it becomes straightforward to quantify the RP location uncertainty, which is further contained in the FAI values to improve the FAI prediction accuracy.
Additionally, although the research in \cite{ZhangP2018} proposes the sensor data quality assessment conditions, it does not involve the expected RP location uncertainty, which is essential for predicting the uncertainty of the generated fingerprinting database. Thus, this paper estimates the RP location uncertainty as follows. The equation used for the computation of the forward-backward smoothed DR solution is
\begin{equation}
\begin{aligned}
\textbf{x}_{sm,k} &= \frac{u_{\emph{+},k}}{u_{\emph{+},k}+u_{\emph{-},k}} \textbf{x}_{\emph{+},k} + \frac{u_{\emph{-},k}}{u_{\emph{+},k}+u_{\emph{-},k}} \textbf{x}_{\emph{-},k} \\
&=\xi_k \textbf{x}_{\emph{+},k} +(1-\xi_k) \textbf{x}_{\emph{-},k}, ~ \xi_k = \frac{u_{\emph{+},k}}{u_{\emph{+},k}+u_{\emph{-},k}}
\end{aligned}
\end{equation}
where the subscripts $\emph{+}$, $\emph{-}$, and $sm$ indicate the forward, backward, and smoothed solutions; $\textbf{x}$ is the state vector, and $u$ is the weight; the subscript $k$ indicates the data epoch.
According to the error propagation law, the accuracy of the smoothed DR solution is
\begin{equation}
\label{eq-sm-accu}
\bm{\sigma}_{sm,k} = \sqrt{ \xi^2_k \bm{\sigma}^2_{\emph{+},k} + (1-\xi_k)^2 \bm{\sigma}^2_{\emph{-},k}}
\end{equation}
where $\bm{\sigma}$ represents the accuracy (i.e., the root mean square (RMS) value). Through Equation \eqref{eq-sm-accu}, it is possible to approximately predict the RP location uncertainties, which are used when computing FAI factors in the next section.
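A minimal sketch of the forward-backward smoothing and the propagated accuracy (Python/NumPy; names are hypothetical, and the weights $u$ are assumed given):
\begin{verbatim}
import numpy as np

def smooth_dr(x_fwd, x_bwd, u_fwd, u_bwd, sig_fwd, sig_bwd):
    # Weighted combination of forward/backward DR states and
    # the error-propagated RMS of the smoothed solution.
    xi = u_fwd / (u_fwd + u_bwd)
    x_sm = xi * x_fwd + (1.0 - xi) * x_bwd
    sig_sm = np.sqrt(xi ** 2 * sig_fwd ** 2
                     + (1.0 - xi) ** 2 * sig_bwd ** 2)
    return x_sm, sig_sm
\end{verbatim}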
\section{FAI-Enhanced Multi-Sensor Integrated Localization}
\label{fai-enh-ms-loc}
\subsection{Fingerprinting}
\label{sec-fp1}
The Gaussian distribution based fingerprinting method \cite{Haeberlen2004} computes the probability density function of the states according to given measurements. The index for the selected fingerprint is computed by
\begin{equation}
\label{eq-i}
\begin{aligned}
\hat{i} & = \mathop{\arg\max}\limits_{i} \left( {\zeta(\textbf{l}_i | \textbf{q})} \right)
= \mathop{\arg\max}\limits_{i} \left( { \frac{ \zeta(\textbf{l}_i) \zeta(\textbf{q} | \textbf{l}_i)} {\zeta(\textbf{q})}} \right) \\
& = \mathop{\arg\max}\limits_{i} \left( { \zeta(\textbf{l}_i) \zeta(\textbf{q} | \textbf{l}_i) } \right) \\
& = \mathop{\arg\max}\limits_{i} \left( { \zeta(\textbf{l}_i) { \prod_{j=1}^{n} {\zeta(q_j|l_{i,j})} } } \right)
\end{aligned}
\end{equation}
where $\zeta()$ represents the likelihood, $\textbf{q}$ denotes the measured feature vector, $\textbf{l}_i$ represents the reference feature vector at fingerprint $i$ in the database; $q_j$ and $l_{i,j}$ are the $j$-th feature in $\textbf{q}$ and $\textbf{l}_i$, respectively; $n$ is the dimension for the feature vector. The likelihood ${\zeta(q_j|l_{i,j})}$ can be calculated through the Gaussian distribution model as
\begin{equation}
\zeta(q_j | { l_{i,j} } )= { \frac{1}{\sigma_{i,j}\sqrt{2\pi}} \exp \left( {-\frac{(q_{j}-\mu_{i,j})^2}{2\sigma_{i,j}^2}} \right) }
\end{equation}
where $\mu_{i,j}$ and $\sigma_{i,j}^2$ are the mean and variance of feature $j$ in fingerprint $i$, respectively.
For RSS and magnetic fingerprinting, the features are the RSS from multiple APs and the horizontal and vertical magnetic gradient profiles, respectively. The method to generate the magnetic gradient profile is as follows. First, the magnetic intensity profile is built by combining magnetic intensity values at a series of points on a short-term trajectory (e.g., 10 steps). To increase the magnetic fingerprint dimension, accelerometer-derived horizontal angles are used to extract the horizontal and vertical magnetic components. Afterwards, the first element of the magnetic intensity profile is subtracted from each element to generate the magnetic gradient profile. Refer to \cite{LiY2017} for details about RSS and magnetic fingerprints. The magnetic matching solutions are obtained through the wireless-aided magnetic matching strategy from \cite{LiY2017}. That is, a circular area is first determined by wireless localization. The circle center is located at the wireless location solution, while the circle radius is set at three times the wireless location accuracy. Then, magnetic fingerprinting is implemented within the circle.
Once the likelihood for each RP is calculated, the $\kappa$ RPs that have the highest likelihood values are selected. Afterwards, the locations of the selected RPs are weighted-averaged to calculate the position estimate by
\begin{equation}
\label{eq-knn}
\hat{\textbf{p}} = \sum_{i=1}^{\kappa} {{ \frac{\textbf{p}_i \zeta_i }{ \sum_{j=1}^{\kappa} {\zeta_j} }} }
\end{equation}
where $\hat{\textbf{p}}$ is the position estimate, $\textbf{p}_i$ is the location for the $i$-th RP and $\zeta_i$ is its likelihood.
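To make this concrete, a minimal Python/NumPy sketch of the Gaussian-likelihood fingerprinting of Equations \eqref{eq-i}--\eqref{eq-knn} is given below. A uniform prior $\zeta(\textbf{l}_i)$ is assumed, log-likelihoods are used for numerical stability, and the function and variable names are hypothetical.
\begin{verbatim}
import numpy as np

def fingerprint_position(q, mu, sigma, rp_xy, kappa=5):
    # q: measured feature vector (n,); mu, sigma: (n_rp, n)
    # database means / standard deviations; rp_xy: (n_rp, 2)
    # RP locations.
    # Log-likelihood of q for each fingerprint
    # (product of per-feature Gaussians).
    log_like = -0.5 * np.sum(((q - mu) / sigma) ** 2
                             + np.log(2.0 * np.pi * sigma ** 2),
                             axis=1)
    idx = np.argsort(log_like)[-kappa:]        # kappa best RPs
    w = np.exp(log_like[idx] - log_like[idx].max())
    return (rp_xy[idx] * (w / w.sum())[:, None]).sum(axis=0)
\end{verbatim}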
\subsection{Fingerprinting Accuracy Indicator}
FAI factors at three levels are introduced: the signal strength, geometry, and database (DSF) levels. This subsection describes these FAI factors and their combinations.
\subsubsection{Signal Strength Based FAI (SS)}
\label{sec-ss}
According to preliminary tests, features with stronger RSS or magnetic gradient values are generally more reliable. Specifically, a stronger RSS indicates a closer distance to an AP, and stronger magnetic gradients indicate that the fingerprint has a higher probability of being distinct from others. Based on this principle, each feature is given a score according to its value. Afterwards, scores from all features are combined to determine the SS value by
\begin{equation}
\label{eq-ss-alpha}
[ss]^* = \alpha^*_{ss}\frac{\sum_{i=1}^{n_*} {c^*_{i}}}{n_*}
\end{equation}
where the superscript and subscript $*$ may be $w$ for wireless RSS or $m$ for the magnetic data; $n_*$ is the number of features; $c^*_{i}$ is the score for feature $i$, and $\alpha^*_{ss}$ is the scale factor for SS computation. The $c^*_{i}$ values for RSS and magnetic data are respectively computed as
\begin{equation}
c^w_{i} = \frac{1}{d_{i}} = 10^{ \frac{r_i-\beta_{2,i}}{10\beta_{1,i}} }, ~ c^m_{i} = \frac{1}{m_i}
\end{equation}
where $d_i$ is the distance from device to AP $i$, $r_i$ represents the RSS from AP $i$; $\beta_{1,i}$ and $\beta_{2,i}$ are PPs for AP $i$, and $m_i$ represents the value of magnetic gradient feature $i$. The PPs $\beta_{1}$ and $\beta_{2}$ for each AP are determined through least squares by using the measurement model in \cite{ZhuangWcl} as
\begin{equation} \label{rss-hx}
\begin{aligned}
\textbf{r}_a = -10 \beta_1 \log_{10}{\left( \sqrt{(x-\textbf{x}_u)^2 + (y-\textbf{y}_u)^2} \right)} + \beta_2
\end{aligned}
\end{equation}
where $\textbf{x}_u=\left[ \begin{matrix} x_{u,1} & x_{u,2} & ... & x_{u,n_p} \end{matrix} \right]^T$ and $\textbf{y}_u=\left[ \begin{matrix} y_{u,1} & y_{u,2} & ... & y_{u,n_p} \end{matrix} \right]^T$ are device coordinates from DR; $\textbf{r}_{a} = \left[ \begin{matrix} r_1 & r_2 & ... & r_{n_p} \end{matrix} \right]^T$ is the RSS measurement vector, and $n_p$ is the number of device locations. The state vector to be estimated is $\textbf{x}_{a} =\left[ x~~y~~\beta_1~~\beta_2 \right]^T$, where $x$ and $y$ are the AP coordinates and $\beta_1$ and $\beta_2$ are the AP PPs. The design matrix $\textbf{H}_a$ for AP location and PP estimation is
\begin{equation}
\begin{aligned}
&\textbf{H}_{a} =
\left[
\begin{matrix}
\frac{-10\beta_1(x-x_{u,1})}{d^2_1 \ln 10} & \frac{-10\beta_1(y-y_{u,1})}{d^2_1 \ln 10} & -10\log_{10}{d_1} & 1 \\
... & ... & ... & ... \\
\frac{-10\beta_1(x-x_{u,j})}{d^2_j \ln 10} & \frac{-10\beta_1(y-y_{u,j})}{d^2_j \ln 10} & -10\log_{10}{d_j} & 1 \\
... & ... & ... & ... \\
\frac{-10\beta_1(x-x_{u,n_p})}{d^2_{n_p} \ln 10} & \frac{-10\beta_1(y-y_{u,n_p})}{d^2_{n_p} \ln 10} & -10\log_{10}{d_{n_p}} & 1 \\
\end{matrix}
\right]
\end{aligned}
\end{equation}
Using least squares, the state vector $\textbf{x}_a$ and corresponding covariance matrix $\textbf{P}_{a}$ are estimated by
\begin{equation}
\label{eq-ls}
\hat{\textbf{x}}_{a} = \left( \textbf{H}^T_{a} \textbf{R}^{-1}_{a} \textbf{H}_{a} \right)^{-1}\textbf{H}^T_{a} \textbf{R}^{-1}_{a} \textbf{r}_{a}
\end{equation}
\begin{equation}
\label{eq-ls-p}
\hat{\textbf{P}}_{a} = \left( \textbf{H}^T_{a} \textbf{R}^{-1}_{a} \textbf{H}_{a} \right)^{-1}
\end{equation}
where $\textbf{R}_{a} =diag(\bm{\iota}^2_{r} )$ and $\bm{\iota}_{r} $ is the RSS noise vector. The sign $diag(\bm{\gamma})$ represents the diagonal matrix in which the diagonal elements are the elements in $\bm{\gamma}$.
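Since the measurement model is nonlinear in the AP coordinates, the least squares above is solved iteratively. A minimal Gauss-Newton sketch is given below (Python/NumPy; names are hypothetical, and equal RSS noise, i.e., $\textbf{R}_a = \textbf{I}$, is assumed for brevity).
\begin{verbatim}
import numpy as np

def estimate_ap(xu, yu, rss, x0, n_iter=10):
    # Estimate AP location (x, y) and path-loss parameters
    # (beta1, beta2) from DR positions (xu, yu) and RSS samples.
    x, y, b1, b2 = x0
    for _ in range(n_iter):
        d = np.sqrt((x - xu) ** 2 + (y - yu) ** 2)
        r_pred = -10.0 * b1 * np.log10(d) + b2
        H = np.column_stack(
            [-10.0 * b1 * (x - xu) / (d ** 2 * np.log(10)),
             -10.0 * b1 * (y - yu) / (d ** 2 * np.log(10)),
             -10.0 * np.log10(d),
             np.ones_like(d)])
        dx, *_ = np.linalg.lstsq(H, rss - r_pred, rcond=None)
        x, y, b1, b2 = x + dx[0], y + dx[1], b1 + dx[2], b2 + dx[3]
    P = np.linalg.inv(H.T @ H)   # covariance, cf. the equation above
    return np.array([x, y, b1, b2]), P
\end{verbatim}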
\subsubsection{Geometry Based FAI (SD)}
For multilateration, localization accuracy is correlated with the measurement geometry, which can be quantified by the DOP value \cite{Langley1999}. Thus, the scaled DOP value is used as a FAI factor. Least squares is used to compute the DOP value. The design matrix for range-based DOP calculation is
\begin{equation}
\begin{aligned}
&\textbf{H}^w_{d} =
\left[
\begin{matrix}
\frac{x_u-x_{w,1}}{d_1} & \frac{y_u-y_{w,1}}{d_1} \\
... & ... \\
\frac{x_u-x_{w,j}}{d_j} & \frac{y_u-y_{w,j}}{d_j} \\
... & ... \\
\frac{x_u-x_{w,{n_w}}}{d_{n_w}} & \frac{y_u-y_{w,{n_w}}}{d_{n_w}}
\end{matrix}
\right]
\end{aligned}
\end{equation}
where $[x_u, y_u]$ and $[x_{w,j},y_{w,j}]$ are the locations for the device and AP $j$, respectively; $d_j$ is the distance from device to AP $j$, and $n_{w}$ is the dimension of the RSS feature vector.
A similar geometrical indicator is presented for magnetic gradient fingerprints. The design matrix is
\begin{equation}
\begin{aligned}
&\textbf{H}^m_{d} =
\left[
\begin{matrix}
\frac{m_{h,1}}{\sqrt{m^2_{h,1} + m^2_{v,1}}} & \frac{m_{v,1}}{\sqrt{m^2_{h,1} + m^2_{v,1}}} \\
... & ... \\
\frac{m_{h,j}}{\sqrt{m^2_{h,j} + m^2_{v,j}}} & \frac{m_{v,j}}{\sqrt{m^2_{h,j} + m^2_{v,j}}} \\
... & ... \\
\frac{m_{h,n_m}}{\sqrt{m^2_{h,n_m} + m^2_{v,n_m}}} & \frac{m_{v,n_m}}{\sqrt{m^2_{h,n_m} + m^2_{v,n_m}}} \\
\end{matrix}
\right]
\end{aligned}
\end{equation}
where $m_{h,j}$ and $m_{v,j}$ are the $j$-th horizontal and vertical magnetic components, respectively; $n_{m}$ is the dimension of magnetic feature vector.
The matrix for DOP calculation is computed as
\begin{equation}
\textbf{M}^*_{d} = \left( \textbf{H}^{*T}_{d} \textbf{H}^*_{d} \right)^{-1}
\end{equation}
To predict the localization accuracy, the scaled DOP value is calculated by
\begin{equation}
\label{eq-sd-alpha}
[sd]^* = \alpha^*_{sd} \sqrt{{\textbf{M}}^*_{d}(1,1) + {\textbf{M}}^*_{d}(2,2)}
\end{equation}
where ${\textbf{M}}^*_{d}(i,i)$ is the $i$-th diagonal element in $\textbf{M}^*_{d}$ and $\alpha^*_{sd}$ is the scale factor for SD computation.
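A minimal sketch of the SD computation for the wireless case (Python/NumPy; names are hypothetical, with the device and AP locations taken from the crowdsourcing engine):
\begin{verbatim}
import numpy as np

def scaled_dop(xu, yu, ap_xy, alpha_sd):
    # Geometry based FAI (SD): scaled horizontal DOP from the
    # range-based design matrix.
    dx = xu - ap_xy[:, 0]
    dy = yu - ap_xy[:, 1]
    d = np.hypot(dx, dy)
    H = np.column_stack([dx / d, dy / d])
    M = np.linalg.inv(H.T @ H)
    return alpha_sd * np.sqrt(M[0, 0] + M[1, 1])
\end{verbatim}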
\subsubsection{Weighted DSF Based FAI (WD)}
\label{sec-wd-fai}
According to \cite{Lemelson2009}, the accuracy of one fingerprint can be described by its DSF, that is, the mean value of geographical distances between this fingerprint and the fingerprints that have similar features. Compared to \cite{Lemelson2009}, there are two differences in the FAI computation in this paper. First, the DSF is calculated from likelihoods instead of Euclidean distances. Second, the weighted DSF (WDSF, abbreviated as WD), instead of DSF, is used as the FAI. The equation for DSF computation is
\begin{equation}
\label{eq-dsf}
\begin{aligned}
&[dsf]_{i} = \frac{1}{\kappa_d} { \sum_{k=1}^{\kappa_d} { { \prod_{j=1}^{n} { \left( { \frac{1}{\sigma_{i,j}\sqrt{2\pi}} \exp \left( {-\frac{(l_{i_k,j}-\mu_{i,j})^2}{2\sigma_{i,j}^2}} \right) } \right) } } } }
\end{aligned}
\end{equation}
where $[dsf]_{i}$ denotes the DSF value for fingerprint $i$; $i_k$ is the fingerprint that has the $k$-th closest features to fingerprint $i$; $\mu_{i,j}$ and $\sigma_{i,j}^2$ are the mean and variance of feature $j$ in fingerprint $i$; $\kappa_d$ is the number of similar fingerprints, and $n$ is the dimension of feature vector in fingerprints.
As shown by the results in Subsections \ref{sec-res-fai-wifi} and \ref{sec-res-fai-mag}, the DSF value may suffer from blunders in fingerprinting accuracy prediction. To improve the reliability, the WD of the $\kappa$ fingerprints selected in Equation \eqref{eq-knn} is calculated by
\begin{equation}
[wd] = \frac{\sum_{i=1}^{\kappa} {\zeta_{i} [dsf]_{i}} } {\sum_{i=1}^{\kappa} {\zeta_{i}}}
\end{equation}
where $\zeta_i$ is the likelihood, which is calculated by the similarity between the measured feature vector and that in the $i$-th selected fingerprint in database.
As described in Subsection \ref{sec-sdqs}, the crowdsourced RP location uncertainty $\bm{\sigma}_{sm}$ is incorporated into each FAI by
\begin{equation}
\vartheta = \sqrt{\vartheta^2 + \bm{\sigma}^2_{sm}}, ~ \vartheta \in [[ss], [sd], [wd]]
\end{equation}
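A minimal sketch of the WD computation, including the RP location uncertainty inflation above (Python/NumPy; names are hypothetical, with the DSF values and likelihoods of the $\kappa$ selected fingerprints assumed given):
\begin{verbatim}
import numpy as np

def weighted_dsf(dsf, zeta, sigma_sm):
    # Likelihood-weighted average of the DSF values of the
    # selected fingerprints, inflated by the crowdsourced RP
    # location uncertainty sigma_sm.
    wd = np.sum(zeta * dsf) / np.sum(zeta)
    return np.sqrt(wd ** 2 + sigma_sm ** 2)
\end{verbatim}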
\subsubsection{Multi-FAI Combination}
\label{sec-mfc}
SS, SD, and WD are proposed from different levels. Results in Subsections \ref{sec-res-fai-wifi} and \ref{sec-res-fai-mag} indicate that each factor has its own advantages. Thus, these factors may be combined to take better advantage of their merits. Two combination strategies are used. The first is denoted as MC, which is the linear combination as
\begin{equation}
\label{eq-lin-model-mc}
[mc]_k = \rho_{ss} [ss]_k + \rho_{sd} [sd]_k + \rho_{wd} [wd]_k
\end{equation}
where the subscript $k$ represents the data epoch; $\rho_{\vartheta}$ ($\vartheta \in [[ss], [sd], [wd]]$) are the coefficients for combination, which can be estimated through least squares by using DR solutions selected from the method in Subsection \ref{sec-sdqs}.
The second combination strategy is denoted as MCM, which is the maximum one among these values, that is
\begin{equation}
[mcm]_k = max([ss]_k, [sd]_k, [wd]_k)
\end{equation}
The maximum value is chosen because moderately inflating the measurement noise setting is a common approach in engineering practice. The reason is that practical measurement noises may contain systematic errors, which break the Gaussian noise assumption in the EKF. In this case, using a larger measurement noise setting may enhance the capability to avoid degradation of EKF solutions.
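A minimal sketch of the two combination strategies (Python; names are hypothetical, and the default coefficients are the trained values reported in Subsection \ref{sec-test-para}):
\begin{verbatim}
def combine_fai(ss, sd, wd, rho=(0.2, 0.3, 0.5)):
    # MC: linear combination with trained coefficients;
    # MCM: maximum (most conservative) of the three FAI values.
    mc = rho[0] * ss + rho[1] * sd + rho[2] * wd
    mcm = max(ss, sd, wd)
    return mc, mcm
\end{verbatim}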
\subsection{Multi-Sensor Integration with FAI}
As mentioned above, position solutions from fingerprinting are used as EKF updates. The measurement model is
\begin{equation}
\hat{\textbf{p}}^n-\tilde{\textbf{p}}^n = \delta\textbf{p}^n + \bm{\iota}_p
\end{equation}
where $\hat{\textbf{p}}^n$ and $\tilde{\textbf{p}}^n$ denote the position predicted by the system model and that obtained from fingerprinting, respectively; $\bm{\iota}_p$ is the measurement noise vector. The variance values of position measurement noises are embedded in the position measurement noise matrix $\textbf{R}_p$. The elements in the $\textbf{R}_p$ matrix are commonly set at empirical constants or by a piece-wise model, that is
\begin{equation}
\textbf{R}_{p,k} = diag([\eta^2_{k}~~\eta^2_{k}]),
\end{equation}
\begin{equation}
\label{eq-cnd1}
\eta_{k}=\left\{
\begin{array}{ll}
\eta_1, & \textrm{if}~cond.~1\\
\eta_2, & \textrm{if}~cond.~2\\
... \\
\eta_i, & \textrm{if}~cond.~i\\
\end{array}
\right.
\end{equation}
where the sign $cond.$ represents the conditions. Since there are a limited number of conditions in Equation \eqref{eq-cnd1}, the measurement noises are constants in the short term. When there is an outlier (i.e., an instantaneous and significant location error) in the fingerprinting position update, the integrated system still sets a constant weight on the measurement. In this case, the integrated solution will be degraded.
To alleviate this issue, the FAI values are used to set the measurement noise as
\begin{equation}
\eta_{k} = \vartheta_k, ~\vartheta \in [[ss], [sd], [wd], [mc], [mcm]]
\end{equation}
The details for EKF are described in the next subsection.
\subsection{EKF Computation}
The discrete-time EKF system and measurement models can be written in a general form as
\begin{equation}
\textbf{x}_k = \varrho_{k,k-1}(\textbf{x}_{k-1}) + \textbf{w}_{k-1}, ~ \textbf{w}_{k-1} \sim N(\textbf{0}, \textbf{Q}_{k-1})
\end{equation}
\begin{equation}
\textbf{z}_k = h_{k}(\textbf{x}_{k}) + \bm{\iota}_k, ~ \bm{\iota}_k \sim N(\textbf{0}, \textbf{R}_{k})
\end{equation}
where the subscripts $k$ and $k-1$ denote the data epochs; $\varrho()$ and $h()$ represent the system and measurement models, respectively; $\textbf{x}$ and $\textbf{z}$ are the state and measurement vectors; $\textbf{w}$ and $\bm{\iota}$ represent the system and measurement noises, which satisfy $\textbf{E}[\textbf{w}_i \bm{\iota}^T_j]=\textbf{0}$ for all $i$ and $j$, where $\textbf{E}[~]$ denotes the expectation; $\textbf{Q}$ and $\textbf{R}$ are the system and measurement noise covariance matrices, and the sign $N(\bm{\gamma}_1,\bm{\gamma}_2)$ denotes the Gaussian distribution that has a mean of $\bm{\gamma}_1$ and a variance of $\bm{\gamma}_2$.
As the EKF estimates the process state and then obtains feedback from noisy measurements, it can be divided into prediction and update. In prediction, the EKF estimates the current states and uncertainties by
\begin{equation}
\textbf{x}^-_k= \bm{\Phi}_{k,k-1} \textbf{x}^+_{k-1} , ~ \bm{\Phi}_{k,k-1} \approx \frac{\partial \varrho_{k,k-1} }{\partial \textbf{x}_{k-1} }
\end{equation}
\begin{equation}
\textbf{P}^-_k= \bm{\Phi}_{k,k-1} \textbf{P}^+_{k-1} \bm{\Phi}^T_{k,k-1} + \textbf{Q}_{k-1}
\end{equation}
where $\bm{\Phi}$ is the state transition matrix and $\textbf{P}$ is the state error covariance matrix; the superscripts $-$ and $+$ indicate predicted and updated terms, respectively. Once a measurement is observed, the estimates are updated through the introduction of the Kalman filter gain $\textbf{K}_k$ by
\begin{equation}
\label{eq:kf-gain}
\textbf{K}_k= \textbf{P}^-_k \textbf{H}^T_k ( \textbf{H}_k \textbf{P}^-_k \textbf{H}^T_k + \textbf{R}_k ) ^{-1}, ~ \textbf{H}_k \approx \frac{\partial h_{k} }{\partial \textbf{x}_{k} }
\end{equation}
\begin{equation}
\label{eq:kf-gain2}
\textbf{x}^+_k= \textbf{x}^-_k + \textbf{K}_k (\textbf{z}_{k} - \textbf{H}_k \textbf{x}^-_k)
\end{equation}
\begin{equation}
\textbf{P}^+_k= (\textbf{I} - \textbf{K}_k \textbf{H}_k ) \textbf{P}^-_k
\end{equation}
where $\textbf{H}$ is the design matrix and $\textbf{I}$ is an identity matrix. From Equations \eqref{eq:kf-gain} and \eqref{eq:kf-gain2}, EKF solutions are the weighted average of predictions and measurements. In the proposed method, the errors in fingerprinting position measurements are predicted by the FAI factors. Thus, if the FAI factors can provide a reliable prediction of the fingerprinting accuracy, the EKF will adaptively increase the elements in $\textbf{R}_k$ when there is an outlier. Accordingly, the values of elements in $\textbf{K}_k$ will be decreased and the impact of measurement errors will be mitigated.
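For illustration, a minimal sketch of one EKF cycle with the FAI-driven measurement noise (Python/NumPy; names are hypothetical, and the linearized matrices are assumed given):
\begin{verbatim}
import numpy as np

def ekf_step(x, P, Phi, Q, z, H, fai):
    # Prediction.
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    # FAI-enhanced update: R is rebuilt each epoch from the
    # predicted fingerprinting accuracy, so outlying position
    # updates are automatically de-weighted.
    R = np.diag([fai ** 2, fai ** 2])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
\end{verbatim}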
\section{Tests and Results}
\label{sec:test}
Figure \ref{fig:flow-chart-test} shows the organization of this section. The test description is first provided, followed by the generated fingerprinting databases. Meanwhile, the parameter setting for the tests is illustrated. Afterwards, the results for WiFi and magnetic fingerprinting, the existing DR/WiFi, DR/Magnetic, and DR/WiFi/Magnetic integration methods, RSS and magnetic FAI prediction, and the FAI-enhanced methods are demonstrated.
\begin{figure}
\centering
\includegraphics[width=0.48 \textwidth]{flow-chart-test}
\caption{ Flow chart of tests }
\label{fig:flow-chart-test}
\end{figure}
\subsection{Test Description}
Field tests were conducted on the main floor of the MacEwan Student Center at the University of Calgary. The test area was a typical indoor shopping mall environment. Figure \ref{fig:machall} shows pictures of the test area. There were stores around the space, and seats, walls, and open spaces in the middle area. Therefore, it was a complex indoor environment that involved both line-of-sight and non-line-of-sight areas, as well as open areas and corridors. Moreover, there were distinct magnetic features in some areas and a flat magnetic field in others. The size of the test area was around 160 m by 90 m.
Five Android smart devices were used, including a Samsung Galaxy S4, a Galaxy S7, a Huawei P10, a Lenovo Phab 2 Pro phone, and a Nexus 9 tablet. All devices were equipped with three-axis gyros, accelerometers, and magnetometers, a WiFi receiver, and a global positioning system (GPS) receiver. The data rates were set at 20 Hz for gyros, accelerometers, and magnetometers, 0.5 Hz for WiFi, and 0.1 Hz for GPS. All sensor data were logged into files for post-processing. Data from various sensors were synchronized by the coordinated universal time (UTC) timestamps in Android.
For testing, five testers held the devices and each walked four trajectories, each lasting over 15 minutes. The reference trajectories were generated by using a Lenovo Phab 2 Pro smartphone to collect red-green-blue-depth (RGB-D) images and generate SLAM solutions. The location accuracy of the SLAM solutions was evaluated by using landmark points from a commercial floor map of the test area. The SLAM location solutions had an accuracy of 0.1 m and thus could be used as references for the proposed method.
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{machall}
\caption{Test environment}
\label{fig:machall}
\end{figure}
\subsection{Generated Database}
Five testers held the devices and walked randomly or along guided routes around the test area for half an hour. Figure \ref{fig:gps} (left) shows the GPS localization points, while Figure \ref{fig:gps} (right) illustrates the zoomed-in figure, which contains the indoor map for the test area. GPS provided continuous localization solutions in outdoor areas, but suffered from signal outages indoors. Thus, it is difficult to use GPS to generate indoor RP locations. Sensor-based DR solutions can be used to fill this gap.
\begin{figure}
\centering
\includegraphics[width=0.40 \textwidth]{gps}
\caption{GPS trajectories with smartphones}
\label{fig:gps}
\end{figure}
The crowdsourced sensor data selection approach in Subsection \ref{sec-sdqs} was used to select reliable sensor data. Figure \ref{fig:pdr} illustrates an example of the selected sensor data. Both start and end points of this path had GPS locations outdoors. If using the quality assessment conditions in \cite{ZhangP2018}, this data was not qualified because it lasted for a time period that was longer than the threshold (i.e., 10 minutes). However, this data was qualified under the new quality assessment condition because the localization errors (compared to GPS solutions) at the end of both forward and backward DR solutions were within the distance threshold $\lambda_d$ (i.e., 20 m).
\begin{figure}
\centering
\includegraphics[width=0.28 \textwidth]{pdr}
\caption{Example of smoothed DR trajectory}
\label{fig:pdr}
\end{figure}
The smoothed DR locations were combined with the RSS and magnetic data to generate the fingerprinting databases. For this purpose, the two-dimensional space was first divided into grids that had a length of $l_g$ (i.e., 3 m). Then, the fingerprints located within each grid were combined to train the model for this grid. Both the mean and variance values for each feature were calculated. If the number of data within a grid was less than the threshold $\lambda_{n,1}$ (i.e., 5), the grid was discarded. Meanwhile, if the number of data within a grid was less than the threshold $\lambda_{n,2}$ (i.e., 20), a default variance value was used, which was (5 dBm)$^2$ for RSS and (0.03 Gauss)$^2$ for the magnetic data.
Figure \ref{fig:mag-db} demonstrates the distribution of the mean horizontal and vertical magnetic intensities over the space, and Figure \ref{fig:wifi-db} shows the signal distribution of 10 APs selected for localization. Figure \ref{fig:ap-pp} shows the estimated AP locations (left) and PP values (right) through the method in Subsection \ref{sec-ss}. The uncertainties for the estimated AP locations are demonstrated by ellipses.
\begin{figure}
\centering
\includegraphics[width=0.42 \textwidth]{mag-db}
\caption{Magnetic map generated using crowdsourced data}
\label{fig:mag-db}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.42 \textwidth]{wifi-db}
\caption{WiFi RSS map generated using crowdsourced data}
\label{fig:wifi-db}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{ap-pp}
\caption{WiFi AP location and PP estimation solution}
\label{fig:ap-pp}
\end{figure}
\subsection{Parameter Setting}
\label{sec-test-para}
The EKF parameters include the initial states (i.e., the initial position vector $\textbf{p}^n(0)$, velocity vector $\textbf{v}^n(0)$, attitude vector $\bm{\psi}(0)$, gyro bias vector $\textbf{b}_{g}(0)$, and accelerometer bias vector $\textbf{b}_{a}(0)$), the initial EKF state vector (i.e., $\textbf{x}(0)$=$[\delta \textbf{p}^n(0) ~ \delta \textbf{v}^n(0)~ \delta \bm{\psi}(0)~ \textbf{b}_g(0)~ \textbf{b}_a(0)]$) and the corresponding initial state error covariance matrix (i.e., $\textbf{P}(0)$), the covariance matrix of system noises (i.e., $\textbf{Q}$) and that of measurement noises (i.e., $\textbf{R}$), including $\textbf{R}_f$, $\textbf{R}_m$, $\textbf{R}_v$, $\textbf{R}_{\omega}$, $\textbf{R}_{wifi}$, and $\textbf{R}_{mag}$, which represent the noise covariance matrices for accelerometer measurements, magnetometer measurements, velocity/zero velocity updates, zero angular rate updates, WiFi fingerprinting updates, and magnetic fingerprinting updates, respectively. The sign $\gamma(0)$ represents the initial value for the term $\gamma$. The parameters were set at
$\textbf{p}^n(0) = \textbf{p}_{wifi}(0); ~\textbf{v}^n(0) = \textbf{0}; ~\bm{\psi}(0)=[\phi(0)~\theta(0)~\psi(0)]^T$
$\textbf{x}(0) = [\textbf{0}~\textbf{0}~\textbf{0}~\textbf{0}~\textbf{0}]^T$
$\textbf{P}(0) = diag([\textbf{var}_{\textbf{p}^n(0)}~ \textbf{var}_{\textbf{v}^n(0)} ~ \textbf{var}_{\bm{\psi}(0)} ~ \textbf{var}_{\textbf{b}_{g}(0)} ~ \textbf{var}_{\textbf{b}_{a}(0) }])$
$\textbf{Q} = diag( [\textbf{0} ~ \textbf{vrw}^2 ~\textbf{arw}^2 ~ \bm{\epsilon}^2_g ~ \bm{\epsilon}^2_a]) $
$\textbf{R}_f = diag( [\textbf{var}_{f}]); ~ \textbf{R}_m = diag( [\textbf{var}_{m}])$
$\textbf{R}_v = diag( [\textbf{var}_{v}]); ~ \textbf{R}_{\omega} = diag( [\textbf{var}_{\omega}])$
$\textbf{R}_{wifi} = diag( [\textbf{var}_{wifi}]); ~ \textbf{R}_{mag} = diag( [\textbf{var}_{mag}])$ \\
where $\textbf{p}_{wifi}(0)$ is the initial location from WiFi fingerprinting. $\phi(0)$ and $\theta(0)$ are the initial roll and pitch angles calculated from the accelerometer data, while $\psi(0)$ is the initial magnetometer heading. The values of $\textbf{var}_{\textbf{p}^n(0)}$, $\textbf{var}_{\textbf{v}^n(0)}$, $\textbf{var}_{\bm{\psi}(0)} $, $\textbf{var}_{\textbf{b}_{g}(0)}$, and $\textbf{var}_{\textbf{b}_{a}(0) }$ were set at $([ 20~20~20 ]~ m)^2$, $( [ 1~1~1]~ m/s )^2$, $( [ 10~10~90 ] ~deg )^2$, $( [ 1~1~1 ] ~deg/s)^2$, and $( [ 0.1~0.1~0.1 ] ~m/s^2 )^2$, respectively. The $\textbf{arw}$, $\textbf{vrw}$, $\bm{\epsilon}_g$, and $\bm{\epsilon}_a$ values were set according to the data sheet of typical low-cost inertial sensors \cite{MPU-6500} as $( [ 0.6~0.6~0.6]~ deg/\sqrt{h} )^2$, $( [ 0.18~0.18~0.18 ] ~m/s/\sqrt{h} )^2$, $( [ 0.05~0.05~0.05 ] ~deg/s)^2$, and $( [ 0.01~0.01~0.01 ] ~m/s^2 )^2$, respectively. The values of $\textbf{var}_{f}$, $\textbf{var}_{m}$, $\textbf{var}_{v}$, and $\textbf{var}_{\omega}$ were set at $( [ 2~2~2]~ m/s^2 )^2$, $( [ 0.03~0.03~0.03 ] ~Gauss )^2$, $( [ 0.3~0.3~0.3 ] ~m/s)^2$, and $( [ 0.1~0.1~0.1 ] ~deg/s )^2$, respectively.
Six strategies (i.e., the constant fingerprinting accuracy (CT), SS, SD, WD, MC, and MCM) were used to set values for $\textbf{var}_{wifi}$ and $\textbf{var}_{mag}$. CT represents setting constant values for WiFi and magnetic fingerprinting results, which were set at $([6~6~6]~ m)^2$ and $([5~5~5]~ m)^2$ respectively, according to the statistics of preliminary data. The parameters $\alpha^w_{ss}$ and $\alpha^m_{ss}$ in Equation \eqref{eq-ss-alpha} were set at 0.2 and 50, respectively. The parameters $\alpha^w_{sd}$ and $\alpha^m_{sd}$ in Equation \eqref{eq-sd-alpha} were set at 5 and 1, respectively. The trained $\rho_{ss}$, $\rho_{sd}$, and $\rho_{wd}$ values in Equation \eqref{eq-lin-model-mc} were approximately 0.2, 0.3, and 0.5, respectively. The $\kappa$ and $\kappa_d$ values were set at 5.
\subsection{Fingerprinting Results and Integration with DR}
Figure \ref{fig:wm-loc} shows an example WiFi and magnetic localization solution through the method in Subsection \ref{sec-fp1}, as well as their integration with DR by using CT in Subsection \ref{sec-test-para}. Figure \ref{fig:wm-err} (left) demonstrates the zoomed-in solutions, while Figure \ref{fig:wm-err} (right) shows the cumulative distribution function (CDF) of localization errors from all test data sets. Table \ref{tab:wm-loc-err} illustrates the error statistics, including the standard deviation (STD), the mean value, the RMS, the error within which the probability is 80 \% (i.e., the 80 \% error), the error within which the probability is 95 \% (i.e., the 95 \% error), and the maximum value. It is shown that
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{wm-loc}
\caption{WiFi, magnetic, DR/WiFi-CT, and DR/Magnetic-CT location solutions \cite{LiY2017}}
\label{fig:wm-loc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{wm-err}
\caption{Zoomed-in WiFi, magnetic, DR/WiFi-CT, and DR/Magnetic-CT location solutions \cite{LiY2017} (left) and CDF of location errors (right)}
\label{fig:wm-err}
\end{figure}
\begin{itemize}
\item In Figures \ref{fig:wm-loc} and \ref{fig:wm-err}, both the WiFi and magnetic results generally fit the reference trajectories. However, the WiFi solutions suffered from several outliers, while the magnetic solutions suffered from regional errors (i.e., results were biased regionally). These outcomes indicate the importance of accuracy prediction for both WiFi and magnetic fingerprinting, as the accuracy may vary in real time.
\item Through integration with DR, the location error RMS values were reduced by 27.1 \% (from 5.9 m to 4.3 m) and 31.1 \% (from 4.5 m to 3.1 m) for WiFi and magnetic fingerprinting, respectively. Meanwhile, the maximum errors were reduced by 27.3 \% (from 27.4 m to 19.9 m) for WiFi and 25.5 \% (from 13.7 m to 10.2 m) for magnetic. These outcomes show the effectiveness of traditional DR/WiFi and DR/Magnetic integration methods \cite{LiY2017} in improving both accuracy and reliability (i.e., the capability to resist outliers). However, the maximum error for DR/WiFi integration is still high and needs to be further reduced. This issue is alleviated in the following subsections.
\end{itemize}
\begin{table}
\centering
\begin{tabular}{p{1.5cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} }
\hline
\textbf{Strategy} & \textbf{STD} ($m$) & \textbf{Mean} ($m$) & \textbf{RMS} ($m$) & \textbf{80\%} ($m$) & \textbf{95\%} ($m$) & \textbf{Max} ($m$) \\ \hline
W & 3.7 & 4.6 & 5.9 & 7.0 & 12.6 & 27.4 \\
M & 2.5 & 3.7 & 4.5 & 5.8 & 9.2 & 13.7 \\
DW-CT & 2.7 & 3.4 & 4.3 & 5.1 & 8.4 & 19.9 \\
DM-CT & 1.9 & 2.5 & 3.1 & 4.1 & 6.1 & 10.2 \\ \hline
\end{tabular}
\caption{Statistics of location errors for WiFi, magnetic, and their integration with DR}
\label{tab:wm-loc-err}
\end{table}
\subsection{RSS FAI Results}
\subsubsection{WiFi Fingerprinting Accuracy Prediction Results}
\label{sec-res-fai-wifi}
To investigate the FAI accuracy, Figure \ref{fig:wifi-err-ss} demonstrates an example of real-time fingerprinting accuracy prediction solutions from SS, SD, DSF \cite{Lemelson2009}, WD, MC, and MCM, as well as the location error time series. It is shown that
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{wifi-err-ss}
\caption{Actual WiFi fingerprinting errors and those predicted by various FAI factors, including SS, SD, DSF \cite{Lemelson2009}, WD, MC, and MCM}
\label{fig:wifi-err-ss}
\end{figure}
\begin{itemize}
\item The SS time series followed a trend similar to the location errors in the long term (e.g., at the over-100 s time level). However, SS was not sensitive to short-term (e.g., at the 10 s time level) location errors and outliers. This phenomenon can be explained by the limited SS resolution. According to preliminary results, SS was useful for wide-area applications (e.g., building or room detection), but did not have enough resolution for applications within a small area (e.g., moving/static detection).
\item SD predicted several outliers (e.g., at around 450 s) and short-term location errors (e.g., those during 200 to 400 s). However, the SD time series was not clearly correlated with location errors in the long term. One possible reason is that the performance of the path-loss model is degraded on some occasions. Thus, SD may be used to detect outliers; however, it is not a reliable indicator for long-term location errors.
\item DSF predicted the most position errors and outliers. However, the DSF time series was ``noisy". This outcome was consistent with the analysis in Subsection \ref{sec-wd-fai}. Through weighted averaging, the WD time series became smoother and had a more significant correlation with location errors. Time periods that had larger location errors generally had higher WD values. Meanwhile, WD was able to detect outliers. From these outcomes, WD was more accurate than DSF for WiFi fingerprinting accuracy prediction.
\item Similar to WD, both the MC and MCM time series were correlated with location errors. The difference was that the MC time series was smoother, while MCM was more sensitive to outliers. There were occasions at which the predicted MCM value was larger than the actual location error. This phenomenon is acceptable, as discussed in Subsection \ref{sec-mfc}.
\end{itemize}
To evaluate the prediction accuracy, the correlation coefficients $\textit{c}$ between actual location errors and those predicted by FAI factors were calculated by
\begin{equation}
\begin{aligned}
\textit{c}_{\vartheta} = \frac { \sum_{i=1}^{n_e} { (e_{a,i}- \overline{\textbf{e}_a }) (e_{{\vartheta},i}- \overline{\textbf{e}_{\vartheta} }) } }
{ \sqrt{ \sum_{i=1}^{n_e}{ (e_{a,i} - \overline{\textbf{e}_a})^2 } } \sqrt{ \sum_{i=1}^{n_e}{ (e_{{\vartheta},i} - \overline{\textbf{e}_{\vartheta}})^2 } } }
\end{aligned}
\end{equation}
where $\textbf{e}_a$ represents the time series of actual location errors, and $\textbf{e}_{\vartheta}$ is the time series of location errors predicted by the FAI factor ${\vartheta}$, where ${\vartheta} \in [[ct],[ss], [sd], [dsf], [wd], [mc], [mcm]]$; $e_{a,i}$ is the $i$-th value in $\textbf{e}_a$, and $n_e$ is the number of location errors. The sign $\overline{\bm{\gamma}}$ represents the mean value of $\bm{\gamma}$. Table \ref{tab:cc-wifi} shows the correlation coefficients for multiple FAI factors; a minimal computation sketch is given after the following observations.
\begin{itemize}
\item Excluding CT, WD had the highest correlation coefficient (0.46), while SS had the lowest (0.21). WD, MC, and MCM had correlation coefficients higher than 0.30. Such results are acceptable considering that there were outliers, which are difficult to model and might reduce the correlation coefficient values.
\end{itemize}
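For reference, the coefficient above is the standard Pearson correlation; a minimal Python/NumPy check (names are hypothetical) is:
\begin{verbatim}
import numpy as np

def fai_correlation(e_actual, e_pred):
    # Pearson correlation between actual location errors and
    # FAI-predicted errors.
    return np.corrcoef(e_actual, e_pred)[0, 1]
\end{verbatim}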
\begin{table}
\centering
\begin{tabular}{p{0.77cm} p{0.75cm} p{0.75cm} p{0.9cm} p{0.85cm} p{0.85cm} p{1.1cm} }
\hline
\textbf{W-CT} & \textbf{W-SS} & \textbf{W-SD} & \textbf{W-DSF} &\textbf{W-WD} & \textbf{W-MC} & \textbf{W-MCM} \\ \hline
0.00 & 0.21 & 0.30 & 0.24 & 0.46 & 0.35 & 0.33 \\ \hline
\end{tabular}
\caption{Correlation coefficients between actual WiFi fingerprinting errors and those predicted by FAI}
\label{tab:cc-wifi}
\end{table}
Table \ref{tab:comp-fai-wifi} summarizes the WiFi FAI factors by their performance in predicting long-term and short-term location errors and outliers, and by their computational complexity. The grades S, M, and W represent strong, medium, and weak capabilities, respectively.
\begin{itemize}
\item Generally, WD provided the most accurate prediction on both location errors and outliers, while SS was weak in tracking short-term errors and outliers, and SD was relatively weak in indicating long-term errors. MC was strong in predicting location errors but missed several outliers, while MCM predicted the majority of location errors and outliers.
\item On the other hand, the time complexity of WD, MC, and MCM is $O(n_{p} n_w)$, which is higher than that of CT ($O(1)$), SS ($O(n_w)$), and SD ($O(n_w)$). $n_w$ represents the dimension of the RSS feature vector and $n_p$ denotes the number of RPs in the database.
\end{itemize}
\begin{table}
\centering
\begin{tabular}{p{1.2cm} p{1.3cm} p{1.35cm} p{1.25cm} p{1.25cm} }
\hline
\textbf{Strategy} & \textbf{Long-term} & \textbf{Short-term} & \textbf{Outlier} & \textbf{Complexity} \\ \hline
W-CT & W* & W & W & $O(1)$ \\
W-SS & M & W & W & $O(n_w)$ \\
W-SD & W & M & M & $O(n_w)$ \\
W-WD & S & S & M & $O(n_{p} n_w)$ \\
W-MC & S & S & M & $O(n_{p} n_w)$ \\
W-MCM & S & S & S & $O(n_{p} n_w)$ \\ \hline
\end{tabular}
\begin{tablenotes}
\item[*] * S-strong; M-medium; W-weak; $n_w$-dimension of RSS feature vector; $n_p$-number of RPs in database
\end{tablenotes}
\caption{ Performance of WiFi FAI factors }
\label{tab:comp-fai-wifi}
\end{table}
\subsubsection{FAI-enhanced DR/WiFi Integration Results}
Figure \ref{fig:dw-loc} shows a set of DR/WiFi location solutions that used various strategies for setting WiFi position measurement noises, while Figure \ref{fig:dw-err} (left) demonstrates the zoomed-in solutions and Figure \ref{fig:dw-err} (right) shows the CDF of location errors from all test data. Table \ref{tab:dw-loc-err} illustrates the corresponding error statistics. Figures \ref{fig:dw-loc}, \ref{fig:dw-err} and Table \ref{tab:dw-loc-err} indicate that
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{dw-loc}
\caption{DR/WiFi solutions with various FAI strategies}
\label{fig:dw-loc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{dw-err}
\caption{Zoomed-in DR/WiFi location solutions (left) and CDF of location errors (right) with various FAI strategies}
\label{fig:dw-err}
\end{figure}
\begin{table}
\centering
\begin{tabular}{p{1.5cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} }
\hline
\textbf{Strategy} & \textbf{STD} ($m$) & \textbf{Mean} ($m$) & \textbf{RMS} ($m$) & \textbf{80\%} ($m$) & \textbf{95\%} ($m$) & \textbf{Max} ($m$) \\ \hline
DW-CT & 2.7 & 3.4 & 4.3 & 5.1 & 8.4 & 19.9 \\
DW-SS & 2.1 & 3.0 & 3.6 & 4.5 & 7.1 & 13.3 \\
DW-SD & 2.0 & 3.0 & 3.7 & 4.6 & 6.7 & 12.7 \\
DW-WD & 1.8 & 2.6 & 3.1 & 3.9 & 5.8 & 13.8 \\
DW-MC & 1.8 & 2.7 & 3.3 & 4.2 & 6.0 & 12.0 \\
DW-MCM & 1.7 & 2.5 & 3.0 & 3.8 & 5.5 & 11.7 \\ \hline
\end{tabular}
\caption{Statistics of DR/WiFi integrated location errors}
\label{tab:dw-loc-err}
\end{table}
\begin{itemize}
\item The use of SS, SD, WD, MC, and MCM improved the DR/WiFi integrated location accuracy (RMS error) by 16.3 \%, 14.0 \%, 27.9 \%, 23.3 \%, and 30.2 \%, respectively, and reduced the maximum errors by 33.2 \%, 36.2 \%, 30.6 \%, 39.7 \%, and 41.2 \%, respectively. These outcomes illustrate the effectiveness of FAI in enhancing DR/WiFi integrated localization. A sketch of the underlying noise-setting mechanism is given after this list.
\item DW-WD provided higher prediction accuracy than DW-SS and DW-SD but suffered from the largest maximum error among the FAI strategies. This outcome is consistent with the FAI prediction results, as WD had an advantage in predicting long-term and short-term errors rather than outliers.
\item The performance of DW-MC was a trade-off among DW-SS, DW-SD, and DW-WD: DW-MC was less accurate than DW-WD but stronger in reducing maximum errors. DW-MCM also provided accurate and reliable location solutions. Therefore, although SS and SD did not provide predictions as accurate as WD, they may still be combined with WD to further enhance fingerprinting accuracy prediction.
\end{itemize}
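Although the exact noise-setting scheme is implementation-specific, the mechanism can be sketched briefly. The following minimal Python sketch reflects our assumption that the FAI-predicted error is used as the standard deviation of the WiFi position measurement noise in the EKF; the function name and the floor value are illustrative only.
\begin{verbatim}
import numpy as np

def wifi_measurement_noise(fai_predicted_error, floor=1.0):
    # Assumption: the FAI-predicted location error (in metres) acts as
    # the standard deviation of the WiFi position measurement; the floor
    # keeps the filter from becoming overconfident on small predictions.
    sigma = max(float(fai_predicted_error), floor)
    return np.diag([sigma ** 2, sigma ** 2])  # 2x2 covariance R for (x, y)
\end{verbatim}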
\subsection{Magnetic FAI Results}
\subsubsection{Magnetic Fingerprinting Accuracy Prediction Results}
\label{sec-res-fai-mag}
Figure \ref{fig:mag-err-ss} demonstrates the real-time magnetic fingerprinting errors predicted by SS, SD, DSF, WD, MC, and MCM, respectively. It is shown that
\begin{itemize}
\item Compared with WiFi, the magnetic fingerprinting errors had more significant long-term trends because magnetic data was more susceptible to regional localization errors.
\item WD was capable of tracking most long-term errors, while SS and SD tracked parts of them (e.g., 800 to 1000 s for SS, and 200 to 400 s for SD) but missed others.
\item Both MC and MCM predicted the majority of location errors. Compared to MC, MCM had a larger fluctuation range.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{mag-err-ss}
\caption{Actual magnetic fingerprinting errors and those predicted by various FAI factors}
\label{fig:mag-err-ss}
\end{figure}
Table \ref{tab:cc-mag} demonstrates the correlation coefficients between actual magnetic fingerprinting errors and those predicted by FAI factors. It can be found that
\begin{itemize}
\item Compared to WiFi fingerprinting, the magnetic correlation coefficient was higher for WD (0.60) but lower for SS (0.12) and SD (0.19). The higher WD value was partly because there were fewer outliers in the magnetic fingerprinting results. The low values for SS and SD indicate the difficulty of predicting magnetic fingerprinting errors at the signal strength or geometric levels, as magnetic data has a lower dimension.
\item After combination, MC and MCM had similar correlation coefficient values (i.e., 0.51 and 0.45). Similar to WiFi, WD, MCM, and MC were the three FAI factors that had the highest correlation coefficients for magnetic fingerprinting.
\end{itemize}
\begin{table}
\centering
\begin{tabular}{p{0.77cm} p{0.75cm} p{0.75cm} p{0.9cm} p{0.85cm} p{0.85cm} p{1.12cm} }
\hline
\textbf{M-CT} & \textbf{M-SS} & \textbf{M-SD} & \textbf{M-DSF} & \textbf{M-WD} & \textbf{M-MC} & \textbf{M-MCM} \\ \hline
0.00 & 0.12 & 0.19 & 0.37 & 0.60 & 0.51 & 0.45 \\ \hline
\end{tabular}
\caption{Correlation coefficients between actual magnetic fingerprinting errors and those predicted by FAI}
\label{tab:cc-mag}
\end{table}
Table \ref{tab:comp-fai-mag} summarizes the performance of the magnetic FAI factors in predicting location errors and outliers, together with their computational complexity. A main difference from the WiFi results is that SD becomes weaker in detecting outliers in magnetic data, while WD becomes stronger in detecting them.
\begin{table}
\centering
\begin{tabular}{p{1.2cm} p{1.3cm} p{1.35cm} p{1.25cm} p{1.25cm} }
\hline
\textbf{Strategy} & \textbf{Long-term} & \textbf{Short-term} & \textbf{Outlier} & \textbf{Complexity} \\ \hline
M-CT & W* & W & W & $O(1)$ \\
M-SS & M & W & W & $O(n_m)$ \\
M-SD & W & M & W & $O(n_m)$ \\
M-WD & S & S & S & $O(n_{p} n_m)$ \\
M-MC & S & S & M & $O(n_{p} n_m)$ \\
M-MCM & S & S & S & $O(n_{p} n_m)$ \\ \hline
\end{tabular}
\begin{tablenotes}
\item[*] * S-strong; M-medium; W-weak; $n_m$-dimension of magnetic feature vector; $n_p$-number of RPs in database
\end{tablenotes}
\caption{ Performance of magnetic FAI factors }
\label{tab:comp-fai-mag}
\end{table}
\subsubsection{FAI-enhanced DR/Magnetic Integration Results}
Figure \ref{fig:dm-loc} shows a set of DR/Magnetic localization solutions that used various strategies for setting the position measurement noise, while Figure \ref{fig:dm-err} (left) shows the zoomed-in solutions and Figure \ref{fig:dm-err} (right) shows the CDF of location errors from all test data. Table \ref{tab:dm-loc-err} lists the corresponding error statistics. Figures \ref{fig:dm-loc}, \ref{fig:dm-err} and Table \ref{tab:dm-loc-err} show that
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{dm-loc}
\caption{DR/Magnetic solutions with various FAI strategies}
\label{fig:dm-loc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{dm-err}
\caption{Zoomed-in DR/Magnetic location solutions (left) and CDF of location errors (right) with various FAI strategies}
\label{fig:dm-err}
\end{figure}
\begin{table}
\centering
\begin{tabular}{p{1.5cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} }
\hline
\textbf{Strategy} & \textbf{STD} ($m$) & \textbf{Mean} ($m$) & \textbf{RMS} ($m$) & \textbf{80\%} ($m$) & \textbf{95\%} ($m$) & \textbf{Max} ($m$) \\ \hline
DM-CT & 1.9 & 2.5 & 3.1 & 4.1 & 6.1 & 10.2 \\
DM-SS & 1.8 & 2.4 & 3.0 & 3.9 & 5.8 & 10.1 \\
DM-SD & 1.9 & 2.7 & 3.3 & 4.3 & 6.5 & 10.8 \\
DM-WD & 1.4 & 2.1 & 2.6 & 3.4 & 4.9 & 7.7 \\
DM-MC & 1.6 & 2.3 & 2.9 & 3.8 & 5.4 & 9.0 \\
DM-MCM & 1.4 & 2.1 & 2.5 & 3.3 & 4.7 & 7.4 \\ \hline
\end{tabular}
\caption{Statistics of DR/Magnetic location errors}
\label{tab:dm-loc-err}
\end{table}
\begin{itemize}
\item DM-SS provided similar localization accuracy to DM-CT, while DM-SD degraded the solutions by 6.1 \%. This outcome indicates the importance of choosing a proper FAI strategy.
\item Similar to the DR/WiFi case, DM-WD, DM-MC, and DM-MCM improved location accuracy by 16.1 \%, 6.5 \%, and 19.4 \%, and reduced the maximum errors by 24.5 \%, 15.7 \%, and 28.4 \%, respectively.
\item In contrast to DW-WD, DM-WD provided lower maximum errors because the magnetic fingerprinting solutions had fewer outliers.
\end{itemize}
\subsection{FAI-enhanced DR/WiFi/Magnetic Integration Results}
Figure \ref{fig:dwm-loc} shows a set of DR/WiFi/Magnetic solutions that used various strategies for setting the position measurement noise, while Figure \ref{fig:dwm-err} (left) shows the zoomed-in solutions and Figure \ref{fig:dwm-err} (right) shows the CDF of localization errors from all test data. Table \ref{tab:dwm-loc-err} lists the error statistics. It is shown that
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{dwm-loc}
\caption{DR/WiFi/Magnetic solutions with various FAI strategies}
\label{fig:dwm-loc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{dwm-err}
\caption{Zoomed-in DR/WiFi/Magnetic location solutions (left) and CDF of location errors (right) with various FAI strategies}
\label{fig:dwm-err}
\end{figure}
\begin{table}
\centering
\begin{tabular}{p{1.5cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} p{0.6cm} }
\hline
\textbf{Strategy} & \textbf{STD} ($m$) & \textbf{Mean} ($m$) & \textbf{RMS} ($m$) & \textbf{80\%} ($m$) & \textbf{95\%} ($m$) & \textbf{Max} ($m$) \\ \hline
DWM-CT & 1.7 & 2.7 & 3.1 & 3.9 & 5.9 & 11.3 \\
DWM-SS & 1.3 & 2.3 & 2.7 & 3.4 & 4.8 & 7.2 \\
DWM-SD & 1.4 & 2.4 & 2.8 & 3.5 & 5.0 & 7.3 \\
DWM-WD & 1.2 & 2.0 & 2.3 & 2.9 & 4.1 & 7.7 \\
DWM-MC & 1.2 & 2.2 & 2.5 & 3.1 & 4.5 & 6.6 \\
DWM-MCM & 1.1 & 1.9 & 2.2 & 2.8 & 3.9 & 6.3 \\ \hline
\end{tabular}
\caption{Statistics of DR/WiFi/Magnetic integrated location errors}
\label{tab:dwm-loc-err}
\end{table}
\begin{itemize}
\item Consistent with existing works, the DR/WiFi/Magnetic integrated location solutions were more accurate than those from DR/WiFi and DR/Magnetic integration.
\item The introduction of FAI further improved the DR/WiFi/Magnetic solution. Specifically, DWM-SS, DWM-SD, DWM-WD, DWM-MC, and DWM-MCM improved location accuracy by 12.9 \%, 9.7 \%, 25.8 \%, 19.4 \%, and 29.0 \%, and reduced the maximum errors by 36.3 \%, 35.4 \%, 31.9 \%, 41.6 \%, and 44.2 \%, respectively, compared to the DWM-CT case.
\item MCM, WD, and MC were the top three FAI strategies in terms of DR/WiFi/Magnetic integrated localization accuracy, while MCM and MC were the strongest FAI factors in resisting outliers.
\end{itemize}
Table \ref{tab:dw-top-three} lists the top three FAI strategies in terms of accuracy and reliability, based on a comprehensive analysis of the DR/WiFi, DR/Magnetic, and DR/WiFi/Magnetic integrated solutions. MCM, WD, and MC were the three most effective FAI strategies in our tests. Note that the purpose of Table \ref{tab:dw-top-three} is to summarize the test results, not to conclude that MCM is a superior FAI factor to the others, since the performance of FAI factors may vary with the real-world application scenario. However, the proposed crowdsourcing-based localization method can be straightforwardly applied in other scenarios and applications. Meanwhile, the outcomes of this paper provide a reference for using FAI to predict fingerprinting accuracy and to enhance multi-sensor integrated localization.
\begin{table}
\centering
\begin{tabular}{p{2.0cm} p{0.8cm} p{0.8cm} p{0.8cm} }
\hline
\textbf{Term} & \textbf{DW} & \textbf{DM} & \textbf{DWM} \\ \hline
Accuracy 1\textsuperscript{st} & MCM & MCM & MCM \\
Accuracy 2\textsuperscript{nd} & WD & WD & WD \\
Accuracy 3\textsuperscript{rd} & MC & MC & MC \\
Reliability 1\textsuperscript{st} & MCM & MCM & MCM \\
Reliability 2\textsuperscript{nd} & MC & MC & MC \\
Reliability 3\textsuperscript{rd} & SD & WD & SS \\ \hline
\end{tabular}
\caption{FAI factors with highest prediction accuracy and reliability}
\label{tab:dw-top-three}
\end{table}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Conclusions}
\label{sec-conclusionsl-work}
The unpredictability of fingerprinting accuracy has greatly limited localization-enhanced IoT applications. To date, most indoor localization research has focused on improving localization techniques, while indicator metrics for localization accuracy, especially fingerprinting accuracy, have received little investigation. Therefore, this paper introduces the fingerprinting accuracy indicator (FAI) into multi-sensor integrated localization to fill this gap. FAI factors from the signal strength, geometry, and database levels, as well as their combinations, have been proposed and evaluated. It was found that (a) the proposed weighted distance between similar fingerprints (weighted DSF) based FAI was effective in predicting both location errors and outliers; (b) the geometry based FAI was stronger in predicting short-term errors and outliers; and (c) the signal strength based FAI was stronger in predicting long-term location errors. Due to these complementary characteristics, FAI combination strategies were adopted to provide accurate and robust fingerprinting accuracy prediction.
Furthermore, this paper proposes an enhancement to the crowdsourcing-based localization method, which does not involve user intervention or parameter tuning. Crowdsourced sensor data is used to update both the wireless and magnetic databases simultaneously while localizing. Additionally, the FAI-enhanced dead-reckoning/wireless/magnetic integrated extended Kalman filter (EKF) is used for robust localization. Using off-the-shelf inertial sensors in consumer devices, existing WiFi access points (which provided a location error root mean square (RMS) of 5.9 m and a maximum error of 27.4 m), and the indoor magnetic field (which provided a location error RMS of 4.5 m and a maximum error of 13.1 m), the FAI-enhanced EKF provided localization solutions with an error RMS of 2.2 m and a maximum error of 6.3 m. These values were respectively 29.0 \% and 44.2 \% more accurate than those from the EKF without FAI. These outcomes indicate the effectiveness of the proposed FAI in improving both the accuracy and the reliability of multi-sensor integrated localization.
For many security problems, researchers are relying on binary code analysis, as they need to inspect binary executable program files without access to any source code.
This is often needed when analyzing commercial code that is protected by intellectual property and its source code is not available, but can be also useful in other scenarios.
Those include dealing with unsupported or legacy executables, where the information about the exact version of the source code is lost, or even the original source code itself may be lost.
Additionally, it is frequently used in testing to improve the security of a system, as in black-box penetration testing, where the goal is to check the binary for any weaknesses or vulnerabilities that could potentially be abused.
Finally, it is an important part of investigating how hard it is to recover key parts of an algorithm, \textit{e.g.,} for the sake of preventing intellectual property theft.
The translation process from source code to binary executable programs, also called compilation, is a lossy process in the sense that only basic low-level instructions and data representations understood by the target CPU are preserved. Because of this, it is impossible in the general case to reconstruct the original source code from compiled binary code. The task is even more complicated with commercial binaries, as they are often stripped. Stripping a binary removes any debug information and its symbol tables, which contain the semantics of variables in the program. When dealing with stripped binaries, even reconstructing function entry points can be challenging.
With a constantly growing number of computing devices in consumer, commercial, industrial and infrastructure applications, as well as the growing complexity of software applications, the scope of binary code analysis becomes increasingly large. Fast, automated analysis would help prevent the spread of bugs and vulnerabilities through shared and reused code in all those complex software systems.
Analyzing binary executable code is difficult because of two related challenges - the size of binary executable programs and the absence of high-level semantic structure in binary code. Indeed, when dealing with a compiled executable, a security engineer is often looking at a file containing up to megabytes of binary code. A precise analysis of such files with existing tools requires large amounts of computational power, and it is particularly difficult or even impossible to do manually. Instead, state-of-the-art tools often rely on a combination of formal models and heuristics to reason about binary programs. Replacing these heuristics with more advanced statistical learning and machine learning models has a high potential for improving performance while keeping the analysis fast.
In recent years we have seen a big surge in applications of machine learning (ML) to the field of security, where researchers routinely turn to ML algorithms for smarter automated solutions. For example, due to rapidly evolving modifications of malware, ML algorithms are frequently applied to malware detection problems. Similarly, ML algorithms allow detecting and reacting to network attacks faster.
Having ML algorithms operate on binary executable programs is a promising direction to bridge the large semantic gap between human abstractions and machine code representations, and to recover high-level semantics which was lost during compilation. Using ML requires obtaining a good, vectorized representation of the data. In the field of security, this problem is usually solved by hand-selecting useful features and feeding those into an ML algorithm for a prediction or a score. Approaches range from defining code complexity metrics and legacy metrics~\citep{DBLP:conf/icse/TheisenHMMW15}, to using a sequence of system calls~\citep{DBLP:conf/codaspy/GriecoGURFM16} and many more. Besides being non-trivial and laborious, hand-selecting features raises other issues as well.
First, for every task researchers come up with a new set of features. For example, what indicates memory safety violations is unlikely to also signal race conditions.
Additionally, some features get outdated and will need to be replaced with future versions of the programming language, compiler, operating system or computer architecture.
The state-of-the-art in machine learning, however, no longer relies on hand-designed features. Instead, researchers use learned features, or what is called \emph{distributed representations}.
These are high-dimensional vectors, modeling some of the desired properties of the data.
The famous word2vec model~\citep{DBLP:journals/corr/abs-1301-3781,DBLP:conf/nips/MikolovSCCD13}, for example, is representing words in a high-dimensional space, such that similar words are clustered together.
This property of word2vec has made it a long-time go-to model for representing words in a number of natural language processing tasks.
We can take another example from computer vision, where it was discovered that outputs of particular layers of VGG network~\citep{DBLP:journals/corr/SimonyanZ14a} are useful for a range of new tasks.
We see an important argument for trying to learn distributed representations - a good representation can be used for new tasks without significant modifications.
Unfortunately, some types of data are more challenging to obtain such a representation for, than others.
For instance, finding methods for representing longer sentences or paragraphs is still an ongoing effort in natural language processing~\citep{DBLP:conf/nips/ZhangSWGHC17,DBLP:conf/iclr/LinFSYXZB17}.
Representing graphs and incorporating structure and topology into distributed representations is not fully solved either.
Binary executable programs are a ``hard'' case for representing as they have traits of both longer texts and structured, graph-like data, with important properties of binaries best represented as control or data flow graphs.
Distributed representations for compiled C/C++ binaries -- the kind that engineers in the security field deal with the most -- have not received much attention, and with this work, we hope to start filling that gap. In fact, current approaches leveraging deep learning models to reason about binary code focus on code clone detection, and therefore, their application to algorithm classification and vulnerability detection is limited to syntactically similar patterns. In contrast, our approach aims to generalize code semantics based on new insights by introducing a graph embedding model which encompasses notions of local control-flow and data-flow in a novel way. We propose a graph-based representation for binary programs, that when used with a Graph Convolutional Network (GCN)~\citep{DBLP:conf/iclr/KipfW17}, captures semantic properties of the program.
Our main contributions are: (i) to the best of our knowledge, we are the first to suggest a distributed representation learning approach for binary executable programs that is demonstrated to work for different downstream tasks; (ii)
to this end, we present a deep learning model for modelling binary executable programs' structure and computations, and for learning their representations; (iii) to prove the concept that distributed representations of binary executable programs can be applied to downstream program analysis tasks, we evaluate our approach on two distinct problems - functional algorithm classification (\textit{i.e.,} the task of recognizing functional aspects of algorithmic properties, as opposed to their syntactic aspects) and vulnerability discovery across multiple vulnerability classes - and show improvement over current state-of-the-art approaches on both.
\section{Related Work}
Many tasks that rely on the analysis of binary executables are frequently approached by rule-based systems and manually defined heuristics~\citep{DBLP:conf/securecomm/AaferDY13,DBLP:conf/iceis/SantosPDB09,DBLP:journals/di/KarbabDDM18,DBLP:conf/sp/YamaguchiGAR14,DBLP:conf/ssiri/RawatM12,DBLP:conf/sp/ChaARB12}.
Machine learning has a proven reputation for boosting performance compared to heuristics and there has been a lot of interest in applications of machine learning to security tasks.
We briefly discuss previous work in binary program analysis that relies on machine learning.
We structure the literature based on the types of features extracted and by the type of the embedding model applied.
\subsection*{Hand Designed Features}
Designing and extracting features can be considered equivalent to manually crafting representations of binaries.
We can classify such approaches based on which form of the compiled binary program was used to extract the features.
\paragraph{Code-based features}
The simplest approach to representing a binary is by extracting some numerical or textual features directly from the assembly code. This can be done by using n-grams of tokens, assembly instructions, lines of code, etc. N-grams are widely used in the literature for malware discovery and analysis \citep{DBLP:conf/ict/LiZYY19,DBLP:journals/tjs/LeeCSK18,DBLP:journals/ijcysa/KangYSM16}, as well as vulnerability discovery~\citep{DBLP:conf/icmla/PangXN15,DBLP:journals/jss/MurtazaKHB16}.
Additionally, there have been efforts focusing on extracting relevant API calls or using traces of system calls to detect malware~\citep{DBLP:journals/infsof/WuWLZ16,DBLP:conf/ausai/KolosnjajiZWE16}.
\paragraph{Graph-based features}
Many solutions rely on extracting some numerical features of Abstract Syntax Trees (ASTs), Control Flow Graphs (CFGs) and/or Data Flow Graphs (DFGs). We combine these under models with graph-based features.
discovRE~\citep{DBLP:conf/ndss/EschweilerYG16}, among other features, uses closeness of control flow graphs to compute similarity between functions.
Genius~\citep{DBLP:conf/ccs/FengZXCTY16} converts CFG into numeric feature vectors to perform cross-architecture bug search.
Yet other works have used quantitative data flow graph metrics to discover malware~\citep{DBLP:conf/dimva/WuchnerOP15}.
\subsection*{Learned Features}
Besides manually crafting the representations it is also possible to employ neural models for that purpose.
This allows expressing and capturing more complicated relations of characteristics of code.
Here we can classify the approaches based on whether they use sequential neural networks or graph neural networks.
\paragraph{Sequence embeddings}
The body of work on the naturalness of software~\citep{DBLP:journals/cacm/HindleBGS16,DBLP:conf/icse/RayHGTBD16,DBLP:journals/csur/AllamanisBDS18} has inspired researchers to try applying NLP models for security applications in general, and binary analysis in particular.
Researchers have suggested serializing ASTs into text and using them with LSTMs for vulnerability discovery \citep{DBLP:conf/ccs/LinZLPX17}.
Some of previous vulnerability discovery efforts also use RNNs on lines of source code \citep{DBLP:conf/ndss/LiZXO0WDZ18}.
More recently, INNEREYE proposed to use LSTMs in a Siamese architecture for binary code similarity detection \citep{DBLP:conf/ndss/ZuoLYL0Z19}.
The closest to our work is that of \cite{DBLP:conf/sp/DingFC19}, which is starting by constructing a graph that is enriched with selective callee expansion. The authors then sample random walks from this graph to generate sequences of instructions and train a paragraph-to-vector model on these sequences. This approach is similar in spirit to earlier graph embedding approaches, such as DeepWalk~\citep{DBLP:conf/kdd/PerozziAS14}, that were sampling random walks of nodes and using word embedding models on sequences of adjacent nodes for representation learning. However, today these approaches for graph embedding are no longer popular, as graph neural networks based on message passing and neighbourhood aggregation have been shown to perform much better.
\paragraph{Graph embeddings}
Graph embedding neural models are a popular choice for tackling binary code-related tasks because the construction of Control Flow or Data Flow Graphs is frequently an intuitive and well-understood first step in binary code analysis. For instance, graph embedding models have successfully been used on top of Control Flow Graphs for tackling the task of code clone detection in source code and binary programs \citep{DBLP:conf/kbse/WhiteTVP16,DBLP:conf/ccs/XuLFYSS17,DBLP:conf/icml/LiGDVK19,DBLP:conf/nips/ZhouLSD019}. Among these, Gemini \citep{DBLP:conf/ccs/XuLFYSS17} uses a Siamese architecture on top of a graph embedding model for the binary code clone detection task. The graphs it uses, attributed control flow graphs (ACFGs), are CFGs enriched with a few manually defined features. In our work, instead of enhancing the basic blocks in the CFG with a few attributes, we suggest enriching them by expanding the computations in each basic block into a computational tree, and rely on the graph embedding model to capture attributes like the number of instructions if necessary. Graph Matching Networks (GMNs) \citep{DBLP:conf/icml/LiGDVK19}, on the other hand, are based on the idea that instead of computing an embedding and then using either a distance function or a Siamese network for the comparison, it might be beneficial to directly compare two graphs. So, as opposed to Gemini, where representations for known vulnerable or benign programs were pre-computed, a GMN needs to compute similarity for every pair of programs individually from scratch. The authors demonstrate that this approach performs better than Siamese architectures, but it is clearly slower, and more importantly for us, it does not produce program embeddings.
Other research using graph structure of binary programs include using Conditional Random Fields on an enhanced Control Flow Graph for attempting to recover the debug information of the binary program~\citep{DBLP:conf/ccs/HeITRV18}.
\begin{figure*}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=\textwidth]{cfg_with_ast.pdf}
\caption{}
\label{fig:a}
\end{subfigure}
\begin{subfigure}{.38\textwidth}
\centering
\includegraphics[width=\textwidth]{ast_to_graph.pdf}
\caption{}
\label{fig:b}
\end{subfigure}
\begin{subfigure}{.23\textwidth}
\centering
\includegraphics[width=\textwidth]{cfg_to_graph.pdf}
\caption{}
\label{fig:c}
\end{subfigure}
\caption{This figure illustrates how the program graph is constructed from the CFG on a toy example with three basic blocks. Fig. (a) demonstrates that for each basic block we independently construct a computational tree. In fig. (b) we can see that each tree is completed with two nodes - a source and a sink - this way turning it into a directed acyclic graph (DAG). Fig. (c) demonstrates how all DAGs are connected following the same topology that the CFG had initially.}
\label{fig:program_graphs}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{inst_to_tree.pdf}
\caption{An example of the program graph. Parts of the graph are highlighted in the same color, as instructions on lines 1 and 2, to demonstrate where those instructions were mapped to in the graph.}
\label{fig:inst_to_tree}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{redundant_inst.pdf}
\caption{Here we show how redundant instructions were removed to contract the graph. On the left the graph is shown before the contraction, and on the right it is demonstrated after the contraction.}
\label{fig:redundant_inst}
\end{figure*}
\section{Model}
We start by converting the binary executable to a program graph that is designed to allow mathematically capturing the semantics of the computations in the program.
Next, we use a graph convolutional neural network to learn a distributed representation of the graph.
Below we describe the process of constructing the program graph, followed by a brief introduction to how graph convolutional neural networks work. We also describe the baseline model that we use for evaluation and comparison, alongside previous existing approaches.
\subsection{Program Graphs}\label{sec:programgraphs}
We start by disassembling the binary program and constructing a control flow graph (CFG). We use static inter-procedural CFGs, which we construct using the angr library \citep{DBLP:conf/sp/Shoshitaishvili16}.
The fact that each basic block in the CFG is executed linearly allows us to unfold the instructions within each basic block and represent them as a directed, computational tree, similar to an Abstract Syntax Tree (AST). The result of this process is schematically depicted in Figure~\ref{fig:a}.
Within each basic block, computations do not necessarily all depend on each other.
There may be chunks of code that can be reordered inside the basic block without affecting the final result.
In this case the approach described so far yields a forest of computations.
To connect the trees in the forest we add \textit{Source} and \textit{Sink} nodes at the beginning and at the end of each basic block as a parent, or correspondingly a child, for all the trees generated from that basic block, which is demonstrated in Figure~\ref{fig:b}. The resulting graphs are then connected following the same topology that basic blocks originally had in the CFG, as shown in Figure~\ref{fig:c}.
We construct the above-mentioned computational trees from VEX intermediate representation (IR) of the binary. Figures~\ref{fig:inst_to_tree}~and~\ref{fig:redundant_inst} provide demonstration of the process.
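As an illustration, the CFG recovery and VEX lifting steps can be reproduced with a few lines of angr. The sketch below is minimal and the binary path is hypothetical; exact API details may vary across angr versions.
\begin{verbatim}
import angr  # binary analysis framework used in this work

proj = angr.Project("./a.out", auto_load_libs=False)
cfg = proj.analyses.CFGFast()        # static inter-procedural CFG

for node in cfg.graph.nodes():       # one node per basic block
    if node.is_simprocedure or node.block is None:
        continue                     # skip nodes without real code
    irsb = node.block.vex            # VEX IR of the basic block
    for stmt in irsb.statements:     # statements become tree nodes
        print(stmt)
\end{verbatim}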
Every node of the resulting tree is thus labelled with a constant, a register, a temporary or an operation.
The edges of the tree are directed from the argument to the instruction.
Within each basic block we reuse nodes that correspond to addresses, temporaries, constants and registers to tie together related computations.
VEX IR provides a Static Single Assignment (SSA) form. This means that each assembly instruction in a basic block is lifted and ``spilled'' into multiple IR statements operating on temporary variables that are each used only once (the goal being to make all side effects of an instruction explicit).
However, VEX does not track instances of different \emph{definitions} and \emph{uses} of the same register across instructions within the basic block; we implemented this tracking ourselves to ensure we do not introduce fake data-dependence edges. In our implementation, if an instruction overrides or redefines the content of a register, its subscript is incremented. For example, for the \texttt{eax} register, we start from \texttt{eax\_0} and increment it to \texttt{eax\_1}. This is necessary so that we do not reuse the same node for \texttt{eax\_0} and \texttt{eax\_1}.
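A minimal sketch of this bookkeeping is shown below; it is our own per-basic-block versioning (not a VEX feature), and the helper names are illustrative.
\begin{verbatim}
reg_version = {}   # per-basic-block register versions

def read_label(reg):
    # node label for a register read; unseen registers start at _0
    return f"{reg}_{reg_version.get(reg, 0)}"

def write_label(reg):
    # node label for a register (re)definition: the first write keeps
    # _0, every redefinition bumps the subscript (eax_0 -> eax_1 -> ...)
    if reg in reg_version:
        reg_version[reg] += 1
    else:
        reg_version[reg] = 0
    return f"{reg}_{reg_version[reg]}"
\end{verbatim}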
As a last step, we remove redundant edges and nodes, particularly, the
$\texttt{Iex\_Const}$
node that follows every constant, and chains of
$\texttt{Iex\_WrtTmp}\rightarrow$
`t\%'
$\rightarrow\texttt{Iex\_RdTmp}$\footnote{In VEX $\texttt{Iex\_Const}$ represents a constant value, $\texttt{Iex\_WrtTmp}$ a write operation (to a temporary variable), and $\texttt{Iex\_RdTmp}$ a read operation (from a temporary variable)}. This is demonstrated in Figure~\ref{fig:redundant_inst}.
After the graph construction is complete, we remove SSA indices for temporary variables and registers to reduce the number of distinct labels.
From the labels of the nodes we construct a ``feature matrix'' of the graph, which is a matrix of size $n\times d$, where $n$ is the number of nodes in the graph, and $d$ is the number of all distinct labels seen in the entire dataset. Thus, every node has one row in the feature matrix associated with it. We choose a random fixed ordering of all labels, and then for a given node, to convert its label into its feature row we assign all positions of the row to zero with the exception of the position that corresponds to the label of the node in our fixed ordering of labels. This representation is known as a one-hot representation. We will further refer to the feature matrix as $X$. Note that we use words ``feature'' or ``features'', ``embeddings'' and ``representation'' interchangeably.
\subsection{Graph Convolutional Networks}\label{sec:gcn}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{GCN2.pdf}
\caption{Schematic depiction of obtaining a program representation with a single-layer GCN model}
\centering
\label{fig:gcn}
\end{figure*}
The model we used for learning representations is a Graph Convolutional Neural Network (GCN) \citep{DBLP:conf/iclr/KipfW17}. Graph neural embeddings is a fast developing field, and some alternative graph representation learning models include GraphSAGE \citep{DBLP:conf/nips/HamiltonYL17} or Gated Graph Neural Networks \citep{DBLP:journals/corr/LiTBZ15}, as well as a number of others. In the literature GCNs consistently perform on par or better than more recent variants of graph neural networks \citep{DBLP:conf/cvpr/MontiBMRSB17,DBLP:conf/aaai/LiuCLZLSQ19,DBLP:journals/corr/abs-1710-10903,DBLP:conf/iclr/ChenMX18}, while being simpler and oftentimes, faster. We chose GCN because it provides a good trade-off between simplicity, performance, and speed. The latter is important due to the low-level nature of the binary code; it is reasonable to expect the program graphs to grow quite large, which forces us to favour a model with weight updates that can be efficiently computed in batches.
GCN consists of a few stacked graph convolutional layers.
A graph convolutional layer is applied simultaneously to all nodes in the graph. For each node, it averages the features of that node with features of its neighbours. Features of different nodes are scaled differently in the process of averaging and these weights are learned, i.e. they are the parameters of the graph convolutional layer. After the averaging, each node is assigned the resulting vector as its new feature vector and we proceed to either apply a different graph convolutional layer, or compute the loss and perform backpropagation to update the parameters.
Formally, this process of computing new feature vectors, known as forward pass or propagation, for $(l+1)$-st graph convolutional layer can be described as follows:
\begin{equation}
H^{(l+1)} = \text{ReLU}
(\Tilde{D}^{-\frac{1}{2}}
\Tilde{A}
\Tilde{D}^{-\frac{1}{2}}
H^{(l)} W^{(l)})
\end{equation}
where $\Tilde{A}$ is the adjacency matrix of the graph with added self-loops, $\Tilde{D}$ is its diagonal out-degree matrix, $\text{ReLU}(x) = max(0, x)$ is the non-linearity or activation function, $H^{(l)}$ is the result of propagation through the previous layer, with $H^{(0)} = X$, and $W^{(l)}$ is a layer-specific trainable weight matrix.
Since one graph convolutional layer averages representations of the immediate neighborhood of the node, after performing $k$ graph convolutions we incorporate the information from $k$-th neighborhood of the node.
From our description, it follows that after the forward pass, the graph convolutional network outputs features for each node in the graph. We will refer to this new feature matrix as $Z$. Note that $Z$ still has $n$ rows - one row per node, but it can have a different number of columns.
To get the representation of the entire graph, we can aggregate the features of all nodes in the graph. Here it is possible to use any aggregation function - summation, averaging, or even a neural attention mechanism, but in our experiments we went for a simple sum aggregate.
A schematic illustration of this entire process is available in Figure~\ref{fig:gcn}.
The aggregated representation is used with a two-layer perceptron and passed through a softmax, which is defined as $\text{softmax}(x_i) = \frac{\exp(x_i)}{\sum_{j}{\exp(x_j)}}$, for the final classification.
We frame our tasks as classification and use the cross-entropy error as the objective function for the optimization. We cover our procedure for selecting hyperparameters for the GCN model in more detail in Section~\ref{sec:task1-exp-setup}.
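To make the architecture concrete, below is a minimal PyTorch sketch of the model described in this section - three graph convolutions, a sum readout, and a two-layer perceptron. It is a simplified sketch rather than our exact implementation; \texttt{A\_hat} denotes the precomputed normalized adjacency $\Tilde{D}^{-\frac{1}{2}}\Tilde{A}\Tilde{D}^{-\frac{1}{2}}$, and the layer sizes follow Section~\ref{sec:task1-exp-setup}.
\begin{verbatim}
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    # one graph convolution: ReLU(A_hat @ H @ W)
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)

    def forward(self, A_hat, H):
        return torch.relu(A_hat @ self.W(H))

class ProgramClassifier(nn.Module):
    def __init__(self, d_in, n_classes):
        super().__init__()
        self.layers = nn.ModuleList(
            [GCNLayer(d_in, 128), GCNLayer(128, 128), GCNLayer(128, 64)])
        self.mlp = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, A_hat, X):
        H = X
        for layer in self.layers:
            H = layer(A_hat, H)
        z = H.sum(dim=0)    # sum readout -> program embedding
        return self.mlp(z)  # logits; cross-entropy applies the softmax
\end{verbatim}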
\subsection{Baselines}\label{sec:baseline}
We wanted to compare our proposed representation to another task-independent representation, in particular, to one that used code-based features or embeddings.
We experimented with Long Short Term Memory (LSTM) neural networks and Support Vector Machine (SVM) classifiers for that purpose. We interpreted instructions as words, and a sequence of instructions as a sentence, following a number of similar approaches in the field, e.g. \cite{DBLP:conf/ndss/ZuoLYL0Z19}. We experimented using both SVM and LSTM with the assembly instructions directly, as well as with the code lifted to VEX IR. From our experiments, an SVM classifier with a Gaussian kernel and bag-of-words representation of VEX IR gave us the best performance, so that is the setup we chose as a baseline.
Each line of IR is tokenized to be a single ``word''.
Vocabulary for the bag-of-words was obtained from the training part of the dataset.
We used frequency thresholding to remove infrequent entries and reduce data sparsity.
Those frequencies were empirically found on the validation part of the dataset.
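For reference, the baseline can be assembled from standard scikit-learn components; in this sketch \texttt{programs\_ir} and \texttt{y\_train} are hypothetical (a list of programs, each given as a list of tokenized IR lines, and their class labels), and the \texttt{min\_df} threshold is an assumed value standing in for the empirically found one.
\begin{verbatim}
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

# each tokenized IR line is one bag-of-words token
vectorizer = CountVectorizer(analyzer=lambda lines: lines, min_df=5)
X_train = vectorizer.fit_transform(programs_ir)

clf = SVC(kernel="rbf", C=1.0)  # Gaussian kernel; C found by grid search
clf.fit(X_train, y_train)
\end{verbatim}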
\begin{table*}
\centering
\caption{An example prompt for programming competition problems and their corresponding problem numbers and names. The example is taken from ACM Timus \url{http://acm.timus.ru/}}
\begin{tabular}{ll}
\toprule
\multicolumn{1}{c}{\textbf{Prompt}} & \multicolumn{1}{c}{\textbf{Problem \#}} \\
\midrule
\begin{tabular}{p{0.70\textwidth}}You have a number of stones with known weights $w_1, \ldots w_n$. Write a program that will rearrange the stones into two piles such that weight difference between the piles is minimal.\end{tabular} & 1005. Stone Pile \\
\hline
\end{tabular}
\label{tab:timus}
\end{table*}
\section{Task Description}
We evaluate the performance of our proposed representations on two independent tasks.
In the first, we test the proposed representations for functional algorithm classification in binary executable programs through classifying coding challenges.
In our second task, we want to demonstrate the performance of learned representations on a common security problem -- discovery of vulnerable compiled C/C++ files.
The two tasks are semantically different and we demonstrate in the later sections that both can be successfully tackled with representations constructed and learned in the same way.
\subsection{Task 1: Functional Algorithm Classification}
Algorithm classification is crucial for semantic analysis of code. We qualify it as ``functional'' as opposed to ``syntactic'', i.e., we aim to capture the semantics of functional properties of algorithms.
It can be used for creating assisting tools for security researchers to understand and analyze binary programs, or discover inefficient or buggy algorithms, etc.
In this task, we are looking at real-world programs submitted by students to solve programming competition problems. We chose such a dataset because the programs in it, being written by different students, naturally encompass more implementation variability than it would be possible to get by using, for instance, standard library implementations.
Our goal is to classify solutions by the problem prompts that the solution was written for.
We present a typical example of programming competition problem prompt in Table~\ref{tab:timus}.
Provided example is for illustrative purposes only, as it is taken from ACM Timus (\url{http://acm.timus.ru}) and is not part of our dataset\footnote{The dataset we used was collected as part of previous work for which the problem prompts are not released with the data}.
From our definition and the dataset, it follows that we define the equivalence of two programs as them solving the exact same problem.
Hence, in this task, we test the \emph{ability of the model to capture the higher-level semantic} similarity, and to take into account program behaviour, functionality and complexity, while \emph{ignoring syntactic differences} wherever possible.
\subsection{Task 2: Vulnerability Discovery}
Software contains bugs, which in the worst case can lead to weaknesses that leave vulnerable systems open to attacks.
Such security bugs, or vulnerabilities, are classified in a formal list of software weaknesses - Common Weakness Enumeration (CWE).
Vulnerability discovery is the process of finding parts of vulnerable code that may allow attackers to perform unauthorized actions. It is an important problem for computer security. The typical target of vulnerability discovery is programming mistakes accidentally introduced in \emph{benign commodity programs} by their authors. Our work excludes software specifically crafted to behave in a malicious way, and focuses on benign programs. Due to the large variability among vulnerabilities, increasingly large sizes of software and increasing costs of testing it, the problem of vulnerability discovery is not solved.
Most vulnerability discovery techniques rely on dynamic analysis for program exploration, the most common one being fuzzing~\citep{afl}. Such models offer a high level of precision, at the cost of shallow program coverage: only a subset of execution traces for a given program (along with a set of input test cases) can be observed in finite time, leaving large parts of the program unexplored.
On the other hand, static analysis provides better program coverage at the cost of lower precision.
In addition to these challenges come a range of fundamental problems in program analysis related to undecidability (\textit{e.g.,} the halting problem, \textit{i.e.,} ``Does the program terminate on all inputs?'') and implementation.
These issues emerge because vulnerabilities may span very small or very large chunks of code and involve a range of different programmatic constructs. This raises the question of at what level of granularity we should inspect programs for vulnerabilities, and at what granularity findings should be reported to security researchers.
In this work, we are concerned with the question of learning representations for entire binary programs that help to discover vulnerabilities statically, while leaving the questions of handling large volumes of source code and working at variable levels of granularity for future work. Our work builds on standard binary-level techniques for control-flow recovery (i.e., the reconstruction of a CFG), which is a well-studied problem where state-of-the-art models perform with high accuracy and scalability~\citep{victor}.
\section{Datasets and Experimental Setup}
Our first dataset, introduced by \citep{DBLP:conf/aaai/MouLZWJ16}, consists of 104 online judge competition problems and 500 C or C++ solutions for each problem submitted by students.
We only kept the files that could be successfully compiled on a Debian operating system, using gcc8.3, without any optimization flags. This left us with 49191 binary executable files, each belonging to one of 104 potential classes. Each class in this dataset corresponds to a different problem prompt and our goal is to classify the solutions according to their corresponding problem prompts.
The second dataset we used is the Juliet C/C++ test suite~\citep{DBLP:journals/computer/BolandB12}.
This is a synthetically generated dataset, that was created to facilitate research of vulnerability scanners and enable benchmarking.
The files in the dataset are grouped by their vulnerability type -- CWE-ID.
Each file consists of a minimal example to recreate the vulnerability and/or its fixed version.
The Juliet test suite has \texttt{OMITGOOD} and \texttt{OMITBAD} macros, surrounding vulnerable and non-vulnerable functions correspondingly.
We compiled the dataset twice - once with each macro, to generate binary executable files that contain vulnerabilities and those that do not.
The dataset contains 90 different CWE-IDs.
However, some of them consist of Windows-only examples, that we omitted.
Note that even though our approach is not platform-specific, in this work we limit our experimentation to Linux only.
Most CWE-IDs had too few examples to train a classifier and/or to report any meaningful statistics on.\footnote{In the future, we consider combining some CWEs into their umbrella categories, for example following the classification by Research Concepts:\\ \mbox{\url{https://cwe.mitre.org/data/definitions/1000.html}}}
Thus, we also omitted any CWE-ID that had less than 100 files in its testing set after 70:15:15 for training:validation:test split, because for those cases the reported evaluation metric would be too noisy.
As a result, we experimented on vulnerabilities belonging to one of 30 different CWE-IDs, presented in Table~\ref{tab:juliet-cwe-counts}.
We trained a separate classifier for every individual CWE-ID, which was required because files associated with each CWE-ID may or may not contain other vulnerability types.
We trained the neural network model with early stopping, where the number of training epochs was found on the validation set.
\subsection{Task 1. Experimental Setup}\label{sec:task1-exp-setup}
For experiments in the functional algorithm classification task, we randomly split all the binaries in the first dataset into train:test:validation sets with ratios 70:15:15.
We use the training set for training and extracting some additional helper structures, such as vocabulary for the bag of words models and counting frequencies for thresholding in neural network models.
We use the validation set for model selection and finding the best threshold values. After finding the best model, we evaluate its performance on the testing set.
The experiments are cross-validated and averaged over 5 random runs.
For SVMs, in the model selection phase, we perform a grid-search over the penalty parameter C and pick a value for the vocabulary threshold to remove any entry that does not have a substantial presence in the training set to be useful for learning.
After the trimming our vocabulary contains about 10-11K entries (the exact number changes from one random run to another).
For GCN-based representation, we follow similar logic and use the training set to find and remove infrequent node labels.
Here too the exact threshold is decided via experimentation on the validation set.
On average, we keep about 7-8K different node labels.
Very infrequent terms are replaced with a placeholder $UNK$, or $CONST$ if it is a hexadecimal.
We pick the hyperparameters of the GCN model by their performance on the validation set. Figure~\ref{fig:hyperparams} demonstrates the influence of the \textit{depth} (number of graph convolution layers) and \textit{width} (size of each graph convolution layer) on the performance of the model for Task 1. Figure~\ref{fig:hyperparams}\textbf{(A)} shows the performance of models with depths from 1 to 8 layers, with the dimensionality of every layer set to 64. As can be seen, increasing the depth of the model up to 4 layers improves performance; additional layers after that do not consistently improve it. Figure~\ref{fig:hyperparams}\textbf{(C)} compares the performance of four models that each have the same number of layers (3) but different layer sizes - 32, 64, 128 or 256. From here we see that increasing the size of the layers from 64 to 128 provides a moderate improvement, but increasing the size further does not affect the performance. Figures~\ref{fig:hyperparams}\textbf{(B)} and \textbf{(D)} show the training duration in seconds of each of the models discussed above on 100 examples. Based on these findings we perform some additional experimentation and deploy a GCN with 3 layers, which has 128 dimensions in its first two layers and 64 dimensions in its last layer.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{hyperparams.pdf}
\caption{Comparison of GCN models with different numbers of layers (A), and different sizes of each layer(C).\textbf{(A)} demonstrates the accuracy of GCN models with different numbers of layers - from 1 to 8 - on the validation set of Task 1; the size of each layer is 64. \textbf{(B)} shows the time in seconds that models from (A) take to train on 100 samples. \textbf{(C)} demonstrates the accuracy of GCN models with 3 layers, but different sizes of layers - from 32 to 256. \textbf{(D)} shows the time in seconds that models from (C) take to train on 100 samples.}
\label{fig:hyperparams}
\end{figure*}
\begin{table*}
\centering
\caption{In this table we provide total counts of binary executables for each of the CWE-IDs we studied in the Juliet Test Suite.}
\label{tab:juliet-cwe-counts}
\begin{tabular}{ll@{\hskip 0.10\textwidth}ll
@{\hskip 0.10\textwidth}ll}
\toprule
CWE-ID & \# examples & CWE-ID & \# examples & CWE-ID & \# examples \\
\midrule
CWE121 & 9486 & CWE197 & 2664 & CWE476 & 888 \\
CWE122 & 11946 & CWE23 & 2960 & CWE563 & 1116 \\
CWE124 & 3612 & CWE36 & 2960 & CWE590 & 6954 \\
CWE126 & 2639 & CWE369 & 2736 & CWE606 & 760 \\
CWE127 & 3612 & CWE400 & 2280 & CWE617 & 918 \\
CWE134 & 3800 & CWE401 & 4176 & CWE680 & 1776 \\
CWE190 & 12093 & CWE415 & 2588 & CWE690 & 2368 \\
CWE191 & 9048 & CWE416 & 888 & CWE758 & 1046 \\
CWE194 & 3552 & CWE427 & 740 & CWE761 & 888 \\
CWE195 & 3552 & CWE457 & 2104 & CWE762 & 6429 \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Task 2. Experimental Setup}
In the vulnerability discovery experiments, we train a separate classifier for each of 30 different CWE-IDs.
Note, that for each CWE-ID classifier in its training and testing we only include the binaries that are specifically marked as good or bad with regard to that CWE-ID.
For every CWE-ID, we split its corresponding binaries into train:validation:test with ratios 70:15:15, and report results averaged over 5 random runs.
We use training sets for training the models and validation sets for grid search of the penalty parameter C in SVMs. We report the performance of the best model measured on testing sets.
Here we reuse some statistics obtained on the first dataset, in particular, we reuse frequency thresholds and bag-of-words vocabularies.
We need to train a separate classifier for each CWE-ID, 30 SVM classifiers and 30 NN classifiers in total, which would lead to a huge search space at the phase of the model selection.
We are not aware of related work on vulnerability discovery that performs its evaluation on the Juliet Test Suite. Thus, to give the readers a better understanding of how our proposed model fares compared to other existing approaches, we performed an additional experiment using the Asm2Vec model~\citep{DBLP:conf/sp/DingFC19} on the Juliet Test Suite.
Asm2Vec is a clone search engine that relies on vector representations of assembly functions. In the original paper the authors suggested its usefulness as a vulnerability detection tool which allows finding duplicates of known vulnerable functions. We tried replicating that scenario as faithfully as possible by training Asm2Vec\footnote{We used the implementation available at \url{https://github.com/Lancern/asm2vec}} on the Juliet Test Suite and comparing the resulting representations to differentiate between vulnerable and non-vulnerable instances. Since Asm2Vec poses vulnerability detection as a retrieval problem, we follow the example in the paper and report the Precision@15 metric. For each vulnerable function, we find the 15 most similar functions to it according to cosine similarity and compute the percentage of vulnerable functions among them. It is worth noting that we are looking for similar functions among all vulnerable and non-vulnerable functions per CWE-ID.
We set most of the hyperparameters of Asm2Vec following the original paper, but tune the dimensionality of the representation and the learning rate, using grid search over \{50, 100, 150, 200\} and \{0.05, 0.025, 0.01\}, respectively. The final results that we report for Asm2Vec are computed on the testing set and are the average of 5 random runs.
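The Precision@15 computation itself is straightforward; below is a minimal numpy sketch, where \texttt{embeddings} and \texttt{is\_vulnerable} are hypothetical arrays of function embeddings and vulnerability labels.
\begin{verbatim}
import numpy as np

def precision_at_15(embeddings, is_vulnerable, query):
    # fraction of vulnerable functions among the 15 nearest
    # neighbours of the query, by cosine similarity
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = E @ E[query]
    sims[query] = -np.inf           # exclude the query itself
    top = np.argsort(-sims)[:15]
    return float(np.mean(np.asarray(is_vulnerable)[top]))
\end{verbatim}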
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{vulnerability_detection_accuracy.pdf}
\caption{Experimental results for vulnerability discovery on the Juliet test suite.}
\label{fig:julietresults}
\end{figure*}
\begin{table}
\centering
\caption{Accuracy obtained for the first task on the online judge problem classification.}
\label{tab:task1}
\begin{tabular}{lll}
\toprule
Model & Accuracy \\
\midrule
SVM on VEX IR & 0.93 \\
TBCNN~\citep{DBLP:conf/aaai/MouLZWJ16} & 0.94\\
inst2vec~\citep{DBLP:conf/nips/Ben-NunJH18} & 0.9483 \\
Ours & 0.97 \\
\bottomrule
\end{tabular}
\end{table}
\section{Evaluation and Results}
For evaluating performance in our experiments we used accuracy, following the previous work that we compare our results to.
\subsection{Task 1}
Table~\ref{tab:task1} contains quantitative evaluation of our representation for Task 1.
Our proposed representation outperforms our own SVM baseline, the TBCNN model~\citep{DBLP:conf/aaai/MouLZWJ16}, and the current state-of-the-art for this task, \textit{inst2vec}~\citep{DBLP:conf/nips/Ben-NunJH18}. We manage to reduce the error by more than 40\%, thus setting a new state-of-the-art result. It should additionally be mentioned that both TBCNN and \textit{inst2vec} start from the C source code of the programs to make predictions, whereas our baseline SVM and our proposed model use only the compiled executable versions.
Highlighting a few important differences between our approach and \textit{inst2vec} helps to better understand some of the contributions of our approach.
To construct the contextual flow graphs, the authors of \textit{inst2vec} compile the source code to LLVM IR, which contains richer semantic information than the VEX IR that we use in this work. Because it is higher-level, LLVM IR is a difficult target for lifting from binary executable files.\footnote{More discussion on this topic is provided at Angr's FAQ page: \url{https://docs.angr.io/introductory-errata/faq}}
Another key difference is that instead of learning the representations of individual tokens and then combining the tokens into a program using a sequential model, we learn the representations of all the tokens in the program jointly, thus learning the representation of the entire program. \textit{inst2vec}, on the other hand, ignores the structural properties of the program at that step.
Our results show that we can achieve better performance, despite \textit{inst2vec} starting from a semantically richer LLVM IR. We believe this indicates the importance of using the structural information at all stages of learning for obtaining good program embeddings.
\begin{table}[!ht]
\caption{Asm2vec Precision@15 on Juliet Test Suite}\label{tab:asm2vec}
\centering
\begin{tabular}{ll@{\hskip 0.17\textwidth}ll
@{\hskip 0.17\textwidth}ll}
\toprule
CWE-ID & P@15 & CWE-ID & P@15 & CWE-ID & P@15 \\ \midrule
CWE121 & 0.72 & CWE197 & 0.71 & CWE476 & 0.73\\
CWE122 & 0.61 & CWE23 & 0.68 & CWE563 & 0.63\\
CWE124 & 0.63 & CWE36 & 0.68 & CWE590 & 0.64\\
CWE126 & 0.69 & CWE369 & 0.54 & CWE606 & 0.56\\
CWE127 & 0.66 & CWE400 & 0.56 & CWE617 & 0.64\\
CWE134 & 0.66 & CWE401 & 0.50 & CWE680 & 0.75\\
CWE190 & 0.53 & CWE415 & 0.66 & CWE690 & 0.55\\
CWE191 & 0.59 & CWE416 & 0.77 & CWE758 & 0.72\\
CWE194 & 0.71 & CWE427 & 0.47 & CWE761 & 0.55\\
CWE195 & 0.72 & CWE457 & 0.69 & CWE762 & 0.58\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Task 2}
Figure~\ref{fig:julietresults} contains the evaluation of our representation for Task 2.
Here, the classifier based on our proposed representation outperforms our SVM baseline in all cases except two -- CWE-ID590, Free of Memory not on the Heap, and CWE-ID761, Free of Pointer not at Start of Buffer. In both of these cases the difference in accuracy is less than 5\%. In all other cases, our proposed representation demonstrates a significant performance gain. In the extreme case of CWE-617, Reachable Assertion, it outperforms the baseline by about 25\%; in many other cases the gain is between 10\% and 20\% in prediction accuracy.
Table~\ref{tab:asm2vec} reports the results we obtained from running Asm2Vec on the Juliet Test Suite. It is important to keep in mind that these numbers are not directly comparable to our results, as they correspond to two different metrics. Rather, this experiment demonstrates the complexity of the dataset and the capacity of Asm2Vec to capture vulnerabilities on it. While Bin2Vec achieves more than 80\% accuracy for all CWE-IDs, Asm2Vec has a Precision@15 of 0.5 or 0.6 in many cases, which means that only about half of the retrieved similar functions were in fact vulnerable. Asm2Vec achieves its highest Precision@15 of 0.77 for CWE-ID 416, Use After Free, which still corresponds to about 1 in 4 retrieved functions being incorrectly labelled as vulnerable. For comparison, for the same vulnerability type Bin2Vec achieves near-perfect performance.
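For reference, Precision@$k$ is simply the fraction of the top-$k$ retrieved functions that are truly vulnerable; a minimal sketch of the metric (the label encoding below is our own assumption, not taken from the Asm2Vec implementation):
\begin{verbatim}
def precision_at_k(retrieved_labels, k=15):
    # retrieved_labels: booleans ordered by similarity (most similar
    # first); True marks a retrieved function that is truly vulnerable.
    top = retrieved_labels[:k]
    return sum(top) / float(len(top))

# e.g., 11 vulnerable among the top 15 gives P@15 = 0.733...
print(precision_at_k([True] * 11 + [False] * 4))
\end{verbatim}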
Additionally, we can indirectly compare our results for the second task with those presented in two surveys that use the Juliet Test Suite as a benchmark for evaluating commercial static analysis vulnerability discovery tools~\citep{DBLP:conf/csiirw/VelichetiFPRH14,DBLP:journals/infsof/Goseva-Popstojanova15}. It must be noted that the commercial tools in those experiments probably did not use most of the programs for each CWE-ID as a training set. Additionally, the tools considered in those surveys make their predictions based on source code and not binaries. Nevertheless, comparing the accuracies reported in those surveys with ours suggests that our proposed representation performs better for vulnerability discovery than commercial static analysis tools. For example, on CWE-IDs from 121 to 126, which are all memory buffer errors, the authors of~\citep{DBLP:conf/csiirw/VelichetiFPRH14} report less than 60\% accuracy, whereas our model scores higher than 80\% for each of those CWE-IDs. Compared to the tools studied in~\citep{DBLP:journals/infsof/Goseva-Popstojanova15}, our model consistently outperforms three out of four static analysis tools, and it outperforms the fourth by a considerable margin in all cases but two. Those two are CWE-ID122, Heap-based Buffer Overflow, where the commercial tool scores a few percent higher, and CWE-ID590, Free of Memory not on the Heap.
These results suggest that our representation has good prospects for use in vulnerability discovery tools. For almost every vulnerability type our prediction accuracy is above 80\%, and for many it is above 90\%.
\section{Discussion and future work}
Software in production is usually complex and large, capable of performing many different functions in different use cases.
In contrast, the programs in our evaluation datasets are single-purpose, solving one task with a relatively small number of steps.
Additionally, the entirety of each program in Juliet test suite is relevant to vulnerability discovery tasks, unlike real software where most of the code is not vulnerable and only a small part of it may have an issue.
This can potentially be solved by introducing representations that can be computed on different levels of coarseness.
This is a non-trivial task, but our findings hint that once completed we may be able to achieve far better results for different problems on production software than is currently possible.
Additionally, we need a better understanding of which properties are captured by such a representation, how best to use them, and how to add other desirable properties.
Another challenge left for future work is extending this approach to cross-architecture and cross-compiler binaries.
There are several avenues for extending our work. First, it will be interesting to see whether using recent extensions of GCNs, such as the MixHop model~\citep{sami} that propagates information through higher-order node neighbourhoods, will result in better performance. Additionally, to test the utility of Bin2Vec in real-world problems, we would like to apply it to analyze more complex and larger-scale vulnerability datasets.
\section{Conclusion}
We introduced Bin2Vec, a new model for learning distributed representations of binary executable programs. Our learned representation has strong potential to be used in the context of a wide variety of binary analysis tasks. We demonstrate this by putting our learned representations to use for classification in two semantically different tasks -- algorithm classification and vulnerability discovery. We show that for both tasks our proposed representation achieves better qualitative and quantitative performance than state-of-the-art approaches, including inst2vec and common machine learning baselines.
\section*{Availability of data and materials}
All of the data used in this study is publicly available.
\section*{Funding}
The authors are grateful to the Information Sciences Institute of the University of Southern California for funding.
\bibliographystyle{spbasic}
\section{Introduction}
The response of a hadronic calorimeter to an incident pion or jet of energy $E$ can be written as
\begin{equation}
\hbox{hadronic response} = E [\,f_{em}+(1-f_{em}) (h/e) ]
\label{eqn:hadresponse}
\end{equation} %
where a fraction $f_{em}$ is deposited in electromagnetic (EM) cascades, mostly initiated
by $\pi^0$ decay gamma rays, and $h/e$ is the energy-independent ratio of detection
efficiencies for the hadronic and EM energy deposits.\footnote{Whether one writes this
ratio as $e/h$, as is conventional, or $h/e$ is usually unimportant.
This is not the case in Sec.~\ref{sec:discuss}, where we regard $h/e$ as a stochastic variable.}
(Here and elsewhere the energy $E$ is normalized to the electron response.) In the case of a
dual readout calorimeter, in which a Cherenkov signal $Q$ and scintillator signal $S$ are
read out for each event, Eq.~\ref{eqn:hadresponse} can be
generalized \cite{Wigmans_Perugia_04,DREAM05,groom07,RPP12}:
\begin{eqnarray}
Q &=& E[ f_{em} + (1-f_{em}) (h/e|_Q)]
\label{eqn:Q0}\\
S&=& E[ f_{em}+ (1- f_{em}) (h/e|_S) ]
\label{eqn:S0}
\end{eqnarray}
John Hauptman has suggested the less cumbersome notation $\eta_X \equiv (h/e|_X)$,
which we use in this paper:
\begin{eqnarray}
Q &=& E[ f_{em} + (1-f_{em}) \, \eta_Q]
\label{eqn:Q}\\
S &=& E [f_{em}+ (1- f_{em}) \, \eta_S]
\label{eqn:S}
\end{eqnarray}
The EM fraction $f_{em}$ is a feature of the event, while the efficiency ratios
$\eta_Q$ and $\eta_S$ are different for the two channels.
Equations~\ref{eqn:Q} and~\ref{eqn:S} are the starting point
for any analysis of dual-readout hadron calorimetry data.
If $f_{em}$, $\eta_Q$, and $\eta_S$ are known exactly, and if there are no photoelectron
or other statistical contributions, then $Q$ and $S$ are uniquely determined by the incident
hadron energy $E$. If, on the other hand, all of these quantities are subject to statistical
fluctuations, then $E$ as determined from the equations must be regarded as the \it estimator\rm\
of the hadron (or jet) energy for a particular event.
These equations appear explicitly in Fig.~11 of the first DREAM paper at the Perugia
Conference on Calorimetry in High Energy Physics\cite{Wigmans_Perugia_04} and appear
either explicitly or implicitly in subsequent DREAM papers. The most complete description
of the DREAM analysis is given by Akchurin, et al.\cite{DREAM05} (henceforth Ak05),
and it is the basic reference for this paper.
Several data reduction schemes are described, but the algorithm considered most basic is
the fairly convoluted ``Energy-independent $Q/S$
correction method.'' We show here that it can be obtained in a few lines from
Eqns.~\ref{eqn:Q} and~\ref{eqn:S}.
\section{The energy-independent $Q/S$ correction method.} \label{QoverS}
The estimator $E$, whose determination is the object of the analysis, can be
eliminated by dividing Eq.~\ref{eqn:Q} by Eq.~\ref{eqn:S}, to obtain
\begin{equation}
\frac{Q}{S} = \frac{ f_{em} + (1-f_{em}) \eta_Q}{f_{em} + (1-f_{em}) \eta_S} \ .
\end{equation}
This is Eq.~2 in Ak05, except that in that paper values of $h/e$ special to the DREAM
experiment are inserted for $\eta_Q$ and $\eta_S$.
It can be solved for $f_{em}$. Although the result,
\begin{equation}
f_{em} = \frac {(Q/S)\eta_S-\eta_Q} {(1-\eta_Q) -(Q/S)(1- \eta_S) } \ ,
\end{equation}
is not given in the paper, its availability is assumed in the rest of its
discussion.
Leakage corrections are incorporated as part of the Method. They are obviously important, but here we assume they have already been made to $Q$ and $S$ as given in
Eqns.~\ref{eqn:Q} and~\ref{eqn:S}.
The final estimator of the energy, called $S_{\rm final}$, is given by Ak05's Eq.~7:
\begin{equation}
S_{\rm final} = S_{\rm corr}\left[\frac{1+p_1/p_0}{1+f_{em}\,p_1/p_0}\right] \ ,
\end{equation}
where $p_1/p_0 = e/h-1$. We identify $S_{\rm final}$ with the energy estimator $E$, and
replace $S_{\rm corr}$ by $S$ because the leakage corrections have already been made.
From context, $e/h$ is $e/h|_S$. The equation then becomes
\begin{eqnarray}
E &= & S\,\frac{e/h|_S}{1 + f_{em}(e/h|_S -1)}\\
&=& \frac{S}{\eta_S + f_{em}(1-\eta_S)}\ ,
\end{eqnarray}
which we recognize as just a rearrangement of Eq.~\ref{eqn:S}.
It remains to insert the expression for $f_{em}$ into this equation.
Simplification of the result is fairly tedious, but finally yields
\begin{equation}
E = S\,\left[ \frac{(1-\eta_Q) - (Q/S)(1-\eta_S)}{\eta_S - \eta_Q}\right] \ .
\label{eqn:EviaQoverS}
\end{equation}
\section{Direct solution}
We can write the simultaneous equations \ref{eqn:Q} and \ref{eqn:S} as
\begin{equation}
\left( \matrix{ Q &-(1- \eta_Q)\cr S& -(1- \eta_S) }\right)
\left( \matrix{1/E \cr f_{em} }\right)
= \left( \matrix{\eta_Q \cr \eta_S }\right)
\label{eqn:matrixversion}
\end{equation}
with immediate solutions
\begin{eqnarray}
E &=&S\,\left[\displaystyle \frac {(1- \eta_Q) - (Q/S)(1- \eta_S)}{\eta_S - \eta_Q}\right]
\label{eqn:Elongform} \\
f_{em} &=& \frac {(Q/S)\eta_S-\eta_Q} {(1-\eta_Q) -(Q/S)(1- \eta_S) }
\label{eqn:femSolution}
\end{eqnarray}
for the \it estimators \rm of $E$ and $f_{em}$ on an event-by-event basis. This method has
been published elsewhere\cite{groom07,RPP12}, but the identity of the approach
with the ``$Q/S$ method'' was not previously recognized.
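The algebra above is easily checked numerically. The following minimal round-trip sketch synthesizes $Q$ and $S$ from assumed values of $E$, $f_{em}$, $\eta_Q$, and $\eta_S$, and then inverts Eqs.~\ref{eqn:Elongform} and~\ref{eqn:femSolution} (the numbers are purely illustrative, not DREAM calibration constants):
\begin{verbatim}
# Round-trip check of the direct solution; all values illustrative.
E_true, fem_true = 100.0, 0.4
eta_Q, eta_S = 0.21, 0.77      # assumed efficiency ratios h/e|_Q, h/e|_S

Q = E_true * (fem_true + (1.0 - fem_true) * eta_Q)
S = E_true * (fem_true + (1.0 - fem_true) * eta_S)

rho = Q / S
E = S * ((1.0 - eta_Q) - rho * (1.0 - eta_S)) / (eta_S - eta_Q)
fem = (rho * eta_S - eta_Q) / ((1.0 - eta_Q) - rho * (1.0 - eta_S))
print(E, fem)                  # recovers 100.0 and 0.4
\end{verbatim}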
\section{Discussion}\label{sec:discuss}
\begin{figure}
\centerline{\includegraphics[height=3in]{Q_vs_S_locus_QoS-ai10}}
\caption{Energy-independent event locus in the $Q/E$--$S/E$ plane. With increased energy, resolution improves and the mean moves upward along the locus.}
\label{fig:locus}
\end{figure}
In part because of the relatively small number of particles involved early in a hadronic cascade,
the efficiency with which the hadronic energy deposit is visible in either the Cherenkov or scintillator channel varies from event to event. In contrast, the efficiency with which the EM deposit is detected varies little. The result is that $\eta_Q$ and $\eta_S$ are stochastic variables, mostly reflecting the variation of $h$. The values of $\eta_Q$ and $\eta_S$ required to compute the energy estimator for each event via Eq.~\ref{eqn:EviaQoverS} (or Eq.~\ref{eqn:Elongform})
are not only unknown but unknowable, given ``only'' dual readout.
In actual data reduction, there is little choice but to replace them by their mean values:
\begin{equation}
E =S\,\left[\displaystyle
\frac {(1- \vev{\eta_Q}) - (Q/S)(1-\vev{\eta_S})}{\vev{\eta_S} - \vev{\eta_Q}}\right]
\label{eqn:aveElongform}
\end{equation}
It is also useful to rewrite Eqs.~\ref{eqn:Q} and \ref{eqn:S}:
\begin{eqnarray}
\vev{Q/E} &=& f_{em} + (1-f_{em}) \, \vev{\eta_Q}
\label{eqn:meanQoE}\\
\vev{S/E} &=& f_{em}+ (1- f_{em}) \, \vev{\eta_S}
\label{eqn:meanSoE}
\end{eqnarray}
Since $\vev{Q/E}$ and $\vev{S/E}$ are linear in $f_{em}$, $\vev{Q/E}$ is a linear function
of $\vev{S/E}$, describing a line segment from $(\vev{Q/E} , \vev{S/E}) = (\vev{\eta_Q}, \vev{\eta_S})$ at the all-hadronic extreme, $f_{em}=0$, to $(\vev{Q/E} , \vev{S/E}) = (1,1)$,
at the all-EM extreme, $f_{em}=1$. This event locus is shown in Fig.~\ref{fig:locus}. As the energy increases, the Monte Carlo event scatter shown in the figure moves upward and becomes more
clustered as the resolution improves.
The energy-independent event locus has slope
\begin{equation}
R = \displaystyle \frac {1- \vev{\eta_Q}}{1- \vev{\eta_S }} \ .
\label{eqn:slopedefinition}
\end{equation}
This slope can be determined either by linear fits to monoenergetic (test beam) event distributions in the $Q/E$--$S/E$ plane, or, perhaps more accurately,
by separately finding $\vev{\eta_Q}$ and $\vev{\eta_S}$ via $\pi/e$ measurements as a function of energy. It can be used to cast Eq.~\ref{eqn:Elongform} into a more tractable
form\cite{groom07,RPP12}:
\begin{equation}
E = \frac{RS-Q}{R-1}
\label{eqn:Eshortform}
\end{equation}
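In code, Eq.~\ref{eqn:Eshortform} reduces the event-by-event correction to a single line; a minimal sketch with illustrative numbers:
\begin{verbatim}
import numpy as np

# Event-by-event estimator E = (R*S - Q)/(R - 1); R is the locus slope,
# obtained from test-beam fits.  All numbers below are illustrative.
R = (1.0 - 0.21) / (1.0 - 0.77)    # assumed <eta_Q> = 0.21, <eta_S> = 0.77
Q = np.array([52.6, 60.1, 71.3])   # Cherenkov signals
S = np.array([86.2, 88.0, 91.5])   # scintillator signals
print((R * S - Q) / (R - 1.0))     # energy estimators, event by event
\end{verbatim}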
Since $f_{em}$ is not needed in data reduction, it is only of academic interest. Experimental distributions based on DREAM data are shown in Refs.~\cite{Wigmans_Perugia_04} and \cite{DREAM05} (Ak05). These are broadened by resolution effects, and so do not necessarily conform to $0\leq f_{em}\leq 1$.
\section*{Acknowledgments}
John Hauptman's critical comments and suggestions have been particularly helpful.
This work was supported by the U.S. Department of
Energy under Contract No.\ DE-AC02-05CH11231.
\section{Introduction}
With the recent advances in the fabrication of passive materials and metamaterials \cite{smith, pendry2004}, it seems plausible that novel classes of active materials may be developed. While the basic electromagnetic concepts, such as microscopic and relativistic causality, certainly apply to active media, the analysis of electromagnetic wave propagation may differ. For example, at normal incidence to a boundary between two passive media, the transmitted field is proportional to $\exp(in\omega z/c)$, where $n$ is the refractive index, $\omega$ is the angular frequency, $z$ is the coordinate in the direction away from the boundary, and $c$ is the vacuum velocity of light. Since $z$ is arbitrary, relativistic causality dictates $n$ to be an analytic function in the upper half of the complex $\omega$-plane \cite{nussenzveig,brillouin}. On the other hand, at least in principle there exist realizable, active media for which $n$ cannot be identified as an analytic function in the upper half-plane. Indeed, consider a Lorentz model with relative permittivity $\epsilon=1-f$, where $f$ is given by
\begin{equation}
\label{lorentz}
f(\omega)=\frac{F\omega_{0}^2}{\omega_{0}^2-\omega^2-i\omega\Gamma}.
\end{equation}
Here, $\omega_{0}$ and $\Gamma$ are positive parameters. If $F>0$ this medium is active; furthermore, if $F>1$, $\epsilon$ has a simple zero in the upper half-plane. Thus, assuming the relative permeability $\mu=1$, when $F>1$ the refractive index $n=\sqrt{\epsilon}\sqrt{\mu}$ contains a branch point in the upper half-plane.
These media would seem to violate relativistic causality. This is certainly not the case; to resolve this apparent paradox, I will provide a rigorous derivation of the Fresnel equations in the general case, based on the electromagnetic solutions of a finite slab. The definition of the refractive index is generalized so that it is valid for active media. It will become clear that when the refractive index contains branch points in the upper half-plane, the Fresnel equations and the refractive index do not have unique physical meaning for real frequencies. The Kramers-Kronig relations for the refractive index must be modified accordingly.
As an extra bonus, the general framework in this paper can be used to analyze novel classes of active media. Recently, it has been suggested that certain active, nonmagnetic media with simultaneously passive and active material resonances can yield negative refraction \cite{chen2005}. I will consider the Fresnel equations for these types of materials, and demonstrate that at normal incidence, the ``backward'' wave is excited so that both phase velocity and Poynting's vector point towards the excitation source. While negative refraction is found at oblique incidence, it is shown that evanescent field amplification, similarly to that in the Veselago-Pendry lens \cite{veselago,pendry2000}, does not happen.
The remainder of the paper is structured as follows: In Section II the theory of the electromagnetic parameters is reviewed. Section III contains an electromagnetic analysis of a finite slab surrounded by vacuum. From the resulting fields, in Section IV the Fresnel equations are proved in the general case, and the associated refractive index is identified. The results are interpreted in the form of examples (Section V): inverted Lorentzian media, and more complex media featuring negative refraction.
\section{The electromagnetic parameters}
Any electromagnetic medium must be causal in the microscopic sense; the polarization and magnetization cannot precede the electric and magnetic fields, respectively. This means that the relative permittivity $\epsilon(\omega)$ and the relative permeability $\mu(\omega)$ obey the Kramers-Kronig relations. In terms of the susceptibilities $\chi=\epsilon-1$ or $\chi=\mu-1$, these relations can be written
\begin{subequations}
\label{KKchi}
\begin{align}
&\im \chi=\mathcal H\,\re \chi,\label{KKchi1}\\
&\re \chi=-\mathcal H\, \im \chi,\label{KKchi2}
\end{align}
\end{subequations}
where $\mathcal H$ denotes the Hilbert transform \cite{kronig,kramers,nussenzveig,landau_lifshitz_edcm}. These conditions are equivalent to the fact that $\chi$ is analytic in the upper half-plane ($\im\omega>0$), and uniformly square integrable in the closed upper half-plane (Titchmarsh' theorem) \cite{nussenzveig,titchmarsh}\footnote{If the medium is conducting at zero frequency, the electric $\chi$ is singular at $\omega=0$. Although $\chi$ is not square integrable in this case, similar relations as \eqref{KKchi} can be derived \cite{landau_lifshitz_edcm}.}. The susceptibilities are defined for negative frequencies by the symmetry relation
\begin{equation}
\chi(-\omega)=\chi^*(\omega), \label{sym}
\end{equation}
so that their inverse Fourier transforms are real.
For passive media, in addition to \eqref{KKchi} and \eqref{sym} we have:
\begin{equation}
\im \chi(\omega)>0 \text{ for } \omega>0. \label{loss}
\end{equation}
The losses, as given by the imaginary parts of the susceptibilities, can be vanishingly small; however they are always present unless we are considering vacuum \cite{landau_lifshitz_edcm}. Eqs. \eqref{KKchi}-\eqref{loss} imply that $\im\chi>0$ in the first quadrant ($\re\omega>0$ and $\im\omega\geq 0$), and that $1+\chi$ is zero-free in the upper half-plane \cite{landau_lifshitz_edcm}. Thus for passive media, the refractive index $n=\sqrt{\epsilon}\sqrt{\mu}$ can always be chosen as an analytic function in the upper half-plane. With the additional choice that $n\to +1$ as $\omega\to\infty$, $n$ is determined uniquely by
\begin{equation}
\label{nfromepsilonmu}
n=\sqrt{|\epsilon||\mu|}\exp[i(\arg\epsilon+\arg\mu)/2],
\end{equation}
where the complex arguments are restricted to the interval $(-\pi,\pi]$. From Eq. \eqref{nfromepsilonmu} it is immediately found that \eqref{sym} and \eqref{loss} hold for the substitution $\chi\to n$. Moreover, the Kramers-Kronig relations for the refractive index are established using Titchmarsh' theorem by noting that $n-1$ is analytic in the upper half-plane, and that $n-1$ has the required asymptotic behavior: The fact that $\chi$ satisfies \eqref{KKchi} means that $\chi$ satisfies the square integrability condition. From $n=\sqrt{\epsilon}\sqrt{\mu}$, it follows that $n-1$ has a similar asymptotic behavior. This gives
\begin{subequations}
\label{KKn}
\begin{align}
&\im n = \mathcal H\,\{\re n-1\},\label{KKn1}\\
&\re n-1 =-\mathcal H\, \im n.\label{KKn2}
\end{align}
\end{subequations}
On the basis of causality, it is clear that the susceptibilities of active media also satisfy \eqref{KKchi} and \eqref{sym}. However, since \eqref{loss} does not hold for active media, it is no longer true that $\epsilon$ and $\mu$ are zero-free in the upper half-plane. Indeed, we have already seen that the permittivity associated with an inverted Lorentz model may have a simple zero in the upper half-plane. Thus, the refractive index cannot always be chosen as an analytic function in the upper half-plane.
To clarify this point, I will consider one-dimensional electromagnetic wave propagation in an active medium. The main goal is to identify the reflection and transmission at a single interface, and to prove that these waves obey relativistic causality. The Fresnel equations could be proved in the conventional way; however, it might not be clear {\it a priori} how to specify the wavevector of the transmitted wave in the active medium. In fact, contrary to the case with passive media, it is not necessarily correct to let the Poynting vector of the transmitted wave point away from the interface. Moreover, it is not always possible to choose the wavevector as an analytic function for $\im\omega>0$. Therefore, the active medium is assumed to have finite thickness initially. Then the transmitted wavevector at the far end of the slab can be specified trivially, and the electromagnetic solutions can be found. Subsequently, using the result of the finite slab, the fields of a half-infinite medium can be determined.
\section{Laplace transform analysis of an active slab}
\begin{figure}
\includegraphics[height=4.5cm,width=4.5cm]{slab.eps}
\caption{A plane wave incident to a slab of thickness $d$.}
\label{fig:slab}
\end{figure}
Consider a plane wave in vacuum, normally incident on a linear, isotropic, and homogeneous slab with parameters $\epsilon$ and $\mu$, see Fig. \ref{fig:slab}. While vacuum is chosen as the surrounding medium, it should be clear that any other passive media yield similar results. The slab is located in the interval $0\leq z\leq d$, where $d$ is the thickness of the slab. A causal excitation on the left-hand side of the slab ($z=0^-$) is assumed, i.e., the fields for $z\geq 0$ vanish for time $t\leq 0$. Since the medium is active, the physical, time-domain fields may diverge as $t\to\infty$, so Fourier transformed fields do not necessarily exist. (Strictly speaking, in practical materials, the fields do not diverge. When the fields become sufficiently strong, they deplete the gain. However, we restrict ourselves to the linear regime of the susceptibilities, and bear in mind that the model breaks down when the fields become large. The divergences are indicators of instabilities such as, for example, the ignition of lasing.) Thus, instead of Fourier transforms, we use Laplace transforms of the fields. Denoting the physical, time-domain electric field $\mathcal E(z,t)$, the Laplace transformed field for $z\geq 0$ is
\begin{equation}
\label{laplzgeq0}
E(z,\omega)=\int_0^\infty \mathcal E(z,t)\exp(i\omega t)\diff t,
\end{equation}
where $\im\omega\geq\gamma$, for a sufficiently large, real constant $\gamma$. Assuming that the source is located at $z=z_{\text{s}}<0$, \eqref{laplzgeq0} still applies for $z_{\text{s}}<z<0$ provided the lower limit in the integral is replaced by $z_{\text{s}}/c$. Since the fields everywhere are assumed to vanish for $t$ equal to the lower limit in the integral, the partial time derivatives in Maxwell's equations become simply $-i\omega$ in the transform domain. At the boundaries the Laplace transformed electric and magnetic fields must be continuous. Thus we can use a similar analysis method as in the Fourier case, considering the transformed fields at each $\omega$. Assuming the forward propagating electric field $\mathcal E^+(z,t)$ for $z<0$, and recalling that $\mathcal E(z,t)$ for $z>d$ is assumed to vanish for negative time, we can write
\begin{eqnarray}
&& E(z,\omega)/E^+(0,\omega) \\
&& =
\begin{cases}
\exp(i\omega z/c)+R\exp(-i\omega z/c) & \text{for $z_{\text{s}}<z<0$},\\
S^+\exp(ikz)+S^-\exp(-ikz) & \text{for $0\leq z\leq d$},\\
T\exp(i\omega z/c) & \text{for $z > d$},
\end{cases} \nonumber
\end{eqnarray}
where $E^+(0,\omega)$ is the Laplace transform of the excitation $\mathcal E^+(0,t)$. By matching the electric and magnetic fields at both interfaces, the total reflection coefficient $R$, field amplitudes in the slab ($S^+$ and $S^-$), and transmission coefficient $T$ are found to be
\begin{subequations}
\label{slabrt}
\begin{align}
&R=\frac{(\eta^2-1)\exp(-ikd)-(\eta^2-1)\exp(ikd)}{(\eta+1)^2\exp(-ikd)-(\eta-1)^2\exp(ikd)}, \label{slabr}\\
&S^+=\frac{2\eta(\eta+1)}{(\eta+1)^2-(\eta-1)^2\exp(2ikd)}, \label{slabsp}\\
&S^-=\frac{2\eta(\eta-1)}{(\eta-1)^2-(\eta+1)^2\exp(-2ikd)},\label{slabsm}\\
&T=\frac{4\eta}{(\eta+1)^2\exp(-ikd)-(\eta-1)^2\exp(ikd)}.\label{slabt}
\end{align}
\end{subequations}
Here, $k=n\omega/c$, $\eta=\mu/n$, and $n=\sqrt{\epsilon\mu}$. Note that $R$, $S\equiv S^+\exp(ikz)+S^-\exp(-ikz)$, and $T$ are unchanged if $n\to-n$, so the choice of the sign of $n$ does not matter. In other words, only even powers of $n$ are present in $R$, $S$, and $T$, which means that the fields contain no branch points in the upper half-plane.
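This invariance, and the formulae themselves, are easy to check numerically; a minimal sketch of Eqs.~\eqref{slabr} and \eqref{slabt} (the parameter values are illustrative):
\begin{verbatim}
import numpy as np

def slab_RT(n, mu, w, d, c=1.0):
    # Reflection and transmission of a slab, Eqs. (slabr) and (slabt)
    eta, k = mu / n, n * w / c
    den = (eta + 1)**2 * np.exp(-1j*k*d) - (eta - 1)**2 * np.exp(1j*k*d)
    R = (eta**2 - 1) * (np.exp(-1j*k*d) - np.exp(1j*k*d)) / den
    T = 4 * eta / den
    return R, T

n, mu, w, d = 1.2 - 0.05j, 1.0, 2.0, 3.0   # illustrative active medium
print(slab_RT(+n, mu, w, d))
print(slab_RT(-n, mu, w, d))   # identical: only even powers of n enter
\end{verbatim}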
For active media, $R$, $S$, and $T$ may contain poles in the upper half-plane.
Despite poles, the fields are relativistically causal. This fact is established using the analyticity of the respective functions in the half-plane $\im\omega>\gamma$ and their asymptotic behavior in that region. For example, $T$ tends to $\exp(i\omega d/c)$ as $\im\omega\to\infty$, which means that the signal-front delay through the slab is $d/c$ \footnote{\label{signalfrontdelay}Let $F(s)$ be the Laplace transform of the Laplace transformable function $f(t)$. A well-known result in the theory of Laplace transforms states that if $F(-i\omega)\exp(-i\omega\tau)\to 0$ as $\im\omega\to\infty$, then the inverse transform of $F$ is zero for $t<\tau$ (See for example \cite{papoulis}).}.
The time-domain solutions are found by inverse Laplace transforms of the associated fields. For example, if the excitation is $\mathcal E^+(0,t)=u(t)\exp(-i\omega_1 t)$, where $u(t)$ is the unit step function and $\im\omega_1<0$, the time-domain field in the slab is \footnote{Any physical excitation is real and can be represented by two complex terms, e.g., $u(t)\exp(-i\omega_1 t)+u(t)\exp(i\omega_1^* t)$. This excitation would require the substitution $1/(i\omega_1-i\omega)\to 1/(i\omega_1-i\omega)+1/(-i\omega_1^*-i\omega)$ in \eqref{invlapl}. For clarity it is often convenient to consider the positive frequency excitation separately.}
\begin{equation}
\label{invlapl}
\mathcal E(z,t)=\frac{1}{2\pi}\int_{i\gamma-\infty}^{i\gamma+\infty} \frac{S}{i\omega_1-i\omega}\exp(-i\omega t)\diff\omega.
\end{equation}
When $S$ has no poles in the closed upper half-plane, $\gamma$ can be set to zero. It follows then that the field $S$ can be interpreted in the usual way at real frequencies. When $S$ contains poles in the upper half-plane, the integration path in \eqref{invlapl} should be located above all of them. If we still insist on locating the integration path along the real axis, the resulting discrepancy would correspond to the sum of residues of the poles in the upper half-plane. Such residues are exponentially increasing and demonstrate clearly the fact that when $S$ contains poles in the upper half-plane, the slab is electromagnetically unstable.
\section{Fresnel equations}
To facilitate the analysis and interpretation of active media, it is useful to obtain the fields when the slab fills the entire half-plane $z\geq 0$. For passive media, one can take the limit $d\to\infty$ in \eqref{slabrt} to obtain
\begin{subequations}
\label{fresnel}
\begin{align}
&R=\frac{\eta-1}{\eta+1},\label{fresnelr}\\
&S=\frac{2\eta}{\eta+1}\exp(in\omega z/c). \label{fresnels}
\end{align}
\end{subequations}
In obtaining \eqref{fresnel} the sign of $n$ has been chosen such that $\im n$ is positive in the first quadrant ($\re\omega>0$ and $\im\omega\geq 0$). For passive media, this choice is equivalent to the choice in \eqref{nfromepsilonmu}, or in the conventional definition of $n$ based on analyticity. Eqs. \eqref{fresnel} are essentially the Fresnel equations; however, the propagation factor $\exp(in\omega z/c)$ has been included in the transmitted field, to emphasize the position dependence of the field in the active medium.
For active media, one could guess that \eqref{fresnel} remains valid, but it is not clear {\it a priori} how to choose $n$. The imaginary parts of $\epsilon$ and $\mu$ may take both signs in the first quadrant, and it may not be possible to choose $n$ as an analytic function in the upper half-plane. Surprisingly, as shown by the following example, taking the limit $d\to\infty$ in \eqref{slabrt} leads in general to unphysical results. Consider a slab of an inverted Lorentzian medium $\epsilon=1-f$ and $\mu=1$, where $f$ is given by \eqref{lorentz}. Assuming $F$ is small (corresponding to low gain), $n\approx \pm(1-f/2)$. In the limit $d\to\infty$, we find $R=(\eta+1)/(\eta-1)$ and $S=2\eta\exp(-in\omega z/c)/(\eta-1)$, where $n=1-f/2$. This solution is clearly unphysical as $R,S\to\infty$ as $\omega\to\infty$; a convergence half-plane $\im\omega\geq\gamma$ does not exist.
To find the correct solution, we start with the time-domain solution \eqref{invlapl}, where $d$ is finite. Due to the finite bandwidth of physical media, $|\chi|\propto 1/\omega^2$ as $\omega\to\infty$ in the upper half-plane \cite{nussenzveig}. Thus, for a sufficiently large $|\omega|$, $\epsilon\approx\mu\approx 1$. Choosing the sign of $n$ such that $n\approx 1$ here leads to $|n-1|\propto 1/\omega^2$ for $\omega\to\infty$. This implies in turn that there exists a $\gamma>0$ such that $|[(\eta-1)/(\eta+1)]\exp(ikd)|<1$ for any $\omega$ with $\im\omega\geq\gamma$. We can now expand \eqref{slabsp} and \eqref{slabsm} into geometrical series in $[(\eta-1)/(\eta+1)]^2\exp(2ikd)$. Hence, $S\equiv S^+\exp(ikz)+S^-\exp(-ikz)$ becomes
\begin{eqnarray}
S&&=\frac{2\eta e^{ikz}}{\eta+1}\left[1+\left(\frac{\eta-1}{\eta+1}\right)^2 e^{2ikd}+\ldots \right] \\
&&-\frac{2\eta(\eta-1)e^{2ikd-ikz}}{(\eta+1)^2}\left[1+\left(\frac{\eta-1}{\eta+1}\right)^2 e^{2ikd}+\ldots \right] \nonumber
\end{eqnarray}
After substitution of $S$ into \eqref{invlapl}, and using the signal-front delay property of the Laplace transform (footnote \ref{signalfrontdelay}), it is apparent that only the first term leads to a nonzero result for $t<(2d-z)/c$. This term coincides with \eqref{fresnels}. In a similar way, we find that for time $t<2d/c$, the reflected wave at $z=0^-$ is given by the term \eqref{fresnelr} substituted for $S$ in \eqref{invlapl}. Now we can clearly take the limit $d\to\infty$. This will not alter \eqref{fresnel}, but will ensure the validity of the associated time-domain solutions for all times. The solution has now been found; however, the evaluation of the fields \eqref{fresnel} far away in the upper half-plane is impractical. Fortunately, by Cauchy's integral theorem we can move the line $\gamma$ down towards the first branch point of $n$ or zero of $\eta+1$. If there are no such points in the closed upper half-plane, we can even evaluate \eqref{fresnel} on the real axis. Note that although the original functions $R$ and $S$ from \eqref{slabrt} are not analytic in the upper half-plane, the terms \eqref{fresnel} are analytic provided $\eta+1$ has no zeros and $n$ is analytic. Also note that in all cases, the expressions \eqref{fresnel} are analytic in some region $\im\omega>\gamma$, and $S$ tends to $\exp(i\omega z/c)$ as $\im\omega\to\infty$. In other words, $\mathcal E(z,t)=0$ for $t<z/c$ (footnote \ref{signalfrontdelay}), and relativistic causality is guaranteed.
We conclude that the Fresnel equations \eqref{fresnel} retain their form, but not always their interpretation for active media. When $\epsilon\mu$ has no odd-order zeros in the upper half-plane, the sign of the refractive index is determined such that $n\to+1$ as $\omega\to\infty$, and such that $n$ is analytic in the upper half-plane. Furthermore, if $\eta\neq -1$ everywhere in the closed upper half-plane, the Fresnel equations \eqref{fresnel} can be evaluated and interpreted in the usual way along the real frequency axis. On the other hand, if $\epsilon\mu$ has odd-order zeros, or $\eta+1$ has zeros in the upper half-plane, \eqref{fresnel} contain points of non-analyticity. The reflected and transmitted fields \eqref{fresnel} should then be evaluated along the line $\im\omega=\gamma$, where $\gamma$ is larger than the maximum $\im\omega$ of the points of non-analyticity. The sign of the refractive index is determined so that $n\to +1$ as $\omega\to\infty$, and so that $n$ is analytic for $\im\omega\geq\gamma$. Note that when $\epsilon\mu$ has odd-order zeros in the upper half-plane, $n$ is not uniquely determined along the real frequency axis, and consequently, one must be careful with its interpretation there (See Appendix A).
For non-orthogonal incidence we can use a similar argument as that given above. When the incident wave vector is ${\bf k}=(k_x,k_y,k_z)$, the TE fields of a finite slab are given by the substitutions $\eta\to\mu k_z/k_z'$ and $k\to k_z'$ in \eqref{slabrt}. Here, $k_z'^2=\epsilon\mu\omega^2/c^2-k_x^2-k_y^2$. Again, we note that the choice of sign of $k_z'$ does not matter as only even powers of $k_z'$ are present. Following the argument leading to \eqref{fresnel} for active media, we arrive at the similar Fresnel expressions
\begin{subequations}
\label{fresneloblique}
\begin{align}
&R=\frac{\mu k_z-k_z'}{\mu k_z+k_z'},\label{fresnelroblique}\\
&S=\frac{2\mu k_z}{\mu k_z+k_z'}\exp(ik_z' z), \label{fresnelsoblique}
\end{align}
\end{subequations}
valid for $\im\omega\geq\gamma$. Here, the sign of $k_z'$ is determined such that $k_z'$ is analytic for $\im\omega>\gamma$, and such that $k_z'\to+\omega/c$ as $\omega\to\infty$. The parameter $\gamma$ is chosen larger than the maximum imaginary part of branch points of $k_z'$ or zeros of $\mu k_z+k_z'$. If there are no such points in the closed upper half-plane, $\gamma$ can be chosen to be zero, i.e., the Fresnel equations can be interpreted along the real frequency axis in the usual way.
Since the refractive index is analytic in the half-plane $\im\omega>\gamma$, and since $n-1$ has the required asymptotic behavior for $\im\omega\geq\gamma$, the Kramers-Kronig relations for active media can still be written as in \eqref{KKn}; however, the integrals are taken along the line $\im\omega=\gamma$. Note that the conventional Kramers-Kronig relations for $n$, as integrated and evaluated along the real frequency axis, are not valid.
\section{Examples}
\subsection{Inverted Lorentzian media}\label{example1}
In this example, we consider an inverted Lorentzian medium with $\epsilon=1-f$, where $f$ is given by \eqref{lorentz}, and $\mu=1$. Since $f\neq 0$ everywhere, $\eta+1$ has no zeros. Conventional optical gain media, such as the Erbium-doped silica fiber, have relatively low gain per length of fiber. We will therefore first consider the well-known case where $f$ satisfies $|f|\ll 1$. Then $\epsilon$ has no zeros, and following the recipe above, we find that the Fresnel equations \eqref{fresnel} can be interpreted straightforwardly for real frequencies, and $n\approx 1-f/2$.
Lifting the assumption of low gain, we observe that the zeros of $\epsilon$ are located at $\omega_{\pm}=-i\Gamma/2\pm i\sqrt{\Gamma^2/4+(F-1)\omega_0^2}$. When $F>1$, the zero $\omega_{+}$ is located in the upper half-plane. Consequently, in this case neither $n$ nor the Fresnel equations have unique physical meaning along the real frequency axis. The fields should instead be evaluated along the line $\im\omega=\gamma$, where $\gamma$ is chosen such that $\gamma>\im\omega_{+}$.
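The location of these zeros is readily verified numerically; a minimal sketch (the parameter values are illustrative):
\begin{verbatim}
import numpy as np

# Zeros of eps = 1 - F*w0**2/(w0**2 - w**2 - 1j*Gam*w), i.e. the roots
# of w**2 + 1j*Gam*w + (F - 1)*w0**2 = 0.  Values are illustrative.
w0, Gam, F = 1.0, 0.1, 1.5
roots = np.roots([1.0, 1j * Gam, (F - 1.0) * w0**2])
w_plus = -0.5j*Gam + 1j*np.sqrt(0.25*Gam**2 + (F - 1.0)*w0**2)
print(roots)     # for F > 1, one root lies in the upper half-plane
print(w_plus)    # agrees with the closed-form zero omega_+
\end{verbatim}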
It is useful to recall that the fields, as given for $\im\omega>\im\omega_+$, can be interpreted by choosing an exponentially increasing excitation. For example, we may choose an excitation $\mathcal E^+(0,t)=u(t)\exp(-i\omega_1 t)$, where $\im\omega_+<\im\omega_1<\gamma$. For sufficiently large $t$, the time-domain field \eqref{invlapl} gets the main contribution from the residue of $S\exp(-i\omega t)/(i\omega_1-i\omega)$ at $\omega_1$. In other words, $S$ is roughly the complex amplitude of the exponentially increasing wave. The resulting field dependence $\exp(in\omega_1 z/c-i\omega_1 t)$ means that the wave has its ``phase velocity'' in the positive $z$-direction. (For this medium $\re n>0$ in the first quadrant of complex frequency.)
The exponentially increasing excitation is somewhat artificial, and it would be interesting to evaluate the time-domain solution when $\im\omega_1<0$, i.e., when the excitation pulse envelope is exponentially decreasing. In Appendix A, the time-domain field at a fixed $z$ is evaluated, and it is argued that the pulse diverges as $t\to\infty$. Note that the field in the medium diverges as a result of the medium alone, not as a result of, for instance, amplified multiple reflections.
\subsection{Excitation of a ``backward'' wave}\label{example2}\label{backexcitation}
Consider the medium $\epsilon=(1+f)^2$ and $\mu=1$, where $f$ is still given by \eqref{lorentz}. The associated susceptibilities satisfy \eqref{KKchi} and \eqref{sym}, so at least in principle, this medium is realizable. The product $\epsilon\mu$ is zero-free in the upper half-plane, and we can determine the refractive index such that it is analytic in the upper half-plane (and such that $n\to+1$ as $\omega\to\infty$). This gives $n=1+f$ and $\eta=1/(1+f)$. Note that $\eta\neq-1$ for $\im\omega\geq 0$. It follows that the Fresnel equations \eqref{fresnel} can be interpreted straightforwardly along the real frequency axis.
When $F$ is sufficiently large, there exists a real frequency $\omega_1=\omega_0\sqrt{1+F/2}$ with $\re n(\omega_1)\approx -1$ while $\im n(\omega_1)\approx \frac{4\Gamma}{F\omega_0}\sqrt{1+F/2}$ is small. This seems to be a potential route to negative refractive index at optical frequencies \cite{chen2005}. Since $\re\epsilon(\omega_1)\approx 1$ and $\mu=1$, not only the phase velocity but also the Poynting vector point in the negative $z$-direction at this frequency. As $\im n(\omega_1)>0$ and $\im\epsilon(\omega_1)<0$, the field can be interpreted as a growing wave propagating in negative $z$-direction. Nevertheless, the excitation of this ``backward'' wave does neither rely on finiteness of the thickness of the active medium nor excitation from $z=+\infty$. Note that any causal excitation involves an infinite band of frequencies, and outside the limited negative-index band, the phase velocity and Poynting vector point in the forward direction.
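These estimates are straightforward to verify numerically; a minimal sketch using the parameter values of Fig.~\ref{fig:backwave}:
\begin{verbatim}
import numpy as np

# n = 1 + f, with f(w) the Lorentzian defined earlier; parameters match
# those of the backward-wave figure (illustrative).
w0, Gam, F = 1.0, 0.1, 10.0
f = lambda w: F * w0**2 / (w0**2 - w**2 - 1j * Gam * w)
w1 = w0 * np.sqrt(1.0 + F / 2.0)
print(1.0 + f(w1))                       # Re n close to -1, small Im n > 0
print(4*Gam*np.sqrt(1 + F/2) / (F*w0))   # predicted Im n(w1)
\end{verbatim}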
For this medium, the transmitted time-domain field at $z=0^+$ can be calculated analytically. In this case we choose a real excitation of the form $\mathcal E^+(0,t)=u(t)\cos(\omega_1 t)$, where $\omega_1=\omega_0\sqrt{1+F/2}$. With the help of the corresponding Laplace transform and the Fresnel transmission coefficient $2/(n+1)$, we find after some simple algebra:
\begin{equation}
\label{invlaplex2}
\mathcal E(0,t)=\frac{i}{2\pi}\int_{i\gamma-\infty}^{i\gamma+\infty} \frac{(\omega^2+i\omega\Gamma-\omega_0^2)\omega\exp(-i\omega t)}{(\omega-\omega_2)(\omega-\omega_{-2})(\omega^2-\omega_1^2)}\diff\omega.
\end{equation}
Here $\gamma=0^+$, and $\omega_{\pm 2}=\pm\omega_1-i\Gamma/2$ to first order in $\Gamma/\omega_1$. The inverse transform \eqref{invlaplex2} can be found exactly by calculating the residues in the closed, lower half-plane. Assuming $\Gamma/\omega_1\ll 1$, this procedure yields
\begin{equation}
\label{invlaplex2f}
\mathcal E(0,t)=
\begin{cases}
\cos(\omega_1 t), & \text{for $0<\Gamma t/2\ll 1$}, \\
-\frac{2}{\im n(\omega_1)}\sin(\omega_1 t), & \text{for $\Gamma t/2\gg 1$}.
\end{cases}
\end{equation}
(The steady-state solution could certainly be found directly from \eqref{fresnels}.) We observe that the transmitted field is equal to the excitation for small $t$. Thus, initially the reflection from the boundary vanishes. On the other hand, when $t$ is large, the field has grown considerably. This fact is interpreted by noting that the ``backward wave'', with phase velocity and Poynting's vector in the negative $z$-direction, has been built up. This wave draws energy from the active medium, yielding a large field at the boundary. Note that the ``reflected wave'' on the left-hand side of the boundary is now also large. Nevertheless, the fields are produced in a stable manner. A plot of the backward wave and the associated, forward-propagating forerunner is given in Fig. \ref{fig:backwave}.
\begin{figure}[t!]
\includegraphics[height=5.5cm,width=7.5cm]{backwave.eps}
\caption{The electric field $\mathcal E(z,t)$ of the example in Subsection \ref{backexcitation}. The frequency $\omega_0=1$ (normalized), and $t=30$. The parameters used are $\Gamma/\omega_0=0.1$ and $F=10$. The field was calculated by a straightforward numerical evaluation of \eqref{invlaplex2} using $\gamma=0.01\omega_0$. The field plot for small $z$ shows the ``backward wave'', with phase velocity and Poynting's vector in the $-z$-direction. The forerunner, located close to $z=30c$, moves at a speed $c$ in the $+z$-direction.}
\label{fig:backwave}
\end{figure}
At oblique incidence the backward wave is refracted negatively. Although this may seem obvious intuitively, the fact can be proved by calculating $k_z'$ and identifying its sign along the real frequency axis using the procedure outlined above. Despite negative refraction, the medium has rather different properties from the left-handed negative-index medium considered by Veselago and Pendry \cite{veselago,pendry2000}. Considering a finite slab of sufficiently small thickness such that $T$ has no poles in the upper half-plane, we find $T\approx\exp(+i\omega_1 d/c)$ at the frequency $\omega_1$. Also, evanescent-field amplification, as in the Veselago-Pendry lens, will not arise in this medium. For TE polarization this is seen by the substitutions $\eta\to\mu k_z/k_z'$ and $k\to k_z'$ in \eqref{slabt}. As before, $k_z'^2=\epsilon\mu\omega^2/c^2-k_x^2-k_y^2$. For an incident evanescent field with sufficiently large $k_y$, $k_z'$ is close to an imaginary number. Substitution into \eqref{slabt} gives exponential decay. Note that the choice of sign of $k_z'$ does not matter.
An interesting point arises by comparing the results in this example with those of the previous example. Assuming low gain in the previous example, $\epsilon\approx 1-i\alpha$ for some real frequency $\omega_1$. Here, $\alpha$ is a small positive number. In the present example, at the frequency $\omega_1$ we can also write $\epsilon\approx 1-i\alpha$ for a small positive $\alpha$. Indeed, it is possible to tune the parameters in the two examples such that the two $\epsilon$'s (and $\mu$'s) are identical at this frequency. Nevertheless, the Fresnel equations \eqref{fresnel} give completely different answers; in the previous example the ``forward'' wave, with phase velocity and energy growth in the $+z$-direction, is excited, whereas in the present example a ``backward'' wave is excited. The dilemma is resolved by noting that although the $\epsilon$'s are identical for a single frequency, they are not identical globally. As a result, the causal excitation at $z=0^-$, which necessarily contains an infinite band of frequencies, will have different consequences in the two situations.
\subsection{Instability at the boundary}\label{example3}
In the previous example, although $n\approx -1$ at the real frequency $\omega_1$, $n=-1$ is exactly fulfilled only in the lower half-plane (at the complex frequencies $\omega_{\pm 2}$). In principle, one can construct a causal, active medium for which $n=-1$ at a real frequency or even in the upper half-plane. Indeed, let $\epsilon=(1+g)^2$ and $\mu=1$, where
\begin{equation}
\label{gfunc}
g(\omega)=\frac{i G\omega}{\omega_{0}^2-\omega^2-i\omega\Gamma}.
\end{equation}
We assume $\Gamma/\omega_0\ll 1$ and $G\sim\Gamma$. The associated susceptibility $\chi=\epsilon-1$ is clearly analytic and uniformly square integrable in the closed upper half-plane; thus it satisfies the Kramers-Kronig relations \footnote{One may argue that the asymptotic form of $\epsilon-1$ for large $\omega$ is different from the usually assumed dependence $1/\omega^2$. Without significantly altering $g$ in the region $|\omega-\omega_0|\lesssim\Gamma$ this can be fixed for instance by letting $g\to (g+\tilde F)\tilde\omega_0^2/(\tilde\omega_{0}^2-\omega^2-i\omega\tilde\Gamma)$ where $|\tilde F|\ll 1$, $\tilde\omega_0\gg\omega_0$, and $0<\tilde\Gamma\ll\tilde\omega_0-\omega_0$.}. By determining the refractive index such that it is analytic in the upper half-plane, we obtain $n=1+g$. Thus we can find the frequencies $\omega_{a\pm}$ where $n=a$ is fulfilled:
\begin{equation}
\label{freqnm1}
2\omega_{a\pm}=\pm 2\omega_0-i\Gamma-iG/(a-1).
\end{equation}
Here we have assumed that $|a-1|\gtrsim 1$ and omitted second or higher order terms in $\Gamma/\omega_0$. Using \eqref{freqnm1} we find that if $G\geq 2\Gamma$, the solutions to the equation $n=-1$ are located in the closed, upper half-plane (equality implies locations on the real axis).
Substituting \eqref{fresnels} into \eqref{invlapl}, it is realized that the locations of the zeros of $n+1$ are crucial for the electromagnetic stability. Clearly if $1/(n+1)$ has poles in the upper half-plane, the associated residues when calculating \eqref{invlapl} grow exponentially with $t$. In other words, the system consisting of the two media (vacuum and the active medium) is electromagnetically unstable. For a given active medium, this instability may be eliminated if the vacuum on the left-hand side of the boundary is replaced by another passive medium: Let the new medium have $\mu=1$ and (approximately) constant refractive index $n_{\text{p}}$ for $|\omega-\omega_0|\lesssim\Gamma$. The instability would not be present provided $n\neq -n_{\text{p}}$ in the upper half-plane. According to \eqref{freqnm1}, if $2\Gamma\leq G\leq(n_{\text{p}}+1)\Gamma$, there is no instability, in contrast to what we found above for vacuum.
\section{Conclusions}
General forms of the Fresnel equations, valid for active media, have been proved. When the refractive index contains branch points in the upper half-plane, the index is not determined uniquely along the real frequency axis, but is dependent on the choice of branch cuts. Thus the Fresnel equations and the refractive index cannot be interpreted along the real frequency axis unless their behavior along the branch cut are taken into account as well. The expressions should rather be evaluated or interpreted in some upper half-plane $\im\omega\geq\gamma$, where the line $\im\omega=\gamma$ is located above the non-analytic points. The presence of branch cuts in the upper half-plane leads generally to instabilities in the sense that any vanishing small pulse excites an infinite field (or saturation) in the medium.
The sign of the refractive index should be defined using the asymptotic form of $n$ ($n\to+1$ as $\omega\to\infty$), and analyticity for $\im\omega>\gamma$. It would have been useful to have an explicit relation for the refractive index, similar to \eqref{nfromepsilonmu}. However, for active media such a relation is not valid as the complex arguments of $\epsilon$ and $\mu$ may take any values. Nevertheless, if $\gamma$ is above all non-analytic points of $n$, $n$ is analytic for $\im\omega=\gamma$. Thus the sign can be identified up to a global sign by unwrapping the phase $\arg\epsilon+\arg\mu$ before using \eqref{nfromepsilonmu}. The global sign is determined easily by requiring $n\to +1$ as $\omega\to\infty$.
For most media of interest, there are no branch points of $n$ for $\im\omega\geq 0$, and the sign of $n$ is to be determined along the real frequency axis. In principle, $n$ is not necessarily analytic for real frequencies (since $\epsilon$ and $\mu$ are not necessarily analytic there). With the extra assumption that $\epsilon\mu$ is continuous for real frequencies, $\epsilon\mu$ is continuous for $\im\omega\geq 0$. Then the sign of $n$ may still be found by unwrapping the phase $\arg\epsilon+\arg\mu$ in \eqref{nfromepsilonmu}, and requiring $n\to +1$ as $\omega\to\infty$.
Using the general framework, we have seen that in certain active media, only the ``backward'' propagating wave is excited. For this wave both phase velocity and Poynting's vector point towards the excitation source. These media are conveniently described using a negative refractive index. Nevertheless, evanescent-field amplification, similar to that in the Veselago-Pendry lens, does not exist.
\section{Introduction}
Wave propagation in periodic media is a vast subject of great interest in numerous fields. Classical methods of averaging and mathematical homogenisation are useful in applications where the wavelength is large relative to the periodicity. There are many fields, however, where the r\'egime of interest concerns wavelengths commensurate with the periodicity, as in gratings, layered media, photonic (similarly phononic, platonic,...) crystals and solid-state quantum mechanics \cite{Sakoda:Book,Joannopoulos:Book}. In these applications, the basic theoretical constituents are Bloch-wave solutions, i.e., periodically modulated plane waves supported by an infinitely periodic medium \cite{Brillouin:Book}. Dispersion surfaces relating the Bloch eigenfrequencies and eigen-wavevectors encapsulate a breadth of information including the rate and direction in which energy may propagate \cite{Notomi:10} and the existence of frequency band gaps wherein propagation is forbidden \cite{Yablonovitch:87}.
An unbounded, exactly periodic, medium is an idealisation and applications usually involve crystal boundaries, tapered designs or defects. Fortunately, there is often a scale separation between the periodicity (wavelength) and a ``macro-scale'' associated with crystalline boundaries, the length on which the crystal is tapered, the width of an incident beam, or the position of a source. It is then intuitive that the wave field is locally, in some approximate sense and away from boundaries, sources and singularities, a finite superposition of Bloch solutions. It is possible to intuitively build on this picture by encoding the local Bloch-dispersion characteristics in a Hamiltonian model \cite{Russel:99,Cassan:11}.
The equations of motion determine the variation of the Bloch wavevector along particle paths, or ``Bloch rays'', which trace the group-velocity vector. The group velocity linearly relates the macroscopic energy flux and density \cite{Sakoda:Book}; hence, given the local ray pattern and Bloch eigenmodes, the change in amplitude along a ray follows from energy conservation. Furthermore, analysing the iso-frequency contours of the dispersion surfaces in conjunction with Bragg's law allow identifying reflected, refracted (and high-order diffracted) Bloch rays at crystalline interfaces \cite{Zengerle:87,Joannopoulos:Book} (their amplitudes and phases, however, sensitively depend on details of the interface). This physical picture has been invaluable in studies of transmission through finite photonic- \cite{Notomi:10,luo:02} and platonic- \cite{Smith:12} crystal slabs, and slowly varying photonic and sonic crystals designed for cloaking and steering \cite{Kurt:08,Urzhumov:10,Romero:13}. It is difficult, however, to systematically cast these ideas into a detailed quantitative approximation of the wave field, especially when crystalline interfaces and dispersion singularities are important. Recent studies have also highlighted the role played by the Berry phase \cite{Berry:84} in graded \cite{Johnson:02} and symmetry-breaking crystals \cite{Raghu:08}, and slowly varying waveguides \cite{Tromp:92}; the Berry phase is sometimes incorporated in Hamiltonian formulations through a small ``anomalous velocity'' \cite{Panati:03,Raghu:08,Weinan:13}.
It is well known that traditional ray theory (i.e., for single-scale materials) can be derived as the short-wavelength asymptotic limit of the wave equation \cite{Holmes:Book}. Analogously, a suitable starting point for an asymptotic theory of Bloch rays is a \emph{multiple-scale} Wentzel--Kramers--Brillouin (WKB) ansatz \cite{Bensoussan:11}. It may seem surprising, given the huge applicability of geometric optics, that works along these lines have largely focused on rigorous proofs in the context of propagation and localisation of wave packets \cite{Allaire:10,Allaire:11,Cherednichenko:15,Harutyunyan:16}. These analyses are compatible with the intuitive notion of Bloch rays; they show, however, that slow Berry-like phases are important even when seeking only a leading-order approximation to the wave field. These analyses disregard diffraction at crystalline boundaries and caustics, and the dominant Bloch eigenfrequencies are assumed to be within a pass band, away from singularities in the dispersion landscape. It seems that, to date, there has been little effort to implement and pragmatically extend these ideas towards a more applicable approximation scheme.
Dispersion singularities, namely critical points at the edges of complete and partial band gaps, along with branch crossings, are responsible for phenomena such as slow light, dynamic anisotropy \cite{Notomi:00,Ceresoli:15}, unidirectional propagation \cite{Colquitt:16} and non-trivial topological invariants \cite{Lu:14}. Fortunately, wave propagation at near-critical frequencies is well described by a distinct multiple-scale asymptotic scheme known as High-Frequency Homogenisation (HFH) \cite{Craster:10}. HFH furnishes a (typically unimodal) homogenised description where the wave field is provided as a product of an envelope function, governed by a macro-scale equation, and a rapidly varying Bloch eigenfunction, governed by the Bloch cell problem at the nominal (typically standing-wave) frequency. The method has been extensively applied to study photonic \cite{Antonakakis:13,Maling:15} (similarly phononic \cite{Antonakakis:14,Colquitt:15high}, platonic \cite{Antonakakis:12}) crystals, Rayleigh--Bloch waves along gratings \cite{Colquitt:15rayleigh}, and layered media \cite{Joseph:15}. The method is a multiple-scale analogue of near-cut-off expansions in waveguide theory \cite{Craster:09} and is also intimately connected to the KP perturbation method and the idea of an effective mass in solid-state physics \cite{Kittel:Book,Naraigh:12}.
The frequency intervals covered by multiple-scale WKB expansions and HFH complement each other. In fact, the respective regions of validity overlap at intermediate perturbations away from critical frequencies. Naively, it may appear that only one of these two methods is necessary at any given frequency. This is certainly incorrect for a graded crystal where the dispersion surfaces vary on a macro-scale and may easily become singular. Indeed, Johnson \textit{et al.}~warn that this is a ``pitfall to avoid in the design of tapered photonic crystal waveguides'' \cite{Johnson:02}; an otherwise adiabatically propagating Bloch wave would strongly reflect from the singularity. While less evident, monochromatic dispersion singularities are also ubiquitous in uniform crystals, at least in two and three dimensions. For instance, if the operating frequency lies in a partial band gap, propagation is forbidden in certain directions \cite{Ceresoli:15}; there are then singular Bloch rays separating bright and shadow regions along which the frequency is critical. To date, asymptotic analyses have exclusively focused on either non-critical regions, using the WKB method or related approaches \cite{Johnson:02}, or studied scenarios where the frequency is everywhere nearly critical, using HFH. We argue here that appropriately combining the two methods provides an avenue to a more complete asymptotic theory of high-frequency waves in locally periodic media, covering all dispersion branches, both near and away from dispersion singularities. Specifically, we anticipate that WKB Bloch waves can be asymptotically matched in space with localised HFH-type expansions describing intermediate \emph{length} scales about dispersion singularities.
In this paper we demonstrate this approach in the simplest case of time-harmonic Helmholtz waves in a 1D locally periodic medium. We begin in \S\ref{sec:bulk} by considering the ``outer'' problem where Bloch waves propagate in a pass-band region, adiabatically varying together with the local dispersion relation of the medium. Our multiple-scale WKB analysis differs from previous works in several aspects. First, we work in the frequency domain. Second, we decompose the complex-valued transport equation into two simpler real equations respectively governing the amplitude and a slow phase; moreover, by considering perturbations of the Bloch eigenvalue problem we derive exact identities that allow us to simplify the asymptotic formulae. Third, we consider a more general multiple-scale representation of the locally periodic medium, which is consistent also with layered media.
In \S\ref{sec:behaviour} we investigate the divergence of the WKB description as a spatial band-gap edge singularity is approached. In \S\ref{sec:hfh} we study the region close to a band-gap edge using a HFH-like expansion. In \S\ref{sec:reflection} we asymptotically match the WKB and HFH expansions in the scenario where a Bloch wave is reflected from a band-gap edge. In \S\ref{sec:layered} we demonstrate the validity of our approach by comparing the asymptotic results with numerical computations in the case of a layered medium. In the concluding section \S\ref{sec:discussion} we anticipate generalisations and extensions to this work.
\section{Slowly varying Bloch waves}\label{sec:bulk}
Consider the reduced wave equation
\begin{equation} \label{master eq}
\epsilon^2\frac{d^2 u}{dx^2}+{\omega^2}\Lambda u=0.
\end{equation}
In Eq.~\eqref{master eq}, $x$ is a spatial coordinate normalised by a long scale $L$, and $\omega$ denotes the frequency normalised by $c/l$, where $c$ is a reference wave speed and $l$ is a short length scale such that $\epsilon=l/L\ll1$. Our interest is in a locally periodic medium described by the real function $\Lambda=\Lambda[\xi,r(x)]$, which is an arbitrary $2$-periodic function of the short-scale coordinate $\xi=x/\epsilon$ and a smooth function of the long scale $x$ through the parameter $r$. Specifically, we consider the limit where $\epsilon\to0$, with $\Lambda,\omega=O(1)$. This limit process corresponds to a high-frequency r\'egime where wavelength and periodicity are commensurate and small compared to the long scale $L$; we emphasise that ${|\Lambda-1|}$ is not assumed to be small.
As explained in \S\ref{sec:layered}, in the case of a layered medium it is necessary to assume $\Lambda$ has an expansion in powers of $\epsilon$. For the sake of calculating the wave field $u$ to leading order, however, only the first \emph{two} terms in this expansion are relevant:
\begin{equation}\label{Lambda def}
\Lambda = \chi[\xi,r(x)]+\epsilon \tau[\xi,r(x)].
\end{equation}
Here $\chi,\tau$ are real, $\epsilon$-independent, $2$-periodic functions of $\xi$, and smooth functions of $x$ through the parameter $r(x)$.
In this section we seek asymptotic solutions in the form of a multiple-scale WKB ansatz
\begin{equation} \label{master}
u = A(\xi,x)e^{i\varphi(x)/\epsilon},
\end{equation}
where $\varphi$ is real and $A$ is complex and $2$-periodic in $\xi$. Substituting \eqref{master} into \eqref{master eq} yields
\begin{multline}\label{Aeq}
\pd{^2A}{\xi^2}+2i \frac{d\varphi}{dx}\pd{A}{\xi}+\left[\omega^2\Lambda-\left(\frac{d\varphi}{dx}\right)^2\right]A \\
+ \epsilon\left(2\pd{^2A}{x\partial \xi}+2i\frac{d\varphi}{dx}\pd{A}{x}+i\frac{d^2\varphi}{dx^2}A\right)+\epsilon^2\pd{^2A}{x^2}=0.
\end{multline}
The form of \eqref{Aeq} suggests expanding $A$ as
\begin{equation} \label{A exp}
A(\xi,x) \sim A_0(\xi,x)+\epsilon A_1(\xi,x) + \cdots,
\end{equation}
where $A_0,A_1,\ldots=O(1)$ are complex-valued and $2$-periodic in $\xi$. The $O(1)$ balance of \eqref{Aeq} is:
\begin{equation}\label{A0eq}
\mathscr{L}^2_{k,\chi}A_0=0,
\end{equation}
where we define the $x$-dependent operator $\mathscr{L}^2_{k,\chi}$ and wavenumber $k$ as
\begin{equation}\label{L2}
\mathscr{L}^2_{k,\chi}=\pd{^2}{\xi^2}+2i k\pd{}{\xi}+\left(\omega^2\chi-k^2\right), \quad k(x) = \frac{d\varphi}{dx}.
\end{equation}
For $x=x'$ fixed, \eqref{A0eq} together with periodicity conditions constitutes the familiar Bloch eigenvalue problem \cite{Joannopoulos:Book} with periodic modulation $\chi[\xi,r(x')]$ and Bloch wavenumber $k(x')$. Thus, standard techniques can be employed in order to calculate a dispersion relation
\begin{equation}\label{disp}
\Omega(k,r)=\omega,
\end{equation}
where $\Omega$ is multivalued. As an example, the dispersion relation for the layered medium discussed in \S\ref{sec:layered} [cf.~\eqref{layered chi}] is visualised in Fig.~\ref{fig:disp}; note the existence of frequency band gaps, wherein $k$ is complex valued, and crossing points at higher frequencies, where $k$ is a degenerate eigenvalue. In this section, we assume $x$ lies within a pass band, namely that the eigenvalue $k$ is real and non-degenerate. Without loss of generality, we only need to consider wavenumbers in the first Brillouin zone, $-\pi/2\le k\le \pi/2$. Let $U(\xi,\{k,r\})$ denote the eigenfunctions, which satisfy $\mathscr{L}^2_{k,\chi}U=0$ and are $2$-periodic in $\xi$. From time-reversal symmetry, if $k$ is an eigenvalue then so is $-k$ and $U(\xi,\{-k,r\})={U}^*(\xi,\{k,r\})$, where an asterisk denotes complex conjugation. In the appendix we derive the result \cite{Sakoda:Book}
\begin{equation}\label{gv relation}
\pd{\Omega}{k}= \frac{k\int|U|^2\,d\xi-i\int U^*\pd{U}{\xi}\,d\xi}{\omega\int\chi|U|^2\,d\xi},
\end{equation}
which gives the group velocity in terms of integrals of the eigenfunctions over an elementary cell, say $\xi\in[-1,1)$.
The above discussion suggests writing the solution of \eqref{A0eq} as
\begin{equation}\label{A form}
A_0 = f(x)U\left(\xi;\{k,r\}\right),
\end{equation}
where $f$ is complex valued; it will be convenient to treat the amplitude $|f|$ and phase $\gamma$ of $f$ separately,
\begin{equation}
f(x)=|f(x)|e^{i\gamma(x)}.
\end{equation}
It is important at this stage to note that the eigenfunctions $U$ are determined for a fixed $x$ only up to an arbitrary complex-valued prefactor. Indeed, the product $fU$ remains invariant under the gauge transformation
\begin{multline}\label{gauge fU}
U\left(\xi;\{k,r\}\right)\rightarrow a(x)e^{ib(x)}U\left(\xi;\{k,r\}\right), \\ |f(x)| \rightarrow \frac{|f(x)|}{a(x)}, \quad \gamma(x) \rightarrow \gamma(x) - b(x),
\end{multline}
where $a(x)$ and $b(x)$ are real.
We shall assume that a specific set of eigenfunctions $U$, varying smoothly with $x$, has been chosen.
\begin{figure}
\begin{center}
\includegraphics[scale=0.33]{twodim_comb.eps}
\caption{Bloch dispersion relation for the layered medium considered in \S\ref{sec:layered}.}
\label{fig:disp}
\end{center}
\end{figure}
Consider now the $O(\epsilon)$ balance of \eqref{Aeq}:
\begin{equation}\label{A1eq}
\mathscr{L}^2_{k,\chi}A_1 + 2\pd{^2A_0}{x\partial\xi}+2ik\pd{A_0}{x}+i\frac{dk}{dx}A_0 +\omega^2\tau A_0= 0.
\end{equation}
Eq.~\eqref{A1eq} is a forced variant of \eqref{A0eq} and we derive a solvability condition by multiplying \eqref{A1eq} by ${A}^*_0$, subtracting the conjugate of \eqref{A1eq} multiplied by $A_0$, and integrating by parts over the unit cell $\xi\in[-1,1)$. Noting the periodicity of $A_0$ and $A_1$ we find the transport equation:
\begin{equation}\label{solv complex}
-2\int \pd{{A}^*_0}{\xi}\pd{A_0}{x}\,d\xi + 2ik\int \pd{A_0}{x}{A}^*_0\,d\xi
+i\frac{dk}{dx}\int|A_0|^2\,d\xi+\omega^2\int\tau|A_0|^2\,d\xi =0.
\end{equation}
Inspection of the imaginary and real parts of \eqref{solv complex} leads to more useful equations separately governing the ``amplitude'' $|f|$ and the ``slow phase'' $\gamma$, respectively. Thus, subtracting from \eqref{solv complex} its conjugate, and using \eqref{gv relation}, we find the energy balance
\begin{equation}\label{amp eq}
\frac{d}{dx}\left(\pd{\Omega}{k}|f|^2\int\chi|U|^2\,d\xi\right) = 0.
\end{equation}
The converse manipulation yields
\begin{equation}\label{gamma A}
\mathrm{Re}\left[\int\left(\pd{A_0}{\xi}+ikA_0\right)\pd{A_0^*}{x}\,d\xi\right]=\frac{\omega^2}{2}\int\tau|A_0|^2\,d\xi,
\end{equation}
which, by eliminating $|f|$ using \eqref{A form} and \eqref{gv relation}, gives an equation for $\gamma$:
\begin{equation}\label{gamma}
\frac{d\gamma}{dx}\pd{\Omega}{k}\omega\int\chi|U|^2\,d\xi = -\mathrm{Re}\int\left(\pd{U}{\xi}+ikU\right)\pd{ U^*}{x}\,d\xi+\frac{\omega^2}{2}\int\tau|U|^2\,d\xi.
\end{equation}
By substituting the two identities \eqref{dk} and \eqref{dU}, derived in the appendix, \eqref{gamma} can be rewritten in a more convenient form where differentiation with respect to $\xi$ is circumvented:
\begin{equation}\label{gamma2}
\frac{d\gamma}{dx}=\frac{\mathrm{Im}\int\pd{\chi}{r}\pd{U^*}{x}U\,d\xi}{\int\pd{\chi}{r}|U|^2\,d\xi}+\frac{\omega\int\tau|U|^2\,d\xi}{2\pd{\Omega}{k}\int\chi|U|^2\,d\xi}.
\end{equation}
The asymptotic solution in a spatial pass band may also involve an oppositely propagating Bloch wave. A general leading-order approximation is therefore
\begin{equation}\label{WKB solution}
u \sim \mathcal{A}_+\frac{Ue^{i\gamma^+}e^{i\varphi^+/\epsilon}}{\sqrt{\left|\pd{\Omega}{k}\right|\int{\chi}|U|^2\,d\xi}}
+\mathcal{A}_-\frac{ U^*e^{i\gamma^-}e^{i\varphi^-/\epsilon}}{\sqrt{\left|\pd{\Omega}{k}\right|\int{\chi}|U|^2\,d\xi}},
\end{equation}
where
\begin{equation}\label{phi pm}
\frac{d\varphi^{\pm}}{dx}=\pm k(x),
\end{equation}
\begin{equation}\label{gamma pm}
\frac{d\gamma^{\pm}}{dx}=\pm\frac{\mathrm{Im}\int\pd{\chi}{r}\pd{U^*}{x}U\,d\xi}{\int\pd{\chi}{r}|U|^2\,d\xi}\pm\frac{\omega\int\tau|U|^2\,d\xi}{2\pd{\Omega}{k}\int\chi|U|^2\,d\xi},
\end{equation}
and $\mathcal{A}_{\pm}$ are complex constants.
In writing \eqref{gamma pm} we note that the group velocity is odd in $k$ and that under the transformation $k\rightarrow-k$ the first integral on the right-hand side of \eqref{gamma} is conjugated. We further note that the direction of propagation of the WKB solutions in \eqref{WKB solution} is determined by the sign of the group velocity rather than that of $k$.
\section{Breakdown of the WKB description near a band-gap edge}\label{sec:behaviour}
In this section we investigate the manner in which the WKB solutions \eqref{WKB solution} lose validity near a band-gap edge. Thus, let $x=x_t$ divide the macro-scale into spatial regions wherein $\omega$ respectively falls within a pass band ($k$ real) and a stop band ($k$ complex). We denote $r(x_t)=r_t$ and $k(x_t)=k_t$, and assume the usual case where the critical wavenumber is at the centre or edge of the Brillouin zone (see Fig.~\ref{fig:disp}), i.e., $k_t=0$ or $\pi/2$, respectively. Assuming that $dr/dx$ does not vanish at $x_t$, we write
\begin{equation}\label{r exp}
r(x) \sim r_t + (x-x_t)\left(\frac{dr}{dx}\right)_t + O(x-x_t)^2 \quad \text{as} \quad x\to x_t
\end{equation}
and
\begin{equation}\label{chi exp}
\chi(\xi,r) \sim \chi_t(\xi) + (x-x_t)\left(\pd{\chi}{r}\right)_t\left(\frac{dr}{dx}\right)_t+ O(x-x_t)^2 \quad \text{as} \quad x\to x_t,
\end{equation}
where a $t$ subscript denotes evaluation at $x=x_t$.
Time-reversal symmetry and periodicity of the dispersion curves suggest that
\begin{equation}\label{zero gv}
\left(\pd{\Omega}{k}\right)_t = 0, \quad \left(\pd{^2\Omega}{k\partial r}\right)_t=0, \quad \left(\pd{^3\Omega}{k^3}\right)_t=0, \quad \ldots,
\end{equation}
which in turn imply the expansion
\begin{multline}\label{Omega exp B}
\Omega(r,k) \sim \Omega_t + (r-r_t)\left(\pd{\Omega}{r}\right)_t + \frac{1}{2}(k-k_t)^2\left(\pd{^2\Omega}{k^2}\right)_t \\ + O[(r-r_t)^2,(r-r_t)(k-k_t)^2,(k-k_t)^4] \quad \text{as} \quad x\to x_t.
\end{multline}
But $\Omega(r,k)=\omega$ for all $x$, hence
\begin{equation}\label{k exp}
k(x)\sim k_t + k_{1/2}(x-x_t)^{1/2}+ O\left(|x-x_t|^{3/2}\right) \quad \text{as} \quad x\to x_t,
\end{equation}
where
\begin{equation}\label{k12 thermo}
k_{1/2}^2=-2{\left(\frac{dr}{dx}\right)_t\left(\pd{\Omega}{r}\right)_t}\Big{/}{\left(\pd{^2\Omega}{k^2}\right)_t}.
\end{equation}
In \eqref{k exp} and throughout the paper the square-root function has a branch cut along the negative imaginary axis and is positive along the positive real axis. In \eqref{k12 thermo}, the sign of $k_{1/2}$ is chosen so that $k-k_t$ has the correct sign when $x$ is on the side of $x_t$ where $k$ is real. It follows from \eqref{Omega exp B} and \eqref{k exp} that
\begin{equation}\label{gv expansion}
\pd{\Omega}{k} \sim k_{1/2}(x-x_t)^{1/2}\left(\pd{^2\Omega}{k^2}\right)_t + O(x-x_t) \quad \text{as} \quad x\to x_t.
\end{equation}
Having the above expansions for the dispersion characteristics of the material near $x_t$, we are ready to obtain the corresponding asymptotics of \eqref{WKB solution}. Consider first the fast phases $\varphi^{\pm}$; from \eqref{phi pm} together with \eqref{k exp} we find
\begin{equation}
\varphi^{\pm}(x) \sim \varphi^{\pm}_t \pm k_t (x-x_t) \pm \frac{2}{3}k_{1/2}(x-x_t)^{3/2} + O\left(|x-x_t|^{5/2}\right) \quad \text{as} \quad x\to x_t.
\end{equation}
Subtler are the asymptotics of $U$ and $\gamma^{\pm}$, particularly because $U$ and $\gamma$, considered separately, are ``gauge dependent'' [cf.~\eqref{gauge fU}]. Fixing the standing-wave eigenfunction $U(\xi,\{k_t,r_t\})=U_t(\xi)$, the perturbation theory in the appendix gives
\begin{equation}\label{U exp}
U(\xi,\{k,r\}) \sim U_t(\xi) + (x-x_t)^{1/2}U_{1/2}(\xi) + \cdots \quad \text{as} \quad x\to x_t.
\end{equation}
As explained in the appendix, $U_{1/2}$ remains gauge dependent even after fixing the standing-wave eigenfunction; in fact, in general one could add to \eqref{U exp} an intermediate-order term proportional to $U_t$. We assume the chosen gauge entails no such intermediate term, which would only arise in an analytical or computational scheme if intentionally forced to.
Substituting \eqref{U exp} and \eqref{gv expansion} into \eqref{gamma pm} shows that the $x$ derivatives of $\gamma^{\pm}$ are $O\left(|x-x_t|^{-1/2}\right)$ as $x\to x_t$, which implies
\begin{equation}\label{gamma exp}
\gamma^{\pm} (x) \sim \gamma^{\pm}_t + O\left(|x-x_t|^{1/2}\right) + \cdots \quad \text{as} \quad x\to x_t.
\end{equation}
The asymptotic estimates \eqref{gv expansion}, \eqref{U exp} and \eqref{gamma exp} together yield the behaviour of the \emph{leading-order} WKB solution \eqref{WKB solution}:
\begin{multline}\label{WKB asym}
u \sim \frac{\mathcal{A}_+ e^{i\gamma_t^+ + i\varphi_t^+/\epsilon}}{\sqrt{\left|k_{1/2}\left(\pd{^2\Omega}{k^2}\right)_t\right|\int{\chi_t}|U_t|^2\,d\xi}}\left\{\frac{U_t(\xi)}{|x-x_t|^{1/4}}+O\left(|x-x_t|^{1/4}\right)\right\} \\
\times \exp\left\{ ik_t \frac{x-x_t}{\epsilon} + \frac{2}{3}i k_{1/2}\frac{(x-x_t)^{3/2}}{\epsilon} + O\left(|x-x_t|^{5/2}/\epsilon\right)\right\} \\
+ \frac{\mathcal{A}_- e^{i\gamma_t^- + i\varphi_t^-/\epsilon}}{\sqrt{\left|k_{1/2}\left(\pd{^2\Omega}{k^2}\right)_t\right|\int{\chi_t}|U_t|^2\,d\xi}}\left\{\frac{U^*_t(\xi)}{|x-x_t|^{1/4}}+O\left(|x-x_t|^{1/4}\right)\right\} \\
\times \exp\left\{ -ik_t \frac{x-x_t}{\epsilon} - \frac{2}{3}i k_{1/2}\frac{(x-x_t)^{3/2}}{\epsilon} + O\left(|x-x_t|^{5/2}/\epsilon\right) \right\} \quad \text{as} \quad x\to x_t.
\end{multline}
Expansion \eqref{WKB asym} can be made more revealing by noting that $U_t\exp(ik_t\xi)$ is, up to a multiplicative complex prefactor, a real function, periodic if $k_t=0$ and anti-periodic if $k_t=\pi/2$. We may therefore write $U_t=\exp(i\beta)\exp(-ik_t\xi)\mathcal{U}_t(\xi)$, where $\beta$ is a real phase depending on the gauge choice and $\mathcal{U}_t(\xi)$ is a periodic (if $k_t=0$) or anti-periodic (if $k_t=\pi/2$) real function; similarly, $U^*_t=\exp(-i\beta)\exp(ik_t\xi)\mathcal{U}_t(\xi)$. We next note that $\exp(ik_t\xi)\exp(-ik_t x/\epsilon)$ and $\exp(-ik_t\xi)\exp(ik_t x/\epsilon)$ represent exactly the same function. Note that, for $k_t=\pi/2$, the latter function is \emph{not} identically equal to unity, since the dependence on $\xi$ is extended from the unit cell $\xi\in(-1,1)$ to all $x$ by a periodic continuation, whereas $\exp[i\pi x/(2\epsilon)]$ has a period of two cells. These observations allow us to rewrite \eqref{WKB asym} as
\begin{multline}\label{WKB asym reduced}
u \sim \frac{\exp{(ik_t x/\epsilon)}U_t(\xi)}{|x-x_t|^{1/4}\sqrt{\left|k_{1/2}\left(\pd{^2\Omega}{k^2}\right)_t\right|\int{\chi_t}|U_t|^2\,d\xi}} \times \\ \left\{\mathcal{A}_+ e^{i\gamma_t^+ + i\varphi_t^+/\epsilon-ik_tx_t/\epsilon}
\exp\left[\frac{2}{3}i k_{1/2}\frac{(x-x_t)^{3/2}}{\epsilon}\right] + \right. \\ \left.
\mathcal{A}_- e^{i\gamma_t^- + i\varphi_t^-/\epsilon+ik_tx_t/\epsilon-2i\beta}\exp\left[ - \frac{2}{3}i k_{1/2}\frac{(x-x_t)^{3/2}}{\epsilon}\right] \right\} \quad \text{as} \quad x\to x_t.
\end{multline}
We now see that the field $u$ is asymptotic to an oscillating tail of an Airy function, varying on an intermediate $O(\epsilon^{2/3})$ scale, and modulated on the short $O(\epsilon)$ scale by the periodic or anti-periodic standing-wave Bloch eigenfunction $\exp{(ik_t x/\epsilon)}U_t(\xi)$. On one hand, an Airy tail is reminiscent of the behaviour of a single-scale WKB solution near a turning point \cite{Holmes:Book}. On the other hand, a multiple-scale solution where an envelope function is modulated by a short-scale standing-wave Bloch eigenmode is nothing but the ansatz assumed in HFH \cite{Craster:10}. These two observations together suggest looking next at the region $x-x_t=O(\epsilon^{2/3})$ using a localised HFH-like expansion.
\section{High-frequency homogenisation of the band-gap edge}\label{sec:hfh}
Accordingly, near the band-gap edge $x=x_t$ we look for a multiple-scale solution in the form
\begin{equation}\label{hfh ansatz}
u = \epsilon^{-1/6}e^{ik_t x/\epsilon}T(\xi,\bar{x}), \quad \bar{x}=\frac{x-x_t}{\epsilon^{2/3}},
\end{equation}
where $T$ is $2$-periodic in $\xi$, and $k_t=0$ or $\pi/2$. In terms of $\bar{x}$,
\begin{equation}
\Lambda \sim \chi_t(\xi)+\epsilon^{2/3}\bar{x}\left(\frac{dr}{dx}\right)_t\left(\pd{\chi}{r}\right)_t(\xi) + O(\epsilon);
\end{equation}
note that $\tau$ enters only at $O(\epsilon)$ and will not affect the leading order of $T$.
Substitution of \eqref{hfh ansatz} into the governing equation \eqref{master eq} gives
\begin{equation}\label{T eq}
\mathscr{L}^2_{k_t,\Lambda}T+ \epsilon^{1/3}\left(2\pd{^2T}{\bar{x}\partial\xi}+2ik_t\pd{T}{\bar{x}}\right) + \epsilon^{2/3}\pd{^2T}{\bar{x}^2} = 0.
\end{equation}
The form of \eqref{T eq} suggests the expansion
\begin{equation}
T(\xi,\bar{x}) \sim T_0(\xi,\bar{x}) + \epsilon^{1/3}T_1(\xi,\bar{x}) + \epsilon^{2/3}T_2(\xi,\bar{x})+ \cdots,
\end{equation}
where as usual $T_0,T_1,\ldots$ are 2-periodic in $\xi$.
The $O(1)$ balance of \eqref{T eq},
\begin{equation}\label{T O1}
\mathscr{L}^2_{k_t,\chi_t}T_0=0,
\end{equation}
in conjunction with periodicity on the short scale, is identified as the zone-centre ($k_t=0$) or zone-edge ($k_t=\pi/2$) Bloch problem. The solution of \eqref{T O1} therefore reads
\begin{equation}
T_0 = g_0(\bar{x})U_t(\xi),
\end{equation}
where $g_0$ is a leading-order envelope function, and the standing-wave Bloch eigenfunction $U_t(\xi)=U(\xi,\{k_t,r_t\})$ is specifically chosen as in \S\ref{sec:behaviour}. The $O(\epsilon^{1/3})$ balance of \eqref{T eq} gives
\begin{equation}
\mathscr{L}^2_{k_t,\chi_t}T_1=-2\frac{dg_0}{d\bar{x}}\left(ik_tU_t+\frac{dU_t}{d\xi}\right).
\end{equation}
It is readily verified that solvability is identically satisfied. The general solution is
\begin{equation}
T_1 = g_1(\bar{x})U_t(\xi)+\frac{dg_0}{d\bar{x}}\Phi(\xi),
\end{equation}
where $\Phi$ is any particular solution of
\begin{equation}
\mathscr{L}^2_{k_t,\chi_t}\Phi=-2ik_tU_t-2\frac{dU_t}{d\xi}
\end{equation}
that is $2$-periodic.
Finally, consider the $O(\epsilon^{2/3})$ balance of \eqref{T eq},
\begin{multline}
\mathscr{L}^2_{k_t,\chi_t}T_2=-2\frac{dg_1}{d\bar{x}}\left(ik_tU_t+\frac{dU_t}{d\xi}\right) \\
-\frac{d^2g_0}{d\bar{x}^2}\left(2\frac{d\Phi}{d\xi}+2ik_t\Phi+U_t\right)-\omega^2\bar{x}g_0\left(\frac{dr}{dx}\right)_t\left(\pd{\chi}{r}\right)_t U_t.
\end{multline}
A solvability condition is derived in the usual way. We find that the envelope function $g_0(\bar{x})$ is governed by an Airy equation,
\begin{equation}\label{scary Airy}
\frac{d^2g_0}{d\bar{x}^2}+\left[\frac{\omega^2\left(\frac{dr}{dx}\right)_t\int \left(\pd{\chi}{r}\right)_t |U_t|^2\,d\xi}{2\int \frac{d\Phi}{d\xi}U_t^*\,d\xi+2ik_t\int\Phi U^*_t\,d\xi+\int|U_t|^2\,d\xi}\right]\,\bar{x}g_0 =0,
\end{equation}
where the constant in the square brackets is provided in terms of integrals of the leading- and first-order short-scale eigenfunctions $U_t$ and $\Phi$. Remarkably, however, this constant turns out to be equal to $k^2_{1/2}$ [cf.~\eqref{k12 thermo}]; this connection is derived in the appendix, see \eqref{k12}. The Airy equation \eqref{scary Airy} therefore reduces to
\begin{equation}\label{Airy}
\frac{d^2g_0}{d\bar{x}^2}+k_{1/2}^2\bar{x}g_0 =0.
\end{equation}
Hence
\begin{equation}\label{g solution}
g_0(\bar{x})=\mathcal{A}\,\mathrm{Ai}(\pm |k_{1/2}|^{2/3}\bar{x})+\mathcal{B}\,\mathrm{Bi}(\pm |k_{1/2}|^{2/3}\bar{x}),
\end{equation}
where $\mathcal{A}$ and $\mathcal{B}$ are constants, $\mathrm{Ai}$ and $\mathrm{Bi}$ are the first and second Airy functions, and the plus (minus) sign applies if crossing into the band gap as $x$ is increased (decreased).
We shall see that the leading-order multiple-scale solution $\epsilon^{-1/6}e^{ik_t x/\epsilon}U_t(\xi)g_0(\bar{x})$, with $g_0$ given by \eqref{g solution}, can be matched, on the pass-band side of $x_t$, with the propagating WKB solutions \eqref{WKB solution}. On the stop-band side of $x_t$, where $k$ is complex, we anticipate that the solution can be matched with evanescent WKB solutions, though we do not consider this here. The evanescent WKB waves may be ignored when tunnelling effects are absent or negligible and in what follows we consider one such scenario in detail.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.55]{reflection_schematic.eps}
\caption{Reflection from a spatial band-gap edge. An incident Bloch wave, described by a multiple-scale WKB approximation, adiabatically propagates to the right. Adiabatic propagation breaks down as the Bloch wavenumber $k$ approaches $k_t=\pi/2$. Asymptotically matching the WKB approximation with a multiple-scale HFH solution localised about the singularity determines the reflected Bloch wave propagating left from the singularity.
}
\label{fig:reflection_schematic}
\end{center}
\end{figure}
\section{Bloch-wave reflection from a band-gap edge}\label{sec:reflection}
Consider a slowly varying Bloch wave incident on a band-gap edge singularity, assuming that the band gap is spatially wide so that tunnelling-assisted transmission across it is absent or of exponentially small order (see Fig.~\ref{fig:reflection_schematic}). We accordingly set $\mathcal{B}=0$ in \eqref{g solution}. The leading-order approximation in the transition region is therefore
\begin{equation}\label{trans leading}
u \sim \epsilon^{-1/6}\mathcal{A}e^{ik_t x/\epsilon}U_t(\xi) \mathrm{Ai}\left(\pm |k_{1/2}|^{2/3}\bar{x}\right),
\end{equation}
where the $\pm$ sign implies matching with the propagating WKB solution \eqref{WKB solution} in the limit as $\bar{x}\to\mp\infty$. We carry out the matching using van Dyke's rule \cite{Van:Book} with respect to the long-scale variables $x$ and $\bar{x}$, keeping the short-scale variable $\xi$ fixed. Thus, rewriting \eqref{trans leading} in terms of $x$ rather than $\bar{x}$, then expanding to leading order in $\epsilon$, yields
\begin{multline}\label{trans expansion}
u \sim \mathcal{A}\frac{e^{ik_t x/\epsilon}U_t(\xi)}{2i\sqrt{\pi}|k_{1/2}|^{1/6}|x-x_t|^{1/4}}\left[e^{i\frac{\pi}{4}}e^{i\frac{2}{3}|k_{1/2}||x-x_t|^{3/2}/\epsilon} \right. \\ \left. - e^{-i\frac{\pi}{4}}e^{-i\frac{2}{3}|k_{1/2}||x-x_t|^{3/2}/\epsilon}\right].
\end{multline}
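In writing \eqref{trans expansion} we have used the standard large-argument asymptotics of the Airy function (see, e.g., \cite{Holmes:Book}),
\begin{equation}
\mathrm{Ai}(-z)\sim\frac{\sin\left(\frac{2}{3}z^{3/2}+\frac{\pi}{4}\right)}{\sqrt{\pi}\,z^{1/4}}
=\frac{1}{2i\sqrt{\pi}\,z^{1/4}}\left[e^{i\left(\frac{2}{3}z^{3/2}+\frac{\pi}{4}\right)}
-e^{-i\left(\frac{2}{3}z^{3/2}+\frac{\pi}{4}\right)}\right]
\quad \text{as} \quad z\to\infty,
\end{equation}
evaluated at $z=|k_{1/2}|^{2/3}|\bar{x}|$.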
We compare \eqref{trans expansion} to the leading-order WKB solution \eqref{WKB solution}, rewritten in terms of $\bar{x}$ rather than $x$, then expanded to leading order in $\epsilon$. We thereby see that \eqref{trans expansion} and \eqref{WKB asym reduced} must coincide.
Consider first the case where the band gap is to the right of $x_t$. If $k_t=0$ then $k>0$ for $x-x_t<0$ and hence $k_{1/2}=-i|k_{1/2}|$. Alternatively, if $k_t=\pi/2$, then $k-k_t<0$ for $x-x_t<0$ and hence $k_{1/2}=i|k_{1/2}|$; note also that $|x-x_t|^{3/2}=i(x-x_t)^{3/2}$.
Thus matching yields the connection formulae
\begin{gather}\label{connection1}
\frac{\mathcal{A}e^{i\frac{\pi}{4}}}{2i\sqrt{\pi}|k_{1/2}|^{1/6}} = \frac{\mathcal{A}_{\mp} e^{i\gamma_t^{\mp} + i\varphi_t^{\mp}/\epsilon\pm ik_tx_t/\epsilon-(1\pm 1)i\beta}}{\sqrt{\left|k_{1/2}\left(\pd{^2\Omega}{k^2}\right)_t\right|\int{\chi_t}|U_t|^2\,d\xi}}, \\ \label{connection2}
-\frac{\mathcal{A}e^{-i\frac{\pi}{4}}}{2i\sqrt{\pi}|k_{1/2}|^{1/6}}= \frac{\mathcal{A}_{\pm} e^{i\gamma_t^{\pm} + i\varphi_t^{\pm}/\epsilon\mp ik_tx_t/\epsilon-(1\mp1)i\beta}}{\sqrt{\left|k_{1/2}\left(\pd{^2\Omega}{k^2}\right)_t\right|\int{\chi_t}|U_t|^2\,d\xi}},
\end{gather}
where the upper sign is for $k_t=0$ and the lower sign for $k_t=\pi/2$.
Similar considerations show that \eqref{connection1} holds also in the case where the band gap is to the left of $x_t$.
Either $\mathcal{A}_+$ or $\mathcal{A}_-$, whichever corresponds to the incident WKB wave propagating towards the singularity, is assumed known. The other coefficient, determining the reflected WKB wave, is found by eliminating $\mathcal{A}$ from \eqref{connection1} and \eqref{connection2}:
\begin{equation}\label{reflection}
\frac{\mathcal{A}_+}{\mathcal{A}_- }= {-ie^{i(\gamma_t^- -\gamma_t^+)+i(\varphi_t^- -\varphi_t^+)/\epsilon+2ik_tx_t/\epsilon-2i\beta} }.
\end{equation}
The incident and reflected waves in \eqref{WKB solution} are equal in amplitude up to the pre-factors $\mathcal{A}_+$ and $\mathcal{A}_-$. Thus \eqref{reflection} confirms that the incident wave is totally reflected and determines the initial phase of the reflected wave. Finally, obtaining $\mathcal{A}$ from either \eqref{connection1} or \eqref{connection2} determines the solution in the transition region.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3]{layer_model.eps}
\caption{Two-variable representation of the locally periodic layered medium considered in \S\ref{sec:layered}, in the case $r=1.5 +0.5\tanh(3x/2)$ and $\epsilon=0.1$. The black line depicts the material parameter $\Lambda$ exactly. The red dashed line shows the leading-order approximation $\Lambda\sim \chi(\xi,r)$, which is insufficiently accurate to serve the leading-order WKB approximation developed in this paper. The red solid line depicts the improved and sufficient approximation $\Lambda\sim\chi(\xi,r)+\epsilon\tau(\xi,r)$.}
\label{fig:layer_model}
\end{center}
\end{figure}
\section{Example: A layered medium}
\label{sec:layered}
\subsection{Consistent two-variable representation}
We now describe an implementation of the above asymptotic theory in the case of a locally periodic layered medium. Specifically, consider a layered medium made out of unit cells of width $2\epsilon$, where
in the left half of each cell $\Lambda=1$ whereas in the right half $\Lambda$ equals the value of $r^2(x)$ \emph{evaluated at the centre of that cell} (see black line in Fig.~\ref{fig:layer_model}). It is tempting to set $\Lambda = \chi[\xi,r(x)]$, where
\begin{equation}\label{layered chi}
\chi =\begin{cases}
1, & -1<\xi<0 \\
r^2, & 0<\xi<1.
\end{cases}
\end{equation}
As anticipated in \S\ref{sec:bulk}, however, this choice would not accurately represent our layered medium (see dashed red line in Fig.~\ref{fig:layer_model}). Indeed, $r(x)$ varies by $O(\epsilon)$ across the positive-$\xi$ half of each cell, while the true layered medium consists of strictly homogeneous layers. For this reason we allowed $\Lambda$ to have an $O(\epsilon)$ correction [cf.~\eqref{Lambda def}]. Specifically, a layered medium can be consistently modelled as $\Lambda = \chi[\xi,r(x)] + \epsilon \tau[\xi,r(x)] + O(\epsilon^2)$, where $\chi$ is provided by \eqref{layered chi} and
\begin{equation}\label{layered tau}
\tau =\begin{cases}
0, & -1<\xi<0 \\
-2\xi r\frac{dr}{dx}, & 0<\xi<1.
\end{cases}
\end{equation}
Indeed, a point $x$ in the right half of a cell centred at $x_c$ has $\xi=(x-x_c)/\epsilon\in(0,1)$, whereby Taylor expansion gives $r^2(x_c)=r^2(x)-2\epsilon\xi\, r\,\frac{dr}{dx}+O(\epsilon^2)$; the corrected material therefore coincides, to $O(\epsilon)$, with the true layered medium (see red line in Fig.~\ref{fig:layer_model}). Our asymptotic theory reveals that the correction $\epsilon\tau$ is crucial in propagating the slow Berry-like phases $\gamma^{\pm}(x)$ and accordingly plays an important role in fixing the crests and troughs of the WKB Bloch waves; in contrast, the correction was not important in the leading-order HFH analysis of the transition region.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.37]{adiab_om3_2ep_1.eps}
\caption{Comparison between the WKB approximation and a TM simulation in the scenario where a Bloch-like wave propagates to the right through a locally periodic medium ($r=1.9-0.9\tanh(x^2),\, \epsilon=0.1,\,\omega=3.2$). The medium is homogeneous for $|x|\gg1$ and an incident plane wave is prescribed at large negative $x$. The weak reflection observed in the TM simulation is neglected in the adiabatic WKB approximation.}
\label{fig:ad1}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.37]{adiab_om_2ep_05.eps}
\caption{
Like Fig.~\ref{fig:ad1} but for $r=7-6\tanh(4x^2),\,\epsilon=0.05,\,\omega=0.2$. }
\label{fig:ad2}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.37]{BGref_om_6ep_04.eps}
\caption{
A Bloch-like wave, propagating to the right through a locally periodic medium ($r=2.1+1.1\tanh x,\, \epsilon=0.04,\,\omega=0.65$), is totally reflected from a spatial band gap. An incident plane wave is prescribed at large negative $x$ where the medium is homogeneous. A comparison is shown between a TM simulation and an asymptotic approximation compound of WKB waves matched to a HFH solution localised about the band-gap edge. }
\label{fig:nonad}
\end{center}
\end{figure}
\subsection{Bloch eigenvalue problem}
To implement the asymptotic scheme we need to solve the Bloch eigenvalue problem at fixed $\omega$, as a function of the material property $r$ varying on the long scale. That problem consists of the differential equation $\mathscr{L}^2_{k,\chi}U=0$ together with periodic boundary conditions on the cell boundaries $\xi=\pm1$. The solution when $\chi$ is given by \eqref{layered chi} is well known \cite{Kittel:Book}. In particular, the dispersion relation is
\begin{equation}\label{layer dispersion}
2r\cos(2k)=2r\cos\omega\cos(r\omega)-(1+r^2)\sin\omega\sin(r\omega),
\end{equation}
while the periodic Bloch eigenfunctions can be written as
\begin{equation}\label{mode}
U(\xi,\{r,k\})=e^{-ik\xi}
\begin{cases}
a_1e^{i\omega\xi}+a_2e^{-i\omega\xi}, & -1<\xi<0 \\
b_1e^{i\omega r\xi}+b_2e^{-i\omega r\xi}, & 0<\xi<1.
\end{cases}
\end{equation}
We fix the gauge by setting $a_1=e^{i\omega}\left[1+2e^{ir\omega}(r-1)+r\left(1-2e^{i(2k+\omega+\omega r)}\right)\right]$;
the remaining coefficients $a_2$, $b_1$ and $b_2$ are then determined from periodicity along with continuity of $U$ and its first derivative at $\xi=0$.
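As a concrete illustration, the local wavenumber can be computed directly from \eqref{layer dispersion}; the following minimal sketch (in Python; the code and its naming are ours, purely for illustration) returns $k$ at fixed $\omega$ and flags stop bands, where $k$ acquires an imaginary part:
\begin{verbatim}
import numpy as np

def bloch_k(omega, r):
    # Solve 2r cos(2k) = 2r cos(w)cos(rw) - (1+r^2) sin(w)sin(rw)
    # for k; returns (k, in_gap), with k on the non-negative branch.
    rhs = (2*r*np.cos(omega)*np.cos(r*omega)
           - (1 + r**2)*np.sin(omega)*np.sin(r*omega)) / (2*r)
    if abs(rhs) <= 1.0:                  # pass band: k real in [0, pi/2]
        return 0.5*np.arccos(rhs), False
    if rhs > 1.0:                        # stop band about the zone centre
        return 0.5j*np.arccosh(rhs), True
    # stop band about the zone edge: k = pi/2 + i*kappa
    return np.pi/2 + 0.5j*np.arccosh(-rhs), True
\end{verbatim}
For instance, with $r=2.1+1.1\tanh x$ and $\omega=0.65$ (the configuration of Fig.~\ref{fig:nonad}), bisecting on the returned flag locates the spatial band-gap edge $x_t\approx0.5$ used below.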
\subsection{Numerical validation: Adiabatic propagation}
In what follows we compare our asymptotic approach with a numerical transfer-matrix (TM) scheme \cite{Yeh:Book}.
This numerical scheme takes advantage of the fact that within each layer the solution of \eqref{master eq} is a superposition of two harmonic waves propagating in opposite directions. The solution can therefore be represented by two layer-dependent complex coefficients that characterise the amplitudes and phases of the harmonic waves; if the two coefficients are known for one layer, the solution everywhere can be determined by applying continuity conditions at successive interfaces. Usually one does not know \emph{a priori} the values of both coefficients in any given cell and a shooting method is used.
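A minimal sketch of such a marching scheme is given below (Python; our own illustrative code with hypothetical helper names, not the implementation used for the figures). Each half-cell of width $\epsilon$ is homogeneous with index $n=\sqrt{\Lambda}$, the field in layer $j$ is written as $u=a\,e^{i\omega n(x-x_j)/\epsilon}+b\,e^{-i\omega n(x-x_j)/\epsilon}$ with $x_j$ the left edge of the layer, and the pair $(a,b)$ is advanced by imposing continuity of $u$ and $du/dx$ at each interface:
\begin{verbatim}
import numpy as np

def layer_stack(eps, n_cells, r, x0=0.0):
    # Edges and indices n = sqrt(Lambda) for 2*n_cells half-cell
    # layers: Lambda = 1 (left half), r(x_c)^2 (right half).
    edges, idx, x = [x0], [], x0
    for _ in range(n_cells):
        xc = x + eps                     # centre of this 2*eps cell
        for lam in (1.0, r(xc)**2):
            x += eps
            edges.append(x)
            idx.append(np.sqrt(lam))
    return np.array(edges), np.array(idx)

def march(omega, eps, edges, idx, a0, b0):
    # Advance the coefficient pair (a, b) across the interfaces.
    a, b = complex(a0), complex(b0)
    out = [(a, b)]
    for j in range(len(idx) - 1):
        d = edges[j+1] - edges[j]        # layer width (= eps here)
        kj, kn = omega*idx[j]/eps, omega*idx[j+1]/eps
        up, um = a*np.exp(1j*kj*d), b*np.exp(-1j*kj*d)
        a = 0.5*((1 + kj/kn)*up + (1 - kj/kn)*um)
        b = 0.5*((1 - kj/kn)*up + (1 + kj/kn)*um)
        out.append((a, b))
    return out
\end{verbatim}
The unknown boundary coefficient is then adjusted by shooting on the radiation (or decay) condition at the opposite end, as described above.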
We first demonstrate propagation in a locally periodic material, which, at a given operating frequency, is free of dispersion singularities. Accordingly, we expect the WKB solutions \eqref{WKB solution} to hold for all $x$. For the sake of simplicity we assume $r\to1$ as $x\to\pm\infty$, which means that at infinity the medium is homogeneous and accordingly the solution tends to a superposition of oppositely propagating plane waves. To simulate an incident wave from the left, we numerically prescribe at a large negative $x$ the coefficient of the right-propagating wave and determine the coefficient of the left-propagating wave by a shooting method that ensures the radiation condition is satisfied at large positive $x$ (no incoming plane wave). The corresponding asymptotic solution is given by the right-propagating WKB Bloch wave in \eqref{WKB solution}, which is advanced using \eqref{phi pm} and \eqref{gamma pm} starting from the incident plane wave at large negative $x$; the asymptotic scheme does not predict a reflected Bloch wave in this case hence the second solution in \eqref{WKB solution} is discarded.
Figs.~\ref{fig:ad1} and \ref{fig:ad2} show a comparison between the WKB asymptotics and the TM scheme in the respective cases (i) $\omega=3.2,\,\epsilon=0.1,\,r=1.9-0.9\tanh(x^2)$ and (ii) $\omega=0.2,\,\epsilon=0.05,\,r=7-6\tanh(4x^2)$. The figures also show the variation of the material parameter $r$ along with the local Bloch wavenumber and group velocity. The agreement between the asymptotic and numerical solutions is excellent. Fig.~\ref{fig:ad1} shows that the method remains accurate even when the long scale is not very large compared with the short scale. Moreover, at $x=0$ the local Bloch eigenvalue is close to a branch crossing and this too does not result in appreciable deviations. The comparison in Fig.~\ref{fig:ad2} demonstrates the efficacy of the WKB solution for strong and moderately rapid variations. The frequency $\omega$ is low, which explains the long wavelength for $|x|\gtrsim1$. For $|x|\lesssim 1$, however, $r$ goes up to $7$ and the wavelength is alternately short and long in the right and left halves of each unit cell (the wave field is accordingly curved in the grey layers and almost linear in the white layers). Note also that at $x=0$ the Bloch eigenvalue is close to a band-gap edge, where the WKB description breaks down. Thus we have demonstrated agreement under extreme circumstances. We stress that if a naive approach is taken, where either the slow phase $\gamma$ or the medium correction $\tau$ is ignored, the agreement is quite poor and the error does not vanish as $\epsilon\to0$.
\subsection{Numerical validation: Reflection from a band-gap edge}
We next demonstrate the scenario discussed in \S\ref{sec:reflection} where a Bloch wave is reflected from a band-gap edge. Specifically, we consider a locally periodic layered medium $r=2.1+1.1\tanh x$. The medium is homogeneous at large negative $x$, slowly transforming into a band-gap medium at larger $x$; for an operating frequency $\omega=0.65$ the Bloch wavenumber $k$ is real for $x<x_t\approx 0.5$ and complex for $x>x_t$ with real part $k_t=\pi/2$ (see Fig.~\ref{fig:nonad}). To construct the asymptotic solution we advance the right-propagating WKB wave from a prescribed incident wave at large negative $x$. We then use the theory in \S\ref{sec:behaviour}--\S\ref{sec:reflection} to match the diverging solution with a local HFH description for $|x-x_t|=O(\epsilon^{2/3})$, which in turn determines, through matching, the initial conditions for advancing a reflected left-propagating WKB wave. The numerical scheme is applied as before but now with a shooting method ensuring there is no exponentially growing evanescent wave for $x>x_t$. Fig.~\ref{fig:nonad} shows excellent agreement between the asymptotic and numerical solutions for $\epsilon=0.04$. We note that $\gamma$ and $\tau$ play a particularly crucial role in this scenario since matching sensitively depends on the limiting value $\gamma_t$ of the slow phase. We lastly note that in principle the HFH solution could also be matched with an exponentially decaying evanescent Bloch wave for $x>x_t$, though this will not affect the reflected wave in the present scenario where tunnelling is absent.
\section{Discussion}\label{sec:discussion}
The key message of this paper is that multiple-scale WKB approximations and HFH are complementary and have overlapping domains of validity. The two methods can therefore be systematically combined, through the method of matched asymptotic expansions, to furnish a more complete scheme for high-frequency waves in periodic, or locally periodic, media. We here demonstrated this idea for a locally periodic medium in 1D, focusing on the scenario of total reflection of a slowly varying Bloch wave from a band-gap edge. Clearly this is only a first step towards a more general theory.
The present 1D analysis can be directly applied to calculate trapped modes for a locally periodic medium with a pass-band interval bounded by two stop-band intervals \cite{Johnson:02}. By generalising the analysis to account for off-axis propagation, the latter calculation could be adapted to model Bragg waveguides \cite{Yeh:76}. An analogous 1D analysis could be devised to describe transmission and reflection of WKB Bloch waves in the presence of degenerate points where two dispersion branches touch; this would entail matching the WKB solutions with modified inner regions treated by a degenerate HFH-like expansion \cite{Colquitt:15high}. It would also be useful to generalise the WKB scheme to include evanescent Bloch waves; one could then describe tunnelling-assisted transmission through a locally periodic medium with a stop-band interval bounded by two pass-band intervals. Further objectives, still in the 1D framework, include implementing the method for additional periodic modulations $\Lambda$ and to generalise the asymptotics to other wave equations; in particular, in electro-magnetics the wave equation \eqref{master eq} describes s-polarised light and it would be desirable to study p-polarised light as well.
In higher dimensions the multiple-scale WKB method generalises to a geometric ray theory \cite{Allaire:11}. We anticipate that WKB solutions could still be matched with HFH-like expansions localised about geometric places where the Bloch eigenvalue becomes critical or degenerate. Spatially localised dispersion singularities would be present even in non-graded crystals, since the Bloch wavevector at fixed frequency would vary with the direction of propagation. In the latter case, the Bloch wavevector is conserved along straight rays that are directed along the corresponding group-velocity vector. HFH expansions would thus be sought about singular rays whose Bloch wavevector is critical. In higher dimensions there would also be the usual ray singularities, in physical rather than Bloch space, namely focus points and caustics \cite{Chapman:99}. Finally, a general description would address diffraction at crystalline interfaces. While Bragg's law determines the directions of reflected, refracted and diffracted rays, a complete quantitative picture would generally entail matching with inner regions wherein the crystal appears to be semi-infinite.
\section*{Acknowledgement}
I am grateful to Richard V. Craster for fruitful discussions.
\section{Introduction}
The famous Coulomb potential of electrodynamics is very important as
an essential ingredient in any non-relativistic problem involving
charged particles and consequently as being responsible for many
phenomena in everyday life. It is thus no surprise that its analogue
in chromodynamics has also been of great interest for the past 20 years.
Although the QCD potential certainly is not as ubiquitous in the
macroscopic world, it still represents a fundamental concept which,
besides the fact that potential models have been astonishingly
successful in the description of quarkonia, might give us some deeper
understanding of non-abelian theories and especially of confinement.
There is, of course, as yet no known way to derive the confining
part of the potential analytically from first principles, and
consequently this paper too is restricted to an analysis of the
perturbative part. Nevertheless one may hope to obtain some hints
on the non-perturbative regime in this way. In addition, a calculation
of the perturbative potential is required for comparisons between
continuum QCD and lattice results.
The first investigations of the interaction energy between an
infinitely heavy quark-antiquark pair date back to 1977
\cite{Susskind,F77,ADM77} and were initiated by L. Susskind's Les Houches
lectures \cite{Susskind}. In these works some general properties of
the static potential were discussed and the one-loop and parts of the
higher-order diagrams were computed. In the years that followed, much
effort went into deriving (quark-mass suppressed) spin- and
velocity-dependent corrections in various frameworks, but it is
surprising that for a very long time there was no complete two-loop
calculation of the true static case available. Only recently has this
gap been closed \cite{MP96}, and the present paper is essentially an
extended version of \cite{MP96} presenting details about the two-loop
calculation, which might be of interest for other problems as well.
Before turning to the actual problem, however, let us briefly outline the
structure of the paper: to introduce the
notation and for illustrative purposes, the QED potential is discussed in
section \ref{sec:QED}, which is followed by a section describing the main
additional problems encountered in the non-abelian theory. Section
\ref{sec:Tech} is concerned with the techniques needed for the actual two-loop
calculation, and finally sections \ref{sec:Res} and \ref{sec:pos} contain the
result for the potential in momentum and position space, respectively.
\section{Abelian case: the QED potential \label{sec:QED}}
A definition of the static potential in QED which is both convenient
for analytical and lattice calculations and which in addition is
manifestly gauge invariant can be given as follows:\\
\parbox{10cm}{
consider the vacuum expectation value of a rectangular Wilson loop
of spatial extent $r$ and temporal extent $t$,
$$
\langle 0|{\rm T}\;\exp\{ie\oint dx_\mu A^\mu\}|0\rangle.
$$
}\hspace{15mm}\parbox{35mm}{ \epsfxsize2.5cm \epsfbox{fig0.ps} }\\
If we let $t$ approach infinity, the lines corresponding to $|t|=\infty$ will
only give a negligible contribution to the line integral because the length of
the integration path is much smaller than that of the other two
lines\footnote{It should be evident from the definition (\ref{def}) that a
small contribution from these two lines would indeed
not affect the potential.}. In other words,
\begin{eqnarray*}\lefteqn{
\langle 0|{\rm T}\;\exp\{ie\oint dx_\mu A^\mu\}|0\rangle}&& \\
&\stackrel{t\to\infty}{\longrightarrow}&
\langle 0|{\rm T}\;\exp\{ie\int_{-t/2}^{t/2}d\tau\Big(A_0(\tau,-{\bf r}/2)
-A_0(\tau,{\bf r}/2)\Big)\}|0\rangle.
\end{eqnarray*}
The path integral representation of the remaining expectation value
equals the partition function of a system described by the
usual (purely photonic) QED Lagrangian with the addition of a source
term $J_\mu(x)A^\mu(x)$ with
$$
J_\mu(x) = ev_\mu\Bigg[\delta^{(3)}\Big({\bf x}+\frac{\bf r}{2}\Big)
- \delta^{(3)}\Big({\bf x}-\frac{\bf r}{2}\Big)\Bigg];\qquad
v_\mu \equiv \delta_{\mu0}.
$$
As the partition function is dominated by the ground state energy in this
limit, i.e.\ the path integral approaches $\exp(-itE_0(r))$, and as the ground
state energy is exactly what we would term the potential, we are led to
define
\begin{equation}\label{def}
V(r) = -\lim_{t\to\infty}\frac{1}{it}\ln\langle 0|{\rm T}\;\exp\{ie\oint
dx_\mu A^\mu\}|0\rangle.
\end{equation}
For pure QED the vacuum expectation value is a gaussian path integral
and an exact calculation is therefore possible, with the result
\begin{equation}\label{QED}
V(r) = e^2\int\!\frac{d^3q}{(2\pi)^3}\Bigg(\frac{1}{{\bf q}^2} -
\frac{e^{i{\bf qr}}}{{\bf q}^2}\Bigg) = \Sigma+V_{\rm\scriptscriptstyle Coul.}
\end{equation}
where as the second term the expected Coulomb potential appears, but
there is also an infinite constant representing the self-energy of the
sources known from classical electrodynamics.
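For completeness we note the elementary Fourier integral behind the second term,
\begin{equation}
\int\!\frac{d^3q}{(2\pi)^3}\,\frac{e^{i{\bf qr}}}{{\bf q}^2}
=\frac{1}{2\pi^2 r}\int_0^\infty\!\frac{\sin u}{u}\,du=\frac{1}{4\pi r},
\end{equation}
so that (\ref{QED}) indeed gives $V_{\rm\scriptscriptstyle Coul.}(r)=-e^2/(4\pi r)=-\alpha/r$.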
An exact solution will no longer be possible as soon as light fermions are
included or a non-abelian theory is considered. It is therefore useful to
perform a perturbative analysis as well. The Feynman rules for the sources can
be read off easily: each source-photon vertex corresponds to a factor
$iev^\mu$, an anti-source obtains an additional minus sign. When we expand the
time-ordered exponential, we may introduce $\Theta$-functions to express the
different possible time-orderings of the fields $A^\mu$, and in turn can
re-interpret them as source propagators in position space,
\begin{equation}
S_F(x-x^\prime) = -i\Theta(x_0-x_0^\prime)\delta^{(3)}({\bf x-x}^\prime),
\end{equation}
the reverse time ordering must be used for the anti-source.
Transforming the expression to momentum space one obtains
\begin{equation}
S_F(p) = \frac{1}{vp+i\varepsilon}
\end{equation}
with $v\to-v$ for the anti-source. One should note that in the path-integral
approach we could in fact omit the time-ordering prescription, as it is
already implicit in that formalism, but the final integrals we would encounter
would not be easier to solve. The approach we take, however, will prove
to be very useful in the non-abelian theory.
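As a check of the momentum-space propagator above: adopting for definiteness the Fourier convention $S_F(p)=\int\!d^4x\;e^{ipx}S_F(x)$, the spatial $\delta$-function integrates to unity while the temporal integral gives
\begin{equation}
-i\int_0^\infty\!dx_0\;e^{i(p_0+i\varepsilon)x_0}=\frac{1}{p_0+i\varepsilon}
=\frac{1}{vp+i\varepsilon},
\end{equation}
since $vp=p_0$ for $v_\mu=\delta_{\mu0}$.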
In addition, an immediate observation is that the Feynman rules are exactly the
same as those of Heavy Electron Effective Theory, the QED analogue of HQET,
where $v$ would represent the electron's velocity. This comes as no surprise,
as we are investigating the infinite mass limit of QED, and it is well known
that the potential can also be derived from the scattering operator. The static
QED potential thus should be derivable from the scattering matrix of HEET,
correspondingly the QCD potential from HQET, and this is in fact what is done
in practice. This approach can even be used to determine the spin-dependent
corrections to the potential \cite{CKO95}.
\begin{figure}\begin{center}
\epsfxsize 8cm \mbox{\epsfbox{fig1.ps}}
\end{center}
\caption{One-loop ladder diagrams. The double-lines represent the static
sources.\label{fig:one}}
\end{figure}
At tree level, one obtains the scattering amplitude
$$
-ie^2 \int\!\!dx_0dy_0\;v_\mu v_\nu D^{\mu\nu}(x-y)\Big|_{{\bf x}=0,{\bf y}
={\bf r}}=
-ie^2t\int\!\frac{d^3q}{(2\pi)^3}\frac{e^{i{\bf qr}}}{-{\bf q}^2}
$$
where $D^{\mu\nu}$ is the photon propagator. At first glance this seems to be
the complete result already and one might wonder what happens to the
higher-order diagrams. But some care is required because the sum of all
diagrams does not correspond to the potential, but to its exponential. That
is exactly what the loop graphs are needed for. At one-loop order
for example, the ladder diagrams shown in Fig.\ \ref{fig:one} appear. When
working in position space it is easy to see that summing the two effectively
removes the anti-source propagator --- because it is only a
$\Theta$-function. Hence, adding them once more with $x\leftrightarrow
x^\prime$ the source propagator is also removed and one finds
$$
2!\times\mbox{Fig.}~\ref{fig:one}
= \Big(-ie^2 \int\!\!dx_0dy_0\;D^{00}(x-y)\Big|_{{\bf x}=0,{\bf y}
={\bf r}}\Big)^2.
$$
(A similar analysis in momentum space is also feasible, but more difficult
\cite{MP96}.) It is clear from the way the ladder diagrams are constructed
that this behaviour persists to all orders: the ladder diagrams are derived
from the uncrossed ladder by permuting all vertices, i.e.\ generating all
time-orderings, on one of the two fermion lines. Hence after summing all of
them we end up with the uncrossed ladder again, where the $\Theta$-functions on
one of the fermion lines are removed. The source propagators on the other line
are removed by adding the same sums, with all possible permutations of the
names of the vertices, just as was done in the one-loop example. Since the
$n$-rung ladder involves $n!$ such permutations, the exponential thus forms.
The other types of graphs --- source self-energy and vertex correction
diagrams --- can be shown, along the same line, to produce the
products of the form $\Sigma^n\cdot V^m_{\rm\scriptscriptstyle Coul.}(r)$ predicted by
the exact formula (\ref{QED}).
If light fermions are to be included, an explicit exact expression for the
potential can no longer be derived and one has to rely on the perturbative
treatment. The effect of the fermions is to introduce a running coupling in
two different ways. The obvious point is that fermion loops cause vacuum
polarization effects, which can be accounted for by defining an effective
charge in momentum space via
$$
\alpha_{\rm\scriptscriptstyle eff}({\bf q}^2) = \frac{\alpha}{1+\Pi({\bf q}^2)}
$$
where $\Pi({\bf q}^2)$ represents the vacuum polarization function and
$\alpha$ the fine structure constant. This definition is gauge
invariant and includes a Dyson summation. One might thus guess that
\begin{equation}\label{QEDeff}
V({\bf q}^2) = -\frac{4\pi\alpha_{\rm\scriptscriptstyle eff}({\bf q}^2)}{{\bf q}^2},
\end{equation}
but the formula can only be correct at the one- and two-loop level,
because starting at three loops light-by-light scattering diagrams enter.
The correct formula should read
\begin{equation}\label{QEDV}
V({\bf q}^2) = -\frac{4\pi\alpha_{\rm\scriptscriptstyle V}({\bf q}^2)}{{\bf q}^2},
\end{equation}
with $\alpha_{\rm\scriptscriptstyle V}\neq\alpha_{\rm\scriptscriptstyle eff}$, which, however, merely serves
as a definition of $\alpha_{\rm\scriptscriptstyle V}$. Nevertheless, this definition is
quite convenient, especially when turning to the non-abelian case where
a gauge-invariant definition of an effective charge is nontrivial.
\section{\label{sec:QCD}Complications in the non-abelian case}
One novelty that arises in QCD is well known: the non-linear nature of
non-abelian theories introduces additional diagrams which cause a running
coupling even in the absence of light fermions, and the fact that gluons carry
colour also requires a redefinition of the Wilson loop and consequently
of the potential as follows:
\begin{equation}
V(r) = -\lim_{t\to\infty}\frac{1}{it}\ln
\langle0|\mbox{Tr~P}\exp\Big(ig \oint\!dx_\mu\;A^\mu_aT^a \Big)|0\rangle.
\end{equation}
The generators $T^a$ have to be inserted in order to absorb the gluons' colour
indices, and the fact that the generators do not commute requires the
introduction of a path-ordering prescription. An important point is that this
prescription is not automatically taken into account by using the path-integral
formalism, in contrast to the time-ordering. Consequently, all we can do even
in the path-integral approach is use the definition of the path-ordered
exponential, i.e.\ expand it, and the usefulness of the way we analyzed the
abelian case should become clearer. The net effect of the modifications in the
non-abelian case is a complication of the way the exponentiation of the
potential arises, and that the sources are automatically in a colour singlet
state.
In principle we are free to choose any representation for the generators, but
since we intend to describe quarks we are going to use the fundamental one. On
tree level, the only change with respect to QED then is the colour factor $C_F
= T_F(N^2-1)/N$ that multiplies the coupling in the potential, where $N$ is
the number of colours and $T_F$ the normalization of the generators. It is
thus convenient to define, in analogy to (\ref{QEDV}),
\begin{equation}\label{QCDV}
V({\bf q}^2) = -C_F\frac{4\pi\alpha_{\rm\scriptscriptstyle V}({\bf q}^2)}{{\bf q}^2},
\end{equation}
as this has the following advantage: $C_F$ is the value of the Casimir
operator $T^aT^a=C_R${\bf 1} in the fundamental representation
$T^a_{ij}=\lambda^a_{ij}/2$, where the $\lambda^a$ are the Gell-Mann matrices.
If we chose the adjoint representation $T^a_{ij}=-if^{aij}$, which would be
equally acceptable, the only thing that would change in (\ref{QCDV}) is the
overall factor $C_F$ which would be replaced by $C_A$, the coupling
$\alpha_{\rm\scriptscriptstyle V}$ would remain the same. The statement is trivial at tree
level, but it is in fact true to all orders as will be explained after
the discussion on the exponentiation. By choosing the adjoint representation
we obtain the static potential for gluinos in a colour singlet state.
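For QCD itself, $N=3$ and $T_F=1/2$ yield the familiar values $C_F=4/3$ and $C_A=3$.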
In higher orders the presence of the generators causes the individual
diagrams to obtain different colour factors and consequently the
exponentiation has to be more involved than in the abelian case.
Although this point has already been analyzed in \cite{F77}, it seems
appropriate to recall the discussion here. It should, however, suffice to
consider the ladder diagrams to explain the basic idea.
The QCD one-loop ladder diagrams corresponding to Fig.\ \ref{fig:one}
obtain the colour factors
\begin{equation}
\mbox{Fig.\ \ref{fig:one}a}\propto C_F^2 \qquad,\qquad
\mbox{Fig.\ \ref{fig:one}b}\propto C_F^2-\frac{C_A}{2}C_F
\end{equation}
and we immediately can identify the abelian-like terms $\propto C_F^2$
that are needed to build the exponential of the Coulomb potential. But
it is also obvious that the remainder of Fig.\ \ref{fig:one}b,
together with a contribution from the vertex corrections involving the
same colour factor, gives an additional contribution to the potential,
which then has to be exponentiated by the higher order diagrams.
\begin{figure}[ht]\begin{center}
\epsfxsize 12cm \mbox{\epsfbox{fig2.ps}}
\end{center}
\caption{Two-loop ladder diagrams in QCD.\label{fig:two}}
\end{figure}
Fig.\ \ref{fig:two} shows the two-loop ladder diagrams, which involve the
colour factors
\begin{eqnarray}
\mbox{Fig.}~\ref{fig:two}a~~ & \propto & C_F^3 \\
\mbox{Fig.}~\ref{fig:two}b,c & \propto & C_F^3 - C_F^2\frac{C_A}{2}
\label{tlbc}\\
\mbox{Fig.}~\ref{fig:two}d,e & \propto & C_F^3 - 2C_F^2\frac{C_A}{2}
+\frac{C_A^2}{4}C_F\label{tlde}\\
\mbox{Fig.}~\ref{fig:two}f~~ & \propto & C_F^3 - 3C_F^2\frac{C_A}{2}+
\frac{C_A^2}{2}C_F.\label{tlf}
\end{eqnarray}
As expected, each diagram contains the ``abelian'' term $C_F^3$, thus forming
an iteration of the tree-level potential, but the terms $\propto C_F^2C_A$ are
also iterations, exactly those that arise from exponentiating the additional
one-loop term. This can be seen as follows: when working in coordinate space
and neglecting the colour factors, we can look for combinations of the diagrams
in which one of the anti-source propagators is removed, with the result
\begin{eqnarray*}
(b)+(e)+(f) &=&\raisebox{-0.35in}{\epsfxsize1.25in
\epsfbox{laddersa.ps}} \\
(c)+(d)+(f) &=&\raisebox{-0.35in}{\epsfxsize1.25in
\epsfbox{laddersb.ps}} \\
(d)+(e)+(f) &=&\raisebox{-0.35in}{\epsfxsize1.25in
\epsfbox{laddersc.ps}}.
\end{eqnarray*}
Summing the three lines we find that the coefficients of $-C_F^2C_A/2$ in
Eqs.\ (\ref{tlbc}--\ref{tlf}) equal the corresponding diagrams' coefficients in
the sum and that the $\Theta$-functions fixing the position of the marked
gluon line are removed. Hence the sum is a product of one-gluon exchange and
the crossed ladder diagram with exactly the right colour factor. A similar
analysis for the remaining two-loop diagrams leads to the conclusion that the
exponentiation works in the non-abelian case as well, and that all terms
$\propto C_F^3$ or $C_F^2C_A$ indeed constitute iterations.
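Colour factors such as (\ref{tlbc})--(\ref{tlf}) are also easily verified numerically. The following minimal sketch (Python with explicit Gell-Mann matrices; purely illustrative, not part of the actual calculation) checks the fully crossed ladder Fig.~\ref{fig:two}f, whose singlet-projected factor is $\frac{1}{N}\sum_{a,b,c}{\rm Tr}(T^aT^bT^cT^aT^bT^c)$:
\begin{verbatim}
import numpy as np

# Gell-Mann matrices; the generators are T^a = lambda^a / 2.
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0,1] = lam[0][1,0] = 1
lam[1][0,1], lam[1][1,0] = -1j, 1j
lam[2][0,0], lam[2][1,1] = 1, -1
lam[3][0,2] = lam[3][2,0] = 1
lam[4][0,2], lam[4][2,0] = -1j, 1j
lam[5][1,2] = lam[5][2,1] = 1
lam[6][1,2], lam[6][2,1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2

N, CF, CA = 3, 4/3, 3.0
crossed = sum(np.trace(T[a] @ T[b] @ T[c] @ T[a] @ T[b] @ T[c])
              for a in range(8) for b in range(8)
              for c in range(8)).real / N
print(crossed)                                  # 0.3704 = 10/27
print(CF**3 - 1.5*CF**2*CA + 0.5*CA**2*CF)      # same value
\end{verbatim}
The uncrossed ladder Fig.~\ref{fig:two}a is recovered analogously by reversing the second triple of generators, giving $C_F^3$.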
In consequence, the number of diagrams that really have to be computed
to determine the two-loop contribution to the QCD potential is
somewhat reduced. Nevertheless, when compared to QED the number of diagrams is
still quite large, as in the abelian case only very few remain
(essentially the vacuum polarization).
Having discussed the exponentiation we are able to prove the statement
that $\alpha_{\rm\scriptscriptstyle V}$ is independent of the representation the sources are
in. It is obvious that for any representation $R$, the colour factor for a
given diagram is the same as the one for $R=F$ if we replace $C_F$ by $C_R$,
because it is completely determined by the algebra of the generators: only
$C_R$ and $C_A$ can appear since the generators themselves and the structure
constants arising from commutators are the only ``matrices'' around, and their
combination is fixed. We have also seen that at each loop order the only net
contribution to the potential arises from the colour factors linear in $C_F$,
all other terms are iterations. The definition (\ref{QCDV}) then implies that
$\alpha_{\rm\scriptscriptstyle V}$ does not involve $C_F$ at all%
\footnote{We only consider factors $C_F$ coming from the sources here. Of
course $\alpha_{\rm\scriptscriptstyle V}$ does involve $C_F$, but these terms arise from
fermion loops and are unrelated to the representation of the external
sources. Consequently they should not be replaced.}
and thus the replacement $C_F\to C_R$ does not affect the coupling.
\section{\label{sec:Tech}Details of the two-loop calculation}
It is worth presenting at least some details of the techniques used to
compute the two-loop diagrams, first of all because they are helpful when
trying to reproduce the results, but also because they might prove useful
for other calculations as well.
The remaining diagrams are best analyzed in momentum space, using the
kinematics that follows from the Wilson loop definition: the ``on-shell
condition'' for the source reads $vp=0$ where $p$ denotes the four-momentum
carried by the source. Thus the sources may have any three-momenta, the
actual values of which are, however, unimportant, as the only quantity that
appears is the momentum transfer $q^\mu=(0,{\bf q})$.
By choosing a convenient gauge the number of graphs can be further reduced:
if we employ Feynman gauge, all diagrams with a three- or four-gluon vertex
with all gluons directly coupled to source lines vanish. This welcome feature
is caused by the replacement of the Dirac matrices $\gamma^\mu$ in the
source-gluon vertex by the simple factor $v^\mu=\delta^{\mu0}$, which means
that all vertices are interchangeable as far as their Lorentz structure and
momentum dependence is concerned (the momentum dependence is mentioned
because it is the property which is destroyed by other gauges). The
symmetry properties of both the three- and four-gluon vertices then imply
that the diagrams vanish.
\begin{figure}[ht]\begin{center}
\epsfxsize 12cm \mbox{\epsfbox{fig3.ps}}
\end{center}
\caption{Two-gluon exchange diagrams in QCD that remain in Feynman gauge.
\label{fig:rest}}
\end{figure}
Consequently, apart from the ladders Fig.\ \ref{fig:two}d--f, the two-loop
gluon self-energy and double insertions of one-loop corrections, only a few of
the two-loop vertex correction graphs and the six two-gluon exchange diagrams
of Fig.\ \ref{fig:rest} have to be calculated. In fact the number of diagrams
is even slightly lower because Fig.\ \ref{fig:rest}f has a vanishing colour
factor for sources in the colour singlet state, and of the ladder diagrams
only Fig.\ \ref{fig:two}f is required, because the relation
\begin{equation}
\mbox{Fig.~\ref{fig:two}d+\ref{fig:two}e} = - \mbox{Fig.~\ref{fig:two}f}
\end{equation}
holds if the corresponding colour factors are neglected. The equality
can easily be proven in momentum space by applying the trivial identity
$$
\frac{1}{lv-kv+i\varepsilon}\frac{1}{kv+i\varepsilon} -
\frac{1}{lv-kv+i\varepsilon}\frac{1}{lv+i\varepsilon} =
\frac{1}{kv+i\varepsilon}\frac{1}{lv+i\varepsilon}
$$
to one of the two source lines.
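The identity can be verified symbolically in one line; in the following Python sketch (an aside, not part of the original derivation) we write $a=kv$, $b=lv$ and suppress the $i\varepsilon$ prescriptions:
\begin{verbatim}
import sympy as sp

a, b = sp.symbols('a b')  # a = kv, b = lv
lhs = 1/((b - a)*a) - 1/((b - a)*b)
assert sp.simplify(lhs - 1/(a*b)) == 0
\end{verbatim}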
Using dimensional regularization with $D=4-2\epsilon$ for both the infrared
and ultraviolet divergences, the integration by parts technique \cite{IbP}
can be used to reduce most of the integrals that arise to products or
convolutions of four types of one-loop integrals, which in turn can be
computed by standard methods. With the notation
$$ \widetilde{dp} \equiv \frac{(2\pi\mu)^{2\epsilon}}{i\pi^2}d^Dp\qquad,\qquad
{\cal P}_n(p) \equiv p^{\mu_1}p^{\mu_2}\cdots p^{\mu_n},
$$
\begin{equation}
G(n,m;k,j) = (4\pi\mu^2)^\epsilon\frac{\Gamma(n+m-j-{D\over2})}
{\Gamma(n)\Gamma(m)}B\Big(\frac{D}{2}-n+k-j,\frac{D}{2}-m+j\Big),
\end{equation}
\begin{equation}
G_H(n,m;k,j) = (4\pi\mu^2)^\epsilon\frac{\Gamma(\frac{D}{2}-n+k-j)
\Gamma(2n+m-k-D)}{\Gamma(n)\Gamma(m)}
\end{equation}
the integrals are, omitting the $i\varepsilon$ in the propagators for
brevity:
\begin{eqnarray}
\lefteqn{
\int\widetilde{dp}\Bigg(\frac{1}{-p^2}\Bigg)^n\Bigg(\frac{1}{-(p-q)^2}
\Bigg)^m {\cal P}_k(p)} && \nonumber \\ &=&
(-q^2)^{D/2-n-m}\sum_{j=0}^{[k/2]}G(n,m;k,j)
\Big(\frac{q^2}{4}\Big)^j\frac{1}{j!}\Bigg(\frac{\partial}{\partial q_\mu}
\frac{\partial}{\partial q^\mu}\Bigg)^j{\cal P}_k(q),\label{two:std}\\
\lefteqn{
\int\widetilde{dp}\Bigg(\frac{1}{-p^2}\Bigg)^n\Bigg(\frac{w}{pv+w}
\Bigg)^m {\cal P}_k(p)} && \nonumber \\ &=&
(-1)^k(-2w)^{D+k-2n}\sum_{j=0}^{[k/2]}G_H(n,m;k,j)\Big(\frac{-1}{4}\Big)^j
\Bigg(\frac{\partial}{\partial v_\mu}\frac{\partial}{\partial v^\mu}
\Bigg)^j{\cal P}_k(v),\label{two:hqet}\\
\lefteqn{
\int\widetilde{dp}\Bigg(\frac{1}{-p^2}\Bigg)^n\Bigg(\frac{1}{-(p-q)^2}
\Bigg)^m\Bigg(\frac{1}{-pv}\Bigg)^a\Bigg|_{vq=0}} && \nonumber \\ &=&
(-q^2)^{\frac{D-a}{2}-n-m}\frac{\sqrt\pi}{\Gamma(\frac{a+1}{2})}
G(n,m;-a,-a/2),\label{three:one}\\
\lefteqn{
\int\widetilde{dp}\Bigg(\frac{1}{-p^2}\Bigg)^n\Bigg(\frac{w}{pv+w}
\Bigg)^a\Bigg(\frac{w}{pv}\Bigg)^b} && \nonumber \\ &=&
(-2w)^{D-2n}\frac{\Gamma(D-2n-b)}{\Gamma(D-2n)}G_H(n,a;-b,-b).
\label{three:two}
\end{eqnarray}
The formula for the standard massless two-point function Eq.\ (\ref{two:std})
is a well known result which can be found for example in \cite{IbP}. Its HQET
analogue (\ref{two:hqet}) and the HQET three-point functions (\ref{three:one})
and (\ref{three:two}) can be calculated in a similar way using the
parametrization
$$ \frac{1}{a^n b^m} = \frac{\Gamma(n+m)}{\Gamma(n)\Gamma(m)}\int_0^\infty\!
d\alpha \frac{\alpha^{m-1}}{(a+\alpha b)^{n+m}}
$$
to combine gluon and source propagators, instead of the usual Feynman
parameters. Eq.\ (\ref{two:hqet}) without the factor ${\cal P}_k(p)$ has
already been given in \cite{BG91}, and Eq.\ (\ref{three:two}) can be derived
from Eq.\ (8) of \cite{BBG93}.
The reduction of a given graph to the two functions $G$ and $G_H$ can be
automated with the help of a computer program such as FORM \cite{Form}, but
an additional final reduction to a few basic $G_{(H)}$-functions seems
quite difficult because of the appearance of $\epsilon$-dependent values for
the last two arguments of $G$ and $G_H$ and additional ratios of
$\Gamma$-functions. As an example consider the integral
\begin{eqnarray*}
\lefteqn{
\int\!\widetilde{dp}\widetilde{dp^\prime}\frac{1}{p^2(p+q)^2(p^\prime)^2
(p^\prime v)^2(p^\prime v+pv)(pv)}} && \\
&\stackrel{(\ref{three:two})}{=}&
-\frac{2^{D-2}\Gamma(D-4)}{\Gamma(D-2)}G_H(1,1;-2,-2)
\int\!\widetilde{dp}\frac{1}{(-p^2)(-(p+q)^2)(-pv)^{6-D}} \\
&\stackrel{(\ref{three:one})}{=}&
\frac{-8\Gamma(-2\epsilon)\Gamma(1+\epsilon)}{\Gamma(2-2\epsilon)
\Gamma(2+2\epsilon)}G_H(1,1;-2,-2)G(1,1;-2-2\epsilon,-1-\epsilon)
(-q^2)^{-1-2\epsilon}
\end{eqnarray*}
which corresponds to Fig.\ \ref{fig:rest}b. It thus seems easier to directly
rewrite the resulting expressions in terms of $\Gamma$-functions and
immediately expand them in $\epsilon$.
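To illustrate this workflow, the following sympy sketch (our own transcription; the Beta function is rewritten in terms of $\Gamma$-functions and the overall factor $(4\pi\mu^2)^\epsilon$ is dropped) should expose the poles in $\epsilon$ of the coefficient of $(-q^2)^{-1-2\epsilon}$ in the example above:
\begin{verbatim}
import sympy as sp

eps = sp.symbols('epsilon')
D = 4 - 2*eps

def G(n, m, k, j):
    # massless two-point function, (4*pi*mu^2)^eps dropped,
    # B(x, y) written as Gamma(x)Gamma(y)/Gamma(x+y)
    return (sp.gamma(n + m - j - D/2) / (sp.gamma(n)*sp.gamma(m))
            * sp.gamma(D/2 - n + k - j) * sp.gamma(D/2 - m + j)
            / sp.gamma(D - n - m + k))

def GH(n, m, k, j):
    # HQET analogue, same convention
    return (sp.gamma(D/2 - n + k - j) * sp.gamma(2*n + m - k - D)
            / (sp.gamma(n)*sp.gamma(m)))

# coefficient of (-q^2)^(-1-2 eps) in the example (Fig. 3b)
c = (-8*sp.gamma(-2*eps)*sp.gamma(1 + eps)
     / (sp.gamma(2 - 2*eps)*sp.gamma(2 + 2*eps))
     * GH(1, 1, -2, -2) * G(1, 1, -2 - 2*eps, -1 - eps))
print(sp.series(c, eps, 0, 1))  # exhibits the expected double pole
\end{verbatim}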
However, as already mentioned, not all of the graphs can be reduced in an
easy way to the above one-loop integrals; a few diagrams remain that involve
the following type of true two-loop integrals:
\begin{equation}
I(a,b,c;n,m) = \int\!\widetilde{dp}\widetilde{dr}\Bigg(\frac{-1}{p^2}\Bigg)^a
\Bigg(\frac{-1}{r^2}\Bigg)^b\Bigg(\frac{-1}{(p-r-q)^2}\Bigg)^c
\frac{1}{(pv)^n}\frac{1}{(rv)^m}
\end{equation}
with $vq=0$ and $n,m>0$. From
$$
\int\!\widetilde{dp}\widetilde{dr}\;v_\mu\frac{\partial}{\partial p_\mu}
f(p,r;q) = 0
$$
one can derive the recursion relation
\begin{equation}
n{\rm N}^+I = 2(a{\rm A}^+ + c{\rm C}^+){\rm N}^-I - 2c{\rm C}^+{\rm M}^- I.
\end{equation}
Here the operator ${\rm N}^+$ means that the argument $n$ of $I$ should be
increased by one, and correspondingly for the other operators. The relation
can be used to reduce integrals with $n>1$ or $m>1$ to those with $n=m=1$. For
example, with $n=1,m=2$ we find for the integral corresponding to Fig.\
\ref{fig:two}f:
\begin{equation}
I(1,1,1;2,2) = 2I(2,1,1;0,2)+2I(1,1,2;0,2)-2I(1,1,2;1,1).
\end{equation}
The first two terms can be computed with the help of
(\ref{two:std})--(\ref{three:two}), but the last one cannot be simplified any
further by the recursion relation (due to the absence of the propagators
$1/(p-q)^2$ and $1/(r-q)^2$ there are no other simple relations).
It turns out that the following three ``irreducible'' integrals,
the calculation of which is sketched in the appendix, are needed:
\begin{eqnarray}
I(1,1,1;1,1) & = & \frac{2\pi}{3}(-q^2)^{-2\epsilon}G(1,1;-1,-\frac{1}{2})
G(1,\frac{1}{2}+\epsilon;-1,-\frac{1}{2})\\
I(1,1,2;1,1) & = & -\frac{2}{q^2}\Bigg(\frac{4\pi\mu^2}{-q^2}\Bigg)^{
2\epsilon}\Bigg[\frac{1}{\epsilon^2}-\frac{2}{\epsilon}+4-\frac{5}{6}\pi^2
\nonumber \\ &&
-\epsilon\Big(8-\frac{5}{3}\pi^2+\frac{32}{3}\zeta_3\Big)
+{\cal O}(\epsilon^2)\Bigg]\\
I(2,1,2;1,1) & = & \frac{1}{(q^2)^2}\Bigg(\frac{4\pi\mu^2}{-q^2}\Bigg)^{
2\epsilon}\Bigg[\frac{2}{\epsilon^2}+\frac{2}{\epsilon}-4-\frac{5}{3}\pi^2
\nonumber \\ &&
-\frac{\epsilon}{3}\Big(24-17\pi^2-64\zeta_3\Big)
+{\cal O}(\epsilon^2)\Bigg].
\end{eqnarray}
With these equations all the basic formul\ae\ for the determination of
the two-loop diagrams are given. The expressions for the individual graphs
will, however, not be listed here; we will directly turn to the complete
result instead.
\section{\label{sec:Res}The QCD potential in momentum space}
A convenient way of writing the QCD potential in momentum space is\footnote{
The second and third equations might be incomplete for $n>2$, because $a_n$
might depend on $\ln\alpha$, see the discussion in \cite{ADM77}. The present
paper, however, is restricted to the three-loop potential, i.e.\ $n\le2$.}
\begin{eqnarray}
V({\bf q}^2) & = & -C_F\frac{4\pi\alpha_{\rm\scriptscriptstyle V}({\bf q}^2)}{{\bf q}^2}
\label{Vq}\\
\alpha_{\rm\scriptscriptstyle V}({\bf q}^2) & = & \alpha_{\overline{\rm\scriptscriptstyle MS}}(\mu^2)
\sum_{n=0}^\infty \tilde{a}_n(\mu^2/{\bf q^2})
\Big(\frac{\alpha_{\overline{\rm\scriptscriptstyle MS}}(\mu^2)}{4\pi}\Big)^n
\label{orig} \\ &=&
\alpha_{\overline{\rm\scriptscriptstyle MS}}({\bf q}^2)
\sum_{n=0}^\infty a_n\Big(\frac{\alpha_{\overline{\rm\scriptscriptstyle MS}}({\bf q}^2)}
{4\pi}\Big)^n \label{vms}
\end{eqnarray}
with $a_0=\tilde a_0=1$ and where
\begin{equation}
a_1 = \frac{31}{9}C_A - \frac{20}{9}T_Fn_f\quad,\quad
\tilde a_1 = a_1 +\beta_0\ln\frac{\mu^2}{\bf q^2}
\end{equation}
is the long-known one-loop result and
\begin{eqnarray}
a_2 &=& \Big(\frac{4343}{162}+6\pi^2-\frac{\pi^4}{4}+\frac{22}{3}\zeta_3
\Big)C_A^2 -\Big(\frac{1798}{81}+\frac{56}{3}\zeta_3\Big)C_AT_Fn_f
\nonumber \\ &&
-\Big(\frac{55}{3}-16\zeta_3\Big)C_FT_Fn_f + \Big(\frac{20}{9}T_Fn_f\Big)^2
\\
\tilde a_2 &=& a_2 + \beta_0^2\ln^2\frac{\mu^2}{\bf q^2}
+(\beta_1+2\beta_0 a_1)\ln\frac{\mu^2}{\bf q^2}
\end{eqnarray}
is the new information added by the present analysis. Note that the last term
of $a_2$ could have been predicted from the one-loop result; it is exactly the
contribution that would be Dyson-summed in QED by introducing an effective
coupling. The third term originates from the two-loop vacuum polarization and
thus would also be included in the effective coupling in the case of QED.
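For orientation, the coefficients can be evaluated numerically; the short Python sketch below simply inserts the SU(3) colour factors $C_A=3$, $C_F=4/3$, $T_F=1/2$ and $n_f=5$ into the formulas above:
\begin{verbatim}
from math import pi

zeta3 = 1.2020569031595943
CA, CF, TF, nf = 3.0, 4.0/3.0, 0.5, 5.0

a1 = 31.0/9.0*CA - 20.0/9.0*TF*nf
a2 = ((4343.0/162.0 + 6*pi**2 - pi**4/4 + 22.0/3.0*zeta3)*CA**2
      - (1798.0/81.0 + 56.0/3.0*zeta3)*CA*TF*nf
      - (55.0/3.0 - 16*zeta3)*CF*TF*nf
      + (20.0/9.0*TF*nf)**2)
print(a1, a2)   # roughly 4.8 and 3.3e2 for five flavours
\end{verbatim}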
Knowledge of $a_2$ now allows us to consistently use the three-loop
expression for the running coupling in the $\overline{\rm MS}$-scheme. An
equation of the form (\ref{vms}) can then be used to derive the
$\beta$-function of one of the two couplings from the knowledge of the
$\beta$-function of the second scheme. If we define $\beta$ and the
coefficients $\beta_n$ in each scheme via
\begin{equation}
\frac{1}{\alpha(\mu^2)}\frac{d\alpha(\mu^2)}{d\ln\mu^2}
= - \beta(\alpha)
= - \sum_{n=0}^\infty \beta_n\Big(\frac{\alpha(\mu^2)}{4\pi}\Big)^{n+1},
\end{equation}
we immediately find
\begin{equation}
\beta^{\rm V}(\alpha_{\rm\scriptscriptstyle V}) = \beta^{\overline{\rm MS}}(
\alpha_{\overline{\rm\scriptscriptstyle MS}})
\frac{\alpha_{\overline{\rm\scriptscriptstyle MS}}}{\alpha_{\rm\scriptscriptstyle V}}
\frac{d\alpha_{\rm\scriptscriptstyle V}}{d\alpha_{\overline{\rm\scriptscriptstyle MS}}}\qquad
\mbox{with}~\alpha_{\overline{\rm\scriptscriptstyle MS}}
= \alpha_{\overline{\rm\scriptscriptstyle MS}}(\alpha_{\rm\scriptscriptstyle V})
\end{equation}
or $\beta_{0,1}^{\rm V}=\beta_{0,1}^{\overline{\rm MS}}$ and
\begin{eqnarray}
\beta_2^{\rm V} &=&
\beta_2^{\overline{\rm MS}}-a_1\beta_1^{\overline{\rm MS}}
+ (a_2-a_1^2)\beta_0^{\overline{\rm MS}} \\
& = & \Big(\frac{618+242\zeta_3}{9}+\frac{11(24\pi^2-\pi^4)
}{12}\Big)C_A^3 \nonumber \\ &&
-\Big(\frac{445+704\zeta_3}{9}+\frac{24\pi^2-\pi^4}{3}\Big)C_A^2T_F
n_f \nonumber \\ &&
+ \frac{2+224\zeta_3}{9}C_A(T_Fn_f)^2 -\frac{686-528\zeta_3}{9}C_AC_FT_Fn_f
\nonumber \\ &&
+2C_F^2T_Fn_f+\frac{184-192\zeta_3}{9}C_F(T_Fn_f)^2.
\end{eqnarray}
Inserting the numbers one finds that the numerical value of $\beta_2^V$ is
considerably larger than that of $\beta_2^{\rm \overline{MS}}$, which, for
comparison, is given by the expression \cite{TVZ80}
\begin{eqnarray}
\beta_2^{\rm \overline{MS}} & = & \frac{2857}{54}C_A^3+2C_F^2T_Fn_f
-\frac{205}{9}C_AC_FT_Fn_f-\frac{1415}{27}C_A^2T_Fn_f \nonumber \\
&& +\frac{44}{9}C_F(T_Fn_f)^2+\frac{158}{27}C_A(T_Fn_f)^2.
\end{eqnarray}
Hence the coupling in the potential runs faster than the
$\overline{\rm MS}$-coupling.
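This statement is easily quantified; the following sketch evaluates both coefficients for five flavours, using the $\overline{\rm MS}$ expression above, the relation for $\beta_2^{\rm V}$, and the standard values of $\beta_0$ and $\beta_1$ (which are not listed in the text):
\begin{verbatim}
from math import pi

zeta3 = 1.2020569031595943
CA, CF, TF, nf = 3.0, 4.0/3.0, 0.5, 5.0
a1 = 31.0/9.0*CA - 20.0/9.0*TF*nf
a2 = ((4343.0/162.0 + 6*pi**2 - pi**4/4 + 22.0/3.0*zeta3)*CA**2
      - (1798.0/81.0 + 56.0/3.0*zeta3)*CA*TF*nf
      - (55.0/3.0 - 16*zeta3)*CF*TF*nf + (20.0/9.0*TF*nf)**2)

beta0 = 11.0/3.0*CA - 4.0/3.0*TF*nf
beta1 = 34.0/3.0*CA**2 - 20.0/3.0*CA*TF*nf - 4*CF*TF*nf
beta2_MS = (2857.0/54.0*CA**3 + 2*CF**2*TF*nf
            - 205.0/9.0*CA*CF*TF*nf - 1415.0/27.0*CA**2*TF*nf
            + 44.0/9.0*CF*(TF*nf)**2 + 158.0/27.0*CA*(TF*nf)**2)
beta2_V = beta2_MS - a1*beta1 + (a2 - a1**2)*beta0
print(beta2_MS, beta2_V)   # roughly 1.8e2 vs 2.4e3
\end{verbatim}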
\begin{figure}[ht]\begin{center}
\epsfxsize 12cm \mbox{\epsfbox{fig4.ps}}
\end{center}
\caption{Comparison of the results for $\alpha_{\rm\scriptscriptstyle V}({\bf q}^2)$ at
different loop-orders, where $\alpha_{\rm\scriptscriptstyle V}({\bf q}^2)$ is determined
from $\alpha_{\overline{\rm\scriptscriptstyle MS}}({\bf q}^2)$. Input parameters are
$\alpha_{\overline{\rm\scriptscriptstyle MS}}(M_Z^2)=0.118$, $n_f=5$.\label{fig:ava}}
\end{figure}
Figure \ref{fig:ava} compares the result for $\alpha_{\rm\scriptscriptstyle V}$ at the various
loop-orders in graphical form, neglecting quark thresholds for simplicity: the
dotted line represents the tree-level prediction, i.e.\ a pure Coulomb
potential without a running coupling; the dashed line shows the one-loop
result in the sense that only the (one-loop) running of
$\alpha_{\overline{\rm\scriptscriptstyle MS}}$ is taken into account, but the two couplings
still coincide (which means that it is still tree level as far as the diagrams
contributing to the potential are concerned and should thus better be termed
leading order); the dashed-dotted line shows the next-to-leading order and the
solid line the next-to-next-to-leading order results. It is evident that the
two-loop contribution is nearly as important as the one-loop effect: the
additional shift in $\alpha_{\rm\scriptscriptstyle V}$ caused by $a_2$ is roughly two thirds
the size of the shift introduced by $a_1$, and both corrections increase the
coupling. The impact of this shift on the would-be toponium system, as an
example, should be measurable: in the interval $|{\bf q}|=10\ldots30$GeV
including the NNLO-corrections amounts to a net increase of $\alpha_{\rm\scriptscriptstyle V}$
by about 5 to 7\%. The charmonium and bottomonium systems on the other hand
are mainly sensitive to momenta below 2GeV and thus lie predominantly outside
of the perturbative domain.
At this point we should mention that for simplicity we have chosen a fixed
number of five light flavours to be valid over the whole range of momenta. For
a more realistic study, the bottom and charm quark thresholds would, of
course, have to be taken into account by matching effective theories with
decreasing $n_f$ in order to restore decoupling. Such a procedure would,
however, leave our results qualitatively unchanged; it would mainly cause an
even faster running of both couplings at low momenta.
The plot just discussed was generated by employing (\ref{vms}) as it stands,
i.e.\ by evolving the $\overline{\rm MS}$-coupling, taken to equal $0.118$ at
the $Z$-mass, to the required scale (assuming $n_f=5$) and then calculating
$\alpha_{\rm\scriptscriptstyle V}$ at that scale. In principle the evolution can be
performed in different ways: one may express the coupling in terms of the
QCD scale parameter $\Lambda_{\rm\scriptscriptstyle QCD}$, or one may employ the QED-like
formula
\begin{eqnarray}\lefteqn{
\frac{\alpha(\mu^2)}{\alpha(q^2)} = 1 +
\frac{\alpha(\mu^2)}{4\pi} \beta_0\ln\frac{q^2}{\mu^2} } && \nonumber\\ &&
+\Big(\frac{\alpha(\mu^2)}{4\pi}\Big)^2 \beta_1\ln\frac{q^2}{\mu^2}
+\Big(\frac{\alpha(\mu^2)}{4\pi}\Big)^3 \Big( \beta_2\ln\frac{q^2}{\mu^2}
-\frac{1}{2}\beta_0\beta_1\ln^2\frac{q^2}{\mu^2} \Big).
\end{eqnarray}
Although the two approaches are formally equivalent, in practice only the
second one guarantees that the running coupling really coincides with its
input value $\alpha(\mu^2)$ for $q^2=\mu^2$. Therefore the second approach
has been adopted throughout this paper.
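As an illustration of this procedure, a minimal Python sketch (fixed $n_f=5$, no thresholds; the numerical value $M_Z=91.187$~GeV is our assumption) evolves $\alpha_{\overline{\rm MS}}$ with the QED-like formula above and then converts to $\alpha_{\rm\scriptscriptstyle V}$ via (\ref{vms}):
\begin{verbatim}
from math import log, pi

zeta3 = 1.2020569031595943
CA, CF, TF, nf = 3.0, 4.0/3.0, 0.5, 5.0
beta0 = 11.0/3.0*CA - 4.0/3.0*TF*nf
beta1 = 34.0/3.0*CA**2 - 20.0/3.0*CA*TF*nf - 4*CF*TF*nf
beta2 = (2857.0/54.0*CA**3 + 2*CF**2*TF*nf
         - 205.0/9.0*CA*CF*TF*nf - 1415.0/27.0*CA**2*TF*nf
         + 44.0/9.0*CF*(TF*nf)**2 + 158.0/27.0*CA*(TF*nf)**2)
a1 = 31.0/9.0*CA - 20.0/9.0*TF*nf
a2 = ((4343.0/162.0 + 6*pi**2 - pi**4/4 + 22.0/3.0*zeta3)*CA**2
      - (1798.0/81.0 + 56.0/3.0*zeta3)*CA*TF*nf
      - (55.0/3.0 - 16*zeta3)*CF*TF*nf + (20.0/9.0*TF*nf)**2)

def alpha_msbar(q, alpha0=0.118, mu=91.187):
    # QED-like three-loop evolution, alpha(q^2) = alpha(mu^2)/ratio
    a, L = alpha0/(4*pi), 2*log(q/mu)
    ratio = (1 + a*beta0*L + a**2*beta1*L
             + a**3*(beta2*L - 0.5*beta0*beta1*L**2))
    return alpha0/ratio

def alpha_V(q):
    al = alpha_msbar(q)
    return al*(1 + a1*al/(4*pi) + a2*(al/(4*pi))**2)

for q in (10.0, 30.0, 91.187):   # |q| in GeV
    print(q, alpha_msbar(q), alpha_V(q))
\end{verbatim}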
\begin{figure}[ht]\begin{center}
\epsfxsize 12cm \mbox{\epsfbox{fig5.ps}}
\end{center}
\caption{Comparison of the running of $\alpha_{\rm\scriptscriptstyle V}({\bf q}^2)$ (solid
line) and $\alpha_{\overline{\rm\scriptscriptstyle MS}}({\bf q}^2)$ (dashed line)
at three loops. The dotted curve displays $\alpha_{\rm\scriptscriptstyle V}({\bf q}^2)$
with the initial value $\alpha_{\rm\scriptscriptstyle V}(M_Z^2)=\alpha_{\overline{\rm\scriptscriptstyle MS}}
(M_Z^2)$.\label{fig:rge}}
\end{figure}
One may argue that to first evolve the $\overline{\rm MS}$-coupling and then
determine the potential is not the best idea because, as is evident from
Fig.\ \ref{fig:ava}, the expansion parameter becomes large at low scales and
thus the next terms of the perturbation series might also become important.
An alternative would be to determine $\alpha_{\rm\scriptscriptstyle V}$ from (\ref{vms}) at
the $Z$-mass and then evolve it to the scale ${\bf q}^2$. It turns out, not
surprisingly, that the result is essentially the same, as can be seen from
Figure \ref{fig:rge} where the running of the two couplings is compared: the
solid line shows the three-loop running of $\alpha_{\rm\scriptscriptstyle V}$ and the dashed
one $\alpha_{\overline{\rm\scriptscriptstyle MS}}$, again for $\alpha_{\overline{\rm\scriptscriptstyle
MS}}(M_Z^2)=0.118$ and five flavours; the dotted line shows the
corresponding result for $\alpha_{\rm\scriptscriptstyle V}$ for the hypothetical case
$\alpha_{\rm\scriptscriptstyle V}(M_Z^2)=0.118$ as well, and thus really displays the different
$\beta-$functions. It exemplifies that the large numerical mismatch
between the two couplings at low scales mainly arises from an amplification
of the small mismatch at large scales. Note that in a sense the two
couplings are the same at the two-loop level as the $\beta$-functions then
coincide and a difference merely arises because the evolutions start at
different values.
\section{\label{sec:pos}Position space}
With the momentum-space representation of $V_{\rm\scriptscriptstyle QCD}$ at our disposal, we
are able to compute the real analogue of the Coulomb potential, i.e.\ the QCD
potential in position space. In order to make the expressions simpler, it is
convenient to introduce the notation
\begin{equation} \label{Fu}
{\cal F}(r,\mu,u) = \mu^{2u}
\int\!\!\frac{d^3q}{(2\pi)^3}\frac{e^{i{\bf qr}}}{({\bf q}^2)^{1+u}}
\end{equation}
for the Fourier transform of a general power of $1/{\bf q}^2$. Using
a Schwinger parameter,
$$
\frac{1}{({\bf q}^2)^{1+u}} = \frac{1}{\Gamma(1+u)}\int_0^\infty\!
dx\;x^u\;e^{-x{\bf q}^2},
$$
and a few relations between $\Gamma$-functions such as
$$
\Gamma(1+u) = \sqrt\pi \frac{\Gamma(1+2u)}{2^{2u}\Gamma(\frac{1}{2}+u)},
\qquad
\ln\Gamma(1+u) = \gamma_E u +\sum_{n=2}^\infty\frac{\zeta(n)}{n}u^n\quad
\mbox{for}~|u|<1,
$$
several equivalent representations of ${\cal F}$ can be derived, two of which
will be useful in the following:
\begin{eqnarray}
{\cal F}(r,\mu,u) & = &
\frac{(\mu r)^{2u}}{4\pi^2 r}\frac{\Gamma(\frac{1}{2}+u)\Gamma(\frac{1}{2}
-u)}{\Gamma(1+2u)} \label{Fua} \\
& = &
\frac{(\mu r e^{\gamma_E})^{2u}}{4\pi r}\exp\Bigg(\sum_{n=2}^\infty
\frac{\zeta(n)u^n}{n}\Big(2^n-1-(-1)^n\Big)\Bigg) \label{Fub}
\end{eqnarray}
where the first formula is applicable if $-1<u<1/2$, the second if $|u|<1/2$
and where $\gamma_E$ denotes the Euler-Mascheroni constant.
Adopting a strictly perturbative approach, we can use the original
result leading to (\ref{Vq}) (or in other words re-expand
$\alpha_{\overline{\rm\scriptscriptstyle MS}}({\bf q}^2)$) to find the potential in
position space. From
\begin{eqnarray}
\alpha_{\rm\scriptscriptstyle V}({\bf q}^2) & = & \alpha_{\overline{\rm\scriptscriptstyle MS}}(\mu^2)\Bigg(
1 + \frac{\alpha_{\overline{\rm\scriptscriptstyle MS}}(\mu^2)}{4\pi}\Big( -\beta_0\ln
\frac{\bf q^2}{\mu^2} + a_1\Big) \nonumber \\
&& \hspace{-2em} +
\Big(\frac{\alpha_{\overline{\rm\scriptscriptstyle MS}}(\mu^2)}{4\pi}\Big)^2 \Big(
(\beta_0\ln\frac{\bf q^2}{\mu^2})^2 - (2\beta_0a_1+\beta_1)\ln
\frac{\bf q^2}{\mu^2} + a_2\Big) + \ldots \Bigg),
\end{eqnarray}
where $\mu$ is the renormalization scale, we see that we need the Fourier
transform of $\ln^m(\mu^2/{\bf q^2})$ which we easily obtain from ${\cal F}$
because
$$
\ln^m\frac{\mu^2}{\bf q^2} = \Bigg[\frac{\partial^m}{\partial u^m}
\Big(\frac{\mu^2}{\bf q^2}\Big)^u\Bigg]\Bigg|_{u=0}
$$
and thus
\begin{equation}
\int\!\!\frac{d^3q}{(2\pi)^3}\ln^m\frac{\mu^2}{\bf q^2}
\frac{e^{i{\bf qr}}}{\bf q^2} = \Bigg(\frac{\partial^m}{\partial u^m}
{\cal F}(r,\mu,u)\Bigg)\Bigg|_{u=0} \label{lnq}.
\end{equation}
Hence the potential in position space becomes
\begin{eqnarray}
V(r) & = & -C_F\frac{\alpha_{\overline{\rm\scriptscriptstyle MS}}(\mu^2)}{r}\Bigg(
1 + \frac{\alpha_{\overline{\rm\scriptscriptstyle MS}}(\mu^2)}{4\pi}\Big(
2\beta_0\ln(\mu r^\prime)+a_1\Big) \nonumber \\
&& + \Big(\frac{\alpha_{\overline{\rm\scriptscriptstyle MS}}(\mu^2)}{4\pi}\Big)^2 \Big(
\beta_0^2(4\ln^2(\mu r^\prime)+\frac{\pi^2}{3}) \label{Vr} \\
&& \quad +2(\beta_1+2\beta_0a_1)\ln(\mu r^\prime)+a_2\Big) + \ldots \Bigg)
\nonumber
\end{eqnarray}
with $r^\prime\equiv r\exp(\gamma_E)$. One should note that going beyond the
strictly perturbative approach is quite difficult and may lead to
inconsistencies as the running coupling has a pole and thus the Fourier
transform of (\ref{Vq}) does not exist. But as the pole is an artifact which
arises because a perturbative expression is extrapolated deep into the
non-perturbative regime, it should not be taken too seriously.
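Since only the $\zeta(2)$ term of (\ref{Fub}) survives up to two derivatives at $u=0$, the coefficients of the logarithms in (\ref{Vr}), in particular the $\pi^2/3$ accompanying $\beta_0^2$, can be checked with a few lines of sympy (writing $L$ for $\ln(\mu r^\prime)$):
\begin{verbatim}
import sympy as sp

u, L = sp.symbols('u L')   # L stands for ln(mu*r')
# 4*pi*r*F(r,mu,u) from Eq. (Fub), truncated at O(u^3); only the
# zeta(2) = pi^2/6 term of the sum survives two derivatives at u=0
F = sp.exp(2*u*L + sp.pi**2*u**2/6)
for m in range(3):
    print(m, sp.expand(sp.diff(F, u, m).subs(u, 0)))
# -> 1, 2*L, 4*L**2 + pi**2/3, as in Eq. (Vr)
\end{verbatim}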
\begin{figure}\begin{center}
\epsfxsize 12cm \mbox{\epsfbox{fig6.ps}}
\end{center}
\caption{The potential (times $-r/C_F$) in position space at different orders
choosing $\mu=1/r$. The range in $r$ roughly corresponds to the range in
$|{\bf q}|$ displayed in the previous figures: $r=10^{-2.6}$fm $\approx
0.002$fm $\sim 100$GeV and $r=10^{-0.8}$fm $\approx 0.16$fm $\sim
1$GeV.\label{fig:rvr}}
\end{figure}
The result (\ref{Vr}) may be exploited in several ways: we certainly would not
use it as it stands because of the possibly large logarithms, but choose a
scale $\mu$ that reduces the higher order corrections. The first and in some
sense natural choice is $\mu_1=1/r$, leading to the curves displayed in Fig.\
\ref{fig:rvr} where the effect of the loop-corrections on $rV(r)$ is shown,
using the same input parameters as before. As the tree-level prediction would
correspond to a constant in this plot, it has been omitted. The graph is
essentially the same as Figure \ref{fig:ava} and therefore will not be
discussed further.
A second choice, motivated by noticing that, due to (\ref{Fub}), $\mu$ will
always appear in combination with $r^\prime$, is $\mu_2=1/r^\prime$. This
way we remove all terms involving $\gamma_E$ from the coefficients. Note that
these terms are not small numerically because in $n$th order they involve
$(2\gamma_E\beta_0)^m\approx\beta_0^m$ for all $m\le n$ as well as
similar contributions from the other $\beta_m$.
A third choice, which is frequently encountered in other contexts as well,
would be to select $\mu$ in such a way as to remove the first-order coefficient
completely, i.e.\ $\mu_3(r)=\exp[-\gamma_E-a_1/(2\beta_0)]/r$, which leads to
\begin{equation}
V_3(r) = -C_F\frac{\alpha_3}{r}\Bigg(1+\Big(\frac{\alpha_3}{4\pi}\Big)^2
\Big(a_2-a_1^2-\frac{\beta_1}{\beta_0}a_1
+\frac{(\pi\beta_0)^2}{3}\Big)\Bigg)
\end{equation}
with $\alpha_3\equiv\alpha_{\overline{\rm\scriptscriptstyle MS}}(\mu^2_3(r))$. This approach
is in effect similar to defining an effective charge like in QED, but one
should remember that it cannot be interpreted as a Dyson summation of the
one-loop vacuum polarization and therefore there is no reason why it should
constitute a superior choice. A real Dyson summation would lead to problems
with gauge invariance except for some part of the fermion loop contribution.
Yet another possibility, which is similar in spirit, would be to choose $\mu$
in such a way as to remove all $n_f$-dependence from the coefficients $a_n$,
i.e.\ to use the BLM scale setting prescription \cite{BLM}. This procedure
leads to
\begin{equation}
V_{\rm _{BLM}}(r) = -\frac{C_F}{r}\Bigg(\alpha_{\overline{\rm\scriptscriptstyle MS}}\Big(
\frac{1}{r_1^2}\Big)+a_1^{\mbox{\tiny BLM}}
\frac{\alpha^2_{\overline{\rm\scriptscriptstyle MS}}(1/r_2^2)}{4\pi}
+a_2^{\mbox{\tiny BLM}}
\frac{\alpha^3_{\overline{\rm\scriptscriptstyle MS}}(1/r^{\prime2})}{(4\pi)^2}\Bigg)
\end{equation}
with
\begin{eqnarray}
a_1^{\mbox{\tiny BLM}} & = & -\frac{8}{3}C_A \\
a_2^{\mbox{\tiny BLM}} & = & \Big(\frac{133-396\zeta_3}{9}+\frac{24\pi^2
-\pi^4}{4}\Big)C_A^2 - \frac{385-528\zeta_3}{12}C_AC_F \\
r_1^2 & = & r^{\prime2} e^{5/3}\\
r_2^2 & = & r^{\prime2} \exp\Big(
\frac{434-504\zeta_3}{192}-\frac{315-432\zeta_3}{192}\frac{C_F}{C_A}
\Big)\approx r^{\prime2} e^{-0.42}.
\end{eqnarray}
Finally, a fifth choice follows from proceeding in analogy to momentum space
and defining
\begin{equation}
V(r) = -C_F\frac{\bar\alpha_{\rm\scriptscriptstyle V}(1/r)}{r}.
\end{equation}
From (\ref{Vr}) one may calculate the $\beta$-function of the new
coupling, with the result
\begin{equation}
\bar\beta_2^{V} = \beta_2^V + \frac{\pi^2}{3}\beta_0^3,
\end{equation}
and, after determining the initial value for $\bar\alpha_{\rm\scriptscriptstyle V}$ at $M_Z$
from (\ref{Vr}) (where again different choices of $\mu$ are possible;
we have used $\mu=1/r$), evolve it to the right distance. It is evident that
the appearance of $\gamma_E$ and the $\zeta$-functions makes $\bar\alpha_{
\rm\scriptscriptstyle V}$ differ from $\alpha_{\rm\scriptscriptstyle V}$, and consequently the same
statement holds for the $\beta$-functions.
\begin{figure}\begin{center}
\epsfxsize 12cm \mbox{\epsfbox{fig7.ps}}
\end{center}
\caption{$-rV(r)/C_F$ in the different schemes described in the
text.\label{fig:vschemes}}
\end{figure}
The five approaches obviously result in different predictions for the
three-loop potential --- the difference formally being of the order $\alpha^4$
--- as is explicitly demonstrated in Figures \ref{fig:vschemes} and
\ref{fig:vschemesr}. One should note that there is no a priori reason to
prefer any choice of scale. The second choice $\mu=1/r^\prime$ might be an
exception, as it consistently absorbs terms which only arise from the Fourier
transformation and are not directly related to the dynamics. It suggests that
the distance complementary to $|{\bf q}|$ is not $r$ but $r^\prime$. But still
``non-dynamical'' terms $\zeta_n$ remain even in this scheme. The
BLM-prescription, which requires more than a single scale, is also physically
motivated, and the nice agreement between the two curves and the curve
resulting from the renormalization group evolution (labelled ``RGE'' in the
figures) may serve as an argument in favour of these approaches.
The coupling $\bar\alpha_{\rm\scriptscriptstyle V}$ is already quite large at a distance
$r\sim$ 0.5GeV$^{-1}$ or 0.1fm, and its size is even strongly increased in the
second and third approach due to a reduction of its scale implying that the
non-perturbative regime starts at smaller separations. For five flavours for
example we have $\mu_2\approx0.56/r$ and $\mu_3\approx0.41/r$. As a
consequence there is a large scheme-dependence of the potential in the region
$r>0.07$fm. From Figure \ref{fig:vschemesr} we see that the situation is even
worse as the perturbation series must break down near $r=0.1$fm because the
potential starts to bend down there already. We thus should not trust the
perturbative result for distances larger than about 0.08fm, which is
consistent with the assumption that the perturbative potential should be
reliable if $r\Lambda_{\rm\scriptscriptstyle QCD} < 0.07\ldots0.1$ is satisfied%
\footnote{For five flavours and $\alpha_{\rm\scriptscriptstyle \overline{MS}}(M_Z)=0.118$ we
find $\Lambda_{\rm\scriptscriptstyle QCD}\approx210$MeV.} \cite{BT81,KMP86}.
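The quoted scale factors follow directly from the definitions of $\mu_2$ and $\mu_3$ given above; a short numerical check for five flavours:
\begin{verbatim}
from math import exp

gamma_E = 0.5772156649015329
CA, TF, nf = 3.0, 0.5, 5.0
beta0 = 11.0/3.0*CA - 4.0/3.0*TF*nf
a1 = 31.0/9.0*CA - 20.0/9.0*TF*nf
print(exp(-gamma_E))                    # mu_2*r, about 0.56
print(exp(-gamma_E - a1/(2*beta0)))     # mu_3*r, about 0.41
\end{verbatim}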
\begin{figure}\begin{center}
\epsfxsize 12cm \mbox{\epsfbox{fig8.ps}}
\end{center}
\caption{Three-loop potential in the different schemes described in the
text.\label{fig:vschemesr}}
\end{figure}
An interesting question is whether our result gives any indication of the onset
of a linear rise of the potential, or in other words on the existence of
confinement. Unfortunately, the situation seems unclear in this respect. As
Figure \ref{fig:rvr} proves, each new term of the perturbation series that is
added makes the deviation from a pure Coulomb prediction larger and the
potential more attractive. But according to the conventional definition
(\ref{QCDV}), a linearly {\em rising} potential requires a {\em negative}
coupling, the perturbative curve for $\bar\alpha_{\rm\scriptscriptstyle V}$ thus tends into the
wrong direction, and there is no sign of a nontrivial zero of this function.
The only indication of the existence of a zero is the breakdown of perturbation
theory at distances of the order $r\approx0.1$fm (or in other words the
presence of the Landau pole) and the fact that, with increasing number of
loops, this point appears at smaller and smaller distances. In some sense the
gap between the perturbative and the linear part thus becomes broader.
\section{Conclusion}
We have presented results of a full NNLO-calculation of the perturbative
static potential in QCD both in momentum and position space, including a
description of the technique employed, and we have argued that the respective
couplings $\alpha_{\rm\scriptscriptstyle V}$ and $\bar\alpha_{\rm\scriptscriptstyle V}$ are universal in the
sense that they describe the potential for other infinitely heavy colour
sources such as static gluinos as well. The $\beta$-functions corresponding to
the two couplings have been derived. The main result is that the two-loop
contribution is nearly as large as the one-loop term, that both corrections
make the interaction more attractive, and that the perturbative potential seems to be reasonably
reliable up to distances $r\Lambda_{\rm\scriptscriptstyle QCD}<0.07$. For larger
values a strong scale-dependence remains and the perturbation series even
breaks down above about 0.1fm.
Although the results may be interpreted as an indication of a general
tendency of higher loop corrections to strengthen the force between a
quark-antiquark pair, to take them as a proof of confinement would be going
too far.
Starting from the results presented in this paper, it would be important to
know, first, the impact of the two-loop contribution to the potential as a
whole and, second, of the ambiguity due to different choices of scale on the
energy levels and decay widths of quarkonia. The first point could be
investigated in the toponium system as this is only sensitive to very short
distances and does not suffer from the uncertainties arising from
intermediate and large separations. The second point, however, would require
the specification of some potential model for the latter region and thus the
analysis would depend on additional parameters. Nevertheless, a revised
study of the type performed in \cite{KMP86} would be very interesting.
\subsection*{Acknowledgments}
The author would like to thank Prof.\ J. H. K\"uhn for carefully reading the
manuscript and his continuous comments and advice, and Prof.\ Th. Mannel
for many interesting discussions.
\section*{Appendix: Calculation of the two-loop integrals}
When trying to compute the two-loop integrals $I(a,b,c;1,1)$, it is convenient
to combine gluon propagators using the standard formula for Feynman parameters
and to combine gluon and source propagators using the modified version
$$
\frac{1}{a^m b^n} = \frac{\Gamma(n+m)}{\Gamma(n)\Gamma(m)}
\int_0^\infty\!d\alpha\frac{\alpha^{m-1}}{(a\alpha+b)^{n+m}}.
$$
With this technique the two momentum integrations can be performed one after
the other, and after some rescalings of the $\alpha$-parameters one is left
with
\begin{equation} \label{eq:I}
I(a,b,c;1,1) =
\Big(4\pi\mu^2\Big)^{2\epsilon}(-q^2)^{D-\Sigma}\frac{2\Gamma(\Sigma-D)}{
\Gamma(a)\Gamma(b)\Gamma(c)}\tilde I(a,b,c)
\end{equation}
where $\Sigma=a+b+c+1$ and
\begin{eqnarray*}
\tilde I(a,b,c) &=&
2\frac{\Gamma(\Sigma+1-D)}{\Gamma(\Sigma-D)}\int_0^1\!\!dx\;dy
\int_0^\infty\!\!d\alpha\;d\beta\;
x^{\frac{D}{2}-b-2}(1-x)^{\frac{D}{2}-c-1} \\ &&
\times y^{b+c-\frac{D}{2}-1}(1-y)^{a-1}
\Bigg[(\alpha+\beta)^2+ \alpha^2\frac{1-x}{xy}+y(1-y)\Bigg]^{D+1-\Sigma}.
\end{eqnarray*}
Making the change of variables $\alpha = u\rho$ and $\beta=(1-u)\rho$, the
two integrations corresponding to $\alpha$ and $\beta$ can be done, with the
result
\begin{equation} \label{eq:It}
\tilde I =
\int_0^1\!\!dx\;dy\;x^{\frac{D-3}{2}-b}(1-x)^{\frac{D-3}{2}-c}
y^{\frac{D-3}{2}-a}(1-y)^{D-2-b-c}\arctan\sqrt{\frac{1-x}{xy}}.
\end{equation}
The presence of the $\arctan$-function makes it impossible (at least for the
author) to compute the remaining integral exactly for all values of $a,b,c$
and $\epsilon=(4-D)/2$. Hence only the cases really needed and only the
expansion in $\epsilon$ to the order required were treated.
For $I(1,1,1;1,1)$ this task is quite straightforward as $\tilde I(1,1,1)$ is
in fact finite in the limit $\epsilon\to0$ and its expansion in $\epsilon$ can
be computed without encountering any problems. But it should be noted that due
to (\ref{eq:I}) $\tilde I$ must be multiplied by $1/\epsilon$ to obtain $I$ and
thus the latter integral is divergent. It should also be mentioned that the
result quoted in section \ref{sec:Tech} has not been derived in the way just
described, but by constructing an equation involving the integral. This
method will be explained at the end of the appendix.
The problem with the other two integrals $\tilde I$ is that they do not exist
if we take $\epsilon=0$ and that it is difficult to factor out the divergence.
As a first step in this direction the substitution
$$ x = \frac{1}{1+t^2y} $$
can be applied to turn (\ref{eq:It}) into
\begin{equation}\label{Ii}
\tilde I(i,1,2) = \int_0^\infty\!dt\;t^{-2-2\epsilon}\arctan t\cdot K_i(t^2)
\end{equation}
with
\begin{eqnarray*}
K_1(t^2) & = & \int_0^1\!dy\;y^{-1-2\epsilon}(1-y)^{-1-2\epsilon}
(1+t^2y)^{2\epsilon} \\
& = & \frac{\Gamma^2(-2\epsilon)}{\Gamma(-4\epsilon)}
F(-2\epsilon,-2\epsilon,-4\epsilon;-t^2) \\
K_2(t^2) & = & \int_0^1\!dy\;y^{-2-2\epsilon}(1-y)^{-1-2\epsilon}
(1+t^2y)^{2\epsilon} \\
& = & \frac{\Gamma(-2\epsilon)\Gamma(-1-2\epsilon)}{\Gamma(-4\epsilon)}
F(-2\epsilon,-1-2\epsilon,-1-4\epsilon;-t^2)
\end{eqnarray*}
and where $F(a,b,c;x)$ denotes the hypergeometric function. Although this
result may look quite impractical, it in fact helps: one factor $1/\epsilon$
has been factored out, and the expansion of the two hypergeometric functions is
also possible up to terms which are not needed anyhow. Let us first examine
$K_1$:
\begin{eqnarray*}
\lefteqn{F(-2\epsilon,-2\epsilon,-4\epsilon;x)} && \\
& = & 1-\epsilon\sum_{n=1}^\infty
\Bigg(\prod_{j=1}^{n-1}\frac{(j-2\epsilon)^2}{j-4\epsilon}\Bigg)
\frac{x^n}{n!}
= 1-\epsilon\sum_{n=1}^\infty\Bigg(\prod_{j=1}^{n-1}\Big(j+{\cal O}
(\epsilon^2)\Big)\Bigg) \frac{x^n}{n!} \\
& = & 1-\epsilon\sum_{n=1}^\infty\frac{x^n}{n} + {\cal O}(\epsilon^3)
= 1+\epsilon\ln(1-x) + {\cal O}(\epsilon^3).
\end{eqnarray*}
Thus we find
$$ K_1(t^2) = 1+\epsilon\ln(1+t^2)+{\cal O}(\epsilon^3)k_1(t^2) $$ where the
residual function $k_1(t^2)$ is well behaved for $t\to0$. A similar
trick can be applied to $K_2$ if we first employ the relation \cite{Grad}
$$ F(a,b,c;x) = (1-x)^{-b}F\Big(b,c-a,c;\frac{x}{x-1}\Big)$$
which transforms $K_2$ into
$$
K_2(t^2) = \frac{\Gamma(-2\epsilon)\Gamma(-1-2\epsilon)}{\Gamma(-4\epsilon)}
(1+t^2)^{1+2\epsilon}F\Big(-1-2\epsilon,-1-2\epsilon,-1-4\epsilon;
\frac{t^2}{1+t^2}\Big).
$$
Now
\begin{eqnarray*}
\lefteqn{F(-1-2\epsilon,-1-2\epsilon,-1-4\epsilon;x)} && \\
& = & 1-\frac{(1+2\epsilon)^2}{1+4\epsilon}x
+\epsilon\frac{(1+2\epsilon)^2}{1+4\epsilon}\sum_{n=1}^\infty
\Bigg(\prod_{j=1}^{n-1}\frac{(j-2\epsilon)^2}{j-4\epsilon}\Bigg)
\frac{x^{n+1}}{(n+1)!} \\
& = & 1-\frac{(1+2\epsilon)^2}{1+4\epsilon}x
+\epsilon\frac{(1+2\epsilon)^2}{1+4\epsilon}\sum_{n=1}^\infty
\frac{x^{n+1}}{n(n+1)} + {\cal O}(\epsilon^3) \\
& = & 1-x+\epsilon(1-4\epsilon)x +\epsilon(1-x)\ln(1-x)+ {\cal O}(\epsilon^3).
\end{eqnarray*}
By noting that $F(0)=1$ and $F(a,a,c;1)=\Gamma(c)\Gamma(c-2a)/\Gamma^2(c-a)$
we can even improve the above expansion as $F$ must be of the form
$F=1-x+\Gamma(-1-4\epsilon)/\Gamma^2(-2\epsilon)x+\phi(x)$ where $\phi(x)$
vanishes for both $x=0$ and $x=1$. Thus
$$
K_2(t^2) = (1+t^2)^{2\epsilon}\Bigg[
\frac{\Gamma(-2\epsilon)\Gamma(-1-2\epsilon)}{\Gamma(-1-4\epsilon)}
\Big(1-\epsilon\ln(1+t^2)\Big)-\frac{t^2}{1+2\epsilon} \Bigg]
+ {\cal O}(\epsilon^3)k_2(t^2)
$$
where $k_2$ vanishes at the origin and at infinity.
With these approximate formulae for the $K_i$ we can calculate the two missing
integrals from (\ref{Ii}) up to terms of the order ${\cal O}(\epsilon^2)$. The
divergences still present can be extracted by writing
\begin{equation}\label{DT}
\int_0^\infty\!dt\;t^{-1-2\epsilon} f(t) = -\frac{f(0)}{2\epsilon}
+ \int_0^\infty\!dt\;t^{-1-2\epsilon}\Big( f(t)-f(0)\Theta(1-t)\Big)
\end{equation}
where the function $f(t)$ should be regular at the origin. The new integral is
finite in the limit $\epsilon\to0$, and consequently may be expanded. One
should note that in the case of $I(2,1,2)$ there is a term
$t^2(1+t^2)^{2\epsilon}$ in $K_2$ that removes the bad behaviour of the
integrand at the origin, but introduces the same kind of divergence for large
$t$. The trivial change of variables $t\to1/t$ reduces this term to the above
case again. It is essential that we treat the whole combination as of
order $1$: suppose, for example, we were to expand $t^2(1+t^2)^{2\epsilon}$,
generating $2\epsilon t^2\ln(1+t^2)$. Inserted into (\ref{Ii}) and proceeding
as described this would produce an integral of the form
$$
2\epsilon\int_0^\infty\!dt\; t^{-1+2\epsilon}\ln t\frac{\arctan t}{t}
$$
which is not of ${\cal O}(1)$ as we would assume, but of ${\cal
O}(1/\epsilon)$. The term also demonstrates that it is crucial to know
that the omitted functions $k_i$ grow at most logarithmically after expansion
in $\epsilon$ and thus do not invalidate our power-counting in this
parameter.
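The subtraction formula (\ref{DT}) is also easily checked numerically; in the following sketch the test function $f(t)=\arctan(t)/t$ and the value $\epsilon=-0.1$ are ad hoc choices made such that the unsubtracted integral exists (scipy is assumed to be available):
\begin{verbatim}
from math import atan
from scipy.integrate import quad

eps = -0.1
f = lambda t: atan(t)/t if t > 0 else 1.0
w = lambda t: t**(-1 - 2*eps)
lhs = quad(lambda t: w(t)*f(t), 0, 100)[0]
sub = quad(lambda t: w(t)*(f(t) - (1.0 if t < 1 else 0.0)), 0, 100)[0]
print(lhs, -f(0)/(2*eps) + sub)   # the two numbers agree
\end{verbatim}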
Let us now come back to $I(1,1,1;1,1)$. Starting from the original definition
given in section \ref{sec:Tech}, shifting first $p\to p+r+q$, using $vq=0$
and
$$
\frac{1}{(pv+rv+i\varepsilon)(rv+i\varepsilon)}=
\frac{1}{pv+i\varepsilon}\Bigg[\frac{1}{rv+i\varepsilon}
-\frac{1}{pv+rv+i\varepsilon}\Bigg]
$$
and then replacing $r\to-r$ in the first and shifting $r\to r-p-q$ in the
second term, one obtains:
\begin{eqnarray*}
\lefteqn{ I(1,1,1;1,1) } && \\ &=&
\int\!\widetilde{dp}\widetilde{dr}\frac{1}{(-p^2)(-r^2)(-(r+p+q)^2)}
\frac{1}{(pv+rv+i\varepsilon)(rv+i\varepsilon)} \\ &=&
-\int\!\widetilde{dp}\widetilde{dr}\frac{1}{(-p^2)(-r^2)(-(r-p-q)^2)}
\frac{1}{pv+i\varepsilon}
\Bigg[\frac{1}{rv-i\varepsilon}+\frac{1}{rv+i\varepsilon}\Bigg].
\end{eqnarray*}
Hence
\begin{eqnarray*}
\lefteqn{ I(1,1,1;1,1) } && \\ &=&
\frac{1}{3}\int\!\widetilde{dp}\widetilde{dr}\frac{1}{
(-p^2)(-r^2)(-(r-p-q)^2)(pv+i\varepsilon)}\Big[\frac{1}{rv+i\varepsilon}
-\frac{1}{rv-i\varepsilon}\Big] \\
&=& -\frac{2\pi}{3}i\int\!\widetilde{dp}\widetilde{dr}\frac{\delta(rv)}{
(-p^2)(-r^2)(-(r-p-q)^2)(pv+i\varepsilon)} \\
& = & \frac{2\pi}{3}i\sqrt\pi G(1,1;-1,-\frac{1}{2})\int\!\widetilde{dr}
\frac{\delta(rv)}{(-r^2)(-(r-q)^2)^{1/2+\epsilon}}.
\end{eqnarray*}
In the last expression we can replace $i\pi\delta(rv)$ by
$-1/(rv+i\varepsilon)$ (due to $vq=0$ the principal value part of the
resulting integral vanishes) and use (\ref{three:one}) again, or we can
calculate the effective $D-1$-dimensional integral directly. Via both routes
we find the result given in section \ref{sec:Tech}.
\section{Introduction}
The nuclear shell model \cite{Mayer,MJ,Brussaard,Heyde,Talmi} is an appropriate tool for reproducing the observed magic numbers in atomic nuclei, which correspond to unusual stability of nuclei possessing these particular numbers of protons and/or neutrons. This model is based on the assumption that each nucleon is moving inside the nucleus in an average potential due to the other nucleons, resembling a three-dimensional (3D) isotropic harmonic oscillator (HO) potential with a flattened bottom, to which the spin-orbit interaction is added, playing a crucial role in changing the 3D HO \cite{Wybourne,Smirnov,IacLie} magic numbers, 2, 8, 20, 40, 70, 112, \dots into the nuclear magic numbers, 2, 8, 20, 28, 50, 82, 126, \dots, observed experimentally. The single-particle energy levels, {i.e., the nucleon energy levels within the 3D isotropic HO potential,} are filled in accordance with the Pauli exclusion principle. Spherical coordinates are used for the description of the atomic nucleus. Each energy level characterized by total angular momentum $j$ can accommodate up to $ 2j+1 $ protons or neutrons. Groups of levels forming a magic number are supposed to be inert, while the nuclear properties are decided by the protons and neutrons in incomplete shells, called the valence protons and neutrons. {The picture described so far corresponds to nuclei with few valence protons and neutrons, therefore lying close to closed shells. These nuclei are known to have spherical or near-spherical shapes \cite{BM1}}.
{Moving away from closed shells, the nucleus acquires several valence protons and neutrons, and quadrupole deformation sets in \cite{BM}. As a consequence,} the order of the single-particle energy levels is modified. The levels may cross each other. For the first time, Nilsson \cite{Nilsson,MoNi,MotNil,NilssonP4,NilssonSH,Ragnarsson,RN,NR,Mottelson,Casten} formulated such a study,
using a modified 3D anisotropic HO with axially symmetric quadrupole deformation \cite{Takahashi,RD,ND,PVI,Lenis,Sugawara,ArimaJPG}, to which the spin-orbit interaction is added. Although the spherical coordinates remain convenient for small deformations, it has been realized that for large deformations the cylindrical coordinates become more appropriate, since they offer a suitable basis for the definition of asymptotic quantum numbers \cite{MN,Rassey,Quentin,Boisson}, which are exact for very large deformations but remain satisfactorily good even at moderate deformations. {The use of asymptotic quantum numbers for deformed nuclei is just an example of choosing a basis as close as possible to the dynamics of the physical system under study \cite{Welsh,Rowe}.}
It should be noted that the Nilsson model is still in wide use, 65 years after its introduction \cite{Nilsson}, either in the framework of Nilsson mean-field plus pairing calculations \cite{Guan1,Guan2}, cranking shell model calculations based on the Nilsson potential with pairing correlations \cite{Zhang}, or in relation to approximate SU(3) symmetries \cite{Kota} like the pseudo-SU(3) symmetry \cite{Adler,Shimizu,pseudo1,pseudo2,Ginocchio1,Ginocchio2,Quesne,Velazquez,DW1,DW2}, the quasi-SU(3) symmetry \cite{Zuker1,Zuker2}, the proxy-SU(3) symmetry \cite{proxy1,proxy2,proxy3,Sobhani,EPJASM,CRC,EPJAHW,EPJASC}, and the recovery of SU(3) symmetry at superdeformation \cite{Arima}.
From the above it becomes clear that each single-particle energy level can be described by two sets of quantum numbers, an exact one in the spherical coordinates and an asymptotic one in the cylindrical coordinates. One can see this correspondence in the Nilsson diagrams \cite{Nilsson,NR,FS}, which are figures presenting the single-particle energies as a function of the quadrupole deformation. Looking into the details of this correspondence, one sees a {\sl spin paradox}: bunches of levels forming an orbital in the spherical basis, sharing the same orbital angular momentum $l$ and the same total angular momentum $j$, correspond to a mixture of asymptotic levels in the cylindrical basis, some of them having spin up and some having spin down. Furthermore, the correspondence between spherical and cylindrical labels for the same single-particle level is not always the same for protons and for neutrons.
In this paper, we are going to resolve the spin paradox by pointing out a different, well-defined way for making the correspondence between the spherical quantum numbers and the asymptotic cylindrical quantum numbers for each single-particle level. Within this new scheme, bunches of levels forming an orbital in the spherical basis, sharing the same orbital angular momentum $l$ and the same total angular momentum $j$, correspond to asymptotic levels in the cylindrical basis having the same spin projection (up or down), and not a mixture of them. Furthermore, within this new scheme the correspondence between spherical and cylindrical labels for the same single-particle level is guaranteed to be the same for protons and for neutrons.
In Section 2 of the present work we review the description of the 3D isotropic HO in different coordinate systems, while in Section 3 a review of the Nilsson anisotropic, axially symmetric oscillator in the asymptotic cylindrical coordinates is given. The spin paradox is described in Section 4, while in Section 5 the new scheme leading to its resolution is introduced. Numerical results obtained within the new scheme are presented in Section 6, while in Section 7 the conclusions of this work are given.
\section{The three dimensional isotropic harmonic oscillator in different coordinate systems}
We start with a brief review of the description of the three-dimensional (3D) isotropic harmonic oscillator (HO) \cite{Wybourne,Smirnov,IacLie} in three different coordinate systems \cite{Greiner}, in order to introduce the notation, bearing in mind that the results for the measurable quantities of a physical system must be independent from the coordinate system used.
The potential of the isotropic HO is written as \cite{Greiner}
\begin{multline}
\label{simple harmonic potential}
V(\mathbf{r}) = \frac{1}{2} m \omega^2 \left( x^2 + y^2 + z^2 \right) \\
= \frac{1}{2} m \omega^2 \left( \rho^2 + z^2 \right)
= \frac{1}{2} m \omega^2 r^2,
\end{multline}
where $ x, y, z $ are the Cartesian coordinates, while $ \rho = \sqrt{x^2 + y^2} $ and $ r = \sqrt{x^2 + y^2 + z^2} $ are the radial coordinates in the cylindrical and the spherical coordinate systems, respectively, and $\omega$ is the angular frequency of the oscillator. For further details we refer to Refs. \cite{Greiner,GM}.
The energy eigenvalues in the three coordinate systems are given by
\begin{align}
\label{E-cart}
E_{\text{Cart.}} =& \hbar \omega \left( n_x + n_y + n_z + \frac{3}{2} \right),\\
\label{E-cyl}
E_{\text{Cyl.}} =& \hbar \omega \left( 2 n_\rho + n_z +
{|\Lambda|}
+ \frac{3}{2} \right), \\
\label{E-sph}
E_{\text{Sph.}} =& \hbar \omega \left( 2 n_r + L + \frac{3}{2} \right),
\end{align}
where the quantum numbers in different directions are shown by $ n_i$, with $i=x,y,z,\rho,r$, while $L$ and $\Lambda$ represent the angular momentum and its projection on the $z$-axis respectively.
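As a quick illustration of Eq.~(\ref{E-cart}), counting the degenerate Cartesian triples reproduces the 3D HO magic numbers mentioned in the Introduction; the short Python sketch below is our own aside, not part of the standard presentation:
\begin{verbatim}
from itertools import product

total = 0
for N in range(6):
    deg = sum(1 for nx, ny, nz in product(range(N + 1), repeat=3)
              if nx + ny + nz == N)
    assert deg == (N + 1)*(N + 2)//2
    total += 2*deg   # factor 2 for the two spin projections
    print(N, deg, total)   # cumulative: 2, 8, 20, 40, 70, 112
\end{verbatim}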
Separation of variables is possible in the Cartesian coordinates, therefore the wave function of the 3D isotropic HO is simply the product of the three one-dimensional (1D) HO wave functions \cite{Greiner}
\begin{multline}
\label{phi_x}
\phi_{n} (\xi) = N_n e^{-\frac{\xi^2}{2}} H_n (\xi), \\
{
N_n = \sqrt{\frac{1}{2^n n!} \sqrt{\frac{\lambda}{\pi}}}, \quad
\xi=x \sqrt{\lambda},
\quad
\lambda = \frac{m \omega}{\hbar}
}
\end{multline}
where $H_n$ are the Hermite polynomials, so that the full wave function reads
\begin{align}
\psi_{n_x, n_y, n_z} = \phi_{n_x} (x) \phi_{n_y} (y)\phi_{n_z} (z).
\end{align}
The eigenfunctions of the harmonic oscillator in cylindrical coordinates are given by \cite{GM}
\begin{multline}
\psi_{n_z, n_\rho, \Lambda} = N_c \exp \left[-\frac{k}{2} \left( z^2 + \rho^2 \right)\right] \\ H_{n_z} (\sqrt{k}\, z) \rho^{|\Lambda|} L_{n_{\rho}} ^{|\Lambda|} \left(k \rho^2\right) e^{i \Lambda \phi}
\end{multline}
where $L_{n}^{|\Lambda|}$ are the associated Laguerre polynomials, $N_c$ is a normalization constant and $ k = m \omega / \hbar $. It is seen that the azimuthal part of this wave function is a pure phase which does not affect the modulus of the wave function. In other words, we have $ \left[ H, L_z \right] = 0 $, which means $ \Lambda $ is a constant of the motion.
Finally, the wave function of the 3D isotropic HO in spherical coordinates is \cite{Greiner,GM}
\begin{multline}
\psi_{n_r, L, \Lambda} = N_c r^L \exp \left[-\frac{m \omega}{2 \hbar}r^2\right] \\ _1F_1 \left( -n_r, L + \frac{3}{2}, \frac{m \omega}{\hbar} r^2 \right) Y_{L \Lambda} (\theta, \varphi),
\end{multline}
where $ Y_{L \Lambda} (\theta, \varphi) $ are the spherical harmonics and $ _1 F_1 $ denotes the confluent hypergeometric function.
In principle, it is not important which system of coordinates is used for the description of nuclei, but depending on the available symmetries in the problem, the use of each system of coordinates has its own advantages. For instance, for small {$ (\delta \lesssim 0.1) $} and moderate {$ (0.1 \lesssim \delta \lesssim 0.3) $} deformations {(see Eq. (\ref{delt}) for the definition of the deformation $\delta$)} it is customary to use the spherical coordinates, because in these cases the deformations in the shape of a nucleus do not lead far from the spherical shape \cite{NR}.
On the other hand, for large deformations {($\delta > 0.3$)}, where the nucleus assumes an axially symmetric shape, it is preferable to use cylindrical coordinates \cite{NR}. Nevertheless, the results must be independent from the selection of the coordinate system. {More details will be given in the next section.}
\section{The modified harmonic oscillator}
The starting point of the shell model was the interpretation of the nuclear magic numbers, which are reproduced by adding to the 3D isotropic HO the spin-orbit interaction, i.e., a term describing the coupling of the spin with the orbital angular momentum \cite{Mayer,MJ}. Subsequently, the Nilsson model was introduced, in which the 3D isotropic HO is replaced by an axially symmetric 3D anisotropic HO with cylindrical symmetry. The Nilsson Hamiltonian reads \cite{Nilsson,NR}
\begin{equation}
H = H_{\text{osc}} + H',
\end{equation}
where $ H_{\text{osc}} $ is the Hamiltonian of the 3D anisotropic HO to be discussed below, while the correction terms are
\begin{equation}\label{Hprime}
H' = - 2 \kappa \hbar \omega_0
\mathbf{L} \cdot \mathbf{s} - \mu \kappa \hbar \omega_0 \left( \mathbf{L}^2 - \left< \mathbf{L}^2 \right>_N \right),
\end{equation}
with the first term representing the spin-orbit interaction and the second term, proportional to the square of the orbital angular momentum, flattening the bottom of the potential. The term
$ \left< L^2 \right>_N = N(N+3)/2 $ has a constant value in each shell, while $ \kappa $ and $ \mu $ are parameters that have different values within each proton or neutron major shell, listed, for example, in
Refs. \cite{NR,BM}.
The Hamiltonian of the 3D anisotropic HO with cylindrical symmetry used in the Nilsson model is \cite{NR}
\begin{equation}
H_{\text{osc}} = -\frac{\hbar^2}{2M} \Delta + \frac{M}{2} \left[ \omega_\perp ^2 \left(x^2+y^2\right) + \omega_z ^2 z^2
\right],
\end{equation}
where the frequencies in the $x$-$y$ plane and in the $z$-direction are denoted by $ \omega_\perp $ and $ \omega_z $, respectively, {$M$ is the nucleon mass, and $\Delta$ is the usual Laplacian operator}. Then the deformation parameter is defined in the form
\begin{equation}\label{delt}
\delta = \frac{\omega_\perp - \omega_z}{\omega_0}.
\end{equation}
The modified harmonic oscillator Hamiltonian can be solved in different ways, depending on the size of the deformation. For small deformations {($\delta \lesssim 0.1$)}, perturbation techniques within the spherical basis can be used \cite{NR}, while at large deformations {($\delta > 0.3$)} the asymptotic basis in cylindrical coordinates is used \cite{NR}.
{In the case of small deformations ($\delta \lesssim 0.1$), one can employ perturbation theory in the spherical coordinates, the result being \cite{NR}
\begin{align}
\left< N L s j \Omega \left| \delta h' \right| N L s j \Omega \right> = \frac{1}{6} \delta M \omega_0 ^2 \left< r^2 \right> \frac{3 \Omega^2 - j (j+1)}{j(j+1)},
\end{align}
where $ h' $ denotes the first-order perturbation due to the deformation, $N$ is the number of oscillator quanta, $L$ is the orbital angular momentum, $s$ is the spin, $j$ is the total angular momentum and $\Omega$ is its projection on the $z$-axis. }
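For a quick orientation (our own aside, based only on the formula above), the factor $(3\Omega^2 - j(j+1))/(j(j+1))$ can be tabulated, e.g.\ for a $j=9/2$ orbital; for prolate shapes ($\delta>0$) the low-$\Omega$ members are lowered the most:
\begin{verbatim}
from fractions import Fraction

j = Fraction(9, 2)
for two_Omega in range(1, 10, 2):
    Om = Fraction(two_Omega, 2)
    factor = (3*Om**2 - j*(j + 1))/(j*(j + 1))
    print(Om, float(factor))   # -0.97 (Omega=1/2) ... +1.45 (Omega=9/2)
\end{verbatim}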
In line with the purpose of this paper, we start from the case of large deformations {($\delta> 0.3$)} and we show how the solutions obtained for large deformations can be used for obtaining solutions valid at small deformations {($\delta\lesssim 0.1$)}, providing in parallel the correct nuclear magic numbers, without making any change in the Hamiltonian, but by simply imposing a rule of correspondence between the cylindrical quantum numbers used for large deformations and the spherical quantum numbers used at small deformations.
In the case of large deformations, it is convenient to introduce what one may call ``stretched'' coordinates \cite{NR}
\begin{equation}
\xi = x \sqrt{\frac{M \omega_\perp}{\hbar}},
\quad
\eta = y \sqrt{\frac{M \omega_\perp}{\hbar}},
\quad
\zeta = z \sqrt{\frac{M \omega_z}{\hbar}}.
\end{equation}
Consequently, the components of angular momentum are defined in terms of the ``stretched'' coordinates as
\begin{equation}
\left(L_t\right)_x = -i \hbar \left( \eta \frac{\partial}{\partial \zeta} - \zeta \frac{\partial}{\partial \eta}\right),
\end{equation} {
\begin{equation}
\left(L_t\right)_y = -i \hbar \left( \zeta \frac{\partial}{\partial \xi} - \xi \frac{\partial}{\partial \zeta}\right),
\end{equation}
\begin{equation}
\left(L_t\right)_z = -i \hbar \left( \xi \frac{\partial}{\partial \eta} - \eta \frac{\partial}{\partial \xi}\right).
\end{equation}}
As a consequence, the correction terms in the Hamiltonian are taking the form
\begin{align}\label{Ham}
H' _{\text{def}} = - 2 \kappa \hbar \omega_0
\mathbf{L}_t \cdot \mathbf{s} - \mu \kappa \hbar \omega_0 \left( \mathbf{L}_t ^2 - \left< L_t ^2 \right>_N \right),
\end{align}
with the index $ t $ dropped hereafter, according to the approximation introduced by Nilsson \cite{Nilsson,NR}. {In other words, Eq. (\ref{Ham}) is considered as the analog in the ``stretched'' coordinates of Eq. (\ref{Hprime}) in the usual cartesian coordinates. The parameters $\kappa$ and $\mu$ are the ones used in Eq. (\ref{Hprime}),
while $\omega_0$ is slightly dependent on the deformation, as seen from the relation obtained from the volume conservation condition \cite{Nilsson}
\begin{align}
\omega_0 (\delta ) =
\overset{0}{\omega}_0 \left(
1 - \frac{4}{3} \delta^2 - \frac{16}{27} \delta^3
\right)^{-1/6},
\end{align}
where $ \overset{0}{\omega}_0 $ is $ \omega_0 (\delta = 0) $.}
In these coordinates the eigenfunctions and eigenvalues for $ H_{\text{osc}} $ are found to read
\begin{multline}
\label{eigen function}
\psi_{n_z,n_\perp, \Lambda} (\rho, \varphi, z)= N_c e^{-\zeta^2 /2}H_{n_z} (\zeta)
\rho^{|\Lambda|} e^{-\rho^2 /2} \\
_1 F_1 \left( - \frac{n_\perp - |\Lambda|}{2}, |\Lambda|+1; \rho^2 \right) e^{i \Lambda \varphi}, \\
E_{n_z, n_\perp} = \hbar \omega_0 \left(
N + \frac{3}{2} + \left( n_\perp - 2n_z \right) \frac{\delta}{3} \right),
\end{multline}
with $ \rho^2 = \xi^2 + \eta^2 $, $ N = n_\perp + n_z $, $ n_\perp = n_x + n_y $. When $H'_{\text{def}}$ is added, these functions are not eigenfunctions of the total Hamiltonian $ H_{\text{tot}} = H_{\text{osc}} + H' _{\text{def}} $. The mean value of $ \mathbf{L}^2 $ takes a constant value within each shell, but for the other terms of $H'_{\text{def}}$ the matrix elements read \cite{Nilsson,NR}
\begin{multline}
\left< n_z-1, n_\perp+1, \Lambda+1, \Sigma-1 \left|
\mathbf{L} \cdot \mathbf{s}
\right|n_z, n_\perp, \Lambda, \Sigma
\right> = \\ -\frac{1}{2} \sqrt{n_z \left(n_\perp + \Lambda + 2 \right)}, \\
\left< n_z+1, n_\perp-1, \Lambda+1, \Sigma-1 \left|
\mathbf{L} \cdot \mathbf{s}
\right|n_z, n_\perp, \Lambda, \Sigma
\right> = \\ \frac{1}{2} \sqrt{\left(n_z+1\right) \left(n_\perp - \Lambda\right)}, \\
\left< n_z+1, n_\perp-1, \Lambda-1, \Sigma+1 \left|
\mathbf{L} \cdot \mathbf{s}
\right|n_z, n_\perp, \Lambda, \Sigma
\right> = \\ -\frac{1}{2} \sqrt{\left(n_z+1\right) \left(n_\perp + \Lambda\right)}, \\
\left< n_z-1, n_\perp+1, \Lambda-1, \Sigma+1 \left|
\mathbf{L} \cdot \mathbf{s}
\right|n_z, n_\perp, \Lambda, \Sigma
\right> =\\ \frac{1}{2} \sqrt{n_z \left(n_\perp - \Lambda + 2 \right)},
\end{multline}
and
\begin{multline}
\left< n_z, n_\perp, \Lambda, \Sigma \left|
\mathbf{L}^2 \right|n_z, n_\perp, \Lambda, \Sigma
\right> = \\2n_z \left( n_\perp + 1 \right) + n_\perp + \Lambda^2,\\
\left< n_z + 2, n_\perp - 2, \Lambda, \Sigma \left|
\mathbf{L}^2 \right|n_z, n_\perp, \Lambda, \Sigma
\right> =\\ - \sqrt{\left( n_z + 2 \right) \left( n_z + 1 \right) \left( n_\perp + \Lambda \right) \left( n_\perp - \Lambda \right)},\\
\left< n_z - 2, n_\perp + 2, \Lambda, \Sigma \left|
\mathbf{L}^2 \right|n_z, n_\perp, \Lambda, \Sigma
\right> =\\ - \sqrt{\left( n_z -1 \right) n_z \left( n_\perp + \Lambda + 2 \right) \left( n_\perp - \Lambda + 2\right)},
\end{multline}
where $ \Sigma = \pm 1/2 $ is the projection of the spin and the notation $ \psi = \left|n_z, n_\perp, \Lambda, \Sigma \right> $ is used. The selection rules appearing in the above equations indicate that in order to obtain the eigenfunctions and eigenvalues of $H_{\text{tot}}$, the Hamiltonian matrix within each $ N $ shell should be constructed and subsequently diagonalized.
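To make this procedure concrete, the following Python sketch (our own construction, using only the matrix elements quoted above plus the diagonal $\mathbf{L}\cdot\mathbf{s}$ part $\Lambda\Sigma$, which is not repeated in the list) builds and diagonalizes $H_{\text{tot}}$ within a single $N$ shell, in units of $\hbar\omega_0(\delta)$; the values of $\kappa$ and $\mu$ in the demonstration call are illustrative placeholders rather than the tabulated shell-dependent ones:
\begin{verbatim}
import numpy as np

def nilsson_levels(N, delta, kappa, mu):
    # asymptotic basis |n_z, n_perp, Lambda, Sigma>,
    # with n_perp - |Lambda| even (n_rho integer)
    basis = [(nz, N - nz, L, S)
             for nz in range(N + 1)
             for L in range(-(N - nz), N - nz + 1)
             if (N - nz - abs(L)) % 2 == 0
             for S in (0.5, -0.5)]
    index = {state: i for i, state in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    L2N = N*(N + 3)/2.0
    for k, (nz, npp, L, S) in enumerate(basis):
        # oscillator energy plus the diagonal parts of
        # -2*kappa*l.s (= Lambda*Sigma) and -mu*kappa*(l^2 - <l^2>_N)
        H[k, k] = (N + 1.5 + (npp - 2*nz)*delta/3.0 - 2.0*kappa*L*S
                   - mu*kappa*(2*nz*(npp + 1) + npp + L*L - L2N))
        # off-diagonal l.s elements, as listed in the text
        for bra, val in [
            ((nz - 1, npp + 1, L + 1, S - 1),
             -0.5*np.sqrt(nz*(npp + L + 2))),
            ((nz + 1, npp - 1, L + 1, S - 1),
             0.5*np.sqrt((nz + 1)*(npp - L))),
            ((nz + 1, npp - 1, L - 1, S + 1),
             -0.5*np.sqrt((nz + 1)*(npp + L))),
            ((nz - 1, npp + 1, L - 1, S + 1),
             0.5*np.sqrt(nz*(npp - L + 2)))]:
            if bra in index:
                H[index[bra], k] += -2.0*kappa*val
        # off-diagonal l^2 elements (Delta n_z = +-2)
        for bra, val in [
            ((nz + 2, npp - 2, L, S),
             -np.sqrt((nz + 2)*(nz + 1)*(npp + L)*(npp - L))),
            ((nz - 2, npp + 2, L, S),
             -np.sqrt((nz - 1)*nz*(npp + L + 2)*(npp - L + 2)))]:
            if bra in index:
                H[index[bra], k] += -mu*kappa*val
    return np.linalg.eigvalsh(H)

# illustrative run; kappa = 0.06, mu = 0.5 are placeholders
print(nilsson_levels(N=2, delta=0.3, kappa=0.06, mu=0.5))
\end{verbatim}
Since $\Omega=\Lambda+\Sigma$ is conserved, the matrix is in fact block-diagonal in $\Omega$; the sketch simply diagonalizes the full matrix for brevity.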
Up to here, we have reviewed the asymptotic solutions of the Nilsson Hamiltonian, valid at large deformations {($\delta >0.3$)}. In Section 5, a rule will be presented, according to which each state is uniquely labeled both in the cylindrical and in the spherical coordinates, so that one can recover the nuclear magic numbers at zero deformation, as well as correct solutions for small {($\delta \lesssim 0.1$)} and moderate
{($0.1\lesssim \delta \lesssim 0.3$)} deformations using the asymptotic quantum numbers $ \Omega,N, n_z, \Lambda, \Sigma $ with $ \Omega = \Lambda + \Sigma $. But before doing so, we are going to discuss the spin paradox appearing in the Nilsson model, which will be resolved by the rule just mentioned.
\section{The spin paradox}
As we have already seen, Nilsson states in the asymptotic basis {\cite{Nilsson,NR}} are labeled as $\Omega[N n_z \Lambda]$, where $N$ is the total number of oscillator quanta and $n_z$ is the number of quanta along the $z$-axis, while $\Lambda$ and $\Omega$ denote respectively the projections of the orbital angular momentum and the total angular momentum on the $z$-axis. {The full wave functions can be found on page 115 of Ref. \cite{NR}}. On the other hand, in the spherical shell model basis orbitals are denoted by
$n l j _\Omega$, where $l$ and $j$ are the quantum numbers associated respectively with the orbital angular momentum and the total angular momentum, and $n=n_r+1$.
The correspondence between the asymptotic cylindrical Nilsson orbitals and the spherical shell model orbitals for neutrons, as taken from the standard Nilsson diagrams \cite{FS}, is shown in Table \ref{TA}. The following observations can be made, with the spherical orbitals involved in the discussion shown in boldface.
a) In several cases the spherical shell model orbitals are made up from Nilsson orbitals of mixed spin projection $\Sigma$. For example, the 1g7/2 orbital is made out of three Nilsson orbitals with spin up (1/2[420],
3/2[411], 5/2[402]) and one Nilsson orbital with spin down (7/2[404]). We call this fact {\sl the spin paradox}.
b) In the same cases as in a), the spherical shell model orbitals are made up from Nilsson orbitals of mixed $n_\rho$ values. For example, the 1g7/2 orbital is made out of three Nilsson orbitals with $n_\rho=1$ (1/2[420],
3/2[411], 5/2[402]) and one Nilsson orbital with $n_\rho=0$ (7/2[404]).
c) The cases of a) and b) are in one-to-one correspondence with cases of groups of Nilsson orbitals possessing mixed values of $n_r$. For example, in the group of Nilsson orbitals 1/2[431], 3/2[422], 5/2[413], 7/2[404], the first three correspond to $n_r=1$, while the last one to $n_r=0$.
Furthermore, the correspondence between the asymptotic cylindrical Nilsson orbitals and the spherical shell model orbitals for protons, again taken from the standard Nilsson diagrams \cite{FS}, is shown in Table \ref{TB}. The following observations can be made.
a) The spin paradox occurs only in a few cases, much fewer than in the neutron case, shown in boldface. {This difference can be attributed to the different parameter sets used for protons and for neutrons, as will be discussed in the last paragraph of the present section. }
b) As a consequence, there are several Nilsson orbitals which have different shell model counterparts for neutrons and for protons. For example, the 1/2[420] Nilsson orbital corresponds to the 1g7/2$_{1/2}$ spherical shell model orbital for neutrons, but to the 2d5/2$_{1/2}$ spherical shell model orbital for protons. More examples of this kind can be found by considering the orbitals appearing in boldface in Table \ref{TA}, but not appearing in boldface in Table \ref{TB}.
We thus see that not only does the spin paradox appear in the correspondence between certain asymptotic cylindrical Nilsson orbitals and spherical shell model orbitals, but in addition this correspondence is not the same for neutrons and protons.
It is worth examining the source of these discrepancies by looking at the numerical results of standard Nilsson model calculations, reported in Tables \ref{TD} and \ref{TE}.
In Table \ref{TD} some examples of proton orbitals (upper part) and neutron orbitals (lower part) are shown. Each Nilsson orbital is expanded in the shell model basis $| N l j \Omega\rangle$. The expansion coefficients are given for three different values of the deformation $\delta$. We remark that in all cases there is a leading contribution by a term having a coefficient very close to unity, shown in boldface. Therefore in these cases there is a clear one-to-one correspondence between the asymptotic cylindrical Nilsson orbitals and the spherical shell model orbitals, as shown in Tables \ref{TA} and \ref{TB}. No spin paradox or mixing of $n_\rho$ or $n_r$ values is observed in these cases.
Another set of examples of proton orbitals (upper part) and neutron orbitals (lower part) are shown in Table \ref{TE}. In the upper part, we see that there is again a leading term with a large coefficient, shown in boldface, although the dominance of this term becomes less pronounced for high deformation $\delta$, especially for vectors with low $\Omega$. In other words, high deformation $\delta$ in the case of Nilsson vectors with low $\Omega$ favors non-negligible contributions from more than one spherical shell model vector. Nevertheless, the one-to-one correspondence between cylindrical and spherical orbitals is preserved in this case, with no spin paradox or mixing of $n_\rho$ or $n_r$ values observed.
A different situation appears in the example shown in the lower part of Table \ref{TE}, which regards neutron orbitals. In this case it is clear that there is a dominant term, shown in boldface, at low deformation, but the situation is radically different at higher deformations. In most of the cases there is a leading contribution, shown in boldface, but it comes from a spherical vector different from the one appearing at low deformation.
Non-negligible contributions from more than one spherical vector become more frequent in Nilsson vectors with low $\Omega$. In Table \ref{TA} it is clear that the correspondence between Nilsson and spherical orbitals is the one appearing at low deformation, which is different from the one appearing at high deformation. Only for the Nilsson orbital 9/2[514] is there agreement on the leading contribution at low and high deformations. In other words, it is the disagreement between the leading contributions at low and at high deformations in Table \ref{TE} which leads to the spin paradox in Table \ref{TA}, as well as to the mixing of the $n_\rho$ and of the $n_r$ eigenvalues.
{In relation to the physics origin of the spin paradox, the following comments can be made. Since the expression for the energy eigenvalues of the 3D anisotropic (Nilsson) HO, Eq. (\ref{eigen function}), is the same for both protons and neutrons, the different numerical results are due to the different parameter values used for neutrons and for protons, which can be found in the tables given in \cite{NR,BM,proxy1}. For example, in the neutron shell with $N=82$--126, the parameters used are $\kappa=0.0635$ and $\mu=0.422$, while in the proton shell with $Z=82$--126 the parameters are $\kappa=0.0575$ and $\mu=0.652$ \cite{proxy1}. As a consequence, the coefficient of the spin-orbit term in Eq. (\ref{Ham}), which is proportional to $\kappa$, differs in the present example between protons and neutrons by about 10\%, while the coefficient of the $\textbf{L}^2$ term, which is proportional to $\kappa\mu$ and takes the values 0.0268 for neutrons and 0.0375 for protons, differs by about 40\%, making the bottom of the proton potential much flatter than the bottom of the neutron potential. As a result, the ``spaghetti'' of Nilsson single-particle energy levels obtained for protons and for neutrons is different in each case, and the correspondence between single-particle states in the spherical case and in the largely deformed (asymptotic) case becomes different when using the sets of quantum numbers described above.}
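
These percentages can be checked directly; the following lines of Python (a trivial numerical cross-check, using only the parameter values quoted above) reproduce them:
\begin{verbatim}
kap_n, mu_n = 0.0635, 0.422   # neutrons, N = 82-126 shell
kap_p, mu_p = 0.0575, 0.652   # protons,  Z = 82-126 shell
print((kap_n - kap_p)/kap_n)                     # ~0.09: ~10% in kappa
print(kap_n*mu_n, kap_p*mu_p)                    # 0.0268 vs 0.0375
print((kap_p*mu_p - kap_n*mu_n)/(kap_n*mu_n))    # ~0.40: ~40% in kappa*mu
\end{verbatim}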
In the next section we are going to show how the spin paradox and the disagreement between neutron and proton Nilsson diagrams can be resolved.
\section{``The rule'' and Nilsson orbitals}
In this section, we discuss how to label states in a manner different from the traditional approach \cite{NR,FS} presented above, using a {\sl rule}. Following this new labeling, magic numbers and Nilsson diagrams are reproduced exactly using the asymptotic quantum numbers for all values of the deformation, large and small.
Let us recall the energy of the 3D isotropic HO in cylindrical and in spherical coordinates. As stated above, it is not important which system of coordinates is employed. Thus, one can expect that the energy of the 3D isotropic HO must be the same both in cylindrical and in spherical coordinates
\begin{align}
E_\text{Cyl.} =&E_{\text{Sph.}
},\nonumber \\
\hbar \omega \left(2 n_\rho + n_z + \Lambda + \frac{3}{2}\right) =& \hbar \omega \left( 2 n_r + L + \frac{3}{2} \right),
\end{align}
which leads to
\begin{align}
2 n_\rho + n_z + \Lambda = 2 n_r + L.
\end{align}
Since several quantum numbers appear in it, this equation alone does not suffice to provide detailed relations between the cylindrical and spherical quantum numbers.
This gives us the freedom to consider an assumption, namely to equate quantum numbers appearing in this equation with coefficient equal to one, and separately to equate quantum numbers appearing in this equation with coefficient equal to two, getting in this way
\begin{align}
\label{firstrule}
n_r =& n_\rho, \\
\label{secondrule}
L =& n_z + \Lambda.
\end{align}
Hereafter we call these arbitrarily assumed equations ``{\sl the rule}''.
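
As a first consistency check, ``the rule'' can be applied mechanically. The following short Python sketch (an illustrative enumeration, using $\Lambda \geq 0$ labels as in the asymptotic notation) groups all cylindrical states of a given $N$ shell into spherical $(n_r, L)$ multiplets via Eqs. (\ref{firstrule}) and (\ref{secondrule}); each spherical level with $2 n_r + L = N$ is recovered exactly once, with $\Lambda$ running over $0, 1, \ldots, L$.
\begin{verbatim}
from collections import defaultdict

def rule_grouping(N):
    groups = defaultdict(list)
    for nz in range(N + 1):
        nperp = N - nz
        for Lam in range(nperp % 2, nperp + 1, 2):  # Lambda >= 0 labels
            nrho = (nperp - Lam)//2
            nr, L = nrho, nz + Lam                  # "the rule"
            groups[(nr, L)].append((nz, Lam))
    return dict(groups)

for (nr, L), states in sorted(rule_grouping(4).items()):
    print((nr, L), states)   # (0,4): 1g, (1,2): 2d, (2,0): 3s
\end{verbatim}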
These equations show an arbitrary way of mapping the quantum numbers of the spherical coordinates onto the quantum numbers of the cylindrical coordinates when there is no deformation. Applying Eqs. (\ref{firstrule}) and (\ref{secondrule}) for the correspondence between the cylindrical Nilsson orbitals and the spherical shell model orbitals we obtain Table \ref{TC}, which is valid for both protons and neutrons and replaces Table \ref{TA} for neutrons and Table \ref{TB} for protons. Concerning Table \ref{TC}, the following remarks can be made.
a) The spin paradox is resolved. Each bunch of orbitals is characterized by the same spin projection $\Sigma$. For example, the 1g7/2 bunch of orbitals is characterized by spin down, while in Table \ref{TA} it was characterized by a mixture of spin up and spin down orbits.
b) Each bunch of orbitals is characterized by a common value of $n_\rho$ in cylindrical coordinates and a common value of $n_r$ in spherical coordinates, with $n_\rho=n_r$. For example, the 1g7/2 bunch of orbitals is characterized by $n_\rho=n_r=0$, while in Table \ref{TA} it corresponds to a mixture of $n_r=0$ and $n_r=1$ orbits.
c) Table \ref{TC} is valid for both protons and neutrons, while separate tables for neutrons, Table \ref{TA}, and protons, Table \ref{TB}, are obtained within the traditional approach \cite{NR,FS}.
It should be emphasized that ``the rule'' is an arbitrary assumption, which is not related to any transformation between the spherical and cylindrical bases, as the ones appearing in the literature \cite{Chasman,Davies}.
It is now time to examine how ``the rule'' works within specific numerical examples.
\begin{figure*}[htb]
\includegraphics[width=120mm]{Nilsson-Asymptotic-N2-The-rule}
\caption{Behavior of the levels within the major shell $ N = 2 $ predicted by ``the rule''. The labels 1/2[200+] and 1/2[211$-$] are interchanged, in comparison to the standard Nilsson diagrams \cite{NR,FS}.
{The constants are $ \kappa = 0.105 $ and $ \mu = 0 $. This figure holds for both protons and neutrons \cite{NR}.}
}
\label{FN2}
\end{figure*}
\begin{figure*}[htb]
\includegraphics[width=120mm]{Nilsson-Asymptotic-N3-The-rule}
\caption{The same as Fig. 1, but for $ N = 3 $. The labels 1/2[310+] and 1/2[321$-$] have been interchanged, and in addition the labels 3/2[301+] and 3/2[312$-$] have also been interchanged, in comparison to the standard Nilsson diagrams \cite{NR,FS}.
{The constants are $ \kappa = 0.090 $ and $ \mu = 0.25 $. This figure holds only for neutrons \cite{NR}.}
}
\label{FN3}
\end{figure*}
\section{Numerical results}
\label{numerical results}
After determination of the quantum numbers of the various levels according to ``the rule'', as shown in Table \ref{TC}, we now intend to study their behavior as a function of the deformation parameter, $ \delta $, in order to see if it is possible to use ``the rule'' for plotting Nilsson diagrams. In this direction, we construct explicitly the full Hamiltonian matrix of the modified harmonic oscillator for the shells with $ N = 2$, 3, with the results shown in Tables \ref{TN2} and \ref{TN3}.
In order to obtain the eigenvalues as a function of the deformation parameter $\delta$, these Hamiltonian matrices are diagonalized, with the results of this process plotted in Figs. \ref{FN2} and \ref{FN3}.
In Fig. \ref{FN2} we see that the lines obtained are identical to those in the standard Nilsson diagrams \cite{NR,FS}, the only difference appearing in the labeling of the states. In particular, the labels 1/2[200+] and 1/2[211$-$] have been interchanged, as expected by comparing Table \ref{TC} to Tables \ref{TA} and \ref{TB}.
In Fig. \ref{FN3} we see that the lines obtained are identical to those in the standard Nilsson diagrams \cite{NR,FS}, the only difference appearing in the labeling of the states. In particular, the labels 1/2[310+] and 1/2[321$-$] have been interchanged, and in addition the labels 3/2[301+] and 3/2[312$-$] have also been interchanged, as expected by comparing Table \ref{TC} to Tables \ref{TA} and \ref{TB}.
Let us give an illustrative example of the above considerations. The spherical shell model wave functions $| N L j \Omega\rangle$ can be expanded in terms of $|N L \Lambda\Sigma\rangle$ wave functions, which explicitly contain $\Sigma$, as follows
\begin{equation}
\label{the expansion}
\left|N, L, j= l \pm \tfrac{1}{2}, \Omega \right>
= \sum _{\Sigma} \left. \left<l \Lambda \tfrac{1}{2} \Sigma \right| j \Omega \right> \left|N L \Lambda \Sigma \right>,
\end{equation}
where the symbol $ \left. \left<l \Lambda \tfrac{1}{2} \Sigma \right| j \Omega \right> $ stands for the appropriate Clebsch-Gordan coefficient, and there is no summation over $\Lambda$ since $\Omega=\Lambda+\Sigma$. Consider, for example, the state $ | n L j_\Omega\rangle = |1 \text{g} 7/2_{5/2}\rangle $. Expanding this state according to Eq. \eqref{the expansion} results in
\begin{align}
\left| 1 \text{g} 7/2 _{5/2} \right> =&
\left< 4 2 \tfrac{1}{2} \tfrac{1}{2} | \tfrac{7}{2} \tfrac{5}{2} \right> \left| 4 4 2 + \right> +
\left< 4 3 \tfrac{1}{2} -\tfrac{1}{2} | \tfrac{7}{2} \tfrac{5}{2} \right> \left| 4 4 3 - \right> \nonumber \\
=& -\frac{\sqrt{2}}{3} \left| 4 4 2 + \right> + \frac{\sqrt{7}}{3} \left| 4 4 3 - \right>.
\end{align}
According to the above expression the state contains non-zero
probabilities of both $\Sigma = +1/2$ and $-1/2$ components, with the $\Sigma = -1/2$ component having a larger probability. Since $j = 4 - 1/2$ in the $ 1\text{g}7/2 $ orbital, in a classical picture only the $\Sigma = -1/2$ term might be expected to appear, while according to Table \ref{TA} and Ref. \cite{FS}, for the considered state we have
\begin{align}
\left| 1 \text{g} 7/2 _{5/2} \right> =
\begin{cases}
5/2 \left[402+\right] & \text{for neutrons},\\
5/2 \left[413-\right] & \text{for protons}.\\
\end{cases}
\end{align}
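
The two Clebsch--Gordan coefficients entering the expansion above can be verified with a few lines of sympy (a minimal cross-check):
\begin{verbatim}
from sympy import S, Rational
from sympy.physics.quantum.cg import CG

# <l Lambda 1/2 Sigma | j Omega> for l = 4, j = 7/2, Omega = 5/2
c_up   = CG(4, 2, S.Half,  S.Half, Rational(7, 2), Rational(5, 2)).doit()
c_down = CG(4, 3, S.Half, -S.Half, Rational(7, 2), Rational(5, 2)).doit()
print(c_up, c_down)          # -sqrt(2)/3, sqrt(7)/3
print(c_up**2 + c_down**2)   # 1 (normalization)
\end{verbatim}
Both values agree with the expansion above, confirming that the $\Sigma=-1/2$ component carries the larger probability.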
In order to find where the paradox occurs, let us calculate the quantum number
{$ n_x + n_y \equiv n_\perp = N - n_z = 2 n_\rho + |\Lambda| $,
where the last equality is obtained by equating Eqs. \eqref{E-cart} and \eqref{E-cyl},}
which is needed in the Nilsson Hamiltonian expressed in terms of the asymptotic quantum numbers.
{ One may check at this point that the same constraint for $ n_\perp $ has been obtained using a different method in Ref. \cite{NR} page 115. }
In the neutron case we have
\begin{align}
5/2 \left[402+\right] \rightarrow
n_{\perp} =
\begin{cases}
N - n_z = 4 - 0 = 4,\\
2n_\rho + \Lambda = 0 + 2 = 2,
\end{cases}
\end{align}
while for the proton case we obtain
\begin{align}
5/2 \left[413-\right] \rightarrow
n_{\perp} =
\begin{cases}
N - n_z = 4 - 1 = 3,\\
2n_\rho + \Lambda = 0 + 3 = 3.
\end{cases}
\end{align}
It is seen that in the neutron case the two evaluations of $ n_\perp $ do not agree, while they do give the same value of $ n_\perp $ in the proton case, which follows ``the rule''.
In other words, since the Clebsch-Gordan coefficients are the same for both protons and neutrons, the same results would have been expected in both cases. Although in the classical picture in $ 1\text{g}7/2 $ one expects $ \Sigma=-1/2 $, this is indeed the case for protons, shown in Table \ref{TB}, but not for neutrons, shown in Table \ref{TA}. This is what we call a paradox. ``The rule'' gives the right answer. The correspondence under discussion is valid only for protons, i.e., in the case in which ``the rule'' is satisfied.
We conclude that Nilsson diagrams can be exactly reproduced following ``the rule''. The single-particle energy levels as a function of the quadrupole deformation $\delta$ look exactly the same as in the traditional approach \cite{NR,FS}, the only difference being that the Nilsson labels for certain pairs of energy levels are interchanged. These interchanges resolve the spin paradox seen in the traditional Nilsson diagrams, and in addition they make the Nilsson diagrams for protons and for neutrons identical to each other, while the traditional Nilsson diagrams for protons and for neutrons are different from each other \cite{NR,FS}.
\section{Conclusion}
In this paper a new way of establishing the correspondence between the quantum numbers characterizing single-particle nuclear levels in the spherical coordinates used in the shell model and in the cylindrical coordinates used in the Nilsson model is introduced. The new correspondence resolves the spin paradox seen in the traditional approach, since in the new description each bunch of levels characterized by the same orbital angular momentum $l$ and the same total angular momentum $j$ in the spherical coordinates corresponds to a single value of the spin projection in the cylindrical coordinates, and not to a mixture of spin projections, as in the traditional description. Furthermore, the correspondence between spherical shell model orbitals and asymptotic cylindrical Nilsson orbitals becomes the same for both protons and neutrons, while different sets of correspondence are obtained for protons and for neutrons within the traditional approach. Numerical results prove that the dependence of the single-particle energy levels on the quadrupole deformation, shown in the Nilsson diagrams, remains exactly the same as in the traditional approach, the only difference being that for certain pairs of single-particle energy levels the Nilsson labels corresponding to two spherical shell model orbitals get interchanged.
{
\textbf{Acknowledgement}:
The authors are grateful to the referees for helpful comments and suggestions
on this work.
}
\section{Introduction}
Among possible interacting superconformal field theories (SCFTs) in diverse dimensions,
the six-dimensional (6d) theories are less well understood.
It is expected that 6d $\mathcal{N}=(1,0)$ SCFTs involve tensionless strings
and show very non-trivial physics.
This non-trivial nature of the 6d SCFTs is deeply related to the strong coupling physics of superstring theory, but its detailed features have yet to be discovered.
The E-string theory is a typical 6d $\mathcal{N}=(1,0)$ SCFT.
Originally, the E-string theory was found in heterotic string theory as the theory of a small $E_8$ instanton \cite{Witten:1995gx,Ganor:1996mu,Seiberg:1996vs}.
There are also several other realizations of the theory.
For instance, by considering an M5-brane probing the end-of-the-world M9 boundary \cite{Horava:1995qa,Horava:1996ma},
one finds the E-string theory on the M5-brane world-volume.
M-theory, or topological string theory, on the local $\frac{1}{2}$K3 surface,
namely the Calabi--Yau 3-fold in the vicinity of the $\frac{1}{2}$K3 4-cycle, is dual to the E-string theory \cite{Klemm:1996hh}.
Shrinking a $\frac{1}{2}$K3 4-cycle in Calabi--Yau compactification of F-theory
also leads to the E-string theory \cite{Witten:1996qb,Morrison:1996na,Morrison:1996pp}.
The Seiberg--Witten curve, the prepotential, and the partition function of the E-string theory
have been studied in \cite{Minahan:1997ct,Minahan:1998vr,Hosono:1999qc,Mohri:2001zz,Eguchi:2002fc,
Iqbal:2002rk,Eguchi:2002nx,Sakai:2011xg,Sakai:2012zq,Sakai:2012ik,Huang:2013yta,Ishii:2013nba,Sakai:2014hsa}.
Moreover, quite recently there was an attempt at classifying all
6d $\mathcal{N}=(1,0)$ SCFTs \cite{Heckman:2013pva,Gaiotto:2014lca,DelZotto:2014hpa,Heckman:2014qba,Haghighat:2014vxa}.
The anomaly polynomials of these 6d theories were also determined in \cite{Ohmori:2014pca,Ohmori:2014kda,Intriligator:2014eaa}.
Recently there has been significant progress on the localization computation in 5d gauge theories.
In \cite{Kim:2012gu,Hwang:2014uwa}, the localization calculation was formulated
for the superconformal index of 5d $\mathcal{N}=1$ $Sp(N)$ gauge theories with $N_f$ flavors,
and it was found that this index shows the enhancement of global symmetry
$SO(2N_f)\times U(1)\subset E_{N_f+1}$
as was expected from Seiberg's argument \cite{Seiberg:1996bd}.
The same index was also derived from the topological string theory in \cite{Iqbal:2012xm},
and the enhancement was studied in more detail in \cite{Bao:2013pwa,Hayashi:2013qwa}
from the perspective of topological strings on local Calabi--Yau 3-folds.
In \cite{Mitev:2014jza},
it was found that Nekrasov partition functions also show the enhancement.
These developments led to exact computations of the partition function and superconformal index of a 5d theory starting directly from the IR Lagrangian or from the corresponding brane web configuration/Calabi--Yau geometry.
This technology is the main ingredient of our approach to the E-string theory.
Thanks to the developments in localization,
quantitative study of 6d $\mathcal{N}=(1,0)$ SCFTs
is becoming more manageable.
In \cite{Haghighat:2014pva,Kim:2014dza},
based on the elliptic genus computation, the E-string partition function up to four E-strings was computed. See also \cite{Cai:2014vka}.
In this paper we propose a different description of the E-string theory
based on the familiar type IIB 5-brane setup.
In the case of 5d $\mathcal{N}=1$ gauge theories,
the world-volume theory of the corresponding $(p,q)$ 5-brane web configuration
leads to the desired gauge theory \cite{Aharony:1997ju,Aharony:1997bh,Leung:1997tw,Benini:2009gi}.
The same web diagram specifies the toric Calabi--Yau 3-fold of the compactified M-theory dual to the 5-brane web,
and then we calculate the 5d Nekrasov partition function \cite{Nekrasov:2002qd}
using the topological string method \cite{Aganagic:2002qg,Iqbal:2002we,Iqbal:2003ix,Iqbal:2003zz,Eguchi:2003sj,Taki:2007dh}
known as the topological vertex \cite{Aganagic:2003db,Awata:2005fa,Iqbal:2007ii,Awata:2008ed,Iqbal:2012mt} through
geometric engineering \cite{Katz:1996fh,Katz:1997eq}.
If these techniques are also applicable to the 6d E-string theory,
they will be a very powerful method conceptually and computationally to investigate 6d dynamics.
To this end, we utilize the key fact: the E-string theory of interest, which is a 6d $\mathcal{N}=(1,0)$ SCFT,
appears as the UV fixed point of {\it {5d}} $\mathcal{N}=1$ $SU(2)$ gauge theory with eight flavors.
Indeed, the agreement of the corresponding partition functions is checked in \cite{Kim:2014dza}. This means that once one finds a 5-brane web for $SU(2)$ gauge theory with eight flavors, one can apply all the established techniques to study the E-strings.
In this paper, we discuss such $(p,q)$ 5-brane web description of the E-string theory. The $(p,q)$ 5-brane web that we found is of a spiral shape with a cyclic structure associated with the spiral direction, which is the source of the hidden 6d direction accounting for the Kaluza--Klein (KK) direction.
By implementing the topological vertex to this spiral web,
we can write down a combinatorial expression of the full-order partition function,
which is the generating function of the elliptic genera of the E-strings.
This generating function is precisely the full partition function of topological strings on the local $\frac{1}{2}$K3 surface.
Our analysis may provide a stepping stone allowing one to look for a similar description for other 6d theories as well as a new class of 5-brane web.
The paper is organized as follows. In Sect. \ref{sec:tao 5 and 7branes}, we review type IIB $(p,q)$ 5-brane web construction for $SU(2)$ gauge theory with eight flavors based on 7-brane monodromies, and find a spiral structure of the web diagram, which we call the ``Tao diagram.'' In Sects. \ref{sec:Taopara} and \ref{sec:E-stringviaTopver}, applying the topological vertex method to this Tao diagram, we compute the partition function and compare the obtained result to the elliptic genera computed in \cite{Kim:2014dza}.
We discuss various Tao diagrams as well as other future work in Sect. \ref{sec:discussion}.
\section{Tao 5-brane web via 7-branes}\label{sec:tao 5 and 7branes}
\subsection{7-brane background and 5d $\boldsymbol{SU(2)}$ gauge theories}
A large number of 5d $\mathcal{N}=1$ gauge theories and their UV fixed point SCFTs
are realized as the world-volume theories on 5-brane $(p,q)$ webs
\cite{Aharony:1997ju,Aharony:1997bh,Benini:2009gi,Bao:2011rc,Bao:2013pwa,Hayashi:2013qwa,Taki:2013vka,Bergman:2013aca,Taki:2014pba,Hayashi:2014wfa,Bergman:2014kza,Hayashi:2014hfa}.
This setup is precisely the 5d version of the Hanany--Witten brane configuration.
We can utilize this web realization to know the strongly coupled UV fixed point,
the BPS spectra, the Seiberg--Witten curves and so forth.
In order to obtain the 5-brane $(p,q)$ webs for a given theory, one often finds it convenient to start from the system of 7-branes studied in \cite{DeWolfe:1999hj}.
Technology for treating 7-brane backgrounds was developed in \cite{Gaberdiel:1997ud,Gaberdiel:1998mv,DeWolfe:1998zf,DeWolfe:1998eu,DeWolfe:1998pr}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm, bb=0 0 971 310]{YM7brane.pdf}
\end{center}
\caption{\small (a) A 5-brane web for $SU(2)$ pure Yang--Mills theory
that corresponds to the local first del Pezzo $\mathbb{CP}^1\times \mathbb{CP}^1$.
Introducing 7-branes (b) regularizes the configuration. The Hanany--Witten effect leads to (c) where the 7-brane background ${\bf B}{\bf C}{\bf B}{\bf C}$
is probed by a 5-brane loop.}
\label{fig:YM7brane}
\end{figure}
The relation between 7-brane background and 5-brane configuration is as follows.
Figure \ref{fig:YM7brane} illustrates a typical example of the correspondence between a 5-brane web system
and a 7-brane configuration.
Figure \ref{fig:YM7brane}(a) is the $(p,q)$ 5-brane web for 5d $\mathcal{N}=1$ $SU(2)$ pure Yang--Mills theory.
There are four external legs in this configuration, whose $(p,q)$ charges are $(1,1)$ and $(1,-1)$.
We can regularize these semi-infinite external legs by terminating a $(p,q)$ leg on a $[p,q]$ 7-brane,
and then we obtain Figure \ref{fig:YM7brane}(b).
Without changing the world-volume theory, these 7-branes illustrated by colored circles can move freely along $(p,q)$ line.
We can therefore move them inside the 5-brane quadrilateral,
and then the configuration in Figure \ref{fig:YM7brane}(c) appears.
Notice that the four external legs have disappeared
since the Hanany--Witten brane annihilation transition removes these 5-branes
after the 7-branes cross the 5-brane quadrilateral.
The 5-brane quadrilateral are now highly curved because of the non-trivial metric coming from the 7-brane inside.
\begin{figure}[t]
\begin{center}
\includegraphics[width=16cm, bb=0 0 2204 602]{TaoWeb.pdf}
\end{center}
\caption{\small On the left-hand side, the starting configuration with 12 7-branes in the 5-brane loop.
Moving them outside leads to the middle diagram (up to flop transition). On the right-hand side,
we can move two 7-branes (blue dots) to infinity, which makes a diagram with two arms that rotate in a spiral infinitely many times.}
\label{fig:HowToMakeTao}
\end{figure}
In summary, the $(p,q)$ brane web for $SU(2)$ Yang--Mills theory
is related to a 5-brane loop configuration probing the 7-brane configuration
\begin{align}
\hat{\bf E}_1
={\bf B}{\bf C}{\bf B}{\bf C},
\end{align}
where ${\bf B}$ is a $[1,-1]$ 7-brane and ${\bf C}$ is a $[1,1]$ 7-brane.
The world-volume theory on this 5-brane loop is also $SU(2)$ Yang--Mills theory
because of the above construction.
In this way, we can convert a $(p,q)$ web into a 5-brane loop probing a 7-brane background.
For 5d $\mathcal{N}=1$ $SU(2)$ gauge theory with $N_f$ flavors,
the following 7-brane background appears inside the 5-brane loop:
\begin{align}
\label{affine7branebkgd}
\hat{\bf E}_{N_f+1}
={\bf A}^{N_f}{\bf B}{\bf C}{\bf B}{\bf C}.
\end{align}
Here, ${\bf A}$ is a $[0,1]$ 7-brane.
Up to seven flavors, the 7-brane system can be recast into a 5-brane web configuration
by pulling out all the 7-branes to infinity using Hanany--Witten transition
\cite{Aharony:1997bh,DeWolfe:1999hj,Benini:2009gi,Taki:2014pba}.
In the language of the toric Calabi--Yau
associated with the corresponding $(p,q)$ web diagram \cite{Leung:1997tw},
this 7-brane configuration is dual to the local $\mathbb{P}^2$ blown up at $N_f+1$ points or local $\mathbb{P}^1\times\mathbb{P}^1$
geometry, namely the local del Pezzo surfaces $\mathcal{B}_{N_f+1}$.
Up to the $N_f=4$ flavors, it is easy to see the corresponding $(p,q)$ 5-brane web diagrams (see the appendix of \cite{Kim:2014nqa}). It is, however, not so straightforward to find $(p,q)$ 5-brane web diagrams for $N_f=5,6,7$ flavors. It was studied in
\cite{Benini:2009gi,Bao:2013pwa,Hayashi:2013qwa} that the corresponding 7-brane configurations for $N_f=5,6,7$ flavors are realized in the $(p,q)$ 5-brane web diagrams as 5d uplifts of tuned (Higgsed) $T_N$ diagrams: the 7-brane configuration for the $N_f=5$ flavor case is exactly mapped to the $T_3$ diagram showing $E_6$ symmetry; the configurations for the $N_f=6,7$ flavor cases are tuned $T_4$ and $T_6$ diagrams showing $E_7$ and $E_8$ symmetry, respectively.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=7.5cm, bb=0 0 1484 1075]{Tao2-web.pdf
\caption{\small A trial Tao web associated with \\$SU(2)$ gauge theory with $N_f=8$ flavors.}
\label{fig:Nf8}
\end{subfigure}%
~~~
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=25mm, bb=0 0 507 1075]{taijitu.pdf
\caption{\small Yin-yang that is a symbolic representation of Taoism.}
\label{fig:YY}
\end{subfigure}
\caption{\small Similarity between a Tao web diagram and a Taoism symbol}
\label{fig:test}
\end{figure}
Our focus in this paper is the 5d $\mathcal{N}=1$ $SU(2)$ gauge theory with $N_f=8$ flavors
and its 6d UV fixed point theory, namely the E-string theory.
Our starting point is therefore the affine 7-brane background (\ref{affine7branebkgd}) for $N_f=8$.
This 7-brane configuration corresponds to the $\frac{1}{2}$K3 surface,
and M-theory or topological strings on the Calabi--Yau is dual to the E-string theory.
The $N_f=8$ configuration is a one-point blow-up of the $N_f=7$ configuration.
Using 7-brane monodromies one easily finds that
\begin{align}
\label{nf9config}
\hat{\bf E}_{9}
={\bf A}^{8}{\bf B}{\bf C}{\bf B}{\bf C}={\bf A}^4 {\bf X}_{[2,1]} {\bf N} {\bf A}^4 {\bf X}_{[2,1]} {\bf N},
\end{align}
where $ {\bf N}$ is a $[0,1]$ 7-brane and ${\bf X}_{[p,q]}$ is a $[p,q]$ 7-brane.
See Appendix \ref{app:Convention} for more detail.
The corresponding 5-brane loop on this 7-brane background is illustrated in Figure \ref{fig:HowToMakeTao}(a).
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm, bb=0 0 494 191]{jumping.pdf}
\end{center}
\caption{\small For simplicity, straight 5-branes crossing a 5-brane are used to denote 5-branes that actually jump over the 5-brane.}
\label{fig;jump}
\end{figure}
When we try to pull out the 7-branes, we find that the $[2,1]$ 7-branes change their charges
due to the monodromy cut created by $[1,0]$ 7-branes and $[0,1]$ 7-branes,
which makes the attached 5-branes form spiral arms.
After taking all 7-branes to infinity, we find a spiral
web diagram with multiple arms which rotate infinitely many times, as in Figure
\ref{fig:test}(a).
We will call such a spiral diagram ``Tao diagram'' for short
because its spiral structure is close to a symbolic representation
of the Taoism philosophy Figure \ref{fig:test}(b).
This Tao diagram is an intuitive one that reveals the spiral nature of the $(p,q)$ 5-brane web diagram for $N_f=8$ flavors.
Note that Figure \ref{fig:test}(a) contains multiple coincident 5-branes intersecting at one vertex.
These 5-branes actually jump over the other 5-brane at such a vertex in the sense of \cite{Benini:2009gi}.
For instance, see Figure \ref{fig;jump}.
In topological vertex formalism, such jumping is realized by a degenerate K\"ahler parameter
\cite{Kozcaz:2010af,Dimofte:2010tz,Taki:2010bj,Aganagic:2012ne,Hayashi:2013qwa}.
We will show a definite way to treat it in the next section.
Unfortunately, the above Tao web Figure \ref{fig:test}(a) involves strange ``$(0,2)$ 5-branes'' which cannot be a bound state.
To avoid discussion of the proper treatment of such 5-branes,
we switch to a more healthy web description in the following.
In fact, a web description for a given 7-brane background
is not unique because changing the ordering of 7-brane movement results in a different web \cite{Taki:2014pba}.
In the next subsection, we demonstrate in detail that there exists another Tao web for the E-string theory,
and we will use this new Tao diagram throughout this paper.
\subsection{E-string theory via Tao $\boldsymbol{(p,q)}$ web}
As we have observed in the previous subsection,
$N_f=8$ theory leads to spiral 5-brane configuration
and opens up a new dimension associated with the cyclic nature.
Such a Tao web would give an intuitive and effective description of the 6d E-string theory.
In this subsection, we give a simpler and more useful Tao diagram which describes the E-string theory.
The starting point is the affine 7-brane background again,
\begin{align}
\hat{\bf E}_9
={\bf A}^8\,{\bf B}{\bf C}{\bf B}{\bf C}.
\end{align}
We will consider the 5-brane loop configuration on this configuration to construct a dual description
of the local $\frac{1}{2}$K3 surface
as was done in the cases of the local del Pezzo surface \cite{DeWolfe:1999hj,Benini:2009gi,Bao:2013pwa,Hayashi:2013qwa,Taki:2014pba}.
The resulting Tao web is therefore a generalized toric description of the local $\frac{1}{2}$K3 surface.
As is explained in Appendix \ref{app:Convention}, reordering these 7-branes leads to another expression of the $\hat{\bf E}_9$ configuration:
\begin{align}
\label{Tao7branes}
\hat{\bf E}_9
={\bf A}{\bf N}{\bf C}\,{\bf A}
{\bf N}{\bf C}\,{\bf A}{\bf N}
{\bf C}\,{\bf A}{\bf N}{\bf C}.
\end{align}
\begin{figure}[t]
\begin{center}
\includegraphics[width=14cm, bb=0 0 1447 427]{taoE9.pdf}
\end{center}
\caption{\small A 5-brane web configuration that gives the local ninth del Pezzo surface.}
\label{fig:7braneE9}
\end{figure}
This configuration probed by a 5-brane loop is shown on the left-hand side of Figure \ref{fig:7braneE9}. The solid
black line is the 5-brane loop probing the 7-branes,
and all the branch-cuts illustrated with the gray dashed lines are outgoing.
Pulling out the 7-branes from the 5-brane loop together with the flop transitions
leads
to the right-hand side of Figure \ref{fig:7braneE9}.
The Hanany--Witten effect introduces new 5-brane prongs attached to the 7-branes.
We can continue to move the 7-branes away from the middle 5-brane loop;
however, a 7-brane soon hits a branch-cut coming from another 7-brane.
The shape of the resulting web configuration therefore depends on the ordering of the 7-brane motion.
Let us consider the 7-brane motion illustrated in
Figure \ref{fig:7branemove}.
Continuing to move 7-branes with such an ordering,
we finally find the spirally growing 5-brane web in Figure \ref{fig:taoweb}.
This is precisely the Tao brane web which describes $SU(2)$ gauge theory with $N_f=8$ flavors,
that is, the E-string theory.
\begin{figure}[t]
\begin{center}
\includegraphics[width=13cm, bb=0 0 1352 612]{taoweb2.pdf}
\end{center}
\caption{\small The 7-brane motion that leads to the 5-brane Tao web.}
\label{fig:7branemove}
\end{figure}
This Tao web in Figure \ref{fig:taoweb} has six external legs making spiral cycles,
and each leg has period 6 of its joints.
In addition, there are six straight outgoing legs which consist of bundles of 5-branes of the same $(p,q)$ type.
A bundle increases the number of constituent 5-branes by one each time the bundle crosses a spiral leg.
This structure comes from the Hanany--Witten effect that occurs when one moves all remnants of 7-branes to infinity
after creating the six spiral legs in Figure \ref{fig:7branemove}.
This local structure is a degenerate version of the $T_{N\to\infty}$ 5-brane web.
From this point of view,
we can construct the Tao web by gluing together six $T_{\infty}$ sub-webs.
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm, bb=0 0 2221 1028]{web-grid.pdf}
\end{center}
\caption{\small A 5-brane Tao web and its black and white dots diagram.
This is a ``toric-like'' description of the E-string theory.}
\label{fig:taoweb}
\end{figure}
Treatment of such a degenerate toric diagram was studied in \cite{Kozcaz:2010af,Dimofte:2010tz,Taki:2010bj,Aganagic:2012ne,Hayashi:2013qwa}.
We will explain it in the next section.
\subsection{What will happen when $\boldsymbol{N_f\geq9}$?}
It is claimed in \cite{Seiberg:1996bd} that
5d $Sp(1)$ gauge theory with $N_f$ flavors has a 5d UV fixed point only for $N_f \le 7$
and that it does not for $N_f\geq 9$.
The E-string case $N_f=8$ is critical, where the theory has a 6d UV fixed point.
However, it looks as if we can consider the $N_f\geq 9$ configuration at least formally, starting from the 7-brane background (\ref{affine7branebkgd}).
Does our brane web system ``know'' that $N_f=8$ is critical?
The $N_f\leq 7$ cases lead to finite web diagrams associated with the local del Pezzo surfaces; the $N_f=8$ configuration opens up a new $S^1$ direction, and it gives the Tao diagram associated with the local $\frac{1}{2}$K3 surface.
Then, what will happen for $N_f\geq 9$?
A way to see that the $N_f=8$ case is critical is the following: if one successively applies 7-brane monodromies to move a $[p,q]$ 7-brane through the other 7-branes, it comes back to the original configuration after one rotation, which is a necessary condition for the spiral shape of the 5-brane web. For instance, the 7-brane configuration corresponding to the middle diagram in Figure \ref{fig:HowToMakeTao} is given by
\begin{align}
\label{eq:7braneconf4fig3}
{\bf X}_{[2,1]}{\bf N}{\bf A}^4{\bf X}_{[2,1]}{\bf N}{\bf A}^4,
\end{align}
and two $[2,1]$ 7-branes (green dots in the diagram) are rotating clockwise. This means that these two 7-branes undergo the following monodromies due to the branch cuts from the remaining 7-branes:
\begin{align}
{\bf N}{\bf A}^4{\bf N}{\bf A}^4,
\end{align}
hence the charge of each rotating 7-brane changes as it passes through each cut. For one rotation,
the total monodromy matrix that the two $[2,1]$ 7-branes go through is
\begin{align}
K=K_{[1,0]}^{-4}
K_{[0,1]}^{-1}
K_{[1,0]}^{-4}
K_{[0,1]}^{-1} = \begin{pmatrix}
5~&~-8\cr
2~&~-3
\end{pmatrix},
\end{align}
which yields
\begin{align}
\begin{pmatrix}
5~&~-8\\
2~&~-3
\end{pmatrix}
\begin{pmatrix}
2\\
1
\end{pmatrix}
=\begin{pmatrix}
2\\
1
\end{pmatrix}.
\end{align}
Hence, the charge of the two $[2,1]$ 7-branes remains the same after one rotation.
One can apply the same logic to $N_f=9$ flavors. A configuration for $N_f=9$ that one can find is one which adds one more flavor $[1,0]$ 7-brane to Figure \ref{fig:HowToMakeTao}:
\begin{align}
{\bf X}_{[2,1]}{\bf N}{\bf A}^5{\bf X}_{[3,1]}{\bf N}{\bf A}^4.
\end{align}
If we take the $[3,1]$ 7-brane and let it go clockwise through all the branch cuts, it is easy to see that the charge obtained through the monodromies can never coincide with that of any of the 7-branes above. Indeed, after one rotation the charge of the $[3,1]$ 7-brane becomes $[10,3]$, given by
\begin{align}
K_{[1,0]}^{-5}
K_{[0,1]}^{-1}
K_{[1,0]}^{-4}
K_{[0,1]}^{-1} \cdot [3,1] = [10,3].
\end{align}
Note that the direction in which the resultant charge points is inward rather than outward. The more it goes around, the more it points inward. The 5-brane attached to this inwardly rotating 7-brane eventually crosses into the 5-brane loop in the middle, and thus it shrinks to the origin rather than moving away from it. In other words, the configuration with $N_f=9$ flavors never makes a proper 5-brane web; it all collapses. In a similar fashion, one finds collapse of the brane configuration for higher flavors $N_f>9$. This is consistent with the known result, and it gives a geometric account of why the $N_f=8$ configuration is critical.
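
Both monodromy computations above are easily reproduced numerically. The following short numpy sketch (assuming the standard monodromy matrix $K_{[p,q]} = \left( \begin{smallmatrix} 1+pq & -p^2 \\ q^2 & 1-pq \end{smallmatrix} \right)$ acting on charge column vectors) confirms that the charge $[2,1]$ is preserved for $N_f=8$, while $[3,1]$ is mapped to $[10,3]$ for $N_f=9$:
\begin{verbatim}
import numpy as np

def K(p, q):   # monodromy matrix of a [p,q] 7-brane
    return np.array([[1 + p*q, -p*p], [q*q, 1 - p*q]])

def Kinv(p, q):
    return np.linalg.inv(K(p, q)).round().astype(int)

A, N = Kinv(1, 0), Kinv(0, 1)        # cuts of [1,0] and [0,1] 7-branes
A4 = np.linalg.matrix_power(A, 4)
A5 = np.linalg.matrix_power(A, 5)

M8 = A4 @ N @ A4 @ N                 # N_f = 8 (Tao) case
print(M8)                            # [[ 5 -8], [ 2 -3]]
print(M8 @ np.array([2, 1]))         # [2 1]: the [2,1] charge is preserved

M9 = A5 @ N @ A4 @ N                 # N_f = 9 case
print(M9 @ np.array([3, 1]))         # [10 3]: no fixed [3,1] charge
\end{verbatim}
The existence of a fixed charge for $N_f=8$ is precisely what allows the spiral arms to close into the cyclic Tao structure, while its absence for $N_f=9$ signals the collapse described above.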
\section{Physical parameters of the Tao web}\label{sec:Taopara}
In this section, we go back to the Tao diagram, which corresponds to $N_f=8$.
It is discussed in \cite{Leung:1997tw} that a $(p,q)$ 5-brane web can be reinterpreted as a toric diagram.
There are various reports \cite{Hayashi:2013qwa, Hayashi:2014wfa,Mitev:2014jza}
that this claim also works for the ``toric-like diagram''\footnote{It was termed a ``dot diagram'' in their paper.} introduced in \cite{Benini:2009gi}, which contains 5-branes jumping over other 5-branes.
Therefore, we expect that it is also the case for our Tao diagram even if it extends to infinity.
In other words, we expect that our Tao diagram gives a toric-like description for local $\frac{1}{2}$K3.
Assuming this, we compute
the E-string partition function
using the topological vertex in the next section.
As a preparation, we need to study the relations among the K\"ahler parameters of the corresponding geometry
as well as their relation with the gauge theory parameters.
For each segment $E\in\{\textrm{edges}\}$ in the web diagram, we associate the length parameter $t_E$
and exponentiated one $Q_E=\exp (-t_E)$.
In the language of toric-like geometry,
this parameter is the K\"ahler parameter of the two-cycle corresponding to the segment.
Unlike a usual web diagram,
there are infinitely many segments and their K\"ahler parameters in our diagram.
They are, however, highly constrained
since the six radial legs trigger the turns of the spirals.
We can solve such constraint equations and finally find only ten free parameters.
We introduce a notation for organizing infinitely many K\"ahler parameters.
In the Tao web there are six spiral external legs,
and we label them by integers $\ell=1,2,\ldots,6$ in counterclockwise order.
An infinite number of straight-line segments compose a spiral leg.
To describe this spiral curve, we label K\"ahler parameters $Q_{i(\ell)}$, with $i$ being the turn number of the $\ell$th arm.
It is convenient to introduce the ``distance'' $\Delta_\ell$ between adjacent arms,
\begin{align}
Q_{i+1(\ell)} = \Delta_\ell\,Q_{i(\ell+1)},
\label{eq:DeltaQ}
\end{align}
where $\ell=1,2, \ldots, 6$, and $i=1,2,\ldots$.
We define
\begin{align}
\Delta_{\ell+6}=\Delta_{\ell},\qquad Q_{i(\ell+6)}=Q_{i(\ell)},
\end{align}
\begin{figure}[t]
\begin{center}
\includegraphics[width=16cm, bb=0 0 3907 2023]{webparameters.pdf}
\end{center}
\caption{\small The K\"ahler parameters of the Tao web.}
\label{fig:notation}
\end{figure}
so that \eqref{eq:DeltaQ} is also satisfied for $\ell=0$ and/or $\ell=6$.
This parametrization is depicted in Figure \ref{fig:notation}.
By iteration, one finds that any K\"ahler parameter can be expressed as
\begin{align}
Q_{i(\ell)} = \bigg(\prod^{i-1}_{\ell'=1}\Delta_{\ell-1+\ell'}\bigg) Q_{1(\ell+i-1)}.
\end{align}
By introducing the ``period'' $d$
\begin{align}\label{period}
d=\prod^{6}_{\ell=1}\Delta_\ell,
\end{align}
one also finds that the $\ell$th arms are aligned parallel every six turns:
\begin{align}
Q_{i+6(\ell)}=d\cdot Q_{i(\ell)}.
\end{align}
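
The recursion \eqref{eq:DeltaQ} can be implemented literally. The following sympy sketch (illustrative only) generates the $Q_{i(\ell)}$ from the twelve parameters $\Delta_\ell$ and $Q_{1(\ell)}$ and verifies the six-turn periodicity above:
\begin{verbatim}
import sympy as sp

D  = sp.symbols('Delta1:7')           # Delta_1 .. Delta_6
Q1 = sp.symbols('Q1_1:7')             # Q_{1(1)} .. Q_{1(6)}
d  = sp.prod(D)                       # the "period"

def Q(i, l):                          # Q_{i(l)} with l = 1..6, i >= 1
    if i == 1:
        return Q1[l - 1]
    return D[l - 1]*Q(i - 1, l % 6 + 1)   # Q_{i+1(l)} = Delta_l Q_{i(l+1)}

for l in range(1, 7):                 # periodicity: Q_{7(l)} = d Q_{1(l)}
    assert sp.simplify(Q(7, l) - d*Q(1, l)) == 0
print("Q_{7(l)} = d Q_{1(l)} holds for all six arms")
\end{verbatim}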
We thus have 12 parameters: $\Delta_\ell$ and $Q_{1(\ell)}$. We note, however, that they depend on only nine of the ten physical parameters, namely the eight flavor masses and the gauge coupling.\footnote{In the Tao diagram, we have
the K\"ahler moduli parameters $Q_F$, $Q_B$, and $Q_{mf}$, which parametrize the structure of the middle part of the diagram, as well as $Q_{i(\ell)}$, which are associated with the spiral legs, as depicted in Figure \ref{fig:notation}.
They depend on the ten physical parameters in total: eight mass parameters, one gauge coupling constant, and one Coulomb moduli parameter. However, it turns out that $Q_{i(\ell)}$ do not depend on the Coulomb moduli parameter and thus depend only on nine parameters. On the other hand, the remaining K\"ahler moduli parameters $Q_F$, $Q_B$, and $Q_{mf}$ do depend on the Coulomb moduli parameter as we will explain later.}
Hence, the 12 parameters are not all independent, but are subject to the following three constraints:\\
(i) vertical constraint ($Q_{6(1)}Q_{5(1)} = d\, Q_{2(1)}Q_{3(1)}$):
\begin{align}
\frac{Q_{1(2)}\,Q_{1(3)} }{Q_{1(5)}\,Q_{1(6)}}
\, = \,\frac{\Delta_3\,\Delta_4}{\Delta_6\,\Delta_1} \ ,
\end{align}
(ii) horizontal constraint ($Q_{5(1)}Q_{4(1)} = d\, Q_{1(1)}Q_{2(1)}$):
\begin{align}
\frac{Q_{1(1)}\,Q_{1(2)} }{Q_{1(4)}\,Q_{1(5)}}
\, = \,\frac{\Delta_2\,\Delta_3}{\Delta_5\,\Delta_6} \ ,
\end{align}
(iii) constraint on the origin of the Coulomb branch:
\begin{align}\label{eq:cons3}
\prod^6_{\ell=1} Q_{1(\ell)} = \prod^6_{\ell=1} \Delta_\ell.
\end{align}
The last constraint requires some explanation. If we perform a local deformation to make the Coulomb modulus vanish (see the right-hand side of Figure \ref{fig:notation}), then the origin of the Coulomb branch is supposed to be the starting point of the K\"ahler parameters $\delta$ and $\tilde\delta$. By taking the horizontal projection associated with $\delta$, one finds that
\begin{align}\label{eq:delta}
Q_{1(1)} \, Q_{1(2)}\,\delta = \Delta_2\, Q_{1(2)}\,\delta^{-1}\, \quad \Rightarrow \quad \delta^2 =\Delta_2\, Q_{1(1)}^{-1}.
\end{align}
Likewise, one obtains
\begin{align}\label{eq:tdelta}
\tilde\delta^2 = \Delta_5\, Q_{1(4)}^{-1}.
\end{align}
From the vertical projection, it is easy to see that
\begin{align}\label{eq:deltdel}
\delta\,\tilde\delta = Q_{1(2)}\,Q_{1(3)}\, \Delta_3^{-1}\,\Delta_4^{-1}.
\end{align}
It follows from \eqref{eq:delta},\eqref{eq:tdelta}, and \eqref{eq:deltdel}
that one finds \eqref{eq:cons3}. Therefore, the parameters $\Delta_\ell$ and $Q_{1(\ell)}$, constrained by (i), (ii), and (iii), can be a set of building blocks describing all the K\"ahler parameters for the Tao spiral.
Since only ten parameters are independent in the Tao web,
all the K\"ahler parameters can be written in terms of $Q_{mf}$, $Q_F$, and $Q_B$ in Figure \ref{fig:notation}.
This basis is usually used for constructing a Nekrasov partition function in the context of topological strings.
For considering the flavor symmetry of the topological string partition function,
a more useful parametrization is given by the $SO(16)$ fugacities corresponding to the masses $y_{f}=e^{-R m_f}$ ($f=1,2,\ldots8$),
the instanton factor
$\mathfrak{q}$
and the Coulomb moduli $A=e^{-Ra}$
introduced through the relations:
\begin{align}
Q_{mf}& = A^{-1}\,y_f \quad (f=1,2,3,\ldots,8), \cr
Q_{F} &= A^{2}, \label{eq: 3 relation}\\
Q_{B} &= \frac{A^2 }{\prod^8_{f=1} y_f{}^{\frac12} } \mathfrak{q}. \nonumber
\end{align}
The first and the second relations of \eqref{eq: 3 relation} are straightforward to understand if we carefully follow the sequence of the Hanany--Witten transition from Figure \ref{fig:HowToMakeTao} to Figure \ref{fig:7branemove}
and study which parameter in one diagram corresponds to which parameter in the other diagram.
We will see that $Q_{mf}$ corresponds to the
distance between one of the color D5-branes and one of the flavor D5-branes,
while $Q_F$ corresponds to the distance between the two color D5-branes
in Figure \ref{fig:HowToMakeTao}.
We need further explanation for the third relation.
It is natural that the horizontal distance $Q_B$ is proportional to the instanton factor $\mathfrak{q}$
since the distance between two NS5-branes corresponds to the gauge coupling constant.
On top of that, the prefactor in front of the right-hand side of $Q_B$ appears as follows:
When we compute the topological string partition function, the factor in the form $(1 - q^n Q)$ typically appears,
where $Q$ is a certain product of the K\"ahler parameters.
When we rewrite this factor in the form of $\sinh$, we obtain the prefactor $\sqrt{q^n Q}$.
In order for the topological string amplitude to agree with the Nekrasov partition function,
we need to absorb the collection of such factors into $Q_B$ and regard it as the instanton factor $\mathfrak{q}$.
For $0 \le N_f \le 7$, it is explicitly checked that such a factor\footnote{For instance, see \cite{Bao:2011rc} for $N_f=4$ flavors.}
is given by (the inverse of) $A^2 \prod^{N_f}_{f=1} y_f{}^{-\frac12}$.
We assumed that it is also the case for $N_f=8$.
From Figure \ref{fig:notation}, one finds that $\Delta_\ell$ are expressed in terms of $Q_{mf}, Q_{F}$, and $Q_{B}$,
\begin{align}
\Delta_{1}&= Q_{m1}/Q_{m2}, &\Delta_{2}&= Q_{m2}Q_{m3}Q_F, & \Delta_{3}&= Q_{m4}Q_{m6}Q_B, \cr
\Delta_{4}&= Q_{m5}/Q_{m6}, &\Delta_{5}&= Q_{m6}Q_{m7}Q_F, & \Delta_{6}&= Q_{m8}Q_{m2}Q_B.
\end{align}
This leads to the period \eqref{period} being the instanton factor squared:
\begin{align}
d=\Big(\prod^8_{f=1} Q_{mf}\Big) Q_F^2Q_B^2 = \mathfrak{q}^2.
\end{align}
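
The cancellation of all $y_f$ and $A$ dependence in $d$ can be confirmed symbolically; the following sympy sketch (a cross-check using the parametrization \eqref{eq: 3 relation} and the expressions for $\Delta_\ell$ above) indeed returns $\mathfrak{q}^2$:
\begin{verbatim}
import sympy as sp

A, q = sp.symbols('A frakq', positive=True)  # Coulomb modulus, instanton factor
y = sp.symbols('y1:9', positive=True)        # y_1 .. y_8
Qm = [yf/A for yf in y]                      # Q_{mf} = y_f / A
QF = A**2
QB = A**2*q/sp.sqrt(sp.prod(y))              # Q_B = A^2 q / prod(y_f)^(1/2)

Delta = [Qm[0]/Qm[1], Qm[1]*Qm[2]*QF, Qm[3]*Qm[5]*QB,
         Qm[4]/Qm[5], Qm[5]*Qm[6]*QF, Qm[7]*Qm[1]*QB]
print(sp.simplify(sp.prod(Delta)))           # frakq**2
\end{verbatim}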
In a similar fashion, all the $Q_{1(\ell)}$ are expressed in terms of $y_i, A,$ and $\mathfrak{q}$, and so are all K\"ahler parameters.
The results for the other parameters are summarized in Appendix \ref{app:Kahler}.
Since it is known that the instanton factor of the 5d $Sp(1)$ gauge theory with $N_f=8$ flavors is identified
as the modulus of the compactified torus of the E-string,
we find that the period of our Tao diagram is also given by this modulus.
This is consistent with the intuition that the spiral structure will correspond to the KK mode of the E-string
and, thus, the cyclic structure of the spiral in our Tao diagram is a key to the uplift to 6d.
In the next section, we compute the partition function
and give quantitative support for this claim.
Here, we briefly comment on the subtle difference between the parameters used in
the Nekrasov partition function for the 5d $Sp(1)$ gauge theory with eight flavors and
those used in the E-string partition function, which is clarified in \cite{Kim:2014dza}.
Our parameters $y_f$, $\mathfrak{q}$, and $A$ are those for the 5d Nekrasov partition function.
If we would like to obtain the E-string partition function in an $\hat{E}_8$ manifest form,
we need to use other parameters $y'_8$ and $A'$ defined as
\begin{align}
y'_8 = y_8 \,\mathfrak{q}^{-2}, \qquad A' = A\, \mathfrak{q}\, y_8^{-1} ,
\label{eq:Estring-param}
\end{align}
while the other parameters are identical.
Since our formula for the partition function will be given in a closed form and in a way that does not depend on the detail of the
parametrization of the K\"ahler parameters,
it should be possible to interpret it either as the 5d Nekrasov partition function
or as the E-string partition function depending on which parametrization we use.
However, when we compare our formula with the known result \cite{Hwang:2014uwa,Kim:2014dza},
we use the parameters for the 5d Nekrasov partition function.
\section{E-string partition function via topological vertex}\label{sec:E-stringviaTopver}
In the previous section, we found a new web description of the 6d E-string theory.
In this section,
we compute the E-string partition function by applying
the topological vertex computation to our web.
We will see that our partition function precisely agrees with
the partition function computed by a completely different method \cite{Hwang:2014uwa,Kim:2014dza}.
For simplicity, we concentrate on the
self-dual $\Omega$-background $\varepsilon_1= - \varepsilon_2$ in this paper.
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm, bb=0 0 917 191]{degenerate.pdf}
\end{center}
\caption{\small In topological vertex formalism, jumping 5-branes are actually decoupled.}
\label{fig;deg}
\end{figure}
\subsection{Combinatorial expression of E-string partition function}
In this subsection, we compute the topological string partition function of the Tao web in Figure \ref{fig:taoweb}.
This Tao web contains a number of 5-branes jumping over other 5-branes, as we discussed in the previous section.
On the left-hand side of Figure \ref{fig;deg}, such a configuration is illustrated.
This jumping is realized by degenerating the corresponding K\"ahler parameters as in the middle diagram in the topological vertex formalism,
and these jumping 5-branes are decoupled from the nontrivial trivalent vertex as in the right-hand side of Figure \ref{fig;deg}.
We therefore need to take only this nontrivial vertex into account in the topological vertex computation.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6.5cm, bb=0 0 744 584]{block.pdf}
\end{center}
\caption{\small A spiral leg which is a building block of the Tao web.}
\label{fig:spiralleg}
\end{figure}
In the topological vertex computation of Nekrasov partition functions,
we first decompose a toric web into basic building blocks \cite{Iqbal:2003ix,Iqbal:2003zz,Eguchi:2003sj}.
Following this idea, we consider the basic spiral block Figure \ref{fig:spiralleg} of the Tao diagram.
On the two internal edges located at the starting point of the spiral,
two generic Young diagrams $R_1$ and $R_2^t$ are assigned.
This is because we need to glue six such building blocks
for reconstructing the original Tao diagram in the topological vertex formalism.
This gluing procedure is performed by summation over all such Young diagrams associated with the legs we want to glue together.
All the other external legs without labels are associated with the empty Young diagram $\emptyset$.
The partition function for this sub-diagram is then given by the topological vertex as
\begin{align}
\label{block}
Z^{\, leg}_{R_1,R_2}(Q_0;\,Q_1,Q_2,Q_3,\ldots;q)
=\sum_{Y_0,Y_1,Y_2,\ldots}
C_{R_1R_2^tY_0}(q)
\prod_{k=0}^\infty\,(-Q_k)^{\vert Y_k\vert}\,
C_{\emptyset Y_{k}^tY_{k+1}}(q).
\end{align}
In the original Tao diagram, the exponentiated K\"ahler parameters $Q_i$ are strongly correlated with each other because of the 7-brane construction of the web,
but we temporarily assign generic values to all these K\"ahler parameters just for simplicity.
By substituting the definition of the topological vertex function $C_{R_1R_2R_3}(q)$,
we can write down this sub-diagram explicitly in terms of Schur and skew-Schur functions.
The result is
\begin{align}
\label{block2}
Z^{\, leg}_{R_1,R_2}(Q_0;\,Q_1,Q_2,Q_3,\cdots;q)
=q^{\frac{\kappa_{R_1}}{2}}\sum_{Y,Y_0,Y_1,Y_2,\cdots}&
S_{Y_0}(q^{\rho})S_{R_1^t/Y}(q^{Y_0+\rho})
S_{R_2^t/Y}(q^{Y_0^t+\rho})\nonumber\\
\times \prod_{k=0}^\infty&\,(-Q_k)^{\vert Y_k\vert}\,
S_{Y_{k+1}}(q^{\rho})S_{Y_k^t}(q^{Y_{k+1}^t+\rho})
.
\end{align}
Notice that the topological vertex function has the cyclic symmetry $C_{R_1R_2R_3}=C_{R_2R_3R_1}=C_{R_3R_1R_2}$,
and thus the above expression in terms of Schur functions is not unique.
Finding infinite product expressions of the sub-diagram by performing all the summations is not easy,
and it is an interesting open problem.
Instead of solving it,
we deal with this sub-diagram as a series expansion in the K\"ahler parameters in a later subsection.
We can then evaluate this building block up to any order
as far as a computer runs.
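
In practice, this expansion is organized by the total number of boxes. A minimal Python helper (an illustrative scaffold in which the vertex factors $C_{R_1 R_2 R_3}(q)$ themselves are left abstract) enumerates the tuples of Young diagrams entering \eqref{block} up to a given order:
\begin{verbatim}
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def partitions(n, max_part=None):
    """All Young diagrams with n boxes, as non-increasing tuples."""
    if n == 0:
        return [()]
    if max_part is None:
        max_part = n
    result = []
    for first in range(min(n, max_part), 0, -1):
        result += [(first,) + rest for rest in partitions(n - first, first)]
    return result

def diagram_tuples(m, kmax):
    """All (Y_0, ..., Y_{m-1}) with |Y_0| + ... + |Y_{m-1}| <= kmax."""
    for sizes in product(range(kmax + 1), repeat=m):
        if sum(sizes) <= kmax:
            yield from product(*(partitions(s) for s in sizes))

# e.g. truncating the sums over the first 3 diagrams at 2 boxes in total:
print(sum(1 for _ in diagram_tuples(3, 2)))   # 13 terms survive
\end{verbatim}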
The generating function of the elliptic genera of the E-strings,
which is the topological string partition function for the local $\frac{1}{2}$K3,
therefore takes the following combinatorial form
\begin{align}
\label{MasterFormula}
Z_{E{\textrm{-}}string\,\,Tao}(y,d,A;q)
=M(q)\sum_{R_1,\ldots,R_6}
\prod_{\ell=1}^6
(-I_{\ell})^{\vert R_{\ell} \vert}\,
Z^{\, leg}_{R_{\ell},R_{\ell+1}}(Q_{0(\ell)};\,Q_{1(\ell)},Q_{2(\ell)},Q_{3(\ell)},\cdots;q),
\end{align}
where $R_7:=R_1$ and $Q_{0(\ell)}$s are
\begin{align}
Q_{0(1)}=Q_{m1},\,\,
Q_{0(2)}=Q_{m3},\,\,
Q_{0(3)}=Q_{m4},\,\,
Q_{0(4)}=Q_{m5},\,\,
Q_{0(5)}=Q_{m7},\,\,
Q_{0(6)}=Q_{m8}.
\end{align}
Here we have added by hand the contribution from the constant map, which takes the form of the MacMahon function
\begin{align}
M(q)={\rm PE}\Big[\frac{q}{(1-q)^2}\Big],
\end{align}
where the plethystic exponential PE is defined as
\begin{align}
{\rm PE}\Big[f(\cdot)\Big] = {\rm exp}\Big[\sum_{n=1}^\infty \frac{1}{n} f(\cdot ^n)\Big].
\end{align}
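As a small consistency check one can verify this definition directly; the following sympy sketch (illustrative only) confirms to low orders that ${\rm PE}\big[\frac{q}{(1-q)^2}\big]$ agrees with the familiar product form $\prod_{n\geq 1}(1-q^n)^{-n}$ of the MacMahon function.
\begin{verbatim}
# Illustrative check: PE[q/(1-q)^2] reproduces the MacMahon function
# M(q) = prod_{n>=1} (1-q^n)^{-n} order by order in q.
import sympy as sp

q = sp.symbols('q')
N = 8                                     # truncation order in q

f = q / (1 - q)**2
# PE[f] = exp( sum_{n>=1} f(q^n)/n ); terms with n > N are O(q^{N+1})
pe = sp.exp(sum(f.subs(q, q**n) / n for n in range(1, N + 1)))
pe_series = sp.series(pe, q, 0, N + 1).removeO()

mac = sp.Mul(*[(1 - q**n)**(-n) for n in range(1, N + 1)])
mac_series = sp.series(mac, q, 0, N + 1).removeO()

print(sp.expand(pe_series - mac_series))  # prints 0
\end{verbatim}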
The K\"ahler parameters $I_\ell$ for the six sides of the central hexagon in Figure \ref{fig:taoweb} are:
\begin{align}
I_1=Q_BQ_{m2},\,\,
I_2=Q_{m2}^{-1},\,\,
I_3=Q_FQ_{m2},\,\,
I_4=Q_BQ_{m6},\,\,
I_5=Q_{m6}^{-1},\,\,
I_6=Q_FQ_{m6}.
\end{align}
This is a simple and closed expression for the generating function of the E-string elliptic genera.
By expanding it in terms of the Coulomb branch parameter $A$,
we can obtain the elliptic genus of $n$ E-strings as the coefficient of $A^n$.
Here, we comment on the discrete symmetry that our partition function enjoys.
It is straightforward to see that the expression (\ref{MasterFormula}) is invariant under the following transformation
\begin{eqnarray}
Q_{i (\ell)} \to Q_{i (\ell+1)},\qquad
I_{\ell} \to I_{\ell+1},
\label{eq:Rotation}
\end{eqnarray}
where $Q_{i (7)} = Q_{i (1)}$ and $I_7=I_1$.
This transformation is generated by the ``$\pi / 3$ rotation'' of the Tao diagram in Figure \ref{fig:notation},
which transforms one of the arms to the next one.
By using the explicit parametrization of the K\"ahler moduli parameters summarized in Appendix \ref{app:Kahler},
the transformation in (\ref{eq:Rotation}) is rewritten in terms of the mass parameters $y_i$ and the instanton factor $\mathfrak{q}$ as
\begin{eqnarray}
y_f \to \frac{\mathfrak{q}^{\frac{1}{2}} y_f}{\prod_{i=1}^8 \left( y_i \right) ^{\frac{1}{4}}},
\qquad
A \to \frac{\mathfrak{q}^{\frac{1}{2}} A}{\prod_{i=1}^8 \left( y_i \right) ^{\frac{1}{4}}},
\label{eq:PartofE9}
\end{eqnarray}
combined with a certain $SO(16)$ Weyl transformation%
\footnote{
More concretely, this $SO(16)$ Weyl transformation is given by
\begin{eqnarray*}
y_2 \leftrightarrow y_6{}^{-1},
\qquad
y_1 \to y_3 \to y_4 \to y_5 \to y_7 \to y_8 \to y_1.
\end{eqnarray*}
}
acting on $y_i$ ($i=1,\ldots,8$).
Or, if we use the parameters for the E-string \eqref{eq:Estring-param}, it is rewritten as
\begin{eqnarray}
y'_f \to \frac{y'_f}{\prod_{i=1}^8 \left( y'_i \right) ^{\frac{1}{4}}},
\qquad
A' \to A'.
\end{eqnarray}
This transformation
can be interpreted as part of the expected $E_9$ symmetry.
This is analogous to the ``fiber-base duality map'' studied in \cite{Bao:2011rc, Mitev:2014jza}.
\subsection{Comparing with the elliptic genus approach}
Now that we have the all-order generating function of the elliptic genera (\ref{MasterFormula})
that is the topological string partition function for the local $\frac{1}{2}$K3 in the
self-dual $\Omega$-background,
we can derive various results.
One immediate application is to reproduce the known partial results on E-strings from our partition function. The expansion of the generating function in $A$,
\begin{align}\label{Etaoextra}
Z_{E{\textrm{-}}string\,\,Tao}(y,\mathfrak{q},A)
=Z_{extra}(y,\mathfrak{q})\,M(q)\left(1+
\sum_{n=1}^\infty
A^n\,
Z_{n{\textrm{-}}string}(y,\mathfrak{q})\right),
\end{align}
should lead to the known elliptic genera $Z_{\,n{\textrm{-}}string}(y_i,\mathfrak{q};q)$
of the E-string theory \cite{Hosono:1999qc,Kim:2014dza}.
In this expression we introduce the extra factor $Z_{extra}$ as the $A$-independent coefficient
which comes from the additional contribution that is not contained in the E-string theory. The E-string partition function is therefore defined by removing the extra contribution from the E-string Tao partition function \eqref{Etaoextra}.
The normalized partition function $Z_{E\textrm{-}string}$ corresponding to the E-string theory is defined through
\begin{align}\label{Estringtao}
Z_{E{\textrm{-}}string\,\,Tao}(y,\mathfrak{q},A)
=Z_{extra}(y,\mathfrak{q})\,Z_{E\textrm{-}string}(y,\mathfrak{q},A).
\end{align}
The E-string partition function can be expressed in the plethystic exponential form \begin{align}
Z_{E\textrm{-}string} ={\rm PE}\bigg[\sum_{m=0}^{\infty}\mathcal{F}_m(y,A,q)\mathfrak{q}^m\bigg].
\end{align}
Likewise, the extra factor\footnote{The extra factor
is an analogue of what is called the contribution coming from effective classes in \cite{KonishiMinabe}, the missing state/sector in \cite{Bergman:2013ala}, non full spin content in \cite{Hayashi:2013qwa}, U(1) factor in \cite{Bao:2013pwa}, and stringy contributions in \cite{Hwang:2014uwa}.}
can also be expressed as
\begin{align}
Z_{\,extra}(y,\mathfrak{q})\,=
\textrm{PE}\bigg[
\sum_{k=0}^\infty \mathcal{E}_k(y,q)\,\mathfrak{q}^k
\bigg].
\end{align}
In what follows, we compute the E-string Tao partition function as an instanton expansion. At each order of the expansion we determine the extra factor and the E-string partition function
\begin{align}
{\rm PE}\Big[\big(\mathcal{E}_0+\mathcal{F}_0\big)+\big(\mathcal{E}_1+\mathcal{F}_1\big)\mathfrak{q}+\big(\mathcal{E}_2+\mathcal{F}_2\big)\mathfrak{q}^2+\cdots
\Big].
\end{align}
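Concretely, the single-particle indices inside PE can be extracted from the expanded partition function with the plethystic logarithm, i.e.\ the M\"obius inversion ${\rm PE}^{-1}[Z](q)=\sum_{k\geq 1}\frac{\mu(k)}{k}\log Z(q^k)$; the following sympy sketch (a toy illustration with an arbitrary test series, not the actual E-string data) demonstrates the round trip.
\begin{verbatim}
# Illustrative sketch: the plethystic logarithm inverts PE via
# Moebius inversion, PLog[Z](q) = sum_{k>=1} mu(k)/k * log Z(q^k).
import sympy as sp
from sympy.ntheory import mobius

q = sp.symbols('q')
N = 8                                     # truncation order

def PE(f):
    g = sum(f.subs(q, q**n) / n for n in range(1, N + 1))
    return sp.series(sp.exp(g), q, 0, N + 1).removeO()

def PLog(Z):
    L = sp.log(Z)
    s = sum(sp.Rational(mobius(k), k) * L.subs(q, q**k)
            for k in range(1, N + 1))
    return sp.expand(sp.series(s, q, 0, N + 1).removeO())

f = 2*q + 3*q**2                          # a toy single-particle index
print(PLog(PE(f)))                        # recovers 2*q + 3*q**2
\end{verbatim}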
\subsection{Perturbative and instanton parts}
\begin{figure}[t]
\begin{center}
\includegraphics[width=11cm, bb=0 0 1456 724]{pert.pdf}
\end{center}
\caption{\small Sub-diagrams of the Tao web that give the perturbative part.}
\label{fig:pert}
\end{figure}
Now we expand the partition function in terms of the instanton factor and compare to
the 5d Nekrasov partition function or the E-string partition function
studied in \cite{Hwang:2014uwa,Kim:2014dza}.
Although our Tao diagram spirally extends to infinity and thus \eqref{MasterFormula} includes infinitely many Young diagram sums, it turns out that we can truncate the arms at a finite place if we compute only a finite order of the instanton expansion.\\
\noindent\underline{\bf Perturbative} \\
We start by computing the perturbative part.
In the topological vertex computation of 5d Nekrasov partition functions,
the perturbative contribution to a partition function is given by
the sub-diagrams that correspond to turning off the instanton factor $\mathfrak{q}$ \cite{Iqbal:2003ix,Iqbal:2003zz,Eguchi:2003sj}.
Indeed, we see that by
taking into account the parametrization in Appendix \ref{app:Kahler}, setting $\mathfrak{q}=0$ makes most Young diagram summations in \eqref{MasterFormula} become trivial (only empty Young diagrams contribute).
Figure \ref{fig:pert} shows two sub-pieces, which survive after setting $\mathfrak{q}=0$. %
In practice, it is more convenient to use the technique developed in \cite{Iqbal:2004ne} rather than directly using \eqref{MasterFormula}.
In this computation only the Young diagram sums for the horizontal lines corresponding to D5-branes remain.
We summed over these Young diagrams up to a total of ten boxes for each of the two sub-pieces,
which corresponds to the expansion of the partition function in terms of $y_3$ and $y_7$ up to degree ten.
The topological vertex method then leads to the following expression of the perturbative partition function:
\begin{eqnarray}
Z_{\, pert+extra}
&=& {\rm PE} \bigg[ \frac{q}{(1-q)^2}
\bigg( 1 - (y_1 + y_3 + y_4 + y_5 + y_7 + y_8) A^{-1}
\nonumber \\
&& \qquad
- ( y_1 + y_2 + y_2{}^{-1} + y_3 + y_4 + y_5 + y_6 + y_6{}^{-1} + y_7 + y_8 ) A + 2 A^2
\bigg)
\nonumber \\
&& \qquad
+ \mathcal{E}_0(y)
+ \mathcal{O} (y_3{}^{11}, y_7{}^{11})
\bigg]
\label{pert}
\end{eqnarray}
Since no powers of $y_3$ or $y_7$ higher than one appear up to degree ten, we expect that the higher-order correction $\mathcal{O} (y_3{}^{11}, y_7{}^{11})$ vanishes as well.
Here, $\mathcal{E}_0(y)$ is the factor which does not depend on $A$ and is given by
\begin{eqnarray}
\mathcal{E}_0(y) = \frac{q}{(1-q)^2}\left( y_1 y_2{}^{-1} + y_1 y_3 + y_2 y_3 + y_5 y_6{}^{-1} + y_5 y_7 + y_6 y_7\right).
\end{eqnarray}
We regard this factor as the extra factor, which should be removed by hand.
If we apply a suitable analytic continuation with a certain regularization,
\begin{eqnarray}
{\rm PE} \bigg[ \frac{q}{(1-q)^2} Q \bigg]
\to {\rm PE} \bigg[ \frac{q}{(1-q)^2 } Q^{-1} \bigg],
\end{eqnarray}
the obtained result is consistent with the expected perturbative contribution calculated in \cite{Kim:2014dza}:
\begin{eqnarray}\label{pertanaly}
Z_{\, pert}=
{\rm PE} \big[\mathcal{F}_0(y)\big]
= {\rm PE} \bigg[ \frac{q}{(1-q)^2}
\left(1 - \chi_{\bf 16}(y) A + 2 A^2 \right)
\bigg],
\end{eqnarray}
where we have defined the character of the $\bf16$ representation of $SO(16)$ flavor symmetry
\begin{eqnarray}
\chi_{\bf 16}(y) = \sum_{i=1}^{8} ( y_i + y_i{}^{-1} ).
\end{eqnarray}
Such an analytic continuation usually
appears in the flop transition \cite{Iqbal:2004ne,Konishi:2006ev,Taki:2008hb}
of topological strings on toric Calabi--Yau manifolds.
A rigorous treatment of the flop transition of a Tao web is an interesting open problem,
but in practice we use the above continuation simply as a mathematical trick.
It should be emphasized that when we define the instanton contribution, we should divide
the E-string Tao partition function by the perturbative part (\ref{pert}) for which the analytic continuation is not performed:
\begin{eqnarray}
Z_{\, inst+extra} = \frac{Z_{E{\textrm{-}}string\,\,Tao} }{Z_{\,pert+extra}}.
\end{eqnarray}
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm, bb=0 0 2500 1816]{1-inst.pdf}
\end{center}
\caption{\small Sub-diagram of the Tao web that gives the one-instanton part.}
\label{fig:1-inst}
\end{figure}
\noindent\underline{\bf One-instanton}\\
When we compute the one-instanton contribution,
all K\"ahler parameters involving higher powers of the instanton factor $\mathfrak{q}$ can be truncated. Keeping only the pieces of the web that contribute up to first order in $\mathfrak{q}$,
we obtain the sub-diagram in Figure \ref{fig:1-inst}.
Using \textit{Mathematica},
we find that the one-instanton contribution is a finite polynomial, given by
\begin{align}
Z_{\, inst+extra}
&= 1
+ \mathfrak{q}\,\Bigg[ \frac{q}{(1-q)^2}
\frac{A}{(1-A^2)^2}
\left(
- (1+A^2) \chi_{\bf\overline{128}} (y)
+ 2A \chi_{\bf128} (y)
\right) \cr
&\qquad\qquad + \mathcal{E}_1(y) +\mathcal{O}( y_3{}^\frac{5}{2},y_7{}^{\frac{5}{2}})
\Bigg] +\mathcal{O} \big( \mathfrak{q}^2 \big)
\cr
&= \mathrm{PE} \Big[\big(\mathcal{E}_1 + \mathcal{F}_1 + \mathcal{O}( y_3{}^\frac{5}{2},y_7{}^{\frac{5}{2}})
\big)\mathfrak{q}\Big]
+ \mathcal{O} \bigl( \mathfrak{q}^2 \bigr),
\end{align}
where $\mathcal{E}_1(y)$ is the factor which does not depend on $A$, given by
\begin{eqnarray}
\mathcal{E}_1(y) &=& \frac{q}{(1-q)^2}
\frac{1}{\prod_{i=1}^8 \left( y_i \right)^{\frac{1}{2}}}
(y_4 y_5 +y_1 y_3 y_4 y_5 +y_2 y_3 y_4 y_5 +y_4 y_6 +y_1 y_3 y_4 y_6 +y_2 y_3 y_4 y_6
\nonumber \\
&& \qquad \qquad
+y_1 y_4 y_5 y_6
+y_4 y_5 y_6 y_7 +y_1 y_3 y_4 y_5 y_6 y_7 +y_2 y_3 y_4 y_5 y_6 y_7 +y_1 y_8
\nonumber \\
&& \qquad \qquad
+y_2 y_8 +y_1 y_2 y_3 y_8
+y_1 y_2 y_5 y_8 +y_1 y_5 y_7 y_8 +y_2 y_5 y_7 y_8
\nonumber \\
&& \qquad \qquad
+y_1 y_2 y_3 y_5 y_7 y_8 +y_1 y_6 y_7 y_8
+y_2 y_6 y_7 y_8 + y_1 y_2 y_3 y_6 y_7 y_8 ).
\end{eqnarray}
The resulting one-instanton contribution is given by
\begin{eqnarray}
\mathcal{F}_1
=\frac{q}{(1-q)^2}
\left[
\frac{A}{(1-A^2)^2}
\left(
- (1+A^2) \chi_{\bf\overline{128}} (y)
+ 2A \chi_{\bf128} (y)
\right)
\right],
\end{eqnarray}
or
\begin{eqnarray}\label{F1}
\mathcal{F}_1
= - \frac{q A}{2 (1-q)^2 (1+A)^2}
(\chi_{\bf s} (y) + \chi_{ \bf c} (y))
+ \Big[ \{ y_i, A\} \to \{ - y_i , -A \} \Big].
\end{eqnarray}
In \eqref{F1} we used the following notation: the characters $\chi^{(n)}$ ($n\le 6$) are those of the $n$-th antisymmetric tensor representations, whose Dynkin label is $[0, 0, \ldots, 1, \ldots, 0 ]$ with every entry zero except for the $n$-th entry being one. Equivalently, they are the characters of the fundamental weights of $SO(16)$, supplemented by the two spinor representations identified as $\chi_{\bf c}=\chi^{(7)}$, $\chi_{\bf s}=\chi^{(8)}$.
We note that $\chi^{(2)}$, $\chi^{(4)}$, $\chi^{(6)}$, and $\chi^{(8)}$ are invariant under the transformation $y_i \to - y_i$, while $\chi^{(1)}$, $\chi^{(3)}$, $\chi^{(5)}$, and $\chi^{(7)}$ obtain a minus sign by this transformation.\footnote{In terms of the irreducible representations, the fundamental weights of $SO(16)$ are as follows:
\begin{align*}
\chi^{(1)}&=\chi^{SO(16)}_{\bf 16}, &\chi^{(2)}&=\chi^{SO(16)}_{\bf 120},&\chi^{(3)}&=\chi^{SO(16)}_{\bf 560},&
\chi^{(4)}&=\chi^{SO(16)}_{\bf 1820},\\
\chi^{(5)}&=\chi^{SO(16)}_{\bf 4368},&\chi^{(6)}&=\chi^{SO(16)}_{\bf 8008}, &\chi^{(7)}&=\chi^{SO(16)}_{\bf\overline{128}},&\chi^{(8)}&=\chi^{SO(16)}_{\bf{128}}.
\end{align*}
}
From here on, we assume that the higher-order correction $\mathcal{O}( y_3{}^\frac{5}{2},y_7{}^{\frac{5}{2}})$ vanishes.
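For completeness we note how these characters can be evaluated in practice: $\chi^{(n)}$ for $n\le6$ is the $n$-th elementary symmetric polynomial in the sixteen weights $\{y_i^{\pm1}\}$ of the vector representation, while the spinor characters are $\frac12\big[\prod_{i}(y_i^{1/2}+y_i^{-1/2})\pm\prod_i(y_i^{1/2}-y_i^{-1/2})\big]$. The following Python sketch (our illustration; the formulas above are standard but the code itself is not part of the computation) checks the dimensions $\binom{16}{n}$ and the parity under $y_i\to -y_i$ quoted above.
\begin{verbatim}
# Illustrative check of the SO(16) character conventions:
# chi^(n) = e_n({y_i, 1/y_i}) for n <= 6; dimension chi^(n)(1) = C(16,n),
# and chi^(n)(-y) = (-1)^n chi^(n)(y), matching the parity quoted above.
from itertools import combinations
from fractions import Fraction
from math import comb, prod

ys = [Fraction(k, 7) + 1 for k in range(1, 9)]      # a generic sample point

def chi(n, ys):
    w = ys + [1 / v for v in ys]                    # weights of the 16
    return sum(prod(c) for c in combinations(w, n))

assert [chi(n, [Fraction(1)] * 8) for n in range(1, 7)] \
       == [comb(16, n) for n in range(1, 7)]   # 16,120,560,1820,4368,8008
assert all(chi(n, [-v for v in ys]) == (-1) ** n * chi(n, ys)
           for n in range(1, 7))
print("dimension and parity checks passed")
# At y_i = 1 the spinor characters give (2^8 +/- 0)/2 = 128 each.
\end{verbatim}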
\noindent\underline{\bf Two-instanton}\\
The two-instanton contribution including the extra factor is
\begin{eqnarray}
\mathrm{PE} \big[ (\mathcal{E}_2 + \mathcal{F}_2 + \mathcal{O}( y_3{}^\frac{5}{2},y_7{}^{\frac{5}{2}}) )\mathfrak{q}^2 \big],
\end{eqnarray}
where the extra factor $\mathcal{E}_2 $
is given by
\begin{align}
\mathcal{E}_2 =
\frac{q}{(1-q)^2}
\bigg(&q+4+\frac{1}{q}+ \frac{y_1}{y_2} + \frac{y_2}{y_1} + \frac{1}{y_1 y_3} + \frac{1}{y_2 y_3} +
y_1 y_3 + y_2 y_3 + \frac{y_5}{y_6} + \frac{y_6}{y_5} \nonumber\\
& +\frac{1}{y_5 y_7} + \frac{1}{ y_6 y_7 } + y_5 y_7 + y_6 y_7 \bigg).
\end{align}
The two-instanton contribution takes the following form:
\begin{eqnarray}
\mathcal{F}_2 =
\frac{q}{(1-q)^2} \frac{1}{(1-q^{-1}A^2 )^2 ( 1- q A^2 )^2}
& \Bigl( c_1\, \chi^{(1)} + c_2\, \chi^{(2)} + c_3 \,\chi^{(3)} + c_4 \,\chi^{(4)} + c_5\, \chi^{(5)}
\nonumber\\
& + c_6\, \chi^{(6)}
+ c_7 \,\chi_{\bf s}\,\chi_{\bf c}
+ c_8\, (\chi_{\bf s}^2 + \chi_{\bf c}^2) + c_9 \Bigr).
\nonumber
\end{eqnarray}
The coefficient functions $c_{k}$ are functions of $q$ and $A$ that can be expressed in terms of the $SU(2)$ characters $\chi_{\bf dim}(q)$,
for instance, $\chi_{\bf 2} (q) = q^{-1} + q$, $\chi_{\bf 3} (q) = q^{-2} + 1 + q^2$ and $\chi_{\bf 4} (q) = q^{-3} + q^{-1} + q + q^3$.
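These characters obey the usual $SU(2)$ Clebsch--Gordan relation $\chi_{\bf 2}(q)\,\chi_{\bf n}(q)=\chi_{\bf n+1}(q)+\chi_{\bf n-1}(q)$, which is convenient when manipulating the coefficients below; a short sympy check (illustrative only) reads:
\begin{verbatim}
# Illustrative check of the SU(2) character notation:
# chi_n(q) = q^{1-n} + q^{3-n} + ... + q^{n-1}.
import sympy as sp

q = sp.symbols('q')
chi = lambda n: sum(q**(2*j - n + 1) for j in range(n))

assert sp.expand(chi(2) - (q**-1 + q)) == 0
assert all(sp.expand(chi(2)*chi(n) - chi(n+1) - chi(n-1)) == 0
           for n in range(2, 6))
print("Clebsch-Gordan checks passed")
\end{verbatim}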
The coefficient functions $c_k$ are given as follows:
\begin{flalign}
c_1 &= - A (1 + A^2) \Bigl( (2 {\chi_{\bf 2} (q)}+1) (1+A^4)+ (2 {\chi_{\bf 2} (q)}+{\chi_{\bf 3} (q)}+1) A^2 \Bigr), &&
\end{flalign}
\vspace{-10mm}
\begin{flalign}
c_2 &= \frac{A^2 ( 1 + A^2 + A^4 ) }{(1 + A^2)^2}
\Bigl( (3 {\chi_{\bf 2} (q)}+2)(1+A^4) +(7 {\chi_{\bf 2} (q)}+6) A^2 \Bigr), &&
\end{flalign}
\vspace{-10mm}
\begin{flalign}
c_3 &= - A (1 + A^2) \Bigl( (1+A^4) + ( 2 \chi_{\bf 3} (q) + 1) A^2 \Bigr), &&
\end{flalign}
\vspace{-10mm}
\begin{flalign}
c_4 &= A^2 \Bigl( 2 (1+A^4) +(2 {\chi_{\bf 2} (q)}+3) A^2 \Bigr), &&
\end{flalign}
\vspace{-8mm}
\begin{flalign}
c_5 &= - 3 A^3 (1 + A^2) , &&
\end{flalign}
\vspace{-10mm}
\begin{flalign}
c_6 &= \frac{A^4 }{(1 + A^2)^2} \Bigl( 4 (1+A^4) - ({\chi_{\bf 2} (q)}-6) A^2 \Bigr), &&
\end{flalign}
\vspace{-8mm}
\begin{flalign}
c_7 &= - \frac{A^5 (1 + A^2)}{(1 - A^2)^4} \Bigl( 5 (1+A^4) - (2 \chi_{\bf 3} (q) + 4 ) A^2 \Bigr), &&
\end{flalign}
\vspace{-8mm}
\begin{flalign}
c_8 &= \frac{A^6}{(1 - A^2)^4(1 + A^2)^2} \Bigl( 6 (1+A^8) - ({\chi_{\bf 2} (q)}+3) A^2 (1 + A^4) - {\chi_{\bf 2} (q)} A^4 \Bigr), &&
\end{flalign}
\vspace{-8mm}
\begin{flalign}
c_9 &= A^2 \Bigl( 4 ({\chi_{\bf 2} (q)}+{\chi_{\bf 3} (q)}+1) (1+A^4) + (-2 {\chi_{\bf 2} (q)}+2 {\chi_{\bf 3} (q)}-{\chi_{\bf 4} (q)}+2) A^2 \Bigr). &&
\end{flalign}
\noindent\underline{\bf Three-instanton}\\
The three-instanton contribution including the extra factor is
\begin{eqnarray}
\mathrm{PE} \big[ (\mathcal{E}_3 + \mathcal{F}_3 + \mathcal{O}( y_3{}^\frac{5}{2},y_7{}^{\frac{5}{2}}) )\mathfrak{q}^3 \big],
\end{eqnarray}
where the extra factor $\mathcal{E}_3$ is given by
\begin{eqnarray}
\mathcal{E}_3 = \mathcal{E}_1
- \frac{q}{(1-q)^2}
\left(
\sqrt{ \frac{y_1 y_4 y_5 y_6}{y_2 y_3 y_7 y_8} } + \sqrt{ \frac{y_1 y_2 y_5 y_8}{y_3 y_4 y_6 y_7} }
\right).
\end{eqnarray}
We do not have a clear understanding of why $\mathcal{E}_3$ is so similar to, yet slightly different from, $\mathcal{E}_1$.
The three-instanton contribution takes the following form:
\begin{align}
\mathcal{F}_3=&~ \biggl[\frac{q ( \chi_{\bf s} + \chi_{\bf c} ) }%
{2(1 - q)^2 (1+A)^2(1 + q A)^2 (1 + q ^{-1} A)^2 (1 - q A^2)^2 (1 - q^{-1} A^2)^2 }
\nonumber \\
&\quad
\times \Bigl(
d_{1} \chi^{(1)}
+ d_{2} \chi^{(2)}
+ d_{3} \chi^{(3)}
+ d_{4} \chi^{(4)}
+ d_{5} \chi^{(5)}
+ d_{6} \chi^{(6)}
\nonumber \\
& \qquad \quad
+ d_{7} \chi_{\bf s} \chi_{\bf c}
+ d_{8} (\chi_{\bf s}{}^2 - \chi_{\bf s} \chi_{\bf c} + \chi_{\bf c}{}^2 )
+ d_{9}
\Bigr)\biggr]
\nonumber \\
& +
\Bigl[ \{ A,y_i \} \to \{-A, -y_i \} \text{ of the above} \Bigr],
\end{align}
where the coefficient functions $d_k$ are
given as follows:
\begin{flalign}
d_{1} = &-A \Bigl( (1 +
A^{12}) - (\chi_{\bf 2} (q) - 2) (A + A^{11}) + (2 \chi_{\bf 2} (q) + 4) (A^2 + A^{10})\cr
&\qquad+ (4 \chi_{\bf 3} (q) + 2 \chi_{\bf 2} (q) + 8) (A^3 + A^9)
+ (3 \chi_{\bf 3} (q) + 10 \chi_{\bf 2} (q) + 10) (A^4 + A^8) \cr
&\qquad + (8 \chi_{\bf 3} (q) + 5 \chi_{\bf 2} (q) + 14) (A^5 +
A^7) + (4 \chi_{\bf 3} (q) + 12 \chi_{\bf 2} (q) + 12) A^6 \Bigr),&&
\end{flalign}
\begin{flalign}
d_{2} = &~ A^2 (1 + A + A^2)
\,\Bigl(
2 (1 + A^8) + (
A + A^7) + (2 \chi_{\bf 2} (q) + 5) (A^2 + A^6)\cr
&\qquad + (3 \chi_{\bf 3} (q) + 2 \chi_{\bf 2} (q) + 4) (A^3 +
A^5) + (-\chi_{\bf 3} (q) + 4 \chi_{\bf 2} (q) + 9) A^4\Bigr),&&
\end{flalign}
\begin{flalign}
d_{3} = & - A^3 \Bigl( 3 (1 + A^8) + (\chi_{\bf 2} (q) + 4)(A + A^7) + (2 \chi_{\bf 2} (q) + 12) (A^2 + A^6)\cr
&\qquad + (2 \chi_{\bf 3} (q) + 6 \chi_{\bf 2} (q) + 12) (A^3 + A^5) + (\chi_{\bf 3} (q) + 6 \chi_{\bf 2} (q) + 19) A^4 \Bigr), &&
\end{flalign}
\begin{flalign}
d_{4} = &~ \frac{A^4 (1 + A^2)}{(1 - A + A^2)^2} \cr
&\times
\Bigl(4 (1 + A^8) + (2 \chi_{\bf 2} (q) - 3) (A + A^7) - (2 \chi_{\bf 2} (q) - 14) (A^2 + A^6) \cr
&\quad + (\chi_{\bf 3} (q) + 8 \chi_{\bf 2} (q) - 8) (A^3 +A^5) + (-2 \chi_{\bf 3} (q) - 8 \chi_{\bf 2} (q) + 18) A^4\Bigr),&&
\end{flalign}
\begin{flalign}
d_{5} = & - \frac{A^5}{(1 - A + A^2)^2 }\cr
&\times\Bigl( 5 (1 + A^8) + (3 \chi_{\bf 2} (q) - 4) (A + A^7) - (4 \chi_{\bf 2} (q) - 16) (A^2 +A^6)\cr
&\quad + (10 \chi_{\bf 2} (q) - 10) (A^3 + A^5) - (\chi_{\bf 3} (q) + 10 \chi_{\bf 2} (q) - 21) A^4\Bigr),&&
\end{flalign}
\begin{flalign}
d_{6} = &~ \frac{A^6}{(1 - A + A^2)^2}\Bigl( 6 (1 + A^6) + (4 \chi_{\bf 2} (q) - 5) (A + A^5) \cr
&\qquad+ (-6 \chi_{\bf 2} (q) + 12) (A^2 +
A^4) + (-\chi_{\bf 3} (q) + 8 \chi_{\bf 2} (q) - 7) A^3 \Bigr),&&
\end{flalign}
\begin{flalign}
d_{7} = &-\frac{A^7}{(1 - A)^4 (1 + A)^4}
\Bigl( 7 (1 + A^8) + 5 \chi_{\bf 2} (q) (A + A^7) -
4 \chi_{\bf 2} (q) (A^2 + A^6)\cr
&\qquad - (2 \chi_{\bf 3} (q) + \chi_{\bf 2} (q) + 2) (A^3 + A^5) +
2 A^4\Bigr),&&
\end{flalign}
\begin{flalign}
d_{8} =& ~\frac{A^8}{(1 - A + A^2)^2 (1 - A)^4 (1 + A)^4}
\Bigl(
8 (1 + A^{10}) + (6 \chi_{\bf 2} (q) - 7) (A + A^9)
\cr
& \qquad
+ (-10 \chi_{\bf 2} (q) + 6) (A^2 + A^8)
- (3 \chi_{\bf 3} (q) - 6 \chi_{\bf 2} (q) + 3) (A^3 + A^7)\cr
& \qquad
+ (2 \chi_{\bf 3} (q) - 2 \chi_{\bf 2} (q) + 4) (A^4 + A^6)
- (2 \chi_{\bf 3} (q) + 4) A^5
\Bigr), &&
\end{flalign}
\begin{flalign}
d_{9} = & ~\frac{A}{(1 - A + A^2)^2}\cr
&\times\Bigl( -2 \chi_{\bf 2} (q) (1 + A^{16}) +
4 \chi_{\bf 2} (q) (A + A^{15}) + (4 \chi_{\bf 3} (q) - 4 \chi_{\bf 2} (q) + 3) (A^2 + A^{14}) \cr
&\quad +
10 \chi_{\bf 2} (q) (A^3 + A^{13})
+ (2 \chi_{\bf 4} (q) + 6 \chi_{\bf 3} (q) - 6 \chi_{\bf 2} (q) + 4) (A^4 +
A^{12}) \cr
&\quad - (6 \chi_{\bf 4} (q) - 6 \chi_{\bf 3} (q) - 14 \chi_{\bf 2} (q) - 6) (A^5 + A^{11})\cr
&\quad - (\chi_{\bf 5}(q) - 10 \chi_{\bf 4} (q) - 4 \chi_{\bf 3} (q) - 4 \chi_{\bf 2} (q) - 2) (A^6 + A^{10}) \cr
&\quad+ (2 \chi_{\bf 5}(q) - 12 \chi_{\bf 4} (q) +
16 \chi_{\bf 3} (q) + 8 \chi_{\bf 2} (q) + 14) (A^7 + A^9)\cr
&\quad - (3 \chi_{\bf 5}(q) - 12 \chi_{\bf 4} (q) + 3 \chi_{\bf 3} (q) -
8 \chi_{\bf 2} (q) + 4) A^8\Bigr).&&
\end{flalign}
\noindent\underline{\bf Four-instanton}\\
The four-instanton contribution including the extra factor is
\begin{eqnarray}
\mathrm{PE} \big[ (\mathcal{E}_4 + \mathcal{F}_4 + \mathcal{O}( y_3{}^\frac{5}{2},y_7{}^{\frac{5}{2}}) )\mathfrak{q}^4 \big],
\end{eqnarray}
where the extra factor $\mathcal{E}_4$ turns out to be identical to $\mathcal{E}_2$:
\begin{align}
\mathcal{E}_4 = \mathcal{E}_2.
\end{align}
The full expression of $\mathcal{F}_4$ is too lengthy to write down explicitly.
Here, we give the expansion of $\mathcal{F}_4$ to order $A^2$:
\begin{align}
\mathcal{F}_4 = -
\Bigl(
&
(3 \chi_{\bf 3} (q) + 4\chi_{\bf 2}(q) + 2) \chi^{(1)}
+ 2 \chi_{\bf 2} (q) \chi^{(3)} + \chi^{(5)} + \chi^{(1)} \chi^{(2)}
\Bigr) A
\cr
+ \Bigl(
& (5 \chi_{\bf 4} (q) + 6 \chi_{\bf 3} (q) + 11\chi_{\bf 2} (q) + 8 ) \chi^{(2)}
+ ( 4 \chi_{\bf 3} (q) + 4\chi_{\bf 2} (q) ) \chi^{(4)}
+ (3 \chi_{\bf 2} (q) - 2) \chi^{(6)}
\cr
&
+ (4 \chi_{\bf 3}(q) + 3 \chi_{\bf 2}(q) + 2 ) ( \chi^{(1)} )^2
+ 3 \chi_{\bf 2} (q) \chi^{(1)} \chi^{(3)}
+ 2 \chi^{(1)} \chi^{(5)} + 2 (\chi^{(2)} )^2 + 2 (\chi_{\bf s})^2
\cr
&
+ ( 6 \chi_{\bf 5} (q) + 8 \chi_{\bf 4} (q) + 16 \chi_{\bf 3} (q) + 20 \chi_{\bf 2} (q) + 10 )
\Bigr) A^2 +\mathcal{O}(A^3).
\end{align}
To compare with the partition function computed in \cite{Hwang:2014uwa,Kim:2014dza}, we expand our result in terms of the Coulomb modulus $A$ (which corresponds to $w$ in \cite{Kim:2014dza}) and obtain
\begin{align}
Z_{E\textrm{-}string}={\rm PE}\Big[\sum_{m=0}^{\infty}\mathcal{F}_m(y,A,q)\mathfrak{q}^m\Big]={\rm PE}\Big[\frac{1}{(1-q)(1-q^{-1})}\sum_{n=1}^\infty \tilde{f}_n A^n\Big],
\end{align}
where
\begin{align}
\tilde{f}_1&= \chi^{(1)} + \chi_{\bf c} \,\mathfrak{q}
+\Big( 2\chi_{\bf 2}(q)\chi^{(1)} + \chi^{(3)} + \chi^{(1)} \Big)\mathfrak{q}^2 + \Big(\chi^{(1)}\chi_{\bf s}+ 2\chi_{\bf 2}(q) \chi_{\bf c} \Big) \mathfrak{q}^3
\cr
& \quad
+ \Bigl( (3 \chi_{\bf 3} (q) + 4\chi_{\bf 2}(q) + 2) \chi^{(1)}
+ 2 \chi_{\bf 2} (q) \chi^{(3)} + \chi^{(5)} + \chi^{(1)} \chi^{(2)} \Bigr) \mathfrak{q}^4
+\mathcal{O}(\mathfrak{q}^5),\\
\tilde{f}_2&=- 2- 2 \chi_{\bf s}\, \mathfrak{q}
- \Big( 2 \chi^{(4)}+ (3\chi_{\bf 2}(q) + 2 )\chi^{(2)} +4(\chi_{\bf 3}(q) + \chi_{\bf 2}(q)+1) \Big)\mathfrak{q}^2 \nonumber\\
%
&\quad\, - \Big(2 \chi^{(2)}\chi_{\bf s}+3\chi_{\bf 2}(q) \chi^{(1)}\chi_{\bf c}+4(\chi_{\bf 3}(q)+\chi_{\bf 2}(q)+1)\chi_{\bf s}\Big)\mathfrak{q}^3
\cr
& \quad\,
+ \Bigl(
(5 \chi_{\bf 4} (q) + 6 \chi_{\bf 3} (q) + 11\chi_{\bf 2} (q) + 8 ) \chi^{(2)}
+ ( 4 \chi_{\bf 3} (q) + 4\chi_{\bf 2} (q) ) \chi^{(4)}
+ (3 \chi_{\bf 2} (q) - 2) \chi^{(6)}
\cr
& \qquad \qquad
+ (4 \chi_{\bf 3}(q) + 3 \chi_{\bf 2}(q) + 2 ) ( \chi^{(1)} )^2
+ 3 \chi_{\bf 2} (q) \chi^{(1)} \chi^{(3)}
+ 2 \chi^{(1)} \chi^{(5)} + 2 (\chi^{(2)} )^2 + 2 (\chi_{\bf s})^2
\cr
& \qquad \qquad
+ ( 6 \chi_{\bf 5} (q) + 8 \chi_{\bf 4} (q) + 16 \chi_{\bf 3} (q) + 20 \chi_{\bf 2} (q) + 10 )
\Bigr) \mathfrak{q}^4
+\mathcal{O}(\mathfrak{q}^5).
%
\end{align}
Here, $\tilde{f}_1$ and $\tilde{f}_2$ agree precisely with those used in \cite{Kim:2014dza}.
We further checked the five-instanton contribution by substituting some specific values for $y_i$ and found that $\mathcal{F}_5$ is also consistent.\footnote{As an example, we substituted $y_1=1$, $y_2=2$, $y_3=1$, $y_4=3$, $y_5=1$, $y_6=2$, $y_7=1$, $y_8=3$, $q=5$. We did not consider the simpler massless case because zeros appear in the numerator in the middle of the computation if we substitute $y_1 = y_2 = \cdots = y_8 =1$ from the beginning.}
Hence, the result obtained from the topological vertex amplitude based on the Tao diagram agrees with the result obtained from the instanton calculus as well as with the elliptic genus of E-strings in \cite{Hwang:2014uwa,Kim:2014dza}.\footnote{
Taking into account our setup $\epsilon_1+\epsilon_2=0$ and the notation convention introduced earlier, one easily finds that $t$ and $u$ in \cite{Kim:2014dza} correspond to $1$ and $q$, respectively, in our convention. The higher dimensional irreducible representations used in \cite{Hwang:2014uwa,Kim:2014dza} can be expressed in terms of the fundamental weights. For example, $~\chi^{SO(16)}_{\bf 1920}= \chi^{(1)}\chi_{\bf c} - \chi_{\bf s}, \quad
\chi^{SO(16)}_{\bf \overline{1920}}= \chi^{(1)}\chi_{\bf s} - \chi_{\bf c},\quad
\chi^{SO(16)}_{\bf 13312}=\chi^{(2)}\chi_{\bf s}- \chi^{(1)}\chi_{\bf c}$.}
\section{Discussion}\label{sec:discussion}
\begin{figure}[t]
\begin{center}
\includegraphics[width=13cm, bb=0 0 1089 513]{HREST.pdf}
\end{center}
\caption{\small A web diagram of class $\mathcal{T}$ for a higher-rank generalization of E-string theory.}
\label{fig:HREST}
\end{figure}
In this article, we computed the partition function for 5d $SU(2)$ gauge theory with eight flavors based on a Tao diagram, a $(p,q)$ 5-brane web with spirally rotating arms whose period accounts for the instanton configurations of the theory. As shown
in \cite{Kim:2014dza}, the Nekrasov function coincides with the elliptic genera of the E-string theory.
Applying the topological vertex to the Tao web diagram, we reproduced the same partition function as \cite{Kim:2014dza} up to four instantons. This gives a new way of studying 6d $(1,0)$ theories via Tao diagrams from a type IIB perspective. We note that our computation contains extra factors, independent of the Coulomb modulus, which we mod out by hand to reproduce the correct partition function.
Although we have considered a simple Tao diagram like Figure \ref{fig:taoweb} in the main text,
various Tao-type diagrams are possible, namely spiral webs with a cyclic structure. We propose to call the collection of Tao diagrams
``class $\mathcal{T}$\hspace{-0.5mm}(ao).''
A typical example of theories of class $\mathcal{T}$ would be the higher-rank E-string theory.
An easy way to find such a web is to explore consistent black--white grid (or toric-like) diagrams whose triangulations contain spiral belts. For instance, see Figure \ref{fig:HREST}, which may describe 5d $Sp(N)$ gauge theory with a massless antisymmetric hypermultiplet.
The diagram in Figure \ref{fig:HREST} has $N$ normalizable deformations, namely $N$-dimensional Coulomb branch,
as there are $N$ internal points in the toric-like diagram.
Moreover, the web is a collection of $N$ coinciding E-string Tao webs
that are essentially decoupled\footnote{This situation is the same as for the web of the higher-rank $E_n$ theory in \cite{Benini:2009gi,Kim:2014nqa}, which is the case where the antisymmetric hypermultiplet is massless. It is worth noting that the case with a massive antisymmetric hypermultiplet is not described by Figure \ref{fig:HREST}. }
from each other in this web realization.
This E-string-like diagram with multidimensional Coulomb branch like Figure \ref{fig:HREST}
is a candidate for 5-brane web description of the rank-$N$ E-string theory.
We can also consider another higher-rank generalization of our E-string Tao web.
Based on the fact that the rank-$1$ E-string theory is the UV fixed point of 5d $SU(2)$ gauge theory with eight flavors, we can extend the gauge group to $SU(N)$ to yield a six-dimensional theory that is the UV fixed point of 5d $SU(N)$ gauge theory with a critical number of flavors.
Figure \ref{fig:SUN} is such a web, which suggests the critical number of flavors to be ``$2N+4$.''
\begin{figure}[t]
\begin{center}
\includegraphics[width=14cm, bb=0 0 2478 1113]{SU4tao.pdf}
\end{center}
\caption{\small A web diagram of class $\mathcal{T}$ for ``$SU(N)$ gauge theory with $2N+4$ flavors.''
}
\label{fig:SUN}
\end{figure}
Without the four
5-brane junctions attached to $[1,1]$ 7-branes, the left-hand side of Figure \ref{fig:SUN} is a 5-brane web for $SU(N)$ gauge theory with $2N$ flavors. With these junctions included, it describes 5d $SU(N)$ gauge theory with $2N+4$ flavors; moving the 7-branes (with arrows) across the branch cuts created by the other 7-branes then produces the spiral web shown on the right-hand side of Figure \ref{fig:SUN}.
The spiral shape arises in the same way as
in Figure \ref{fig:7branemove}.
According to the classification of \cite{Intriligator:1997pq}, 5d $SU(N)$ gauge theory admits a UV fixed point only for up to $2N$ flavors,
and the fixed point disappears once the number of hypermultiplets exceeds the bound $N_f\leq 2N$.
Our Tao-like generalization yielding $2N+4$ flavors, on one hand, seems to lie outside this classification. On the other hand,
the existence of a consistent web diagram with a spiral structure
suggests that one may find a 6d fixed point in the UV.\footnote{
It was conjectured in \cite{Bergman:2014kza} that the UV fixed point for 5d $SU(N)$ theory with $N_f$ flavors exists for $N_f\le 2N+3$ at zero Chern--Simons level. Our observation is consistent with this conjecture.}
Finding an F-theoretic or another realization of this 6d SCFT is an interesting direction to pursue further.
The idea of finding a new Tao diagram is also applicable to linear quiver and 5d $T_N$ theories \cite{Gaiotto:2009we,Benini:2009gi}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=16cm, bb=0 0 3050 1288]{T4tao.pdf}
\end{center}
\caption{\small Examples of web diagrams of class $\mathcal{T}$ created from linear quiver and 5d $T_N$ theories.}
\label{fig:TNtao}
\end{figure}
Figure \ref{fig:TNtao} describes the suitably blown-up (a) linear quiver and (b) $T_N$ theory
for obtaining possible 6d SCFTs.
Figure \ref{fig:TNtao}(a) is the web of the superconformal linear quiver theory $SU(N)^M$ blown up at four points.
The gauge theory interpretation is therefore the linear quiver theory $SU(N)^M$ with an additional two pairs of fundamental hypermultiplets of $SU(N)$ at each end of the quiver.
Moving 7-branes outside, we find a consistent web diagram of class $\mathcal{T}$,
and thus there might be a 6d fixed point theory associated with this brane configuration.
Figure \ref{fig:TNtao}(b) is the $T_N$ geometry, which is $\mathbb{C}^3/(\mathbb{Z}_N\times \mathbb{Z}_N)$,
blown up at three points.
This is an example of class $\mathcal{T}$ created from 5d $T_N$ theory.
One can create infinitely many webs of class $\mathcal{T}$ from conventional 5-brane webs
and identify the global symmetry of such webs
by using the collapsing 7-branes, as was studied in \cite{DeWolfe:1999hj, Mitev:2014jza}.
Determining the global symmetry may provide a clue for identifying the 6d theory associated with a given web. It would be interesting to see whether our construction is the brane-web counterpart of the F-theoretic classification of 6d SCFTs studied recently in \cite{Heckman:2013pva}.
There are many other future directions.
For instance, the refinement $\epsilon_1+\epsilon_2\neq 0$ of our Tao partition function would be an important direction.
In this paper, we present a generic 5-brane configuration but the explicit computation is based on the unrefined case.
Generalization to $\epsilon_1+\epsilon_2\neq 0$ may involve further complication. %
Deriving the Seiberg--Witten curve of the E-string theory
by using the Tao web is another fruitful direction.
In \cite{Bao:2013pwa,Kim:2014nqa}, a systematic way of computing the 5d Seiberg--Witten curve starting from the toric-like diagram was developed.
This method might be applicable to Tao webs, leading to the Seiberg--Witten curve obtained in \cite{Eguchi:2002fc, Eguchi:2002nx}.
Considering the role of the instanton operator \cite{Lambert:2014jna,Tachikawa:2015mha}
in the context of the 5-brane web may give a hint for understanding the reason why 6d uplift occurs.
It would also be interesting to find the relation of class $\mathcal{T}$ to the following:
AGT correspondence for isolated SCFTs/irregular singularities
\cite{Alday:2009aq,Wyllard:2009hg,Gaiotto:2009ma,Taki:2009zd,Kanno:2012xt,Gaiotto:2012sf,Bonelli:2011aa,Kanno:2013vi},
5d AGT correspondence \cite{Awata:2009ur,Awata:2011dc,Nieri:2013yra,Taki:2014fva}, and the corresponding $q$-Toda field theory \cite{Mitev:2014isa,Isachenkov:2014eya}.
\section*{Acknowledgements}
We would like to thank Giulio Bonelli, Tohru Eguchi, Amihay Hanany, Hee-Cheol Kim, Seok Kim, Axel Kleinschmidt, Kimyeong Lee, Kazuhiro Sakai, Yuji Tachikawa, and Piljin Yi for useful discussion and comments.
We are grateful to the 2014 Simons Summer Workshop in Mathematics and Physics where the authors initiated the project, and the 2nd workshop on Developments in M-theory at High1. MT thanks KIAS for kind hospitality during his visit. MT is supported by the RIKEN iTHES project.
|
1,116,691,499,158 | arxiv | \section{Introduction}
The Euler-Poisson system is one of the simplest two-fluid models used to describe the
dynamics of a plasma consisting of moving electrons and ions. In this model the heavy ions
are assumed to be immobile and uniformly distributed in space, providing only a background of
positive charge. The light electrons are modeled as a charged compressible fluid
moving against the ionic forces. Neglecting magnetic effects,
the governing dynamics of the electron fluid is given
by the following Euler-Poisson system in $(t,x) \in [0,\infty) \times \mathbb R^d$,
\begin{align} \label{eq_EP_1}
\begin{cases}
\partial_t n + \nabla \cdot (n \v u) =0, \\
m_e n (\partial_t \v u + (\v u\cdot \nabla) \v u ) + \nabla p(n) = e n \nabla \phi, \\
\Delta \phi = 4\pi e (n-n_0).
\end{cases}
\end{align}
Here $n=n(t,x)$ and $\v u=\v u(t,x)$ denote the density and average velocity of the electrons, respectively.
The symbols $e$ and $m_e$ denote the unit charge and
mass of the electron. The pressure term $p(n)$ is assumed to obey the polytropic $\gamma$-law, i.e.
\begin{align} \label{eq_EP_2}
p(n) = A n^\gamma,
\end{align}
where $A$ is the entropy constant and $\gamma \ge 1$ is called the adiabatic index.
The term $en\nabla \phi=(-ne)\cdot(-\nabla \phi)$ quantifies the
electric force acting on the electron fluid
by the positive ion background. Note that the electrons carry negative charge $-ne$.
We assume that at equilibrium the densities of the ions and electrons are both equal to a constant denoted by
$n_0$. To ensure charge neutrality it is natural to impose the condition
\begin{align*}
\int_{\mathbb R^d} (n-n_0) dx =0.
\end{align*}
The boundary condition for the electric potential $\phi$ is a decaying condition at infinity,
i.e.
\begin{align} \label{eq_EP_3}
\lim_{|x| \to \infty} \phi(t,x) = 0.
\end{align}
The first and second equations in \eqref{eq_EP_1} represent mass conservation and momentum balance of
the electron fluid respectively. The third equation in \eqref{eq_EP_1} is the usual Gauss law in electrostatics.
It computes the electric potential self-consistently through the charge distribution $(n_0e -ne)$.
The Euler-Poisson system is one of the simplest
two-fluid models in the sense that the ions are treated as uniformly distributed sources in space and they
appear only as a constant
$n_0$ in the Poisson equation. This is a very physical approximation since $m_{ion}\gg m_{e}$
and the heavy ions move much more slowly than the light electrons.
Throughout the rest of this paper, we shall consider an irrotational flow
\begin{align}
\nabla \times {\mathbf u}=0 \label{e_irrot}
\end{align}
which
is preserved in time. For flows with nonzero curl the magnetic field is no longer negligible and it is more physical
to consider the full Euler-Maxwell system.
We are interested in constructing smooth global solutions around the equilibrium $(n,\v u)\equiv (n_0,0)$. To do this
we first rewrite the system \eqref{eq_EP_1} in terms of certain perturbed
variables. For simplicity we set all the physical constants $e$, $m_e$, $4\pi$ and $A$ to
one. To simplify the presentation, we also set $\gamma=3$, although other cases of $\gamma$ can be easily
treated as well. Define the rescaled functions
\begin{align*}
u(t,x) & = \frac{n(t/c_0, x) -n_0} {n_0}, \\
\v v(t,x) & = \frac 1 {c_0} \v u (t/c_0, x), \\
\psi(t,x) & = 3 \phi (t/c_0, x),
\end{align*}
where the sound speed is $c_0 = \sqrt 3 n_0$. For convenience we set $n_0 = 1/3$ so that the characteristic
wave speed is unity. The Euler-Poisson system \eqref{eq_EP_1} in the new variables takes the form
\begin{align} \label{e927_450}
\begin{cases}
\partial_t u + \nabla \cdot \v v + \nabla \cdot( u \v v) =0, \\
\partial_t \v v + \nabla u + \nabla \left( \frac 12 u^2 + \frac 12 |\v v|^2 \right) = \nabla \psi, \\
\Delta \psi = u.
\end{cases}
\end{align}
Taking one more time derivative and using \eqref{e_irrot} then transforms \eqref{e927_450}
into the following quasi-linear Klein-Gordon system:
\begin{align} \label{e927_450abc}
\begin{cases}
(\square +1) u= \Delta\left( \frac 12 u^2+ \frac 12 |\v v|^2 \right) -\partial_t \nabla \cdot (u\v v), \\
(\square+1) \v v = -\partial_t \nabla \left( \frac 12 u^2+\frac 12 |\v v|^2 \right)
+ (1-\Delta^{-1}) \nabla \nabla \cdot (u \v v).
\end{cases}
\end{align}
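As a quick sanity check (which we include for the reader's convenience), one can verify with sympy that plane waves solve the linear part of \eqref{e927_450abc} precisely on the Klein--Gordon dispersion relation $\omega^2=1+|k|^2$, which is the source of the $(1+t)^{-d/2}$ decay discussed below.
\begin{verbatim}
# Sanity check: (box + 1) e^{i(k.x - w t)} = 0  iff  w^2 = 1 + |k|^2.
import sympy as sp

t, x1, x2, k1, k2, w = sp.symbols('t x1 x2 k1 k2 omega', real=True)
u = sp.exp(sp.I * (k1*x1 + k2*x2 - w*t))

box_u = sp.diff(u, t, 2) - sp.diff(u, x1, 2) - sp.diff(u, x2, 2) + u
print(sp.simplify(box_u / u))   # k1**2 + k2**2 - omega**2 + 1
\end{verbatim}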
For the above system, in the 3D case Guo \cite{Guo98} first constructed a global
smooth irrotational solution by using the dispersive Klein-Gordon effect and adapting Shatah's normal
form method. It has
been conjectured that the same results should hold in the
two-dimensional case. In our recent work \cite{jlz}, we proved the existence of a family of smooth
solutions by constructing the wave operators for the 2D system. The 2D problem with radial data
was studied in \cite{Jang_radial}.
In this work we completely settle the 2D Cauchy problem for general non-radial data.
The approach we take in this paper is based
on a new set-up of normal form transformation developed by Germain, Masmoudi and Shatah \cite{GMS1,GMS2,GMS3}.
Roughly speaking (and over-simplifying quite a bit), the philosophy of the normal form method is that \textit{one should integrate by parts
whenever one can, in either (frequency) space
or time}. The part where one cannot integrate by parts is called the set of space-time resonances, which
can often be controlled by some finer analysis
provided the set is not too large or satisfies some frequency separation properties. The implementation of such ideas
is often challenging and depends heavily on the problem under study. In fact the heart of the whole analysis is to
choose appropriate functional spaces utilizing the fine structure of the equations.
The main obstructions in the 2D Euler-Poisson system are the slow (non-integrable) $\langle t \rangle^{-1}$ dispersion,
quasilinearity, and the nonlocality caused by
the Riesz transform. Nevertheless we overcome all such difficulties in this paper. To put things into
perspective, we review below some related literature as well as some technical developments
on this problem.
The main difficulty in constructing time-global smooth solutions for the Euler-Poisson system
comes from the fact that the Euler-Poisson system is a hyperbolic conservation law with zero dissipation
for which no general theory is available.
The "Euler"-part of the Euler-Poisson system is the well-known compressible Euler equations.
Indeed in \eqref{eq_EP_1}
if the electric field term $\nabla \phi$ is dropped, one recovers the usual Euler equations
for compressible fluids. In \cite{Si85}, Sideris considered the 3D compressible Euler equation
for a classical polytropic ideal gas with adiabatic index $\gamma>1$. For a class of
initial data which coincide with a constant state outside a ball, he proved that the lifespan
of the corresponding $C^1$ solution must be finite. In \cite{Ra89} Rammaha extended this
result to the 2D case. For the Euler-Poisson system, Guo and Tahvildar-Zadeh \cite{GuoTZ99}
established a "Siderian" blowup result for spherically symmetric initial data.
Recently Chae and Tadmor \cite{CT08} proved finite-time blow-up for $C^1$ solutions of
a class of pressureless attractive Euler-Poisson equations in $\mathbb R^n$, $n\ge 1$.
These negative results showed the abundance of shock waves for large solutions.
The "Poisson"-part of the Euler-Poisson system has a stabilizing effect which makes the whole
analysis of \eqref{eq_EP_1} quite different from the pure compressible Euler equations.
This is best understood in analyzing small irrotational perturbations of the equilibrium state
$n\equiv n_0$, $\v u\equiv 0$. For the 3D compressible Euler equation with irrotational initial data
$(n_{\epsilon}(0),\v u_{\epsilon}(0)) = (\epsilon \rho_0 + n_0, \epsilon \v v_0)$, where
$\rho_0 \in \mathcal S(\mathbb R^3)$,
$\v v_0\in \mathcal S(\mathbb R^3)^3$ are fixed functions ($\epsilon$ sufficiently small), Sideris
\cite{Si91} proved that the lifespan of the classical solution satisfies $T_\epsilon > \exp( C/\epsilon)$.
For the upper bound it follows from his previous paper \cite{Si85} that $T_\epsilon < \exp(C/\epsilon^2)$.
Sharper results were obtained by Godin \cite{Godin05}, who showed that for radial initial
data given by a smooth compactly supported $\epsilon$-perturbation of the constant state, the precise asymptotics of
the lifespan $T_\epsilon$ is exponential in the sense that
\begin{align*}
\lim_{\epsilon \to 0+} \epsilon \log T_\epsilon = T^*,
\end{align*}
where $T^*$ is a constant. All these results rely crucially on the observation that after some simple reductions,
the compressible Euler equation in rescaled variables is given by a vectorial nonlinear wave equation with pure
quadratic nonlinearities. The linear part of the wave equation decays at most at the speed $t^{-(d-1)/2}$ which
in 3D is not integrable. Unless the nonlinearity has some additional nice structure such as
the null condition \cite{C86, K86}, one cannot in general expect global existence of small solutions.
On the other hand, the situation for the Euler-Poisson system \eqref{eq_EP_1}
is quite different due to the additional Poisson coupling term. As was already explained before, the
Euler-Poisson system \eqref{eq_EP_1} expressed in rescaled variables is given by
the quasi-linear Klein-Gordon system \eqref{e927_450abc}
for which the linear solutions have an enhanced decay of $(1+t)^{-d/2}$.
This is in sharp contrast with
the pure Euler case for which the decay is only $t^{-(d-1)/2}$. Note that in $d=3$, $(1+t)^{-d/2}=(1+t)^{-3/2}$
which is integrable in $t$. In the seminal paper \cite{Guo98}, by exploiting the crucial decay property of the
Klein-Gordon flow in 3D, Guo modified Shatah's normal form method \cite{Sh85} and
constructed a smooth irrotational global solution to \eqref{eq_EP_1} around the
equilibrium state $(n_0,0)$ for which the perturbations decay at a rate $C_p \cdot (1+t)^{-p}$ for any
$1<p<3/2$ (here $C_p$ denotes a constant depending on the parameter $p$). Note in particular that
the sharp decay $t^{-3/2}$ is marginally missed here due to a technical complication caused by
the nonlocal Riesz operator in the nonlinearity.
The construction of smooth global solutions to \eqref{eq_EP_1} in the two-dimensional case had remained
open since Guo's work.
The first obstacle comes from slow dispersion
since the linear solution to the Klein-Gordon system in
$d=2$ decays only at $(1+t)^{-1}$ which is not integrable, in particular making the
strategy of \cite{Guo98} difficult to apply. The other main technical difficulty comes
from the nonlocal nonlinearity in \eqref{e927_450abc} which involves a Riesz-type singular operator.
For general scalar quasi-linear Klein-Gordon equations
in 3D with quadratic type nonlinearities, global small smooth solutions were first constructed independently by
Klainerman \cite{K85} using the invariant vector field method and Shatah \cite{Sh85}
using a normal form method. Even in 3D there are
essential technical difficulties in employing Klainerman's invariant vector field method due to the Riesz
type nonlocal term in \eqref{e927_450abc}.
The Klainerman invariant vector fields consist of infinitesimal generators which commute well with
the linear operator $\partial_{tt} -\Delta +1$. The most problematic part comes from the Lorentz boost
$\Omega_{0j} = t \partial_{x_j} + x_j \partial_t$. While the first part $t \partial_{x_j}$ commutes
naturally with the Riesz operator $R_{ij}=(-\Delta)^{-1} \partial_{x_i} \partial_{x_j}$,
the second part $x_j \partial_t $ interacts rather badly with $R_{ij}$, producing a commutator
which scales as
\begin{align*}
[x_j \partial_t, R_{ij}] \sim \partial_t |\nabla|^{-1}.
\end{align*}
After repeated commutations of these operators one obtains, in general, terms of the form
$|\nabla|^{-N}$, which puts the low-frequency part of the solution out of control.
It is for this reason that in 3D case Guo \cite{Guo98} adopted Shatah's method of normal
form in $L^p$ ($p>1$) setting for which the Riesz term $R_{ij}$ causes no trouble.
We turn now to the 2D Klein-Gordon equations with pure quadratic nonlinearities. In this case, direct
applications of either Klainerman's invariant vector field method or Shatah's normal form method
are not possible since the linear solutions only decay at a speed of $(1+t)^{-1}$ which is not integrable
and makes the quadratic nonlinearity quite resonant. In \cite{ST93}, Simon and Taflin constructed wave
operators for the 2D semilinear Klein-Gordon system with quadratic nonlinearities.
In \cite{Oz96}, Ozawa, Tsutaya and Tsutsumi considered the Cauchy problem and
constructed smooth global solutions by first transforming the quadratic nonlinearity into a cubic one
using Shatah's normal form method and then applying Klainerman's invariant vector field method to obtain
decay of intermediate norms.
Due to the nonlocal complication with the Lorentz boost
which we explained earlier, this approach
seems difficult to apply to the 2D Euler-Poisson system.
As was already mentioned, the purpose of this work is to settle the Cauchy problem for
\eqref{eq_EP_1} in the two-dimensional case. Before we state our main results,
we need to make some further simplifications.
Since $\v v$ is irrotational, we can write $\v v = \nabla \phi_1$ and obtain from \eqref{e927_450}:
\begin{align} \label{e9927_732a}
\begin{cases}
\partial_t u + \Delta \phi_1 + \nabla \cdot ( u \nabla \phi_1) =0, \\
\partial_t \phi_1 + |\nabla|^{-2} \langle \nabla \rangle^2 u + \frac12 (u^2 + |\nabla \phi_1|^2) =0.
\end{cases}
\end{align}
We can diagonalize the system \eqref{e9927_732a} by introducing the complex scalar function
\begin{align}
h(t)&= \frac{\langle \nabla \rangle} {|\nabla|} u - i |\nabla| \phi_1 \notag \\
& = \frac{\langle \nabla \rangle} {|\nabla|} u + i \frac{\nabla}{|\nabla|} \cdot \v v .
\label{e92_537a}
\end{align}
Note that since $\v v$ is irrotational, we have
\begin{align}
\v v =- \frac{\nabla}{|\nabla|} \text{Im}(h). \label{eq_vh}
\end{align}
By \eqref{e927_450}, we have
\begin{align}
h(t)& = e^{it\langle \nabla \rangle} h_0 +\int_0^t e^{i(t-s)\langle \nabla \rangle} \Bigl(
- \frac{\langle \nabla \rangle \nabla}{|\nabla|} \cdot (u \v v) \notag \\
& \qquad + \frac i2 |\nabla| ( u^2 +|\v v|^2) \Bigr) ds, \label{e92_537b}
\end{align}
where $h_0$ is the initial data given by
\begin{align*}
h_0 = \frac{\langle \nabla \rangle} {|\nabla|} u_0 + i \frac{\nabla}{|\nabla|} \cdot \v v_0.
\end{align*}
Here $u_0$ is the initial density (perturbation) and $\v v_0$ is the initial velocity.
We need the following
\noindent
\textbf{Choice of constants}: we shall choose positive parameters $1\ll N_1 \ll N^\prime \ll N$,
$\epsilon_1 \ll \frac{\delta_2} {N_1} \ll \delta_2 \ll 1$,
$ \delta_1 \ll \frac{\delta_2}{N_1} \ll \delta_2 \ll 1$, and $N\gg \frac 1 {\delta_1}$. Let $q=N_1/{\epsilon_1}$.
\begin{rem}
Here the parameter $N_1$ serves the role of a large but universal constant. The small constant
$\delta_2$ is only needed in intermediate estimates but not in the final statement of our theorem.
\end{rem}
Now introduce the norms
\begin{align}
\|h\|_X : &= \| \langle t \rangle |\nabla|^{\frac 12} \langle \nabla \rangle h (t) \|_{L_t^\infty L_x^\infty}
+ \| \langle t \rangle^{-\delta_1} h(t) \|_{L_t^\infty H_x^N} \notag \\
&\qquad + \| \langle t \rangle^{1-\frac 2q} h(t) \|_{L_t^\infty L_x^q}
+\|h(t) \|_{L_t^\infty H_x^{N^\prime}} + \| \langle x \rangle e^{-it\jnab} h(t) \|_{L_t^\infty L_x^{2+\epsilon_1}};
\end{align}
and
\begin{align}
\|h\|_{X_1} : &= \| \langle t \rangle |\nabla|^{\frac 12} \langle \nabla \rangle h (t) \|_{L_t^\infty L_x^{\infty}}
+ \| \langle t \rangle^{1-\frac 2q} h(t) \|_{L_t^\infty L_x^q}
+\|h(t) \|_{L_t^\infty H_x^{N^\prime}} \notag \\
& \qquad + \| \langle x \rangle e^{-it\jnab} h(t) \|_{L_t^\infty L_x^{2+\epsilon_1}}. \label{e926_558}
\end{align}
Note that $\|h\|_{X_1}$ collects all intermediate norms of $h$ except the highest energy norm $\|h\|_{H^N}$.
This is because for the intermediate norms we have to use the normal form technique and perform a fine decomposition
of the solution, whose pieces can then be estimated separately.
Here we choose the norm $\| |\nabla|^{\frac 12} \langle \nabla \rangle h(t) \|_{L_x^\infty}$ for simplicity.
Actually $\| |\nabla|^{s} \langle \nabla \rangle h(t) \|_{L_x^\infty}$
for some $0<s<1$ would also suffice for our estimates. However we cannot take $s=0$ due to the non-locality caused
by the Riesz transform.
Our result is expressed in the following
\begin{thm}[Smooth global solutions for the Cauchy problem] \label{thm_main}
There exists $\epsilon>0$ sufficiently small such that if the initial data $h_0$ is regular enough (for example
$\| e^{it\langle \nabla \rangle} h_0 \|_{X} \le \epsilon$), then there exists a unique smooth global solution
to the 2D Euler-Poisson system \eqref{e92_537a}--\eqref{e92_537b}. Moreover the solution scatters
in the energy space $H^{N^\prime}$.
\end{thm}
\begin{rem}
We did not make any effort to optimize the assumptions on the initial data or the function space that is
used in the analysis. The whole point is to construct \textit{smooth} global in time classical solutions.
\end{rem}
The rest of this paper is organized as follows. In Section 2 we gather some preliminary linear estimates.
In Section 3 we perform some preliminary transformations and decompose the solution into three parts:
the initial data, the boundary term $g$ and the cubic interaction term $f_{\text{cubic}}$.
Section 4 is devoted to the estimate of the boundary term $g$. In Section 5 we control the
high frequency part of cubic interactions. In Section 6 we control the low frequency part of
cubic interactions which is the most delicate part of our analysis.
In Section 7 we complete the proof of our main theorem.
\section*{Acknowledgements}
D. Li is supported in part by NSF Grant-0908032. Y. Wu was partially supported
by China Postdoctoral Science Foundation. Shortly after our work was submitted to ArXiv, Ionescu
and Pausader \cite{IP11} also posted their paper to ArXiv in which similar results were obtained by using a different
strategy.
\section{Preliminaries}
\subsection{Some notations}
We write $X \lesssim Y$ or $Y \gtrsim X$ to indicate $X \leq CY$ for some constant $C>0$. We use $O(Y)$ to denote any quantity $X$
such that $|X| \lesssim Y$. We use the notation $X \sim Y$ whenever $X \lesssim Y \lesssim X$. The fact that these constants
depend upon the dimension $d$ will be suppressed. If $C$ depends upon some additional parameters, we will indicate this with
subscripts; for example, $X \lesssim_u Y$ denotes the assertion that $X \leq C_u Y$ for some $C_u$ depending on $u$. Sometimes
when the context is clear, we will suppress the dependence on $u$ and write $X \lesssim_u Y$ as $X \lesssim Y$.
We will write $C=C(Y_1, \cdots, Y_n)$ to stress that the constant $C$ depends on quantities $Y_1$, $\cdots$, $Y_n$.
We denote by $X\pm$ any quantity of the form $X\pm \epsilon$ for any $\epsilon>0$.
We use the `Japanese bracket' convention $\langle x \rangle := (1 +|x|^2)^{1/2}$.
We write $L^q_t L^r_{x}$ to denote the Banach space with norm
$$ \| u \|_{L^q_t L^r_x(\R \times \R^d)} :=
\Bigl(\int_\R \Bigl(\int_{\R^d} |u(t,x)|^r\ dx\Bigr)^{q/r}\ dt\Bigr)^{1/q},$$
with the usual modifications when $q$ or $r$ are equal to infinity,
or when the domain $\R \times \R^d$ is replaced by a smaller
region of spacetime such as $I \times \R^d$. When $q=r$ we abbreviate $L^q_t L^q_x$ as $L^q_{t,x}$.
We will let $\phi\in C^\infty(\R^d)$ be a
radial bump function supported in the ball $\{ x \in \R^d: |x| \leq
\frac{25} {24} \}$ and equal to one on the ball $\{ x \in \R^d: |x|
\leq 1 \}$. For any constant $C>0$, we denote $\phi_{\le C}(x):=
\phi \bigl( \tfrac{x}{C}\bigr)$ and $\phi_{> C}:=1-\phi_{\le C}$. We also
denote $\chi_{>C} =\phi_{>C}$ sometimes.
We will often need the Fourier multiplier operators defined by the following:
\begin{align*}
\mathcal F \Bigl({T_{m(\xi,\eta)}(f,g)} \Bigr)(\xi)& = \int m(\xi,\eta) \hat f(\xi-\eta) \hat g(\eta) d\eta, \\
\mathcal F\Bigl( {T_{m(\xi,\eta,\sigma)}(f,g,h)} \Bigr)(\xi) &= \int
m(\xi,\eta,\sigma) \hat f(\xi-\eta) \hat g(\eta-\sigma) \hat h(\sigma) d\eta d\sigma.
\end{align*}
Similarly one can define $T_{m}(f_1,\cdots,f_n)$ for functions $f_1,\cdots,f_n$ and a general symbol
$m=m(\xi,\eta_1,\cdots, \eta_{n-1})$.
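Such bilinear multipliers are easy to experiment with numerically; the following numpy sketch (a 1D periodic toy discretization that we add purely for illustration, with our own normalization) implements $T_{m(\xi,\eta)}(f,g)$ by a direct sum over discrete frequencies and checks two simple symbols.
\begin{verbatim}
# Illustrative 1D periodic discretization of a bilinear multiplier
#   F[T_m(f,g)](xi) = sum_eta m(xi,eta) fhat(xi-eta) ghat(eta).
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies on the torus

def T(m, f, g):
    fh, gh = np.fft.fft(f) / N, np.fft.fft(g) / N   # Fourier coefficients
    out = np.zeros(N, dtype=complex)
    for i in range(N):
        for j in range(N):
            out[i] += m(k[i], k[j]) * fh[(i - j) % N] * gh[j]
    return np.fft.ifft(out * N)

f, g = np.cos(x), np.sin(2 * x)
# m = 1 gives the pointwise product fg
assert np.allclose(T(lambda xi, eta: 1.0, f, g), f * g)
# m = i*xi gives d/dx (fg)
assert np.allclose(T(lambda xi, eta: 1j * xi, f, g),
                   -np.sin(x) * np.sin(2*x) + 2 * np.cos(x) * np.cos(2*x))
\end{verbatim}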
\subsection{Basic harmonic analysis}\label{ss:basic}
For each number $N > 0$, we define the Fourier multipliers
\begin{align*}
\widehat{P_{\leq N} f}(\xi) &:= \phi_{\leq N}(\xi) \hat f(\xi)\\
\widehat{P_{> N} f}(\xi) &:= \phi_{> N}(\xi) \hat f(\xi)\\
\widehat{P_N f}(\xi) &:= (\phi_{\leq N} - \phi_{\leq N/2})(\xi) \hat
f(\xi)
\end{align*}
and similarly $P_{<N}$ and $P_{\geq N}$. We also define
$$ P_{M < \cdot \leq N} := P_{\leq N} - P_{\leq M} = \sum_{M < N' \leq N} P_{N'}$$
whenever $M < N$. We will usually use these multipliers when $M$ and $N$ are \emph{dyadic numbers} (that is, of the form $2^n$
for some integer $n$); in particular, all summations over $N$ or $M$ are understood to be over dyadic numbers. Nevertheless, it
will occasionally be convenient to allow $M$ and $N$ to not be a power of $2$. As $P_N$ is not truly a projection, $P_N^2\neq P_N$,
we will occasionally need to use fattened Littlewood-Paley operators:
\begin{equation}\label{PMtilde}
\tilde P_N := P_{N/2} + P_N +P_{2N}.
\end{equation}
These obey $P_N \tilde P_N = \tilde P_N P_N= P_N$.
Like all Fourier multipliers, the Littlewood-Paley operators commute with the propagator $e^{it\langle \nabla \rangle}$, as well as with
differential operators. We will use basic properties of these operators many times,
including
\begin{lem}[Bernstein estimates]\label{Bernstein}
For $1 \leq p \leq q \leq \infty$,
\begin{align*}
\bigl\| |\nabla|^{\pm s} P_M f\bigr\|_{L^p_x(\R^d)} &\sim M^{\pm s} \| P_M f \|_{L^p_x(\R^d)},\\
\|P_{\leq M} f\|_{L^q_x(\R^d)} &\lesssim M^{\frac{d}{p}-\frac{d}{q}} \|P_{\leq M} f\|_{L^p_x(\R^d)},\\
\|P_M f\|_{L^q_x(\R^d)} &\lesssim M^{\frac{d}{p}-\frac{d}{q}} \| P_M f\|_{L^p_x(\R^d)}.
\end{align*}
\end{lem}
We shall use the following lemma several times which allows us to commute the $L^p$ estimates
with the linear flow $e^{it \langle \nabla \rangle}$. Roughly speaking it says that for $t\gtrsim 1$,
\begin{align*}
\|P_{<t^{C}} e^{it\langle \nabla \rangle} f \|_{p} \lesssim t^{0+} \|f\|_{p}, \quad p=2+ \text{ or } p=2-.
\end{align*}
\begin{lem} \label{lem_926_856}
For any $1\le p \le \infty$, $t\ge 0$ and dyadic $M>0$, we have
\begin{align}
\| e^{it\langle \nabla \rangle } P_{<M} g \|_p \lesssim \langle Mt \rangle^{|1-\frac 2p|} \| g\|_p.
\label{pm_boundg_a}
\end{align}
Also for any $1\le p \le \infty$, $t\ge 0$, we have
\begin{align}
\| e^{it \jnab} g \|_p \lesssim \langle t \rangle^{|1-\frac 2p|} \| \jnab^4 g \|_p.
\label{pm_boundg_b}
\end{align}
\end{lem}
\begin{proof}
We shall only prove \eqref{pm_boundg_a}. The proof of \eqref{pm_boundg_b} is similar and therefore omitted.
The idea is to use interpolation between $p=1$, $p=2$ and $p=\infty$. We consider only the case $p=\infty$. The other
case $p=1$ is similar. To establish the inequality it suffices to bound the
$L_x^1$ norm of the kernel $e^{it\langle \nabla \rangle} P_{<M}$.
Note that $e^{it\langle \nabla \rangle} P_{<M} f = K*f$,
where
\begin{align*}
\hat K(\xi) = e^{it \langle \xi \rangle} \phi(\frac{\xi} M).
\end{align*}
Observe $\| K\|_{L_x^2} \lesssim M$ and for $t>0$,
\begin{align*}
\| |x|^2 K(x) \|_{L_x^2}= \| \partial_{\xi}^2 ( \hat K(\xi) ) \|_{L_{\xi}^2} \lesssim t^2 M + t + \frac 1 M.
\end{align*}
Then
\begin{align*}
\| K\|_{L_x^1} \lesssim \| K\|_{L_x^2}^{\frac 12} \| |x|^2 K \|_{L_x^2}^{\frac 12} \lesssim \langle Mt \rangle.
\end{align*}
The desired inequality then follows from Young's inequality.
\end{proof}
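The at-most-linear growth in the bound \eqref{pm_boundg_a} can also be observed numerically; the following numpy sketch (a rough 2D FFT experiment we add for illustration, with $M\sim 1$ and a Gaussian frequency cutoff in place of $\phi$) computes the $L^1_x$ norm of the kernel $K$ for several times $t$.
\begin{verbatim}
# Illustrative numerics for the kernel K of e^{it<D>} P_{<M}, M ~ 1:
# its L^1 norm should grow roughly linearly in t, consistent with <Mt>.
import numpy as np

N, L = 512, 256.0                          # grid points, box size
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
XI, ETA = np.meshgrid(xi, xi, indexing='ij')
R2 = XI**2 + ETA**2
cutoff = np.exp(-R2)                       # smooth stand-in for phi(xi/M)
dx = L / N

for t in [1.0, 4.0, 16.0, 64.0]:
    Khat = np.exp(1j * t * np.sqrt(1.0 + R2)) * cutoff
    K = np.fft.ifft2(Khat)                 # kernel, up to a constant factor
    print(t, np.sum(np.abs(K)) * dx**2)    # L^1 norm (same constant for all t)
\end{verbatim}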
Finally we need a simple lemma from vector algebra.
\begin{lem} \label{lem_deform}
For any $x\in \mathbb R^2$, $y\in \mathbb R^2$, we have
\begin{align}
\left| \frac x {\langle x \rangle} - \frac {y} {\langle y \rangle} \right|
\gtrsim \frac {|x-y|}{\langle |x|+|y|\rangle^3}. \label{e91aaaa}
\end{align}
Moreover, there exists a matrix $Q=Q(x,y)$ such that
\begin{align}
\frac x {\langle x \rangle} - \frac y {\langle y \rangle} = Q(x,y) (x-y).
\end{align}
In addition,
\begin{align*}
\frac 1 {\langle |x| + |y| \rangle^3} \lesssim \| Q(x,y) \| \lesssim 1.
\end{align*}
\end{lem}
\begin{proof}
The first inequality is essentially Lemma 4.3 in \cite{jlz}; alternatively, one can prove it directly.
The last two properties follow from elementary properties of matrix norms.
\end{proof}
We shall need to exploit some subtle cancellations of the phases. The following lemma will be useful
in our nonlinear estimates.
\begin{lem} \label{lem_phase}
Consider the following phases:
\begin{align}
\phi_1(\xi,\eta,\sigma) &= \langle \xi \rangle + \langle \xi-\eta\rangle -\langle \eta-\sigma\rangle
-\langle \sigma\rangle, \notag \\
\phi_2(\xi,\eta,\sigma) & = \langle \xi \rangle - \langle \xi-\eta \rangle +\langle \eta-\sigma \rangle
-\langle \sigma \rangle, \notag \\
\phi_3(\xi,\eta,\sigma) & = \langle \xi \rangle - \langle \xi -\eta \rangle
-\langle \eta -\sigma \rangle + \langle \sigma \rangle. \notag
\end{align}
There exist smooth matrix functions
$Q_{11}=Q_{11}(\xi,\eta,\sigma)$, $Q_{12}=Q_{12}(\xi,\eta,\sigma)$, $Q_{21}=Q_{21}(\xi,\eta)$,
$Q_{22}=Q_{22}(\eta,\sigma)$, $Q_{31}=Q_{31}(\xi,\eta)$, $Q_{32}=Q_{32}(\eta,\sigma)$ such that
\begin{align*}
\partial_{\xi} \phi_1 &= Q_{11}(\xi,\eta,\sigma) \partial_{\eta} \phi_1 +
Q_{12}(\xi,\eta,\sigma) \partial_{\sigma} \phi_1, \notag \\
\partial_{\xi} \phi_2 & = Q_{21}(\xi,\eta) Q_{22}(\eta,\sigma) \partial_{\sigma} \phi_2, \notag \\
\partial_{\xi} \phi_3 & = Q_{31}(\xi,\eta) Q_{32} (\eta,\sigma) \partial_{\sigma} \phi_3.
\end{align*}
Moreover we have the point-wise (polynomial) bounds
\begin{align}
| \partial_{\xi}^{\alpha} \partial_{\eta}^{\beta} \partial_{\sigma}^{\gamma}
Q_{11}(\xi,\eta,\sigma)| + |\partial_{\xi}^{\alpha} \partial_{\eta}^{\beta} \partial_{\sigma}^{\gamma}
Q_{12}(\xi,\eta,\sigma)|&\lesssim
\langle |\xi|+|\eta|+|\sigma| \rangle^{C(|\alpha|+|\beta|+|\gamma|+1)}, \notag \\
| \partial_{\xi}^{\alpha} \partial_{\eta}^{\beta}
Q_{21}(\xi,\eta)| +| \partial_{\xi}^{\alpha} \partial_{\eta}^{\beta}
Q_{31}(\xi,\eta)| & \lesssim \langle |\xi|+|\eta| \rangle^{C(|\alpha|+|\beta|+1)}, \notag \\
|\partial_{\eta}^{\alpha} \partial_{\sigma}^{\beta} Q_{22}(\eta,\sigma)|+
|\partial_{\eta}^{\alpha} \partial_{\sigma}^{\beta} Q_{32}(\eta,\sigma)|
& \lesssim \langle |\eta|+|\sigma| \rangle^{C(|\alpha|+|\beta|+1)}.
\label{9233000}
\end{align}
\end{lem}
\begin{proof}
We prove it for $\phi_1$. The other two cases are simpler. By Lemma \ref{lem_deform}, we write
\begin{align*}
\partial_{\xi} \phi_1 &= \frac{\xi}{\langle \xi\rangle} + \frac{\xi-\eta}{\langle \xi-\eta\rangle}
=\tilde Q_1 \cdot (2\xi-\eta), \\
\partial_{\eta} \phi_1 & = \frac{\eta-\xi}{\langle \eta-\xi\rangle}
- \frac{\eta-\sigma}{\langle \eta-\sigma \rangle} = \tilde Q_2\cdot (\xi-\sigma), \\
\partial_{\sigma} \phi_1 & = \frac{\eta-\sigma}{\langle \eta-\sigma \rangle} - \frac{\sigma}
{\langle \sigma \rangle} = \tilde Q_3 \cdot (\eta-2\sigma).
\end{align*}
Hence
\begin{align*}
\partial_{\xi} \phi_1 & = \tilde Q_1 \left( 2 {\tilde Q_2}^{-1} \partial_{\eta} \phi_1 - {\tilde Q_3}^{-1}
\partial_{\sigma} \phi_1 \right) \\
& =: Q_{11} \partial_{\eta} \phi_1 + Q_{12} \partial_{\sigma} \phi_1.
\end{align*}
The bound \eqref{9233000} follows from the polynomial bounds on $\tilde Q_i$, $\tilde Q_i^{-1}$ and their derivatives, which in turn follow from Lemma \ref{lem_deform} and the remark after its proof.
\end{proof}
\section{Preliminary transformations}
Since the function $h=h(t,x)$ is complex-valued, we write it as
\begin{align*}
h(t,x)=h_1(t,x)+ i h_2(t,x).
\end{align*}
By \eqref{e92_537a} and \eqref{eq_vh}, we have
\begin{align*}
u & = \frac{|\nabla|}{\jnab} h_1, \\
\v v &= - \frac {\nabla} {|\nabla|} h_2.
\end{align*}
In Fourier space, \eqref{e92_537b} then takes the form
\begin{align*}
&\hat h(t,\xi) \notag \\
= & e^{it\langle \xi \rangle} \widehat{h_0}(\xi)
- \int_0^t \int e^{i(t-s) \langle \xi \rangle}
\langle \xi \rangle \langle \eta \rangle^{-1}
|\eta| \frac{\xi \cdot (\xi-\eta)}{|\xi||\xi-\eta|}
\widehat{h_1}(s,\eta) \widehat{h_2}(s,\xi-\eta) d\eta ds \notag \\
& \qquad +
\frac {i} 2 \int_0^t \int e^{i(t-s) \langle \xi \rangle}
|\xi| \frac{|\eta||\xi-\eta|}{\langle \eta \rangle \langle \xi-\eta \rangle}
\widehat{h_1}(s,\eta) \widehat{h_1}(s,\xi-\eta) d\eta ds \notag \\
& \qquad - \frac i 2
\int_0^t \int e^{i(t-s) \langle \xi \rangle}
|\xi| \frac{\eta \cdot(\xi-\eta)}{|\eta| |\xi-\eta|}
\widehat{h_2}(s,\eta) \widehat{h_2}(s,\xi-\eta) d\eta ds.
\end{align*}
Denote
\begin{align*}
f(t)=e^{-it \langle \nabla \rangle} h(t).
\end{align*}
Then after a tedious calculation,
\begin{align}
\hat f(t,\xi) &= \hat h_0(\xi) +
\int_0^t \int e^{-is(\langle \xi \rangle -\langle \eta \rangle -\langle \xi-\eta \rangle)}
\Bigl( \psa \notag \\
& \qquad + \psb + \psc \Bigr)
\hat f (s,\eta) \hat f(s,\xi-\eta) d\eta ds \notag \\
& \;\;
+\int_0^t \int e^{-is(\langle \xi \rangle -\langle \eta \rangle +\langle \xi-\eta \rangle)}
\Bigl( -\psa \notag \\
& \qquad + \psb -\psc \Bigr)
\hat f (s,\eta) \overline{\hat f(s,\eta-\xi)} d\eta ds \notag \\
& \;\;
+\int_0^t \int e^{-is(\langle \xi \rangle +\langle \eta \rangle -\langle \xi-\eta \rangle)}
\Bigl( \psa \notag \\
& \qquad + \psb -\psc \Bigr)
\overline{\hat f (s,-\eta)} {\hat f(s,\xi-\eta)} d\eta ds \notag \\
& \;\;
+\int_0^t \int e^{-is(\langle \xi \rangle +\langle \eta \rangle +\langle \xi-\eta \rangle)}
\Bigl( -\psa \notag \\
& \qquad + \psb +\psc \Bigr)
\overline{\hat f (s,-\eta)} \;\overline{\hat f(s,\eta-\xi)} d\eta ds. \label{eq_complicated}
\end{align}
Here $\overline{\hat f}$ denotes the complex conjugate of $\hat f$. Note that
\begin{align*}
\overline{\hat f(t,-\xi)} &= e^{it\langle \xi \rangle} \hat{\bar h}(t,\xi), \\
\hat f(t,\xi) &= e^{-it\langle \xi \rangle} \hat h(t,\xi).
\end{align*}
To simplify matters, we shall write \eqref{eq_complicated} as
\begin{align}
\hat f(t,\xi) = \hat h_0(\xi) + \int_0^t \int e^{-is\phi(\xi,\eta)} m_0(\xi,\eta)
\hat f(s,\xi-\eta) \hat f(s,\eta) d\eta ds, \label{e927_546a}
\end{align}
where
\begin{align}
\phi(\xi,\eta) = \langle \xi \rangle \pm \langle \xi-\eta\rangle \pm \langle \eta \rangle, \label{phase1}
\end{align}
and $m_0(\xi,\eta)$ is given by (after some symmetrization between $\eta$ and $\xi-\eta$)
\begin{align*}
m_0(\xi,\eta) & = const\cdot \mveca {\xi} {\eta}
+ const \cdot \langle \xi \rangle \frac{\xi \cdot (\xi-\eta)} {|\xi| |\xi-\eta|} \frac{|\eta|}{\langle \eta \rangle}
\notag \\
&\qquad+ const\cdot |\xi| \cdot \frac{|\eta|}{\langle \eta\rangle}
\cdot \frac{|\xi-\eta|} {\langle \xi-\eta\rangle} + const \cdot \mvecc {\xi} {\eta} \notag \\
& := \sum_{i=1}^4 m_i(\xi,\eta).
\end{align*}
Here and in the rest of this paper we shall slightly abuse notation and
use $\hat f(t,\xi)$ to denote either $\hat f(t,\xi)$ itself or its complex conjugate (i.e. $\overline{\hat f(t,-\xi)}$,
see \eqref{eq_complicated}).
Note that in the expression
of $m_0(\xi,\eta)$ there are four types of symbols. We write them
collectively as
\begin{align}
m_0(\xi,\eta) = \sum_{1\le j,k,l\le 2} a_{jkl}(\xi,\eta) \frac{\xi_j}{|\xi|}
\frac{\eta_k}{|\eta|} \frac{\xi_l-\eta_l}{|\xi-\eta|}, \label{10_613}
\end{align}
where $a_{jkl}(\xi,\eta) \in C^\infty$ and has the bound
\begin{align*}
|\partial_{\eta}^{\alpha} \partial_{\xi}^{\beta}
a_{jkl}(\xi,\eta)| \lesssim_{\alpha,\beta} \langle \xi \rangle, \qquad\forall\, \xi,\eta.
\end{align*}
For example we can write $m_2(\xi,\eta)$ as
\begin{align*}
m_2(\xi,\eta)= \langle \xi\rangle \langle \eta\rangle^{-1} \eta \cdot \frac{\eta}{|\eta|}
\frac{\xi\cdot(\xi-\eta)}{|\xi||\xi-\eta|}.
\end{align*}
Although the frequency variables $(\xi,\eta)$ are vectors, this fact will play no role in our analysis.
The explicit form of $a_{jkl}$ in \eqref{10_613} will also not be important.
Therefore we shall suppress the subscripts and the summation in \eqref{10_613}, pretend everything is scalar-valued,
and write $m_0(\xi,\eta)$ as a general symbol
\begin{align}
m_0(\xi,\eta)= a(\xi,\eta) \frac{\xi}{|\xi|} \frac{\xi-\eta}{|\xi-\eta|} \frac{\eta}{|\eta|},
\label{10_613a}
\end{align}
where $a\in C^\infty$ and has polynomially bounded derivatives in the sense that:
\begin{align}
|\partial_{\xi}^{\alpha} \partial_{\eta}^{\beta} a(\xi,\eta)|
\lesssim_{\alpha,\beta} \langle \xi \rangle. \label{a_bound}
\end{align}
One should think of $m_0(\xi,\eta)$ as any one of the summands in \eqref{10_613}.
Observe $m_0(\xi,\eta)$ is symmetric in the sense that
\begin{align}
m_0(\xi,\eta)=m_0(\xi,\xi-\eta). \label{m0_sym}
\end{align}
The nice feature of the Klein--Gordon phase is that it never vanishes:
\begin{align*}
|\phi(\xi,\eta)| \gtrsim \frac{1}{\langle |\xi| +|\eta| \rangle} \qquad \text{for any $(\xi,\eta)$.}
\end{align*}
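Indeed, for the sign choices containing at least one minus sign this follows from the elementary inequality
\begin{align*}
\langle a \rangle + \langle b \rangle - \langle a+b \rangle
= \frac{1+2\bigl( \langle a \rangle \langle b \rangle - a \cdot b \bigr)}{\langle a \rangle + \langle b \rangle + \langle a+b \rangle}
\ge \frac{1}{\langle a \rangle + \langle b \rangle + \langle a+b \rangle},
\end{align*}
applied with suitable choices of $(a,b)$ (for example $(a,b)=(\xi-\eta,\eta)$ for the choice of signs $(-,-)$), while for the all-plus choice one simply has $\phi \ge 3$.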
We can then integrate
by parts in the time variable $s$ in \eqref{e927_546a}. By \eqref{m0_sym},
\begin{align}
&\int_0^t \int e^{-is \phi(\xi,\eta)} \frac{m_0(\xi,\eta)}{\phi(\xi,\eta)} \partial_s \hat f(s,\xi-\eta) \hat f(s,\eta) d \eta \notag \\
= & \int_0^t \int e^{-is \phi(\xi,\eta)}
\frac{ m_0(\xi,\eta)}{\phi(\xi,\eta)} \partial_s \hat
f(s,\eta) \hat f(s,\xi-\eta) d \eta \label{not_change}
\end{align}
using the change of variable $\eta\to \xi-\eta$. In the above equality we have
again abused notation and written $\phi(\xi,\eta)$ for $\phi(\xi,\xi-\eta)$, since
the latter remains of the same form as \eqref{phase1}.
Integrating by parts in the time variable $s$ in \eqref{e927_546a} and
using \eqref{not_change}, we obtain
\begin{align}
\hat f(t,\xi)& = \widehat{\tilde h_0}(\xi) + \hat g(t,\xi) \notag \\
& \qquad\quad+ \int_0^t \int e^{-is \phi(\xi,\eta,\sigma)}
m_1(\xi,\eta,\sigma) \hat f(s,\xi-\eta) \hat f(s,\eta-\sigma) \hat f(s,\sigma) d\sigma d\eta ds,\notag\\
&=: \widehat{\tilde h_0}(\xi) +\hat g(t,\xi) + \hat f_{\text{cubic}}(t,\xi), \label{e926_552}
\end{align}
where $\tilde h_0$ collects the contribution from the boundary term $s=0$ and data $h_0$:
\begin{align}
\widehat{\tilde h_0}(\xi) = \hat h_0(\xi) + \int \frac{m_0(\xi,\eta)} {i \phi(\xi,\eta)}
\hat h_0(\xi-\eta) \hat h_0(\eta) d\eta; \label{e27_111}
\end{align}
the term $g$ denotes the boundary term arising from $s=t$:
\begin{align}
\hat g(t,\xi) = \int e^{-it \phi(\xi,\eta)} \cdot \frac{m_0(\xi,\eta)} {-i \phi(\xi,\eta)}
\hat f(t,\xi-\eta) \hat f(t,\eta) d\eta; \label{e27_111a}
\end{align}
$m_1(\xi,\eta,\sigma)$ is given by (here we use \eqref{m0_sym})
\begin{align*}
m_1(\xi,\eta,\sigma) = \frac{m_0(\xi,\eta) m_0(\eta,\sigma)} {\phi(\xi,\eta)} ;
\end{align*}
and also
\begin{align*}
\phi(\xi,\eta,\sigma)= \langle \xi \rangle \pm \langle \xi -\eta \rangle
\pm \langle \eta-\sigma \rangle \pm \langle \sigma \rangle.
\end{align*}
\section{Estimates of the boundary term $g$} \label{sec_g100}
In this section we control the boundary term $g$ coming from integration by parts in the time variable $s$
(see \eqref{e27_111a}).
We have the following
\begin{prop} \label{prop_e926_310}
\begin{align*}
\| g \|_{L_t^\infty H_x^{N^\prime}} + \| \langle t \rangle g\|_{L_t^\infty H_x^{\frac{N^\prime}2}}
+ \| \langle x \rangle g \|_{L_t^\infty L_x^{2+\epsilon_1}} \lesssim \|h\|_X^2.
\end{align*}
\end{prop}
By Proposition \ref{prop_e926_310} and Sobolev embedding, it is easy to prove the following
\begin{cor}
\begin{align*}
\| e^{it \langle \nabla \rangle} g \|_{X_1} \lesssim \| h\|_X^2.
\end{align*}
\end{cor}
\begin{rem}
Note that in Proposition \ref{prop_e926_310} the $H^{N^\prime/2}$ norm of $g$ decays at the rate $1/\langle t \rangle$.
This is not surprising, since $g$ is essentially the quadratic interaction of linear waves.
\end{rem}
The proof of Proposition \ref{prop_e926_310} will occupy the rest of this section. We begin with the $H^{N^\prime}$
estimate.
By \eqref{10_613a}, we can write $g$ as
\begin{align*}
\hat g(t,\xi)= \frac {\xi}{|\xi|} \left(
\int e^{-it \phi(\xi,\eta)} \tilde a(\xi,\eta) \widehat{\mathcal Rf}(t,\xi-\eta) \widehat{\mathcal Rf}(t,\eta) d\eta
\right),
\end{align*}
where $\tilde a(\xi,\eta) \in C^\infty$ and has the bound
\begin{align*}
|\partial_{\xi}^{\alpha} \partial_{\eta}^{\beta}\tilde a(\xi,\eta)|
\lesssim_{\alpha,\beta} (\langle |\xi|+|\eta| \rangle)^{1+|\alpha|+|\beta|}.
\end{align*}
Recall the parameters $1\ll N_1 \ll N^\prime \ll N$.
Clearly
\begin{align*}
\| g(t) \|_{H^{N^\prime}} &= \| T_{{
\tilde a(\xi,\eta)} \langle \xi-\eta\rangle^{-N_1}
\langle \eta \rangle^{-N_1}} ( \jnab^{N_1} \mathcal R h, \jnab^{N_1} \mathcal Rh ) \|_{H^{N^\prime}} \\
& \lesssim \| \jnab^{N^\prime+100N_1} h \|_{4}^2 \\
& \lesssim \| h(t)\|_{H^N}^{\frac 23} \| h (t)\|_{8}^{\frac 43} \\
& \lesssim \langle t\rangle^{\frac 23 \delta_1 - 1} \| h\|_X^2 \\
& \lesssim \|h\|_X^2
\end{align*}
which is clearly good for us.
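For the reader's convenience, we record the interpolation used in the second step: by H\"older's inequality in the Lebesgue exponents ($\frac 14 = \frac 13 \cdot \frac 12 + \frac 23 \cdot \frac 18$) combined with interpolation in the regularity index,
\begin{align*}
\| \jnab^{N^\prime + 100 N_1} h \|_4 \lesssim \| \jnab^N h \|_2^{\frac 13} \| h \|_8^{\frac 23},
\end{align*}
which upon squaring gives the third line; here we used $3(N^\prime+100N_1) \le N$, consistent with the hierarchy $1 \ll N_1 \ll N^\prime \ll N$.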
By a similar calculation, we also obtain
\begin{align*}
& \| g(t) \|_{H^{N^\prime/2}} \\
\lesssim & \| h(t)\|_{H^{N^\prime}}^{\frac 23} \| h (t)\|_{8}^{\frac 43} \\
\lesssim & \langle t\rangle^{- 1} \| h\|_X^2
\end{align*}
which is also good.
It remains for us to show the estimate
\begin{align*}
\| \langle x \rangle g(t) \|_{2+\epsilon_1} \lesssim \|h\|_X^2.
\end{align*}
For this we will have to introduce some frequency cut-offs. The main idea is that in the high-frequency regime we use
smoothing (from the $H^N$-energy), while in the low-frequency regime we directly use dispersive bounds. To simplify the
presentation, we divide the proof into two lemmas. Consider first the low-frequency piece.
Let
\begin{align*}
\hat g_0(t,\xi) =\frac{\xi}{|\xi|} \int e^{-it {{\phi}}(\xi,\eta)}
a_0(\xi,\eta) \widehat{\mathcal R f}(t,\xi-\eta) \widehat{\mathcal R f}(t,\eta) d\eta,
\end{align*}
where
\begin{align*}
a_0(\xi,\eta) = \tilde a(\xi,\eta) \chi_{|\xi-\eta| \lesssim \langle t \rangle^{\delta_1}}
\chi_{|\eta| \lesssim \langle t \rangle^{\delta_1}}.
\end{align*}
Note that the symbol
$a_0(\xi,\eta)$ is restricted to the regime of non-high
frequencies $\lesssim \langle t\rangle ^{0+}$.
\begin{lem} \label{lem_926_1}
We have the estimate
\begin{align*}
\| \langle x \rangle g_0 \|_{2+\epsilon_1} \lesssim \|h \|_{X}^2.
\end{align*}
\end{lem}
\begin{proof}
Write $g_0=\mathcal R g_1$, where $\mathcal R$ is the Riesz transform and $g_1$ is given by
\begin{align*}
\hat g_1(t,\xi) = \int e^{-it {{\phi}}(\xi,\eta)} a_0(\xi,\eta)
\widehat{\mathcal R f}(t,\xi-\eta) \widehat{\mathcal R f}(t,\eta) d\eta.
\end{align*}
Then
\begin{align*}
\| \langle x \rangle g_0 \|_{2+\epsilon_1} & \lesssim \| |\nabla|^{-1} g_1 \|_{2+\epsilon_1} + \| \langle x \rangle g_1
\|_{2+\epsilon_1} \\
& \lesssim \| \langle x \rangle g_1 \|_{2+\tilde \epsilon_2} + \| \langle x \rangle g_1 \|_{2+\epsilon_1},
\end{align*}
where $\tilde \epsilon_2$ is slightly smaller than $\epsilon_1$.
Observe that, after a few steps of simple integration by parts,
\begin{align}
\widehat{xg_1}(t,\xi) & \sim t \int (\partial_{\xi} {\phi}+\partial_{\eta} {\phi}) e^{-it {\phi}}
a_0(\xi,\eta)
\widehat{\mathcal Rf}(t,\xi-\eta) \widehat{\mathcal R f}(t,\eta) d\eta \label{e925_100a} \\
& \quad + \int e^{-it{\phi}}( \partial_{\xi} a_0 +\partial_{\eta} a_0)
\widehat{\mathcal Rf}(t,\xi-\eta) \widehat{\mathcal Rf}(t,\eta) d\eta
\label{e925_100b} \\
& \quad + \int e^{-it{\phi}} a_0 \widehat{\mathcal Rf}(t,\xi-\eta) \widehat{|\nabla|^{-1} f}(t,\eta) d\eta
\label{e925_100e} \\
& \quad + \int e^{-it{\phi}} a_0 \widehat{\mathcal Rf}(t,\xi-\eta) \frac {\eta}{|\eta|}\widehat{xf}(t,\eta) d\eta.
\label{e925_100d}
\end{align}
\texttt{Estimate of \eqref{e925_100a}}: In the real space, \eqref{e925_100a} is given by the expression
\begin{align}
t e^{-it \langle \nabla \rangle} T_{(\partial_{\xi} {\phi} +\partial_{\eta} {\phi})a_0(\xi,\eta)}
(\mathcal R h, \mathcal R h). \label{e925_100aa}
\end{align}
Let $p$ be such that $2\le p \le 2+\epsilon_1$ and recall the parameters $1\ll N_1\ll N^\prime$. Then
\begin{align*}
&\| e^{-it \langle \nabla \rangle}
T_{(\partial_{\xi} {\phi}+\partial_{\eta} {\phi}) a_0(\xi,\eta)} (
\mathcal R h, \mathcal R h) \|_{p} \notag \\
\lesssim & \| |\nabla|^{1-\frac 2 p} (
T_{(\partial_{\xi} {\phi}+\partial_{\eta} {\phi}) a_0(\xi,\eta) \langle \xi-\eta\rangle^{-N_1}
\langle \eta \rangle^{-N_1}
} (\mathcal R \langle \nabla \rangle^{N_1} h, \mathcal R \langle \nabla \rangle^{N_1} h) ) \|_{2} \notag \\
\lesssim & \| \langle \nabla \rangle^{2N_1} h \|_4^2 \notag \\
\lesssim & \| \langle \nabla \rangle^{N^\prime} h \|_{2}^{\frac 2 3}
\| h \|_{8}^{\frac 43} \\
\lesssim & \frac 1 {\langle t \rangle } \| h\|_{X}^2.
\end{align*}
Clearly this is enough for us.
The estimates of \eqref{e925_100b} and \eqref{e925_100e} are fairly easy so we skip them. For \eqref{e925_100d}, note
that the frequency variables $(\xi-\eta,\eta)$ are all localized to $|(\xi-\eta, \eta)|\lesssim
\langle t\rangle^{\delta_1}$.
Hence by Bernstein and Lemma \ref{lem_926_856} (here $\|h\|_{\infty-}$ denotes $\|h\|_p$ for some large $p\gtrsim 1/\epsilon_1$; see the notation paragraph preceding the proof of Proposition \ref{prop_926_255}),
\begin{align*}
\| \eqref{e925_100d} \|_p\lesssim
&\| P_{\lesssim \langle t \rangle^{\delta_1}}
e^{-it \langle \nabla \rangle} T_{ a_0(\xi,\eta)} (\mathcal R h, \mathcal R P_{\le \langle t\rangle^{\delta_1}}
e^{it\langle \nabla \rangle} (xf))
\|_{p} \notag \\
\lesssim & \langle t\rangle^{C\epsilon_1+C\delta_1} \| h\|_{\infty-} \| x f\|_{2+\epsilon_1} \notag \\
\lesssim & \| h\|_{X}^2.
\end{align*}
The lemma is now proved.
\end{proof}
Consider next the high frequency piece:
\begin{align*}
\hat g_2(t,\xi) = \frac {\xi}{|\xi|} \int e^{-it {{\phi}}} {a_2(\xi,\eta)}
\widehat{\mathcal R f}(t,\xi-\eta) \widehat{\mathcal R f}(t,\eta)d\eta,
\end{align*}
where
\begin{align*}
a_2(\xi,\eta)=\tilde a(\xi,\eta) \cdot \chi_{|\xi-\eta|>
\langle t\rangle^{\delta_1}}.
\end{align*}
In view of the symmetry between $\eta$ and $\xi-\eta$, it suffices to discuss this case; the remaining
case $|\eta|\gtrsim \langle t \rangle^{\delta_1}$ is similar and therefore omitted.
We then have the following.
\begin{lem} \label{lem_926_300}
\begin{align*}
\| \langle x \rangle g_2 \|_{2+\epsilon_1} \lesssim \|h \|_{X}^2.
\end{align*}
\end{lem}
\begin{proof}
By the discussion in the beginning part of the proof of Lemma \ref{lem_926_1},
we see that we only need
to bound for $2\le p \le 2+\epsilon_1$, the quantity
\begin{align*}
\| \langle x \rangle \tilde g_2 \|_{p},
\end{align*}
where
\begin{align*}
\widehat{\tilde g_2}(t,\xi) = \int e^{-it {\phi}(\xi,\eta)} a_2(\xi,\eta)
\widehat{\mathcal R f}(t,\xi-\eta)\widehat{\mathcal R f}(t,\eta) d\eta.
\end{align*}
Observe
\begin{align}
\partial_{\xi} \widehat{\tilde g_2}(t,\xi) &\sim
\int e^{-it \phi} (-it \partial_{\xi}\phi \cdot a_2 + \partial_{\xi} a_2)
\cdot \widehat{\mathcal Rf}(t,\xi-\eta) \widehat{\mathcal R f}(t,\eta) d\eta \label{ee900a} \\
&\quad + \int e^{-it\phi} a_2 \partial_{\xi} ( \widehat{\mathcal R f}(t,\xi-\eta) )
\widehat{\mathcal Rf}(t,\eta) d\eta.
\label{ee900b}
\end{align}
The estimate of \eqref{ee900a} is rather easy:
\begin{align*}
\| \mathcal F^{-1}(\eqref{ee900a} ) \|_{p} &\lesssim \langle t \rangle \| \jnab^{N_1} h\|_2
\cdot \| \jnab^{N_1} P_{>\langle t \rangle^{\delta_1}} h \|_2 \\
& \lesssim \langle t \rangle^{1-\frac{N}2 \delta_1} \|h\|_X^2 \\
& \lesssim \|h\|_X^2,
\end{align*}
where we used the fact $N\gg 1/\delta_1$.
It remains to estimate \eqref{ee900b}. We decompose it further as
\begin{align}
\eqref{ee900b} & =
\int e^{-it\phi} a_2
\chi_{|\eta|\ge |\xi-\eta|} \partial_{\xi}
( \widehat{\mathcal R f}(t,\xi-\eta) ) \widehat{\mathcal Rf}(t,\eta) d\eta.
\label{ee900ba} \\
& \quad + \int e^{-it\phi} a_2 \chi_{|\eta|<|\xi-\eta|}
\partial_{\xi}( \widehat{\mathcal R f}(t,\xi-\eta) ) \widehat{\mathcal R f}(t,\eta) d\eta.
\label{ee900bb}
\end{align}
Here $\chi_{|\eta|\ge |\xi-\eta|}= 1-\chi_{|\eta|<|\xi-\eta|}$ and
\begin{align*}
\chi_{|\eta|<|\xi-\eta|} = \phi_{<1} \left( \frac{\eta}{|\xi-\eta|} \right).
\end{align*}
In particular $\chi_{|\eta|<|\xi-\eta|}$ is smooth in $\eta$ since the symbol $a_2(\xi,\eta)$ is localized
to $|\xi-\eta|\gtrsim 1$.
By frequency localization and Lemma \ref{lem_926_856},
\begin{align*}
&\| \mathcal F^{-1} ( \eqref{ee900ba} ) \|_p \\
\lesssim &
\| T_{\langle \xi \rangle^{N_1} a_2 \cdot \chi_{|\eta|\ge |\xi-\eta|}
\langle \eta \rangle^{-10N_1} \langle \xi-\eta \rangle^{N_1}}
\Bigl( |\nabla|^{-1} P_{\gtrsim 1} h, \langle
\nabla \rangle^{10N_1} P_{\gtrsim \langle t \rangle^{\delta_1}} \mathcal R h \Bigr) \|_2 \\
& \quad+
\| T_{\langle \xi \rangle^{N_1}a_2\cdot \chi_{|\eta|\ge |\xi-\eta|}
\langle \eta \rangle^{-10N_1} \langle \xi-\eta \rangle^{N_1}}
\Bigl(\jnab^{-N_1} P_{\ge 1} e^{it\jnab} \mathcal R(xf), \langle
\nabla \rangle^{10N_1} P_{\gtrsim \langle t \rangle^{\delta_1}} \mathcal R h
\Bigr) \|_2 \\
\lesssim & \|h\|_X^2 + \langle t \rangle^{-100} \| P_{\ge 1} \jnab^{-N_1} e^{it\jnab} (xf) \|_{2+\epsilon_1} \|h\|_X \\
\lesssim & \|h\|_X^2.
\end{align*}
For \eqref{ee900bb} we note that
\begin{align*}
\partial_{\xi}\left( \widehat{\mathcal Rf}(t,\xi-\eta) \right)
= -\partial_{\eta}\left( \widehat{\mathcal Rf}(t,\xi-\eta) \right).
\end{align*}
Integrating by parts in $\eta$ gives us
\begin{align}
\eqref{ee900bb} &= \int \partial_{\eta}\Bigl(
e^{-it\phi} a_2 \chi_{|\eta|<|\xi-\eta|} \Bigr)
\widehat{\mathcal Rf}(t,\xi-\eta) \widehat{\mathcal Rf}(t,\eta) d\eta \label{900a} \\
& \qquad + \int e^{-it\phi} a_2 \chi_{|\eta|<|\xi-\eta|}
\widehat{\mathcal Rf}(t,\xi-\eta) \widehat{|\nabla|^{-1} f}(t,\eta) d\eta \label{900bb}\\
& \qquad + \int e^{-it\phi} a_2 \chi_{|\eta|<|\xi-\eta|}
\widehat{\mathcal Rf}(t,\xi-\eta)\frac{\eta}{|\eta|} \widehat{xf}(t,\eta) d\eta. \label{900b}
\end{align}
The term \eqref{900a} can be estimated in the same way as \eqref{ee900a} and we skip it.
The estimate of the term \eqref{900bb} is also easy using the boundedness of $\|xf\|_{2+\epsilon_1}$ and
the localization $|\eta|\lesssim |\xi-\eta|$. The term
\eqref{900b} can also be estimated in a similar fashion as \eqref{ee900ba}.
Therefore we have proved
\begin{align*}
\| \langle x \rangle \tilde g_2 \|_p \lesssim \|h\|_{X}^2.
\end{align*}
\end{proof}
\section{Control of cubic interactions: high frequency piece}
In this section we control the cubic part of the solution which takes the form
(see \eqref{e926_552}):
\begin{align*}
\hat f_{cubic}(t,\xi) = \int_0^t \int e^{-is \phi(\xi,\eta,\sigma)} m_1(\xi,\eta,\sigma)
\hat f(s, \xi-\eta) \hat f(s,\eta-\sigma) \hat f(s,\sigma) d\eta d\sigma ds,
\end{align*}
where
\begin{align}
m_1(\xi,\eta,\sigma) =
a(\xi,\eta)b(\eta,\sigma) \frac{\xi}{|\xi|}
\frac{\xi-\eta}{|\xi-\eta|}
\frac{\eta}{|\eta|} \frac{\eta}{|\eta|}
\frac{\eta-\sigma}{|\eta-\sigma|}\frac{\sigma}{|\sigma|}. \notag
\end{align}
Here $a,b$ are $C^\infty$ functions with the point-wise bounds:
\begin{align}
|\partial_{\xi}^{\alpha}
\partial_{\eta}^{\beta} a(\xi,\eta)| &\lesssim_{\alpha,\beta} \langle |\xi|+|\eta| \rangle^{1+|\alpha|+|\beta|}
, \notag \\
|\partial_{\eta}^{\alpha}
\partial_{\sigma}^{\beta} b(\eta,\sigma)| & \lesssim_{\alpha,\beta}
\langle \eta \rangle. \label{ab_bound}
\end{align}
We write $f_{cubic} = \mathcal R f_{3}$ ($\mathcal R$ is the Riesz transform), where
\begin{align}
&\hat {f_3}(t,\xi) \notag \\
= &\int_0^t \int e^{-is \phi(\xi,\eta,\sigma)} m(\xi,\eta,\sigma)
\widehat{\mathcal R f}(s, \xi-\eta) \frac{\eta}{|\eta|} \frac {\eta}{|\eta|} \Bigl(
\widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal R f}(s,\sigma)\Bigr) d\eta d\sigma ds, \label{929_331}
\end{align}
and $m(\xi,\eta,\sigma)$ is separable in the sense that
\begin{align}
m(\xi,\eta,\sigma) & = a(\xi,\eta) b(\eta,\sigma). \label{separable}
\end{align}
Note that in the expression \eqref{929_331}, the frequency variables $(\xi,\eta,\sigma)$ are vectors and the expression
$\frac{\eta}{|\eta|}\frac{\eta}{|\eta|}$ is actually a $2\times 2$ matrix, i.e.:
\begin{align*}
\Bigl(\frac{\eta}{|\eta|}\frac{\eta}{|\eta|} \Bigr)_{jk} = \frac{\eta_j \eta_k}{|\eta|^2}.
\end{align*}
This is just the symbol of a usual Riesz-type transform, and its precise form will not be important in our
analysis. Therefore, to simplify the notation, we shall simply write
\begin{align*}
\frac{\eta}{|\eta|} \frac{\eta}{|\eta|} = \frac{\eta}{|\eta|}
\end{align*}
and assume everything is scalar-valued. Then \eqref{929_331} takes the form
\begin{align}
&\hat {f_3}(t,\xi) \notag \\
= &\int_0^t \int e^{-is \phi(\xi,\eta,\sigma)} m(\xi,\eta,\sigma)
\widehat{\mathcal R f}(s, \xi-\eta) \frac{\eta}{|\eta|} \Bigl(
\widehat{\mathcal R f}(s,\eta-\sigma) \widehat{\mathcal R f}(s,\sigma)\Bigr) d\eta d\sigma ds.
\label{929_331a}
\end{align}
Differentiating $\hat f_3(t,\xi)$ in $\xi$ gives
\begin{align}
&\partial_{\xi} \hat f_3(t,\xi) \notag \\
&= -i \int_0^t \int s (\partial_{\xi} \phi) e^{-is\phi}
\cdot m \cdot \widehat{\mathcal R f}(s,\xi-\eta) \frac{\eta}{|\eta|}
\Bigl( \widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal R f}(s,\sigma) \Bigr) d\sigma d\eta ds
\label{e929_10a} \\
& \quad + \int_0^t \int e^{-is\phi}
\partial_{\xi} m \cdot \widehat{\mathcal Rf}(s,\xi-\eta) \frac{\eta}{|\eta|}
\Bigl( \widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal R f}(s,\sigma)
\Bigr) d\sigma d\eta ds
\label{e929_10b} \\
& \quad +\int_0^t \int e^{-is\phi} m \cdot \partial_{\xi}( \widehat{\mathcal R f}(s,\xi-\eta)) \frac{\eta}{|\eta|}
\Bigl( \widehat{\mathcal Rf}(s,\eta-\sigma)
\widehat{\mathcal R f}(s,\sigma)
\Bigr) d\sigma d\eta ds.
\label{e929_10c}
\end{align}
We have the following
\begin{prop}[Reduction to low frequency estimates] \label{prop_low}
We have
\begin{align}
\| e^{it\jnab} f_{cubic} \|_{X_1} \lesssim \|h\|_{X}^3
+ \| f_{low} \|_{L_t^\infty L_x^{2-\epsilon_1}([1,\infty) \times \mathbb R^2)}, \label{est_rhs1}
\end{align}
where
\begin{align}
&\hat f_{low}(t,\xi) \notag \\
=& \int_1^t \int s (\partial_{\xi} \phi) e^{-is\phi}
\cdot m_{low} \cdot \widehat{\mathcal R f}(s,\xi-\eta) \frac{\eta}{|\eta|}
\Bigl( \widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal R f}(\sigma) \Bigr) d\sigma d\eta ds
\label{e_flow}
\end{align}
and
\begin{align*}
m_{low}(\xi,\eta,\sigma) = a_{low}(\xi,\eta) b_{low}(\eta,\sigma),
\end{align*}
with
\begin{align*}
a_{low}(\xi,\eta) &= a(\xi,\eta) \chi_{|\xi-\eta| \lesssim \langle s \rangle^{\delta_1}}
\chi_{|\eta| \lesssim \langle s \rangle^{\delta_1}}, \\
b_{low}(\eta,\sigma) & = b(\eta,\sigma) \chi_{|\eta-\sigma| \lesssim \langle s \rangle^{\delta_1}}
\chi_{|\sigma| \lesssim \langle s \rangle^{\delta_1}}.
\end{align*}
Here $a$, $b$ are $C^\infty$ functions satisfying the point-wise bounds:
\begin{align}
|\partial_{\xi}^{\alpha}
\partial_{\eta}^{\beta} a(\xi,\eta)| &\lesssim_{\alpha,\beta}
(1+|\xi|+|\eta|)^{1+|\alpha|+|\beta|}, \qquad\forall\,\; \xi,\eta \in \mathbb R^2; \notag \\
|\partial_{\eta}^{\alpha}
\partial_{\sigma}^{\beta} b(\eta,\sigma)| & \lesssim_{\alpha,\beta}
\langle \eta \rangle, \qquad \forall\,\; \eta,\sigma \in \mathbb R^2.
\notag
\end{align}
\end{prop}
\begin{proof}
By the definition of the $X_1$-norm and the fact $f_{cubic}=\mathcal R f_3$, we have
\begin{align*}
\| e^{it\jnab} f_{cubic}(t) \|_{X_1} & = \| \langle t \rangle |\nabla|^{\frac 12} \jnab
e^{it\jnab} \mathcal R f_{3}(t) \|_{L_t^\infty L_x^\infty} +
\| \langle t \rangle^{1-\frac 2q} e^{it\jnab} \mathcal R f_{3}(t) \|_{L_t^\infty L_x^q}
\notag \\
& \qquad + \| f_{3}(t) \|_{L_t^\infty H^{N^\prime}} +
\| \langle x \rangle \mathcal R f_{3}(t) \|_{L_t^\infty L_x^{2+\epsilon_1}}.
\end{align*}
We start with the $H^{N^\prime}$-estimate. Write $f_3$ as
\begin{align*}
f_3(t) = \int_0^t e^{-is\jnab}
T_{a(\xi,\eta)}
\Bigl( \mathcal Rh, \mathcal R T_{b(\eta,\sigma)}(\mathcal R h, \mathcal Rh) \Bigr) ds.
\end{align*}
Then clearly
\begin{align*}
\|f_3\|_{H^{N^\prime}}
& \lesssim \int_0^t \left\| T_{a(\xi,\eta)}
\Bigl( \mathcal R h, \mathcal R T_{b(\eta,\sigma)}(\mathcal R h, \mathcal Rh) \Bigr) \right\|_{H^{N^\prime}} ds \notag \\
& \lesssim \int_0^t \| \jnab^{2N^\prime} h \|^3_6 ds .
\end{align*}
Since $N^\prime\ll N$, we have
\begin{align*}
\| \jnab^{2N^\prime} h(s) \|_6 &\lesssim \| h(s) \|_{18}^{\frac 34} \| \jnab^{N} h(s) \|_{2}^{\frac 14} \notag \\
& \lesssim \langle s \rangle^{-\frac 23 + C\delta_1} \| h\|_X,
\end{align*}
and hence
\begin{align*}
\|f_3\|_{H^{N^\prime}} &\lesssim \int_0^t \langle s \rangle^{-2+C\delta_1} ds \|h\|_X^3 \notag\\
&\lesssim \|h\|_X^3
\end{align*}
which is acceptable.
By a similar estimate, we also get
\begin{align}
\|\jnab^{N^\prime}f_3(t)\|_{L_t^\infty L_x^{2-\epsilon_1}}
\lesssim \|h\|_X^3; \label{oct3_e320}
\end{align}
and
\begin{align}
\| \jnab^{N^\prime} \Bigl(
\mathcal F^{-1}( \eqref{e929_10b} ) \Bigr) \|_{L_t^\infty L_x^{2-\epsilon_1}} \lesssim \|h\|_X^3. \label{oct3_e321}
\end{align}
These two estimates will be used below.
Now we turn to the other norms. Observe that
\begin{align*}
\| |\nabla|^{\frac 12} \jnab e^{it\jnab} \mathcal R f_3(t) \|_{\infty}
& \lesssim \sum_{M<1} M^{\frac 12} \| P_M e^{it\jnab} f_3(t) \|_{\infty} + \sum_{M\ge 1} M^{\frac 32}
\| P_M e^{it\jnab} f_3(t) \|_\infty \\
& \lesssim \langle t \rangle^{-1} \| \jnab^{9} f_3(t) \|_{1}.
\end{align*}
Also obviously
\begin{align*}
\| e^{it\jnab} \mathcal R f_3(t) \|_{q} &\lesssim \langle t \rangle^{\frac 2q-1} \| \jnab^{5} f_3(t) \|_{q/(q-1)} \\
& \lesssim \langle t \rangle^{\frac 2q-1} \| \jnab^{9} f_3(t) \|_1.
\end{align*}
On the other hand
\begin{align*}
\| \langle x \rangle \mathcal R f_3 (t) \|_{2+\epsilon_1}
& \lesssim \| |\nabla|^{-1} f_3 \|_{2+\epsilon_1} + \| \langle x \rangle f_3(t) \|_{2+\epsilon_1} \\
& \lesssim \| f_3(t) \|_1 + \| \langle x \rangle f_3(t)\|_{2+\epsilon_1} \\
& \lesssim \| \jnab^{9} f_3(t)\|_1 + \| \langle x \rangle f_3(t) \|_{2+\epsilon_1}.
\end{align*}
Denote the RHS of \eqref{est_rhs1} by $Y$. Then we are reduced to showing
\begin{align*}
\| \jnab^{9} f_3(t)\|_{L_t^\infty L_x^1} + \| \langle x \rangle f_3(t) \|_{L_t^\infty
L_x^{2+\epsilon_1}} \lesssim Y.
\end{align*}
Since
\begin{align*}
\| \jnab^9 f_3 \|_1 &\lesssim \| \langle x \rangle\bigl( \jnab^9 f_3 \bigr) \|_{2-\epsilon_1} \\
& \lesssim \| \jnab^9 f_3 \|_{2-\epsilon_1} + \| \jnab^9\bigl( x f_3 \bigr) \|_{2-\epsilon_1},
\end{align*}
and also
\begin{align*}
\| x f_3(t) \|_{2+\epsilon_1} \lesssim \| \jnab( x f_3(t) ) \|_{2-\epsilon_1},
\end{align*}
it then suffices for us to show
\begin{align*}
\| \jnab^9 f_3(t) \|_{L_t^\infty L_x^{2-\epsilon_1}} + \| \jnab^9\bigl( x f_3(t) \bigr) \|_{
L_t^\infty L_x^{2-\epsilon_1}} \lesssim Y.
\end{align*}
By \eqref{oct3_e320}, we only need to show
\begin{align*}
\| \jnab^9\bigl( x f_3(t) \bigr) \|_{
L_t^\infty L_x^{2-\epsilon_1}} \lesssim Y.
\end{align*}
By \eqref{oct3_e321}, we only need to prove
\begin{align*}
\| \jnab^{9} \mathcal F^{-1}( \eqref{e929_10a}) \|_{L_t^\infty L_x^{2-\epsilon_1}}
+ \| \jnab^9 \mathcal F^{-1} (\eqref{e929_10c}) \|_{L_t^\infty L_x^{2-\epsilon_1}}
\lesssim Y.
\end{align*}
We first deal with \eqref{e929_10a}. Write
\begin{align}
&\mathcal F^{-1} (\eqref{e929_10a}) \notag \\
=& \int_0^t s e^{-is\jnab} T_{\partial_{\xi} \phi a(\xi,\eta)}
\Bigl( \mathcal Rh, \mathcal R T_{b(\eta,\sigma)} (\mathcal Rh, \mathcal Rh) \Bigr)
ds \notag \\
= & \int_0^t s e^{-is\jnab}
T_{\partial_{\xi} \phi a(\xi,\eta)} \Bigl(
\mathcal R P_{<\langle s \rangle^{\delta_1}} h,
\mathcal R T_{b(\eta,\sigma)}
(\mathcal R P_{<\langle s \rangle^{\delta_1}} h,
\mathcal R P_{<\langle s \rangle^{\delta_1}} h ) \Bigr) ds \label{oct4_e100a} \\
&\; + O\left( \int_0^t
s
e^{-is\jnab} T_{\partial_{\xi} \phi \cdot a(\xi,\eta) }
\Bigl( \mathcal R P_{\ge \langle s \rangle^{\delta_1}} h,
\mathcal R T_{b(\eta,\sigma)} (\mathcal Rh, \mathcal R h) \Bigr) ds \right) \label{oct4_e100b} \\
&\; + O\left( \int_0^t
s
e^{-is\jnab} T_{\partial_{\xi} \phi \cdot a(\xi,\eta) }
\Bigl( \mathcal R h,
\mathcal R T_{b(\eta,\sigma)} (\mathcal RP_{\ge \langle s \rangle^{\delta_1}} h,
\mathcal R h) \Bigr) ds \right) \label{oct4_e100c} \\
&\; + O\left( \int_0^t
s
e^{-is\jnab} T_{\partial_{\xi} \phi \cdot a(\xi,\eta) }
\Bigl( \mathcal R h,
\mathcal R T_{b(\eta,\sigma)} (\mathcal Rh, \mathcal R P_{\ge \langle s \rangle^{\delta_1}}h)
\Bigr) ds \right). \label{oct4_e100d}
\end{align}
Here in \eqref{oct4_e100b}--\eqref{oct4_e100d} there is at least one high frequency cut-off
$P_{>\langle s \rangle^{\delta_1}}$ on the function $h$. Consider \eqref{oct4_e100b}.
By Lemma \ref{lem_926_856}, we have
\begin{align*}
\| \jnab^9 \eqref{oct4_e100b} \|_{2-\epsilon_1} & \lesssim
\int_0^t \langle s \rangle^2 \| \jnab^{13} T_{\partial_{\xi} \phi a(\xi,\eta)}
( \mathcal R P_{\ge \langle s \rangle^{\delta_1}} h,
\mathcal RT_{b(\eta,\sigma)} (\mathcal R h, \mathcal R h) ) \|_{2-\epsilon_1} ds \notag \\
& \lesssim \int_0^t \langle s \rangle^{2+C\delta_1} \| \jnab^{N_1} P_{\ge \langle s \rangle^{\delta_1}} h\|_2
ds \| h\|_X^2 \notag \\
& \lesssim \int_0^t \langle s \rangle^{2+C\delta_1 - \frac {N}2 \delta_1} ds \|h\|_X^3 \notag \\
& \lesssim \|h\|_X^3,
\end{align*}
where we used the fact $N\gg 1/\delta_1$.
Similarly
\begin{align*}
\|\jnab^9 \eqref{oct4_e100c}\|_{2-\epsilon_1}+ \|\jnab^9 \eqref{oct4_e100d} \|_{2-\epsilon_1} \lesssim \|h\|_X^3.
\end{align*}
Also it is not difficult to check that
\begin{align*}
\| \jnab^9\eqref{oct4_e100a} \|_{L_t^\infty L_x^{2-\epsilon_1}([0,1]\times \mathbb R^2)}
& \lesssim \int_0^1 \langle s \rangle^{C} ds \| h\|_X^3 \\
& \lesssim \| h\|_X^3.
\end{align*}
Hence we have proved that
\begin{align*}
\| \jnab^{9} \mathcal F^{-1}( \eqref{e929_10a}) \|_{L_t^\infty L_x^{2-\epsilon_1}}
\lesssim Y.
\end{align*}
It remains to estimate \eqref{e929_10c}. Actually we shall show the slightly stronger estimate
\begin{align*}
\| \jnab^9 \mathcal F^{-1}( \eqref{e929_10c} ) \|_{2-\epsilon_1} \lesssim \|h\|_X^3.
\end{align*}
We decompose \eqref{e929_10c} as
\begin{align}
& \eqref{e929_10c} \notag \\
= & \int_0^t \int e^{-is \phi} m \partial_{\xi}( \widehat{\mathcal Rf}(s,\xi-\eta) )
\chi_{|\xi-\eta|\le \langle s \rangle^{\delta_1}} \chi_{|\eta| \le \langle s \rangle^{\delta_1}}
\cdot \frac {\eta}{|\eta|} ( \widehat{\mathcal Rf}(s,\eta-\sigma)
\widehat{\mathcal R f}(s,\sigma) ) ds \label{oct4_200a} \\
& \quad+\int_0^t \int e^{-is \phi} m \partial_{\xi}( \widehat{\mathcal Rf}(s,\xi-\eta) )
\chi_{|\xi-\eta|\le \langle s \rangle^{\delta_1}} \chi_{|\eta| >\langle s \rangle^{\delta_1}}
\cdot \frac {\eta}{|\eta|} ( \widehat{\mathcal Rf}(s,\eta-\sigma)
\widehat{\mathcal R f}(s,\sigma) ) ds \label{oct4_200b} \\
& \quad + \int_0^t \int e^{-is \phi} m \partial_{\xi}( \widehat{\mathcal Rf}(s,\xi-\eta) )
\chi_{|\xi-\eta|> \langle s \rangle^{\delta_1}} \chi_{|\eta| \le |\xi-\eta|}
\cdot \frac {\eta}{|\eta|} ( \widehat{\mathcal Rf}(s,\eta-\sigma)
\widehat{\mathcal R f}(s,\sigma) ) ds \label{oct4_200c} \\
& \quad + \int_0^t \int e^{-is \phi} m \partial_{\xi}( \widehat{\mathcal Rf}(s,\xi-\eta) )
\chi_{|\xi-\eta|> \langle s \rangle^{\delta_1}} \chi_{|\eta| > |\xi-\eta|}
\cdot \frac {\eta}{|\eta|} ( \widehat{\mathcal Rf}(s,\eta-\sigma)
\widehat{\mathcal R f}(s,\sigma) ) ds. \label{oct4_200d}
\end{align}
The estimate of \eqref{oct4_200a} is quite straightforward. Since $|\xi-\eta| \le \langle s \rangle^{\delta_1}$,
and $|\eta|\le \langle s \rangle^{\delta_1}$, we can insert a fattened cut-off
$\chi_{|\xi| \lesssim \langle s \rangle^{\delta_1}}$ in \eqref{oct4_200a}.
Hence
\begin{align*}
&\| \jnab^9 \mathcal F^{-1}( \eqref{oct4_200a}) \|_{2-\epsilon_1} \notag \\
\lesssim &\int_0^t \left\| \jnab^9 e^{-is \jnab} P_{\lesssim \langle s \rangle^{\delta_1}}
T_{a(\xi,\eta)} \Bigl( e^{is\jnab}P_{\le\langle s \rangle^{\delta_1}}(
\mathcal R(xf) + |\nabla|^{-1} f ),
P_{\le \langle s \rangle^{\delta_1}} \mathcal R T_{b(\eta,\sigma)} (\mathcal Rh, \mathcal Rh) \Bigr)
\right\|_{2-\epsilon_1}ds \notag \\
\lesssim & \int_0^t \langle s \rangle^{C\delta_1+C\epsilon_1-2} ds \| h\|_X^3 \notag \\
\lesssim & \|h\|_X^3.
\end{align*}
For \eqref{oct4_200b} we note that $|\eta| \gtrsim |\xi-\eta|$ and the symbol $a(\xi,\eta)$ (together with
its sufficiently many derivatives) can be
controlled by $\langle \eta \rangle^{C}$ for some large constant $C$. By Lemma \ref{lem_926_856},
\begin{align*}
&\| \jnab^9 \mathcal F^{-1}( \eqref{oct4_200b}) \|_{2-\epsilon_1} \notag \\
\lesssim &\int_0^t \langle s \rangle \left\| \jnab^{20}
T_{a(\xi,\eta) \langle \eta \rangle^{-N_1} }
\Bigl( e^{is\jnab}P_{\le\langle s \rangle^{\delta_1}}(
\mathcal R(xf) + |\nabla|^{-1} f ), \jnab^{N_1}
P_{>\langle s \rangle^{\delta_1}} \mathcal R T_{b(\eta,\sigma)} (\mathcal Rh, \mathcal Rh) \Bigr)
\right\|_{2-\epsilon_1}ds \notag \\
\lesssim & \int_0^t \langle s \rangle^{1-\frac N2 \delta_1} ds \|h\|_X^3 \notag \\
\lesssim & \|h\|_X^3.
\end{align*}
Similarly for \eqref{oct4_200d} we can perform a dyadic summation (in $\eta$) to obtain,
\begin{align*}
&\| \jnab^9 \mathcal F^{-1}( \eqref{oct4_200d}) \|_{2-\epsilon_1} \notag \\
\lesssim &\int_0^t \langle s \rangle
\sum_{M\gtrsim \langle s \rangle^{\delta_1}} \left\| \jnab^{20}
T_{a(\xi,\eta) \langle \eta \rangle^{-N_1} }
\Bigl( e^{is\jnab}P_{\lesssim M}(
\mathcal R(xf) + |\nabla|^{-1} f ), \jnab^{N_1}
P_{M} \mathcal R T_{b(\eta,\sigma)} (\mathcal Rh, \mathcal Rh) \Bigr)
\right\|_{2-\epsilon_1}ds \notag \\
\lesssim & \int_0^t \langle s \rangle \sum_{M\gtrsim \langle s \rangle^{\delta_1}}
\langle s \rangle M^{N_1-\frac N2} \langle s \rangle^{C\delta_1} ds \|h\|_X^3 \notag \\
\lesssim & \|h\|_X^3.
\end{align*}
Finally we estimate \eqref{oct4_200c}. Using
\begin{align*}
\partial_{\xi}( \widehat{\mathcal R f}(s,\xi-\eta) ) = - \partial_{\eta} ( \widehat{\mathcal Rf}(s,\xi-\eta) ),
\end{align*}
we obtain
\begin{align}
& \eqref{oct4_200c} \notag \\
= & \int_0^t \int e^{-is\phi} ( (-is m \partial_{\eta}\phi+
\partial_{\eta}m) \chi_{|\xi-\eta|> \langle s \rangle^{\delta_1}}
\chi_{|\eta| \le |\xi-\eta|}
+m \partial_{\eta}(\chi_{|\xi-\eta|> \langle s \rangle^{\delta_1}} \chi_{|\eta|\le |\xi-\eta|})
) \notag \\
& \qquad \cdot \widehat{\mathcal Rf}(s,\xi-\eta)
\cdot \frac {\eta}{|\eta|} ( \widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal Rf}(s,\sigma) ) d\sigma d\eta ds
\label{oct4_300a} \\
& \quad+ \int_0^t \int e^{-is\phi} m
\chi_{|\xi-\eta|> \langle s \rangle^{\delta_1}} \chi_{|\eta|\le |\xi-\eta|}
\widehat{\mathcal Rf}(s,\xi-\eta)
\partial_{\eta}(\frac{\eta}{|\eta|}) \widehat{\mathcal Rf}(s,\eta-\sigma)
\widehat{\mathcal Rf}(s,\sigma) d\sigma
d\eta ds \label{oct4_300b} \\
& \quad+
\int_0^t \int e^{-is\phi} m
\chi_{|\xi-\eta|> \langle s \rangle^{\delta_1}} \chi_{|\eta|\le |\xi-\eta|}
\widehat{\mathcal Rf}(s,\xi-\eta)
\frac{\eta}{|\eta|} ( \partial_{\eta}(\widehat{\mathcal Rf}(s,\eta-\sigma)) \widehat{\mathcal Rf}(s,\sigma) )d\sigma
d\eta ds. \label{oct4_300c}
\end{align}
Clearly
\begin{align*}
\| \jnab^9 \mathcal F^{-1}(\eqref{oct4_300a}) \|_{2-\epsilon_1}
& \lesssim \int_0^t \langle s \rangle^{1+C\delta_1}
\| \jnab^{N_1} P_{>\langle s \rangle^{\delta_1}} h \|_2 \| \jnab^{10} h \|_2^2 ds \\
& \lesssim \int_0^t \langle s \rangle^{1+C\delta_1-\frac N2 \delta_1} ds \| h\|_X^3 \\
& \lesssim \|h \|_X^3.
\end{align*}
Similarly
\begin{align*}
\| \jnab^9 \mathcal F^{-1}(\eqref{oct4_300b}) \|_{2-\epsilon_1} +
\| \jnab^9 \mathcal F^{-1}(\eqref{oct4_300c}) \|_{2-\epsilon_1} \lesssim \|h\|_X^3.
\end{align*}
This ends the proof of the proposition.
\end{proof}
\section{Control of cubic interactions: low frequency piece}
In the previous section, we controlled the high frequency part of the cubic interaction term.
The most delicate part of our analysis comes from the low frequency piece.
\begin{prop} \label{prop_926_255}
We have
\begin{align*}
\| f_{low} \|_{L_t^\infty L_x^{2-\epsilon_1}([1,\infty)\times \mathbb R^2)} \lesssim \|h\|_X^3 + \| h\|_X^4,
\end{align*}
where $f_{low}$ was defined in Proposition \ref{prop_low}.
\end{prop}
The proof of this proposition will occupy the rest of this section.
\noindent
\textbf{Notation:}
We will use the notation $\|h\|_{\infty-}$ to denote $\|h\|_p$ for some $p\gtrsim \frac 1 {\epsilon_1}$.
To simplify notation we do not compute the specific numerical value of $p$, since we only need its order
of magnitude. This notation has already appeared and will be used heavily in the calculations below.
\begin{proof}[Proof of Proposition \ref{prop_926_255}]
We will discuss different forms of the
phase function $\phi(\xi,\eta,\sigma)$.
\texttt{Case 1:}
\begin{align}
\phi(\xi,\eta,\sigma) = \langle \xi \rangle -
\langle \xi -\eta \rangle + \langle \eta -\sigma \rangle -\langle \sigma \rangle. \label{e926_509}
\end{align}
By Lemma \ref{lem_phase}, we have
\begin{align*}
\partial_{\xi} \phi = Q_1(\xi,\eta) Q_2(\eta,\sigma) \partial_{\sigma} \phi,
\end{align*}
where $Q_1$, $Q_2$ are smooth.
This allows us to write
\begin{align}
-is \partial_{\xi} \phi e^{-is \phi} = Q_1(\xi,\eta) Q_2(\eta,\sigma) \partial_{\sigma}\Bigl(e^{-is \phi} \Bigr).
\label{final_e100}
\end{align}
Plugging \eqref{final_e100} into \eqref{e_flow} and integrating by parts in $\sigma$, we then
obtain
\begin{align}
& \hat f_{low}(t,\xi) \notag\\
= & -\int_1^t \int Q_1(\xi,\eta) a_{low}(\xi,\eta) \partial_{\sigma}( Q_2(\eta,\sigma) b_{low}(\eta,\sigma) )
e^{-is\phi} \notag \\
& \qquad \cdot \widehat{\mathcal Rf}(s,\xi-\eta) \frac{\eta}{|\eta|}
( \widehat{\mathcal R f}(s,\eta-\sigma) \widehat{\mathcal Rf}(s,\sigma) ) d\sigma d\eta ds \label{oct5_50a} \\
& \quad - \int_1^t \int Q_1(\xi,\eta)
Q_2(\eta,\sigma) e^{-is\phi} a_{low}(\xi,\eta) b_{low}(\eta,\sigma) \widehat{\mathcal Rf}(s,\xi-\eta) \notag \\
&\qquad \cdot \frac{\eta}{|\eta|}
\Bigl( \partial_{\sigma}(\widehat{\mathcal Rf}(s,\sigma)) \widehat{\mathcal Rf}(s,\eta-\sigma) \Bigr) d\sigma
d\eta ds \label{oct5_50b} \\
& \quad - \int_1^t \int Q_1(\xi,\eta)
Q_2(\eta,\sigma) e^{-is\phi} a_{low}(\xi,\eta) b_{low}(\eta,\sigma) \widehat{\mathcal Rf}(s,\xi-\eta) \notag \\
&\qquad \cdot \frac{\eta}{|\eta|}
\Bigl( \widehat{\mathcal Rf}(s,\sigma) \partial_{\sigma}(\widehat{\mathcal Rf}(s,\eta-\sigma) ) \Bigr) d\sigma
d\eta ds. \label{oct5_50c}
\end{align}
By Lemma \ref{lem_926_856}, we have
\begin{align*}
& \| \mathcal F^{-1}( \eqref{oct5_50a} ) \|_{2-\epsilon_1} \notag \\
\lesssim & \int_1^t s^{C\epsilon_1} \| T_{Q_1(\xi,\eta) a_{low}(\xi,\eta)}
\Bigl( \mathcal Rh, \mathcal R T_{\partial_{\sigma}(Q_2(\eta,\sigma) b_{low}(\eta,\sigma))}
(\mathcal Rh, \mathcal Rh) \Bigr) \|_{2-\epsilon_1} ds \notag \\
\lesssim & \int_1^t s^{C\epsilon_1+C\delta_1} \|h\|_{2} \| h\|_{\infty-}^2 ds \notag \\
\lesssim & \int_1^t s^{C\epsilon_1+C\delta_1-2} ds \| h\|_X^3 \notag \\
\lesssim & \|h\|_X^3.
\end{align*}
Clearly this is acceptable for us.
For \eqref{oct5_50b}, note that by Lemma \ref{lem_926_856} and Bernstein,
\begin{align*}
& \| e^{is\jnab} \mathcal F^{-1} \Bigl( \partial_{\sigma}( \widehat{\mathcal Rf}(s,\sigma) )
\chi_{|\sigma|\lesssim \langle s \rangle^{\delta_1}} \Bigr) \|_{2+100\epsilon_1} \notag \\
\lesssim & \langle s \rangle^{C\delta_1}
( \| |\nabla|^{-1} f \|_{2+100\epsilon_1} + \| P_{\lesssim \langle s \rangle^{\delta_1}}( xf ) \|_{2+100\epsilon_1}
) \notag \\
\lesssim & \langle s \rangle^{C\delta_1+ C\epsilon_1} \| \langle x \rangle f \|_{2+\epsilon_1}.
\end{align*}
Hence
\begin{align*}
& \| \mathcal F^{-1} ( \eqref{oct5_50b} ) \|_{2-\epsilon_1} \notag \\
\lesssim & \int_1^t \langle s \rangle^{C\delta_1 +C\epsilon_1} \| h\|_{\infty-}^2 \|\langle x \rangle f \|_{2+\epsilon_1} ds \notag \\
\lesssim & \int_1^t \langle s \rangle^{C\delta_1+C\epsilon_1-2} ds \| h\|_X^3 \\
\lesssim & \|h\|_X^3.
\end{align*}
By a similar estimate, we also have
\begin{align*}
& \| \mathcal F^{-1} ( \eqref{oct5_50c} ) \|_{2-\epsilon_1}\lesssim \|h\|_X^3.
\end{align*}
This ends the estimate of phase \eqref{e926_509}.
\texttt{Case 2}:
\begin{align*}
\phi(\xi,\eta,\sigma) = \langle \xi \rangle - \langle \xi-\eta \rangle -\langle \eta-\sigma\rangle
+\langle \sigma \rangle.
\end{align*}
This is almost the same as the case \eqref{e926_509} after the change of variable $\sigma \to \eta -\sigma$.
We omit the estimates.
\texttt{Case 3}:
\begin{align}
\phi(\xi,\eta, \sigma) = \langle \xi \rangle
+ \langle \xi-\eta \rangle -\langle \eta-\sigma \rangle -\langle \sigma \rangle.
\label{e926_943}
\end{align}
For this we will have to exploit some delicate cancellations of the phases. We discuss several
subcases.
\texttt{Subcase 3a:} $|\eta|\le \langle s \rangle^{-\delta_2}$ and $|\xi| \le \langle s \rangle^{-\delta_2/N_1}$.
In real space, this subcase corresponds to inserting the Littlewood--Paley projections $P_{\le \langle s \rangle^{-\delta_2}}$ and
$P_{\le \langle s \rangle^{-\delta_2/N_1}}$ at the corresponding places in the nonlinearity.
Note that
\begin{align*}
\partial_{\xi} \phi= \frac{\xi} {\langle \xi \rangle} + \frac{\xi-\eta}{\langle \xi-\eta \rangle}.
\end{align*}
Since $\xi$ and $\xi-\eta$ are both localized to low frequencies, we gain one derivative by using the above
identity. In this subcase we have
\begin{align*}
& \| \mathcal F^{-1} ( \eqref{e_flow} ) \|_{2-\epsilon_1} \notag \\
\lesssim & \int_1^t s^{1+C\epsilon_1} \| \nabla P_{\lesssim \langle s \rangle^{-\delta_2/N_1}}
T_{a_{low}(\xi,\eta)}\Bigl( \mathcal Rh, \mathcal R T_{b_{low}(\eta,\sigma)}(\mathcal Rh, \mathcal Rh) \Bigr)
\|_{2-\epsilon_1} ds \notag \\
& \qquad +
\int_1^t s^{1+C\epsilon_1} \|
T_{a_{low}(\xi,\eta)}\Bigl( \nabla P_{\lesssim \langle s \rangle^{-\delta_2/N_1}}
\mathcal Rh, \mathcal R T_{b_{low}(\eta,\sigma)}(\mathcal Rh, \mathcal Rh) \Bigr)
\|_{2-\epsilon_1} ds \notag \\
\lesssim & \int_1^t s^{1+C\epsilon_1-\frac{\delta_2}{N_1}+C\delta_1} s^{-2} ds \| h\|_X^3 \\
\lesssim & \|h\|_X^3.
\end{align*}
Here we used the fact $\delta_2/N_1 \gg \max \{\epsilon_1, \delta_1 \}$.
\texttt{Subcase 3b:} $|\eta| \le \langle s \rangle^{-\delta_2}$, $|\xi|> \langle s \rangle^{-\delta_2/N_1}$
and $|\sigma|<\langle s \rangle^{-\delta_2/10}$. In this case we write
\begin{align*}
e^{-is \phi} = e^{-is(\langle \xi \rangle + \langle \xi-\eta\rangle -2)} e^{-is (2-\langle \eta -\sigma \rangle
-\langle \sigma \rangle ) }.
\end{align*}
Note that
\begin{align}
&| \langle \xi \rangle + \langle \xi -\eta \rangle -2 | \gtrsim \langle s \rangle^{-\delta_2/N_1}, \label{oct6_204}\\
&\langle \eta -\sigma \rangle + \langle \sigma \rangle -2 = b_2(\eta,\sigma) \cdot (\eta-\sigma)
+ b_3(\eta,\sigma) \cdot \sigma, \label{oct6_204a}
\end{align}
where $b_2$, $b_3$ are smooth.
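Indeed, using the identity $\langle a \rangle - 1 = \frac{|a|^2}{\langle a \rangle + 1}$, one concrete choice is
\begin{align*}
b_2(\eta,\sigma) = \frac{(\eta-\sigma)^T}{\langle \eta-\sigma \rangle + 1},
\qquad
b_3(\eta,\sigma) = \frac{\sigma^T}{\langle \sigma \rangle + 1},
\end{align*}
and both are manifestly smooth with bounded derivatives.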
We do an integration by parts using only \textit{part of the phase}. Namely, using the identity
\begin{align*}
-i e^{-is(\langle \xi \rangle+\langle \xi-\eta\rangle -2)}
= \frac 1 {\langle \xi \rangle+\langle \xi -\eta \rangle -2}
\partial_s( e^{-is(\langle \xi \rangle+\langle \xi-\eta\rangle -2)} )
\end{align*}
and integrating by parts in the time variable $s$, we obtain
\begin{align}
&\hat f_{low}(t,\xi) \notag \\
= & \int_1^t \int s (\partial_{\xi} \phi) m_{low} \cdot \chi_{|\eta|\le \langle s \rangle^{-\delta_2}}
\chi_{|\xi|> \langle s \rangle^{-\delta_2/N_1}} \chi_{|\sigma|< \langle s \rangle^{-\delta_2/10}} \notag \\
& \qquad \cdot \frac 1 {\langle \xi \rangle + \langle \xi-\eta \rangle -2}
\partial_s \Bigl( e^{-is (\langle \xi\rangle + \langle \xi-\eta \rangle -2)} \Bigr)
e^{-is ( 2-\langle \eta-\sigma \rangle - \langle \sigma \rangle ) } \notag \\
& \qquad \cdot \widehat{\mathcal Rf}(s,\xi-\eta)
\frac{\eta}{|\eta|}
\Bigl( \widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal Rf}(s,\sigma) \Bigr) d\sigma d\eta ds
\notag \\
= &
\int s (\partial_{\xi} \phi) m_{low} \cdot \chi_{|\eta|\le \langle s \rangle^{-\delta_2}}
\chi_{|\xi|> \langle s \rangle^{-\delta_2/N_1}} \chi_{|\sigma|< \langle s \rangle^{-\delta_2/10}} \notag \\
& \qquad \cdot \frac {e^{-is\phi}} {\langle \xi \rangle + \langle \xi-\eta \rangle -2}
\cdot \widehat{\mathcal Rf}(s,\xi-\eta)
\frac{\eta}{|\eta|}
\Bigl( \widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal Rf}(s,\sigma) \Bigr) d\sigma d\eta
\Biggr|_{s=1}^{t} \label{oct6_e20a} \\
& - \int_1^t \int (\partial_{\xi}\phi)
\partial_s \Bigl( s m_{low} \cdot \chi_{|\eta|\le \langle s \rangle^{-\delta_2}}
\chi_{|\xi|> \langle s \rangle^{-\delta_2/N_1}} \chi_{|\sigma|< \langle s \rangle^{-\delta_2/10}}
\Bigr)\notag \\
& \qquad \cdot \frac {e^{-is\phi}} {\langle \xi \rangle + \langle \xi-\eta \rangle -2}
\cdot \widehat{\mathcal Rf}(s,\xi-\eta)
\frac{\eta}{|\eta|}
\Bigl( \widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal Rf}(s,\sigma) \Bigr) d\sigma d\eta ds
\label{oct6_e20b} \\
& - \int_1^t \int (\partial_{\xi}\phi) \Bigl( s m_{low} \cdot \chi_{|\eta|\le \langle s \rangle^{-\delta_2}}
\chi_{|\xi|> \langle s \rangle^{-\delta_2/N_1}} \chi_{|\sigma|< \langle s \rangle^{-\delta_2/10}}
\Bigr) e^{-is\phi} \notag \\
& \qquad \cdot \frac {-i(2-\langle \eta-\sigma\rangle -\langle \sigma \rangle)
} {\langle \xi \rangle + \langle \xi-\eta \rangle -2}
\cdot \widehat{\mathcal Rf}(s,\xi-\eta)
\frac{\eta}{|\eta|}
\Bigl( \widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal Rf}(s,\sigma) \Bigr) d\sigma d\eta ds
\label{oct6_e20c}\\
& - \int_1^t \int (\partial_{\xi}\phi) \Bigl( s m_{low} \cdot \chi_{|\eta|\le \langle s \rangle^{-\delta_2}}
\chi_{|\xi|> \langle s \rangle^{-\delta_2/N_1}} \chi_{|\sigma|< \langle s \rangle^{-\delta_2/10}}
\Bigr) e^{-is\phi} \notag \\
& \qquad \cdot \frac {1
} {\langle \xi \rangle + \langle \xi-\eta \rangle -2}
\cdot \partial_s (\widehat{\mathcal Rf}(s,\xi-\eta))
\frac{\eta}{|\eta|}
\Bigl( \widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal Rf}(s,\sigma) \Bigr) d\sigma d\eta ds
\label{oct6_e20d}\\
& - \int_1^t \int (\partial_{\xi}\phi) \Bigl( s m_{low} \cdot \chi_{|\eta|\le \langle s \rangle^{-\delta_2}}
\chi_{|\xi|> \langle s \rangle^{-\delta_2/N_1}} \chi_{|\sigma|< \langle s \rangle^{-\delta_2/10}}
\Bigr) e^{-is\phi} \notag \\
& \qquad \cdot \frac {1
} {\langle \xi \rangle + \langle \xi-\eta \rangle -2}
\cdot (\widehat{\mathcal Rf}(s,\xi-\eta))
\frac{\eta}{|\eta|}
\Bigl( \partial_s(\widehat{\mathcal Rf}(s,\eta-\sigma) \widehat{\mathcal Rf}(s,\sigma)) \Bigr) d\sigma d\eta ds.
\label{oct6_e20e}
\end{align}
For \eqref{oct6_e20a}, we have by \eqref{oct6_204},
\begin{align*}
& \| \mathcal F^{-1} (\eqref{oct6_e20a}) \|_{2-\epsilon_1} \notag \\
\lesssim & \| h\|_X^3
+ t^{1+C\epsilon_1} \left\| T_{\frac{a_{low}(\xi,\eta) \partial_{\xi} \phi} {\langle \xi \rangle
+ \langle \xi-\eta\rangle -2}}
\Bigl( \mathcal Rh(t),
\mathcal R P_{\lesssim \langle t \rangle^{-\delta_2}}
T_{b_{low}(\eta,\sigma)}(\mathcal Rh(t), \mathcal R P_{<\langle t \rangle^{-\delta_2/10}}
h(t) )\Bigr)\right \|_{2-\epsilon_1} \notag \\
\lesssim & \|h\|_X^3 + t^{1+C\epsilon_1+C\delta_1+C\delta_2} \|h(t)\|_{2} \|h(t) \|_{\infty-}^2 \\
\lesssim & \|h\|_X^3.
\end{align*}
For \eqref{oct6_e20b}, it is not difficult to check that
\begin{align*}
&\partial_s \Bigl( s m_{low} \cdot \chi_{|\eta|\le \langle s \rangle^{-\delta_2}}
\chi_{|\xi|> \langle s \rangle^{-\delta_2/N_1}} \chi_{|\sigma|< \langle s \rangle^{-\delta_2/10}}
\Bigr)\notag \\
= & a(\xi,\eta) b(\eta,\sigma) \tilde{\chi}_{|\xi-\eta| \lesssim \langle s \rangle^{\delta_1}}
\tilde{\chi}_{|\eta|\lesssim \langle s \rangle^{\delta_1}}
\tilde{\chi}_{|\eta-\sigma| \lesssim \langle s \rangle^{\delta_1}}
\tilde{\chi}_{|\sigma| \lesssim \langle s \rangle^{\delta_1}}
\tilde{\chi}_{|\eta|\lesssim \langle s \rangle^{-\delta_2}}
\tilde{\chi}_{|\xi|\gtrsim \langle s \rangle^{-\delta_2/N_1}}
\tilde{\chi}_{|\sigma|\lesssim \langle s \rangle^{-\delta_2/10}},
\end{align*}
where $\tilde{\chi}$ are some fattened cut-offs. Using again \eqref{oct6_204}, we have
\begin{align*}
& \| \mathcal F^{-1} (\eqref{oct6_e20b}) \|_{2-\epsilon_1} \notag \\
\lesssim & \int_1^t s^{C\epsilon_1+C\delta_2+C\delta_1} \|h\|_{\infty-}^2 \|h\|_2 ds \\
\lesssim & \|h\|_X^3.
\end{align*}
For \eqref{oct6_e20c}, we use \eqref{oct6_204a} to gain one more derivative.
\begin{align*}
& \| \mathcal F^{-1} (\eqref{oct6_e20c}) \|_{2-\epsilon_1} \notag \\
\lesssim & \int_1^t s^{1+C\epsilon_1}
\left\| T_{\frac{a_{low}(\xi,\eta)}{\langle \xi \rangle + \langle \xi-\eta \rangle -2}}
\Bigl( \mathcal Rh, \mathcal R P_{\le \langle s \rangle^{-\delta_2}}
T_{b_2(\eta,\sigma) b_{low}(\eta,\sigma)}
(\mathcal R \nabla P_{\lesssim \langle s \rangle^{-\delta_2/10}} h,
P_{<\langle s \rangle^{-\delta_2/10}} h )\Bigr) \right\|_{2-\epsilon_1} ds \notag \\
& \quad + \int_1^t s^{1+C\epsilon_1}
\left\| T_{\frac{a_{low}(\xi,\eta)}{\langle \xi \rangle + \langle \xi-\eta \rangle -2}}
\Bigl( \mathcal Rh, \mathcal R P_{\le \langle s \rangle^{-\delta_2}}
T_{b_3(\eta,\sigma) b_{low}(\eta,\sigma)}
(\mathcal Rh, P_{<\langle s \rangle^{-\delta_2/10}}\nabla h )\Bigr) \right\|_{2-\epsilon_1} ds \notag\\
\lesssim & \int_1^t s^{1+C\epsilon_1+C\delta_1 -\frac{\delta_2}{10} + \frac{20\delta_2}{N_1}}
s^{-2} ds \|h\|_X^3 \notag \\
\lesssim & \|h\|_X^3,
\end{align*}
where we need to take $N_1$ sufficiently large.
We turn now to the estimate of \eqref{oct6_e20d}. We need a lemma
\begin{lem} \label{lem_partial_tf}
We have
\begin{align*}
\| e^{it\jnab} \partial_t f(t) \|_{\infty-} \lesssim \langle t \rangle^{-2+C\epsilon_1} \| h\|_X^2.
\end{align*}
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem_partial_tf}]
By the derivations in the introduction section (see \eqref{e92_537b}--\eqref{10_613}), we can write
\begin{align*}
e^{it\jnab} \partial_t f = \mathcal R T_{O(\langle \xi \rangle)}
(\mathcal R h, \mathcal Rh).
\end{align*}
Hence
\begin{align*}
\| e^{it\jnab} \partial_t f \|_{\infty-} &\lesssim \| \jnab^{N_1} h(t) \|_{\infty-}^2 \notag \\
& \lesssim \langle t \rangle^{-2+C\epsilon_1} \| h\|_X^2.
\end{align*}
\end{proof}
Now we continue to estimate \eqref{oct6_e20d}. By Lemma \ref{lem_partial_tf},
\begin{align*}
& \| \mathcal F^{-1} ( \eqref{oct6_e20d} ) \|_{2-\epsilon_1} \notag \\
\lesssim & \int_1^t s^{1+C\epsilon_1+C\delta_1+C\delta_2} s^{-3} ds \|h\|_X^4 \notag \\
\lesssim & \|h\|_X^4.
\end{align*}
Similarly
\begin{align*}
\| \mathcal F^{-1} ( \eqref{oct6_e20e} ) \|_{2-\epsilon_1} \lesssim \| h\|_X^4.
\end{align*}
We have concluded the estimate of Subcase 3b.
\noindent
\texttt{Subcase 3c}: $|\eta| \le \langle s \rangle^{-\delta_2}$, $|\xi|>\langle s \rangle^{-\delta_2/N_1}$
and $|\sigma|\ge \langle s \rangle^{-\delta_2/10}$.
Then clearly
\begin{align*}
|2\sigma-\eta| \gtrsim \langle s \rangle^{-\delta_2/10}.
\end{align*}
By \eqref{e926_943}, we then have
\begin{align*}
|\partial_{\sigma} \phi | = \left| \frac{\sigma-\eta}{\langle \sigma-\eta \rangle}
+ \frac{\sigma}{\langle \sigma \rangle} \right| \gtrsim \langle s \rangle^{-\delta_2}.
\end{align*}
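For the reader's convenience, we indicate how this follows from Lemma \ref{lem_deform}: since $\frac{\sigma-\eta}{\langle \sigma-\eta \rangle} + \frac{\sigma}{\langle \sigma \rangle} = \frac{\sigma}{\langle \sigma \rangle} - \frac{\eta-\sigma}{\langle \eta-\sigma \rangle}$, by \eqref{e91aaaa} and the frequency localization $|\sigma|, |\eta-\sigma| \lesssim \langle s \rangle^{\delta_1}$,
\begin{align*}
\left| \frac{\sigma}{\langle \sigma \rangle} - \frac{\eta-\sigma}{\langle \eta-\sigma \rangle} \right|
\gtrsim \frac{|2\sigma-\eta|}{\langle |\sigma| + |\eta-\sigma| \rangle^3}
\gtrsim \langle s \rangle^{-\delta_2/10 - C\delta_1} \gtrsim \langle s \rangle^{-\delta_2},
\end{align*}
where the last step uses the parameter hierarchy $\delta_1 \ll \delta_2/N_1$ recalled in Subcase 3a.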
Hence we can do integration by parts in $\sigma$ in \eqref{e_flow}, i.e.,
\begin{align*}
-is e^{-is \phi} = \frac{\partial_{\sigma} \phi}{|\partial_{\sigma} \phi|^2} \cdot \partial_{\sigma}
(e^{-is \phi} ).
\end{align*}
This will produce
some additional Fourier symbols; however, they are all separable in the sense of \eqref{separable}.
This yields
\begin{align*}
\left\| \mathcal F^{-1}(\eqref{e_flow}) \right\|_{2-\epsilon_1}
& \lesssim \int_1^t \langle s \rangle^{C(\delta_1+\epsilon_1+\delta_2)-2} ds \|h \|_X^3 \\
& \lesssim \|h\|_X^3.
\end{align*}
We now consider
\texttt{Subcase 3d}: $|\eta|\ge \langle s \rangle^{-\delta_2}$.
In this case we have to exploit some delicate cancelations of the phases.
By Lemma \ref{lem_phase}, we can write
\begin{align}
\partial_{\xi} \phi
=\; Q_1(\xi,\eta,\sigma) \partial_{\eta} \phi+
Q_2(\xi,\eta,\sigma) \partial_{\sigma} \phi,
\end{align}
where $Q_1$, $Q_2$ are smooth symbols. The idea then is to do integration by parts in $\eta$ or $\sigma$.
To simplify matters, we only present the case
of integration by parts in $\eta$ since the other case will be similar. We stress that one has to use the localization
$|\eta|\gtrsim \langle s\rangle^{-\delta_2}$ because the Riesz symbol $\eta/|\eta|$ will then be smooth.
\begin{align*}
\| \mathcal F^{-1} ( \eqref{e_flow} ) \|_{2-\epsilon_1}
& \lesssim \int_1^t \langle s \rangle^{C\delta_1+C\delta_2+C\epsilon_1}
\| P_{\lesssim \langle s \rangle^{\delta_1} } e^{is\jnab} (xf) \|_{2+\epsilon_1} \| h\|_{\infty-}^2 ds \notag \\
& \qquad + \int_1^t \langle s \rangle^{C\delta_1+C\delta_2+C\epsilon_1}
\|P_{\lesssim \langle s \rangle^{\delta_1}} e^{is\jnab} |\nabla|^{-1} f \|_{2+100\epsilon_1}
\|h\|_{\infty-}^2 ds \notag \\
& \qquad + \int_1^t \langle s \rangle^{C\delta_1+C\delta_2 + C\epsilon_1}
\| h\|_2 \|h\|_{\infty-}^2 ds \notag \\
& \lesssim \int_1^t \langle s\rangle^{C\epsilon_1+ C\delta_1+C\delta_2} \langle s\rangle^{-2} ds \| h\|_X^3 \\
& \lesssim \|h\|_X^3.
\end{align*}
We have finished all estimates corresponding to the phase \eqref{e926_943}.
\noindent
\texttt{Case 4}:
\begin{align}
\phi(\xi,\eta,\sigma) =
\langle \xi \rangle+
\langle \xi -\eta \rangle + \langle \eta -\sigma \rangle -\langle \sigma \rangle. \label{phase_plus1}
\end{align}
Although in this case we have
\begin{align*}
|\phi| \gtrsim 1,
\end{align*}
it is not advantageous to integrate by parts in the time variable $s$ using the whole phase, since the Riesz symbol $\eta/|\eta|$
is not necessarily smooth near the origin.
We discuss two subcases.
\texttt{Subcase 4a}: $|\eta|\le \langle s \rangle^{-\delta_2}$. In this regime the Riesz symbol $\eta/|\eta|$ prevents us from
integrating by parts with the whole phase.
To resolve this difficulty, note that
\begin{align*}
\langle \eta-\sigma \rangle -\langle \sigma \rangle = Q(\eta,\sigma) \eta,
\end{align*}
where $Q(\eta,\sigma)$ is smooth. We then use the identity
\begin{align*}
& \frac 1 {i(\langle \xi\rangle +\langle \xi-\eta\rangle)}
\frac d {ds}
\Bigl( e^{is (\langle \xi\rangle +\langle \xi-\eta\rangle)} \Bigr) \\
= & e^{is(\langle \xi \rangle +\langle \xi-\eta\rangle)}
\end{align*}
to do integration by parts in the time variable $s$. We will have no trouble when the time derivative hits the term
$e^{is(\langle \eta-\sigma\rangle -\langle \sigma\rangle)}$
since
\begin{align*}
\frac{d}{ds}\Bigl( e^{is(\langle \eta-\sigma\rangle -\langle \sigma\rangle)} \Bigr)
= i\, Q(\eta,\sigma) \eta\, e^{is(\langle \eta-\sigma\rangle-\langle \sigma\rangle)}.
\end{align*}
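Here one concrete choice is
\begin{align*}
Q(\eta,\sigma) = \frac{(\eta-2\sigma)^T}{\langle \eta-\sigma \rangle + \langle \sigma \rangle},
\end{align*}
since $\langle \eta-\sigma \rangle^2 - \langle \sigma \rangle^2 = |\eta-\sigma|^2 - |\sigma|^2 = (\eta-2\sigma)\cdot\eta$; in particular $Q$ is smooth with polynomially bounded derivatives.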
The $\eta$ factor will produce an extra decay factor of $s^{-\delta_2}$ when we perform $L^p$ estimates,
since $|\eta| \lesssim \langle s \rangle^{-\delta_2}$ on the support of the cut-off $P_{\lesssim \langle s \rangle^{-\delta_2}}$.
When the time derivative hits $\hat f(s)$ we will get an additional decay of $s^{-1+}$ which is certainly enough
for us. Note that all the symbols are separable in the sense of \eqref{separable}.
The details are quite similar to (in fact simpler than) those of Case 3. We therefore skip them here.
\texttt{Subcase 4b:}
$|\eta|\gtrsim \langle s \rangle^{-\delta_2}$. In this case we can directly
integrate by parts in the time variable $s$ since the Riesz symbol $\eta/|\eta|$ is smooth in this regime.
The calculations are quite straightforward using Lemma \ref{lem_partial_tf}.
There are also boundary terms when we perform integration by parts in $s$, but they can be easily estimated along
similar lines as in Case 3. Hence we skip them.
This concludes the estimate of phase \eqref{phase_plus1}.
\noindent
\texttt{Case 5:}
\begin{align*}
\phi(\xi,\eta,\sigma) = \langle \xi \rangle + \langle \xi-\eta \rangle
-\langle \eta-\sigma \rangle + \langle \sigma \rangle.
\end{align*}
This is almost the same as Case 4 after the change of variable $\sigma\to \eta-\sigma$. Hence we skip it.
\noindent
\texttt{Case 6:}
\begin{align}
\phi(\xi,\eta,\sigma) = \langle \xi \rangle -\langle \xi-\eta\rangle
+ \langle \eta-\sigma \rangle + \langle \sigma \rangle.
\end{align}
Again we sketch the proof and discuss two subcases.
\texttt{Subcase 6a:} $|\eta| \le \langle s \rangle^{-\delta_2}$. Observe that
\begin{align*}
\partial_{\xi} \phi = Q(\xi,\eta) \cdot \eta
\end{align*}
where $Q$ is smooth. The extra $\eta$ factor allows us to gain one derivative and $s^{-\delta_2}$ decay.
This is similar to Case 3a and hence we omit the details.
\texttt{Subcase 6b:} $|\eta| > \langle s \rangle^{-\delta_2}$. In this regime
the Riesz symbol $\eta/|\eta|$ is smooth and hence we can perform integration by parts in the
time variable $s$. The estimates are quite similar to Case 4b. We omit the details.
\texttt{Case 7:}
\begin{align*}
\phi(\xi,\eta,\sigma)= \langle \xi \rangle + \langle \xi-\eta \rangle + \langle \eta-\sigma \rangle
+ \langle \sigma \rangle.
\end{align*}
\texttt{Subcase 7a:} $|\eta|\le \langle s \rangle^{-\delta_2}$ and $|\sigma| \le \langle s \rangle^{-\delta_2/10}$.
In this case we again will do integration by parts using \textit{part of the phase}.
Write
\begin{align*}
e^{-is\phi}
=
\frac{-i}{\langle \xi \rangle + \langle \xi -\eta\rangle +2}
\partial_s \Bigl( e^{-is (\langle \xi \rangle + \langle \xi-\eta \rangle +2)}\Bigr)
e^{-is(\langle \eta-\sigma \rangle + \langle \sigma\rangle -2 )}.
\end{align*}
Note that
\begin{align*}
\langle \eta-\sigma \rangle + \langle \sigma \rangle -2
= b_4(\eta,\sigma)\cdot (\eta-\sigma) + b_5(\eta,\sigma) \cdot \sigma,
\end{align*}
where $b_4$ and $b_5$ are smooth. The estimates are similar to before and hence we omit the details.
\texttt{Subcase 7b:} $|\eta|\le \langle s \rangle^{-\delta_2}$ and $|\sigma|> \langle s \rangle^{-\delta_2/10}$.
In this case
\begin{align*}
|\partial_{\sigma} \phi| \gtrsim \langle s \rangle^{-\delta_2/10}
\end{align*}
and we can integrate by parts in $\sigma$. We omit details.
\texttt{Subcase 7c:} $|\eta|>\langle s \rangle^{-\delta_2}$. In this case the Riesz symbol $\eta/|\eta|$ is
smooth and we can directly integrate by parts in the whole phase $\phi$. The details are omitted.
We have finished the estimates of all cases. The proposition is now proved.
\end{proof}
\section{Proof of Theorem \ref{thm_main}}
In this section we complete the proof of Theorem \ref{thm_main}.
\begin{proof}[Proof of Theorem \ref{thm_main}]
We begin with the $H^N$-energy estimate. By \eqref{e92_537b} and energy estimates, one obtains
\begin{align*}
\frac d {dt} (\| h (t) \|_{H^N}^2) \lesssim (\| \nabla h (t)\|_{\infty} +\|\langle \nabla \rangle u (t)\|_{\infty}
+ \| \partial \v v (t) \|_{\infty}) \| h(t) \|_{H^N}^2.
\end{align*}
By splitting into low and high frequencies, we then have
\begin{align*}
&\| \nabla h (t)\|_{\infty} +\|\langle \nabla \rangle u (t)\|_{\infty}
+ \| \partial \v v (t) \|_{\infty} \notag \\
\lesssim & \| \nabla h(t) \|_{\infty} + \| |\nabla| h(t) \|_{\infty} +
\||\nabla|^{-1} \partial_i \partial_j h\|_{\infty} \\
\lesssim & \| |\nabla|^{\frac 12} \langle \nabla \rangle h(t) \|_{\infty}.
\end{align*}
Then clearly, as long as the $L^\infty$ norm of $|\nabla|^{\frac12} \langle \nabla \rangle h(t)$ decays
at the sharp rate $\frac 1 {\langle t \rangle}$, we can close the energy estimate and obtain $\|h(t)\|_{H^N} \lesssim
t^{\delta_1}$. We stress here that it is very important to have some amount of derivatives on $h$, especially in the
low frequency regime, due to the non-locality caused by the Riesz transform.
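For concreteness, the Gronwall step implicit here can be sketched as follows (the bootstrap bound $\|h\|_X\lesssim \epsilon$ is an assumption of this sketch): combining the two displays above with the assumed decay gives
\begin{align*}
\frac{d}{dt}\big(\|h(t)\|_{H^N}^2\big) \lesssim \| |\nabla|^{\frac 12}\langle \nabla \rangle h(t)\|_{\infty}\, \|h(t)\|_{H^N}^2
\lesssim \frac{\epsilon}{\langle t \rangle}\, \|h(t)\|_{H^N}^2,
\end{align*}
and integrating from $1$ to $t$ yields
\begin{align*}
\|h(t)\|_{H^N}^2 \lesssim \|h(1)\|_{H^N}^2 \exp\Big( C\epsilon \int_1^t \frac{ds}{\langle s \rangle}\Big)
\lesssim \langle t \rangle^{C\epsilon} \lesssim \langle t\rangle^{2\delta_1},
\end{align*}
provided $\epsilon$ is small enough that $C\epsilon \le 2\delta_1$.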
We then only need to estimate $\|h(t) \|_{X_1}$ (see \eqref{e926_558}).
For this we need to use the fine decomposition
\eqref{e926_552} and estimate $g$ and $f_{\text{cubic}}$ separately. The estimate of $g$ was given in Section 4.
The cubic interaction $f_{\text{cubic}}$ was estimated in Sections 5 and 6. To simplify the presentation we have suppressed many unnecessary
details (and repetitive computations). Finally, we note that scattering in the intermediate energy
space $H^{N^\prime}$ follows from the calculations in Sections 4--6, and hence we omit the proof.
The theorem is proved.
\end{proof}
\section{Introduction}
\label{sec:intro}
A copula is a mathematical concept often used to model the joint distribution of several random variables. Copula models have applications in a number of fields where the dependence structure between the random variables considered is also of interest, e.g. \cite{Hougaard:2000}, \cite{patton2006modelling}, \cite{Dupuis:2007qo}, \cite{Genest:2007fh} and \cite{Lakhal:2008sh}. The propagation of copula-related ideas in probability and statistics started with \cite{Sklar:1959}, which proved that for a random vector $(Y_1,\ldots,Y_p)$ with cumulative distribution function (CDF) $H(y_1,\ldots,y_p)$ and marginal continuous CDFs $F_i(y_i)$, $i=1,\ldots,p$, there exists a unique \textit{copula} $C:[0,1]^p\rightarrow [0,1]$ such that
\begin{equation}
H(y_1,\ldots,y_p)=C(F_1(y_1),\ldots,F_p(y_p)).
\end{equation}
For statistical modelling it is also useful to note that a $p$-dimensional copula $C$ and marginal continuous CDFs $F_i(y_i)$, $i=1,\ldots,p$, are building blocks for a valid $p$-dimensional CDF, $C(F_1(y_1),\ldots,F_p(y_p))$, with $i$th marginal CDF equal to $F_i(y_i)$, thus providing much-needed flexibility in modelling multivariate distributions. \\
The above results can be extended when conditioning on a covariate vector $X\in {\bf R}^q$ \citep{lamb-van, patton2006modelling} so that
\begin{equation}
H(y_1,\ldots,y_p|X)=C_X(F_1(y_1|X),\ldots,F_p(y_p|X)),
\label{cc}
\end{equation}
where all CDFs and the copula are conditional on $X$. For the rest of this paper we follow \cite{levi2018bayesian} and assume that the copula in \eqref{cc} belongs to a parametric family and its one-dimensional parameter depends on $X$ through some unknown function $\th(X):{\bf R}^q \rightarrow \Theta$. The range of $\th(X)$ is usually restricted, so we introduce a known one-to-one link function $g:\Theta \rightarrow {\bf R}$ such that the \textit{calibration function}, $\eta:{\bf R}^q \rightarrow {\bf R}$, defined as $\eta(X)=g(\th(X))$ has unrestricted range.\\
The simplifying assumption (SA) \citep{czado2010pair}
states that $\eta(X)$ is constant. Clearly, SA greatly simplifies the estimation in conditional copula models, including their use in hierarchical models such as vines \citep[see, for instance,][]{czado2009}. It has been shown in \cite{levi2018bayesian} that SA is violated when important covariates are not included in the model \eqref{cc}. In an important contribution, \cite{agn} showed that assuming SA when the data generative process has non-constant calibration may lead to biased results.
In light of these results, there is a genuine demand for strategies that effectively test whether the SA is appropriate or not. A number of research contributions address this issue for frequentist analyses, e.g. \cite{acar2013statistical}, \cite{gijbels2015estimation}, \cite{derumigny2016tests}, \cite{killiches2017examination}. \\
We place the problem in a Bayesian analysis context where inference for $\eta$ relies on a flexible model, following the general philosophy expounded in \cite{sabeti2014additive, klein}, \cite{hernandez2013gaussian} or \cite{levi2018bayesian}. Within the Bayesian paradigm, it was observed in \cite{cra-sabeti} that generic model selection criteria used to identify data support for SA tend to favour the more complex model even when SA holds.
In the next section we present the problem in mathematical terms and review some of the Bayesian model selection procedures one can use in this context. A new approach for testing SA, based on a data-splitting procedure, is described in Section~\ref{sec:meth}. A merit of the proposal is that it is quite general in its applicability, but this comes, unsurprisingly, at the expense of power. In order to investigate whether the trade-off is reasonable we design a simulation study and present its conclusions in Section~\ref{sec:sim}. Section~\ref{sec:theory} contains theoretical justification of the proposed algorithm and the paper closes with a discussion of extensions to other regression problems and concluding remarks.
\section{The Problem}
\label{sec:prob}
Here we focus on bivariate response variables so that the observed data consist of $n$ independent triplets ${\mathcal D}=\{(x_i,y_{1i},y_{2i}),{\hspace{2ex}} i=1,\ldots, n\}$ where $y_{1i}$ and $y_{2i}$ are in ${\bf R}$ and $x_i\in {\bf R}^q$. Also let us denote ${\mathbf y}_1=(y_{11},\ldots,y_{1n})$, ${\mathbf y}_2=(y_{21},\ldots,y_{2n})$ and ${\mathbf X} \in {\bf R}^{n\times q}$ is the matrix with $i^{th}$ row equal to $x_i^T$. We rely on \eqref{cc} to express the \textit{full conditional model} for $Y_1$ and $Y_2$ given $X$
\begin{equation}
\label{eq:gen}
P(\by1,\by2|\omega,{\mathbf X})=\prod_{i=1}^n f_1(y_{1i}|\omega_1,x_i)f_2(y_{2i}|\omega_2,x_i)c_{\th(x_i)}\left(F_1(y_{1i}|\omega_1,x_i),F_2(y_{2i}|\omega_2,x_i)\right),
\end{equation}
where $f_j$, $F_j$ are the density and, respectively, the CDF for $Y_j$, while $\omega_j$ denotes all the latent variables and parameters associated with the $j$th marginal distribution, for $j=1,2$. The copula density function is denoted by $c$ and it depends on $X$ through the unknown function $\th(X)=g^{-1}(\eta(X))$. Note that the expression above is very general, with no assumptions about the marginal distributions. Assuming that all parameters $\omega$ can be estimated, the copula family can be selected using several model selection criteria \citep[e.g.,][]{sabeti2014additive, levi2018bayesian}. Once the copula family is selected, the objective is to check whether the SA is valid, in other words whether the full model in \eqref{eq:gen} reduces to the \textit{reduced model}
\begin{equation}
P(\by1,\by2|\omega,{\mathbf X})=\prod_{i=1}^n f_1(y_{1i}|\omega_1,x_i)f_2(y_{2i}|\omega_2,x_i)c_{\th}\left(F_1(y_{1i}|\omega_1,x_i),F_2(y_{2i}|\omega_2,x_i)\right).
\label{cc-sa}
\end{equation}
Note that in \eqref{cc-sa} the copula depends only on one scalar parameter, $\th$. \\
If flexible models such as Gaussian Processes are implemented within the Bayesian paradigm, then the characteristics of the posterior distribution will be estimated using draws $\{{\omega^{(t)}}\}_{t=1}^M$ obtained by running a Markov chain Monte Carlo (MCMC) algorithm (e.g., \cite{sabeti2014additive, levi2018bayesian}) to sample the posterior. Data support for the full and reduced models, \eqref{eq:gen} and \eqref{cc-sa}, may be established using several criteria. We briefly review two options that distinguish between models based on predictive power.\\
\subsubsection*{The Cross-Validated Pseudo Marginal Likelihood and Its Conditional Variant}
The cross-validated pseudo marginal likelihood (CVML) \citep{geisser1979predictive} calculates the average (over parameter values) prediction power for model $\mathcal M$ via
\begin{equation}
\mbox{CVML}(\mathcal M)=\sum_{i=1}^{n}\log\left(P(y_{1i},y_{2i}|\mathcal D_{-i},\mathcal M)\right),
\label{cvml-def}
\end{equation}
where $\mathcal D_{-i}$ is the data set from which the $i$th observation has been removed.
For a given model, \eqref{cvml-def} is estimated using the posterior draws ${\omega^{(t)}}$ given the whole data set ${\mathcal D}$ \citep[detailed derivations can be found in][]{levi2018bayesian} via
\begin{equation}
\mbox{CVML}_{est}(\mathcal M)=-\sum_{i=1}^{n}\log\left(\frac{1}{M}
\sum_{t=1}^{M}P(y_{1i},y_{2i}|\mathbf \omega^{(t)},\mathcal M)^{-1}\right).
\end{equation}
The model with the largest CVML is preferred.
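As an illustration, the following is a minimal numerical sketch of this estimator (this is not code from the references; the array \texttt{loglik}, holding $\log P(y_{1i},y_{2i}|\omega^{(t)},\mathcal M)$ for each draw $t$ and observation $i$, is a hypothetical input):
\begin{verbatim}
import numpy as np

def cvml_estimate(loglik):
    # loglik: (M, n) array with loglik[t, i] = log P(y_{1i}, y_{2i} | omega^(t), M)
    M = loglik.shape[0]
    # log[(1/M) sum_t P(...)^{-1}] computed stably via log-sum-exp of -loglik
    neg = -loglik
    m = neg.max(axis=0)
    log_inv_pred = m + np.log(np.exp(neg - m).sum(axis=0)) - np.log(M)
    return -log_inv_pred.sum()   # larger is better
\end{verbatim}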
The conditional CVML (CCVML), introduced by \cite{levi2018bayesian} specifically for selection of copula models, considers conditional rather than joint predictions
\begin{equation}
\mbox{CCVML}(\mathcal M)=\frac{1}{2}\left\{ \sum_{i=1}^{n}\log\left[P(y_{1i}|y_{2i},\mathcal D_{-i},\mathcal M)\right] + \sum_{i=1}^{n}\log\left[P(y_{2i}|y_{1i},\mathcal D_{-i},\mathcal M)\right] \right\}.
\end{equation}
Again this criterion can be estimated from posterior samples using
\begin{eqnarray}
\mbox{CCVML}_{est}(\mathcal M)&=&-\frac{1}{2}\sum_{i=1}^{n}\left\{\log\left[\frac{1}{M}
\sum_{t=1}^{M}\frac{P(y_{2i}|\mathbf \omega^{(t)},\mathcal M)}{P(y_{1i},y_{2i}|\mathbf \omega^{(t)},\mathcal M)}\right] \right. \nonumber \\
&+&\left. \log\left[\frac{1}{M}
\sum_{t=1}^{M}\frac{P(y_{1i}|\mathbf \omega^{(t)},\mathcal M)}{P(y_{1i},y_{2i}|\mathbf \omega^{(t)},\mathcal M)}\right] \right\}.
\end{eqnarray}
Similar to CVML, the model with the largest CCVML is selected.
\subsubsection*{Watanabe-Akaike Information Criterion}
The Watanabe-Akaike Information Criterion \citep{watanabe2010asymptotic} is an information-based criterion that is closely related to CVML, as discussed in \cite{vehtari2017practical}. The WAIC is defined as
\begin{equation}
\mbox{WAIC}(\mathcal M)=-2\mbox{fit}(\mathcal M) + 2\mbox{p}(\mathcal M) ,
\label{waic}
\end{equation}
where the model fitness is
\begin{equation}
\mbox{fit}(\mathcal M)=\sum_{i=1}^{n}\log E\left[P(y_{1i},y_{2i}|\mathbf \omega,\mathcal M)\right]
\label{eq:fitness}
\end{equation}
and the penalty
\begin{equation}
\mbox{p}(\mathcal M)=\sum_{i=1}^{n}\mbox{Var}[ \log P(y_{1i},y_{2i}|\mathbf \omega,\mathcal M)].
\label{eq:penalty}
\end{equation}
The expectation in \eqref{eq:fitness} and the variance in \eqref{eq:penalty} are with respect to the conditional distribution of $\omega$ given the data and can easily be estimated using the ${\omega^{(t)}}$ draws.
The model with the smallest WAIC measure is preferred.
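A corresponding sketch for WAIC, from the same hypothetical \texttt{loglik} array of posterior log-densities:
\begin{verbatim}
import numpy as np

def waic(loglik):
    # loglik: (M, n) array with loglik[t, i] = log P(y_{1i}, y_{2i} | omega^(t), M)
    M = loglik.shape[0]
    # fitness: sum_i log E[P(y_i | omega)], expectation over posterior draws
    m = loglik.max(axis=0)
    fit = (m + np.log(np.exp(loglik - m).sum(axis=0)) - np.log(M)).sum()
    # penalty: sum_i Var[log P(y_i | omega)] over posterior draws
    penalty = loglik.var(axis=0, ddof=1).sum()
    return -2.0 * fit + 2.0 * penalty   # smaller is better
\end{verbatim}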
\section{Detecting Data Support for SA}
\label{sec:meth}
As will be shown in Section~\ref{sec:sim}, the criteria described above have unsatisfactory performance when the reduced model is the generative one. Instead, we propose to exploit properties that are invariant under the group of permutations when SA indeed holds.
In the first stage we randomly divide the data ${\mathcal D}$ into training and test sets, ${\mathcal D}_1$ and ${\mathcal D}_2$, with sample sizes $n_1$ and $n_2$, respectively. The full model defined by \eqref{eq:gen} is fitted on ${\mathcal D}_1$, and we denote by ${\omega^{(t)}}$ the $t$-th draw sampled from the posterior. For the $i$th item in ${\mathcal D}_2$, we compute point estimates $\hat \eta_i$ and $\hat U_i=(\hat U_{1i},\hat U_{2i})$, where $\hat U_{ji} = F_j(y_{ji}|\hat\omega_j,x_i)$, $j=1,2$, $i=1,\ldots,n_2$. The marginal parameter estimates, $\hat\omega_j$, are obtained from the training data posterior draws. For instance, if the marginal models are $Y_{1i}\sim {\mathcal N}(f_1(x_i),\sigma_1^2)$ and $Y_{2i}\sim {\mathcal N}(f_2(x_i),\sigma_2^2)$, then each MCMC sample ${\omega^{(t)}}$ leads to estimates $\hat f_1^t(x_i),\hat f_2^t(x_i),\hat \sigma^t_1,\hat \sigma^t_2,\hat \eta^t(x_i)$.
Then $\hat U_i=(\hat U_{1i},\hat U_{2i})$ is obtained using $$(\hat U_{1i},\hat U_{2i})=(\Phi((y_{1i}-\overline{\hat f_1(x_i)})/\overline{\hat \sigma_1}),\Phi((y_{2i}-\overline{\hat f_2(x_i)})/\overline{\hat \sigma_2})),$$
where the overline $\overline{a}$ signifies the average of the Monte Carlo draws $a^t$.
Given the vector of calibration function evaluations at the test points, $\hat \eta=(\hat \eta_1,\ldots,\hat \eta_{n_2})$, and a partition $\min(\hat \eta)=a_1<\ldots <a_{K+1}=\max(\hat \eta)$ of the range of $\eta$ into $K$ disjoint intervals, define the set of observations in ${\mathcal D}_2$ that yield calibration function values between $a_k$ and $a_{k+1}$, $B_k=\{i:a_k \leq \hat \eta_i < a_{k+1}\}$, for $k=1,\ldots,K$. We choose the partition such that each ``bin'' $B_k$ contains approximately the same number of elements, $n_2/K$.
\begin{table}[h]
\begin{enumerate}
\item[{\bf A1}] Compute the $k$th bin-specific Spearman's rho $\hat \rho_k$ from $\{\hat U_i:i \in B_k\}$, $k=1,\ldots,K$.
\item[{\bf A2}] Compute the observed statistic $T^{obs}=\max_k(\hat\rho_k)-\min_k(\hat\rho_k)$. Note that if
SA holds, we expect the observed statistic to be close to zero.
\item[{\bf A3}] Consider $J$ permutations $\lambda_j:\{1,\ldots,n_2\}\rightarrow\{1,\ldots,n_2\}$. For each permutation $\lambda_j$:
\begin{enumerate}
\item[A3.1] Compute $\hat \rho_{jk} = \rho(\{\hat U_i:\lambda_j(i) \in B_k\})$ $k=1,\ldots,K$.
\item[A3.2] Compute test statistic $T_j=\max_k(\hat\rho_{jk})-\min_k(\hat\rho_{jk})$. Note if SA holds, then we expect $T_j$ to be close to $T^{obs}$.
\end{enumerate}
\item[{\bf A4}] We consider that there is support in favour of SA at significance level $\alpha$ if $T^{obs}$ is smaller than the $(1-\alpha)$-th empirical quantile calculated from the sample $\{T_j: 1\le j \le J\}$.
\end{enumerate}
\caption{\it Method 1: A permutation-based procedure for assessing data support in favour of SA}\label{tab:met1}
\end{table}
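A minimal sketch of the permutation procedure of Table~\ref{tab:met1}, assuming point estimates \texttt{hat\_U} (an $n_2\times 2$ array) and \texttt{hat\_eta} (length $n_2$) are available from the fit on ${\mathcal D}_1$; the equal-frequency binning and the availability of scipy are assumptions of this sketch:
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def method1_pvalue(hat_U, hat_eta, K=3, J=500, seed=0):
    rng = np.random.default_rng(seed)
    bins = np.array_split(np.argsort(hat_eta), K)   # equal-frequency bins

    def T(perm):
        # bin-specific Spearman's rho after relabelling observations by perm
        rhos = [spearmanr(hat_U[perm[b], 0], hat_U[perm[b], 1])[0] for b in bins]
        return max(rhos) - min(rhos)

    n2 = len(hat_eta)
    T_obs = T(np.arange(n2))
    T_perm = np.array([T(rng.permutation(n2)) for _ in range(J)])
    return (T_perm >= T_obs).mean()   # data support SA if the p-value is large
\end{verbatim}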
Under SA, the bin-specific estimates for various measures of dependence, e.g. Kendall's $\tau$ or Spearman's $\rho$, computed from the samples $\hat U_i$, are invariant to permutations, or swaps across bins. Based on this observation, we consider the procedure described in Table~\ref{tab:met1} for identifying data support for SA. The distribution of the resulting test statistic obtained in Method 1 is determined empirically, via permutations. Alternatively, one can rely on the asymptotic properties of the bin-specific dependence parameter estimator and construct a Chi-square test. Specifically, suppose the bin-specific Pearson correlations $\hat \rho_k$ are computed from the samples
$\{\hat U_i:i \in B_k\}$, for all $k=1,\ldots,K$. Let $\hat \rho = (\hat \rho_1,\ldots,\hat \rho_K)^T$, and let $\tilde n = n_2/K$ be the number of points in each bin. It is known that $\hat\rho_k$ is asymptotically normally distributed for each $k$, so that $$\sqrt{\tilde n}(\hat \rho_k-\rho_k) \overset{d}{\rightarrow} {\mathcal N}(0,(1-\rho_k^2)^2),$$ where $\rho_k$ is the true correlation in bin $k$. If we assume that $\{\hat \rho_k:\; k=1,\ldots,K\}$ are independent, and set $\rho = (\rho_1,\ldots,\rho_K)^T$ and $\Sigma=diag((1-\rho_1^2)^2,\ldots,(1-\rho_K^2)^2)$, then we have:
$$\sqrt{\tilde n}(\hat \rho - \rho)\overset{d}{\rightarrow} {\mathcal N}(0,\Sigma)$$
In order to combine evidence across bins, we define the matrix $A \in {\bf R}^{(K-1)\times K}$ as
$$A = \begin{bmatrix} 1 & -1 & 0 & \cdots & 0 \\
0 & 1 & -1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -1 \end{bmatrix}$$
Since under the null hypothesis SA holds, one gets $\rho_1= \ldots =\rho_K$, implying
$$\tilde n (A\hat\rho)^T (A\Sigma A^t)^{-1} (A\hat \rho) \overset{d}{\rightarrow} \chi^2_{K-1}.$$ Method 2, with its steps detailed in Table \ref{tb:met2}, relies on the ideas above to test SA.
\begin{table}
\begin{enumerate}
\item[{\bf B1}] Compute the bin-specific Pearson correlation $\hat \rho_k$ from the samples
$\{\hat U_i:i \in B_k\}$, for all $k=1,\ldots,K$. Let $\hat \rho = (\hat \rho_1,\ldots,\hat \rho_K)^T$, and $\tilde n = n_2/K$, the number of points in each bin.
\item[{\bf B2}]
Define $\rho = (\rho_1,\ldots,\rho_K)^T$, $\Sigma=diag((1-\rho_1^2)^2,\ldots,(1-\rho_K^2)^2)$ and $A \in {\bf R}^{(K-1)\times K}$ be equal to
$$A = \begin{bmatrix} 1 & -1 & 0 & \cdots & 0 \\
0 & 1 & -1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -1 \end{bmatrix}.$$
Compute $T^{obs} = \tilde n (A\hat\rho)^T (A\hat \Sigma A^T)^{-1} (A\hat \rho)$, where $\hat \Sigma$ denotes $\Sigma$ evaluated at $\hat \rho$.
\item[{\bf B3}] Compute p-value = $P(\chi^2_{K-1} > T^{obs})$ and reject SA if p-value$<\alpha.$
\end{enumerate}
\caption{\it Method 2: A Chi-square test for assessing data support in favour of SA}\label{tb:met2}
\end{table}
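Similarly, a sketch of the $\chi^2$ test of Table~\ref{tb:met2}, with the plug-in $\hat\Sigma$ and the same hypothetical inputs as in the sketch above:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def method2_pvalue(hat_U, hat_eta, K=3):
    bins = np.array_split(np.argsort(hat_eta), K)
    n_tilde = len(hat_eta) // K
    rho = np.array([np.corrcoef(hat_U[b, 0], hat_U[b, 1])[0, 1] for b in bins])
    Sigma = np.diag((1.0 - rho ** 2) ** 2)          # plug-in estimate of Sigma
    A = np.eye(K - 1, K) - np.eye(K - 1, K, k=1)    # successive-difference matrix
    d = A @ rho
    T_obs = n_tilde * d @ np.linalg.solve(A @ Sigma @ A.T, d)
    return chi2.sf(T_obs, df=K - 1)                 # reject SA if p-value < alpha
\end{verbatim}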
Method 1 evaluates the p-value using a randomization procedure \citep{lehmann2006testing}, while the second is based on the asymptotic normal theory of Pearson correlations. To get reliable results it is essential to assign test observations to the ``correct'' bins, which happens when the calibration predictions are as close as possible to the true unknown values, i.e. $\hat \eta(x_i)\approx \eta(x_i)$. The latter heavily depends on the estimation procedure and the sample size of the training set. Therefore it is advisable to apply very flexible methodologies for the calibration function estimation and to have enough data points in the training set. We immediately see a tradeoff: as more observations are assigned to ${\mathcal D}_1$, the calibration predictions on the test set improve, at the expense of decreasing power due to a smaller sample size in ${\mathcal D}_2$. For our simulations we have used $n_1\approx 0.65n$ and $n_2\approx 0.35n$, and $K\in\{2, 3\}$.\\
\section{Simulations}
\label{sec:sim}
In this section we present the performance of the proposed methods and comparisons with generic CVML and WAIC criteria on simulated data sets. Different functional forms of calibration function, sample sizes and magnitude of deviation from SA will be explored.
\subsubsection*{Simulation details}
We generate samples of sizes $n=500$ and $n=1000$ from the scenarios described below. For all scenarios the Clayton copula \citep{embrechts2001modelling} is used to model the dependence between responses, and covariates are independently sampled from $\mathcal U[0,1]$. For scenarios 1 to 3 the covariate dimension is $q=2$; in the remaining ones $q=5$. The marginal conditional distributions $Y_1|X$ and $Y_2|X$ are modeled as Gaussian with constant variances $\sigma_1^2,\sigma_2^2$ and conditional means $f_1(X),f_2(X)$, respectively. All these parameters are generally not known in advance and must be estimated jointly with the calibration function $\eta(X)$. For convenience we parametrize the calibration by Kendall's tau $\tau(X)$ \citep{embrechts2001modelling}, which has a one-to-one correspondence with $\eta(X)$ but takes values in $[-1,1]$.
\begin{enumerate}
\item[\textbf{Sc1}] $f_1(X)=0.6\sin(5x_1)-0.9\sin(2x_2)$,\\
$f_2(X)=0.6\sin(3x_1+5x_2)$,\\
$\tau(X)=0.5$,$\sigma_1=\sigma_2=0.2$
\item[\textbf{Sc2}] $f_1(X)=0.6\sin(5x_1)-0.9\sin(2x_2)$,\\
$f_2(X)=0.6\sin(3x_1+5x_2)$,\\
$\tau(X)=0.5+\beta\sin(10X^T\alpha)$,\\
$\alpha=(1,3)^T/\sqrt{10}$, $\sigma_1=\sigma_2=0.2$
\item[\textbf{Sc3}] $f_1(X)=0.6\sin(5x_1)-0.9\sin(2x_2)$,\\
$f_2(X)=0.6\sin(3x_1+5x_2)$,\\
$\tau(X)=0.5 + 2\beta(x_1+\cos(6x_2)-0.45)/3$\\
$\sigma_1=\sigma_2=0.2$
\end{enumerate}
\textbf{Sc1} corresponds to SA since Kendall's tau is independent of the covariate level. In \textbf{Sc2} the calibration function has a single-index form, while in \textbf{Sc3} it has an additive structure on the $\tau$ scale (generally not additive on the $\eta$ scale); these scenarios are useful to evaluate performance under model misspecification. The tau in \textbf{Sc2} and \textbf{Sc3} also depends on the amplitude parameter $\beta$, which in this study is set to $\beta=0.25$.
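For reference, data from these scenarios can be generated with a conditional-inversion sampler for the Clayton copula; below is a sketch for \textbf{Sc1} (the mapping $\theta=2\tau/(1-\tau)$ between Kendall's tau and the Clayton parameter is the standard one, and all variable names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def clayton_pair(tau, rng):
    theta = 2.0 * tau / (1.0 - tau)   # Clayton parameter from Kendall's tau
    u1, v = rng.uniform(size=2)
    # invert the conditional CDF C(u2 | u1) of the Clayton copula
    u2 = ((v ** (-theta / (1.0 + theta)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)
    return u1, u2

rng = np.random.default_rng(1)
x = rng.uniform(size=2)               # q = 2 covariates
u1, u2 = clayton_pair(0.5, rng)       # Sc1: constant tau = 0.5
f1 = 0.6 * np.sin(5 * x[0]) - 0.9 * np.sin(2 * x[1])
f2 = 0.6 * np.sin(3 * x[0] + 5 * x[1])
# map uniforms to responses through the Gaussian margins of Sc1
y1, y2 = f1 + 0.2 * norm.ppf(u1), f2 + 0.2 * norm.ppf(u2)
\end{verbatim}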
\subsubsection*{Simulation results}
For each sample size and scenario we have repeated the analysis using 250 independently replicated data sets. For each data set, the GP-SIM model suggested by \cite{levi2018bayesian} is fitted. This method places sparse Gaussian Process (GP) priors on the marginal conditional means and a sparse GP-Single Index prior on the calibration function. The inference is based on 5000 MCMC samples for all scenarios: the chains were run for 10000 iterations, with the first 5000 samples discarded as burn-in. The number of inducing inputs was set to 30 for all GPs. For generic SA testing, GP-SIM is fitted on the whole data set and the posterior draws are used to estimate CVML and WAIC. Since the proposed methods require splitting the data set into training and test sets, we first randomly divide each data set, with proportions 65\% for training and 35\% for testing, then fit GP-SIM on the training set and use the posterior draws for point estimates of $F_1(y_{1i}|x_i)$, $F_2(y_{2i}|x_i)$ and $\eta(x_i)$ for every observation in the test set. In Method 1 we used 500 permutations. Table~\ref{table:gen} shows the percentage of SA rejections for $\alpha=0.05$.
\begin{table} [!ht]
\begin{center}
\caption{Simulation results: proportion of rejections of SA for each scenario, sample size and generic criterion.}
\smallskip
\scalebox{1.2}{
\begin{tabular}{l| l l l || l l l }
\multicolumn{1}{c}{ } & \multicolumn{3}{c}{$\mathbf{N=500}$} &\multicolumn{3}{c}{$\mathbf{N=1000}$} \\
\hline
Scenario & CVML & CCVML & WAIC & CVML & CCVML & WAIC \\
\hline
\textbf{Sc1} & $43.5\%$ & $44.0\%$ & $43.5\%$ & $32.9\%$ & $26.2\%$ & $33.9\%$ \\
\textbf{Sc2} & $100\%$ & $100\%$ & $100\%$ & $100\%$ & $100\%$ & $100\%$ \\
\textbf{Sc3} & $100\%$ & $100\%$ & $100\%$ & $100\%$ & $99.6\%$ & $100\%$ \\
\hline
\end{tabular}
}
\label{table:gen}
\end{center}
\end{table}
The presented results clearly illustrate that the generic methods have very high type I error probabilities. This leads to a loss of statistical efficiency, since a complex model is selected over a much simpler one. In addition, the SA may be of interest in itself in certain applications, e.g. stock exchange modelling, where it is useful to determine whether the dependence structure between different stock prices depends on other factors.
\begin{table} [!ht]
\begin{center}
\caption{Simulation results for the proposed methods: proportion of rejections of SA for each scenario, sample size, number of bins ($K$) and method.}
\smallskip
\scalebox{1.1}{
\begin{tabular}{l| l l l l || l l l l }
\multicolumn{1}{l}{} & \multicolumn{4}{c}{Permutation test} & \multicolumn{4}{c}{$\chi^2$ test} \\
\hline
&\multicolumn{2}{c}{$\mathbf{N=500}$} & \multicolumn{2}{c||}{$\mathbf{N=1000}$} & \multicolumn{2}{c}{$\mathbf{N=500}$} & \multicolumn{2}{c}{$\mathbf{N=1000}$}\\
\hline
Scenario & $K=2$ & $K=3$ & $K=2$ & $K=3$ & $K=2$ & $K=3$ & $K=2$ & $K=3$ \\
\hline
\textbf{Sc1} & $6.7\%$ & $6.2\%$ & $4.9\%$ & $4.0\%$ & $8.9\%$ & $9.7\%$ & $8.0\%$ & $8.4\%$ \\
\textbf{Sc2}($\beta=0.25$) & $97.3\%$ & $96.4\%$ & $100\%$ & $100\%$ & $100\%$ & $99.1\%$ & $100\%$ & $100\%$ \\
\textbf{Sc3}($\beta=0.25$) & $56.4\%$ & $49.3\%$ & $88.9\%$ & $88.4\%$ & $69.3\%$ & $68.4\%$ & $96.8\%$ & $96.9\%$ \\
\hline
\end{tabular}
}
\label{table:bin}
\end{center}
\end{table}
The simulations summarized in Table~\ref{table:bin} show that the proposed methods have a much smaller probability of Type I error, which varies around the nominal threshold of 0.05. It must be pointed out, however, that under SA the performance of the $\chi^2$ test worsens with the number of bins $K$, which is not surprising since, as $K$ increases, the number of observations in each bin goes down and the normal approximation for the distribution of the Pearson correlation becomes tenuous. The performance of both methods improves with the sample size. We also notice a loss of power between Scenarios 2 and 3, which is due to model misspecification, since in the latter case the generative model is different from the postulated one.
\section{Theoretical Discussion}
\label{sec:theory}
In this section we prove that under canonical assumptions, the probability of type I error for Method 2 in Section 3 converges to $\alpha$ when SA is true.
Suppose we have independent samples from $K$ populations (groups), $(u_{1i}^1,u_{2i}^1)_{i=1}^{n_1}\sim (U_1^1,U_2^1)$,\ldots,$(u_{1i}^K,u_{2i}^K)_{i=1}^{n_K}\sim (U_1^K,U_2^K)$; the goal is to test $\rho_1=\ldots=\rho_K$. To simplify notation, we assume $n_1=\ldots=n_K=n$. Letting $\hat \rho = (\hat \rho_1,\ldots,\hat \rho_K)$ be the vector of sample correlations, $\Sigma=diag((1-\rho_1^2)^2,\ldots,(1-\rho_K^2)^2)$ and the $(K-1)\times K$ matrix $A$ as defined in Section~\ref{sec:meth}, canonical asymptotic results imply that, as $n \rightarrow \infty$,
\begin{equation}
\label{eq:chisq-test}
T = n(A\hat\rho)^T(A\Sigma A^T)^{-1}(A\hat \rho) \overset{d}{\rightarrow} \chi^2_{K-1}.
\end{equation}
Based on the model fitted on ${\mathcal D}_1$, we define estimates of $F_1(y_{1i}|x_i)$ and $F_2(y_{2i}|x_i)$ by $\hat U=\{\hat U_i=(\hat F_1(y_{1i}|x_i),\hat F_2(y_{2i}|x_i))\}_{i=1}^{n_2}$. Note that $\hat U$ depends on ${\mathcal D}_1$ and $X$. Given a fixed number of bins $K$ and assuming, without loss of generality, equal sample sizes in each bin $\tilde n = n_2/K$, the first step is to assign $\hat U_i$ to bins by the values of $\hat \eta(x_i)$. Introduce a permutation $\lambda^*:\{1,\ldots,n_2\}\rightarrow \{1,\ldots,n_2\}$ that ``sorts'' $\hat U$ from the smallest $\hat \eta(x)$ value to the largest, i.e. $\hat U_{\lambda^*}=\{\hat U_{\lambda^*(i)}\}_{i=1}^{n_2}$ with $\hat \eta(x_{\lambda^*(1)})< \hat \eta(x_{\lambda^*(2)}) <\cdots < \hat \eta(x_{\lambda^*(n_2)})$. Finally, define the test function $\phi()$ with specified significance level $\alpha$ to test SA:
\begin{equation}
\phi(\hat U|{\mathcal D}_1,X,\lambda^*)=\begin{cases} 1 {\hspace{2ex}} \mbox{ if } {\hspace{2ex}} T(\hat U_{\lambda^*})>\chi^2_{K-1}(1-\alpha) \\
0 {\hspace{2ex}} \mbox{ if } {\hspace{2ex}} T(\hat U_{\lambda^*})\leq\chi^2_{K-1}(1-\alpha). \end{cases}
\end{equation}
Here the test statistic $T(U)$ is as in \eqref{eq:chisq-test}, with $\hat \rho_1 = \rho(U_1,\ldots,U_{\tilde n})$, $\hat \rho_2 = \rho(U_{\tilde n +1},\ldots,U_{2\tilde n})$, \ldots, $\hat \rho_K = \rho(U_{(K-1)\tilde n+1},\ldots,U_{K\tilde n})$. Intuitively, if SA is false then we would expect $T(\hat U_{\lambda^*})$ to be larger than the critical value $\chi^2_{K-1}(1-\alpha)$. \\
The goal is to show that this procedure has probability of type I error equal to $\alpha$, which is given by the expectation of the test function:
\begin {equation}
\mbox{P(Type I error)} = \int \phi(\hat U|{\mathcal D}_1,X,\lambda^*)P(\lambda^*|{\mathcal D}_1,X)P(\hat U|{\mathcal D}_1,X)P({\mathcal D}_1)P(X)d\hat U d{\mathcal D}_1 dX d \lambda^*.
\end{equation}
Note that $\lambda^*$ does not depend on $\hat U$ because of the data splitting into training and test sets. Also, usually $P(\lambda^*|{\mathcal D}_1,X)$ is just a point mass at some particular permutation. In general the above integral cannot be evaluated; however, suppose we assume that for all test cases:
\begin{equation}
\begin{split}
&\hat F_1(y_{1i}|x_i) \overset{p}{\rightarrow} F_1(y_{1i}|x_i) {\hspace{2ex}} \mbox{as} {\hspace{2ex}} n\to \infty , \\
&\hat F_2(y_{2i}|x_i) \overset{p}{\rightarrow} F_2(y_{2i}|x_i) {\hspace{2ex}} \mbox{as} {\hspace{2ex}} n\to \infty .
\end{split}
\end{equation}
Then, under SA and as $n\to \infty$, $P(\hat U|{\mathcal D}_1,X)\approx \prod_{i=1}^{n_2}c(\hat u_{1i},\hat u_{2i})$, where $c(\cdot,\cdot)$ is the copula density, and the expectation becomes:
\begin {equation}
\begin{split}
\mbox{P(Type I error)} =& \int \phi(\hat U|\lambda^*)P(\lambda^*|{\mathcal D}_1,X)P(\hat U)P({\mathcal D}_1)P(X)d\hat U d{\mathcal D}_1 dX,d \lambda^* = \\
&\int \left(\int \phi(\hat U|\lambda^*)P(\hat U)d\hat U \right)P(\lambda^*|{\mathcal D}_1,X)P({\mathcal D}_1)P(X)d{\mathcal D}_1 dX d \lambda^*=\alpha.
\end{split}
\end{equation}
Therefore, if the marginal CDF predictions for the test cases are consistent, then this procedure attains the required probability of type I error for a sufficiently large sample size.
\section{Conclusion}
\label{sec:conc}
In this paper we propose two methods to check data support for the simplifying assumption in conditional bivariate copula problems. The approach is based on splitting the whole data set into training and test sets, partitioning the test set into bins using predicted calibration values, and finally using a randomization or $\chi^2$ test to check whether the dependence in each bin is the same or not. We showed, theoretically and empirically, that under SA the probability of Type I error is controlled, while generic methods fail to provide reliable results. In addition to conditional copulas, we also indicated how this idea can be generalized to a variety of related problems. There is still some uncertainty about what proportion of the data should be assigned to the training set and what proportion to the test set. It was also assumed that the sample sizes in each ``bin'' are the same; however, in some problems power can be increased by varying the bin sizes. These problems will be investigated further.
\bibliographystyle{ims}
\section{\textbf{Supplemental Materials for ``Integer Quantum Magnon Hall Plateau-Plateau Transition in
a Spin Ice Model''}}
\section{two-dimensional spin ice model under strong out-of-plane field}
The magnetic system considered consists of two inequivalent ferromagnetic `islands' of the order
of 100 nm in size~\cite{s-wang06}; one is centered on an $x$-link of a two-dimensional square lattice and the other
on a $y$-link (Fig.~\ref{s-fig1}). The ferromagnetic island on the $x$/$y$-link is spatially elongated along the
$x$/$y$-direction, respectively. Thus, its magnetic moment prefers to point along the
$\pm x/y$-direction, respectively, due to the magnetic shape anisotropy.
We model these two ferromagnetic islands as two inequivalent spins on the
$x/y$-link with large magnetic moment $S$ (${\bm S}_{i\in A}$/${\bm S}_{i\in B}$ respectively).
The shape anisotropy is included as an effective single-ion spin-anisotropy energy
such as $-D (S^x_{i\in A})^2$ and $-D (S^y_{i\in B})^2$.
Ferromagnetic moments are coupled with one another via the magnetic dipole-dipole interaction~\cite{s-wang06},
so do spins in the spin model. Based on the Holstein-Primakoff mapping, the corresponding quadratic
magnon Hamiltonian is derived as in Eq.~(2) (in the main text).
For simplicity, we include only the dipole couplings between nearest neighbor spins
(denoted by $J$: see Fig.~\ref{s-fig1}) and between next-nearest neighbor spins
(denoted by $J^{\prime}_{A,1}=J^{\prime}_{B,2}=J^{\prime}_1$ or
$J^{\prime}_{A,2}=J^{\prime}_{B,1}=J^{\prime}_2$: see Fig.~\ref{s-fig1}).
The ratio among these three is chosen to be consistent
with the $1/r^3$ dipolar coupling strength (see below).
\begin{figure}[t]
\centering
\includegraphics[width=0.40\textwidth]{sfig1.eps}
\caption{(color online) Square-lattice spin ice model with two inequivalent ferromagnetic spins;
${\bm S}_{{\bm i}\in A}$ and ${\bm S}_{{\bm i}\in B}$ are on the center of the $x$-link and $y$-link
of the square lattice respectively. The nearest neighbor dipole coupling strength is denoted by $J$,
the next-nearest neighbor dipole coupling strengths are denoted by $J^{\prime}_{1}$ and
$J^{\prime}_2 (<J^{\prime}_1)$. $e_1$ and $e_2$ are the primitive lattice vectors of the
square lattice. $\delta_{1}$ and $\delta_2$ connect the nearest neighbor spins; $\delta_1\equiv \frac{e_1+e_2}{2}$
and $\delta_2\equiv \frac{e_1-e_2}{2}$.}
\label{s-fig1}
\end{figure}
Without randomness (${\bm d}_j\equiv {\bm h}_{j}\equiv 0$), the quadratic Hamiltonian given in Eq.~(2) in the main text can be
Fourier-transformed,
\begin{align}
{\bm H}_b &= \frac{1}{2}\sum_{\bm k} \left(\begin{array}{cccc}
b^{\dagger}_{{\bm k},A} & b^{\dagger}_{{\bm k},B} & b_{{-{\bm k}},A} & b_{-{\bm k},B} \\
\end{array}\right) \cdot {\bm H}_{\rm BdG}({\bm k}) \cdot
\left(\begin{array}{c}
b_{{\bm k},A} \\
b_{{\bm k},B} \\
b^{\dagger}_{{-{\bm k}},A} \\
b^{\dagger}_{-{\bm k},B} \\
\end{array}\right), \nn \\
{\bm H}_{\rm BdG}({\bm k}) &= a_0({\bm k}) {\bm \gamma}_0 + a_2({\bm k}) {\bm \gamma}_2
+ a_{14}({\bm k}) {\bm \gamma}_{14} + a_{23}({\bm k}) {\bm \gamma}_{23} +
a_{45}({\bm k}) {\bm \gamma}_{45}, \label{bdg-k}
\end{align}
with
\begin{align}
{\bm \gamma}_{j} \equiv {\bm \sigma}_{j} \otimes {\bm \tau}_1, \
{\bm \gamma}_4 \equiv {\bm \sigma}_0 \otimes {\bm \tau}_2, \
{\bm \gamma}_5 \equiv {\bm \sigma}_0 \otimes {\bm \tau}_3, \
{\bm \gamma}_{\mu\nu} \equiv i {\bm \gamma}_{\mu}{\bm \gamma}_{\nu},
\end{align}
where $j=1,2,3$ and $\mu,\nu=1,\cdots,5$. The 2 by 2 Pauli matrices ${\bm \sigma}_{j}$
act on the particle-hole space, while the 2 by 2 Pauli matrices ${\bm \tau}_{j}$
act on the A and B sublattice space. The coefficients in Eq.~(\ref{bdg-k}) are
\begin{align}
a_{0}({\bm k}) &= H_Z - DS - 4JS - J^{\prime} S (\cos k_x + \cos k_y) - 4J^{\prime} S, \label{eq1} \\
a_{2}({\bm k}) &= -6JS \sin \frac{k_x}{2} \sin \frac{k_y}{2}, \ \ a_{14}({\bm k}) = DS, \label{eq2} \\
a_{23}({\bm k}) & = 3 J^{\prime} S (\cos k_x - \cos k_y), \ \
a_{45}({\bm k}) = 2JS \cos \frac{k_x}{2} \cos \frac{k_y}{2}, \label{eq3}
\end{align}
where we put $J^{\prime}_1=J^{\prime}_2 \equiv J^{\prime}$
for simplicity. The 4 by 4 matrix is diagonalized at every ${\bm k}$
by a paraunitary transformation~\cite{s-colpa78}
\begin{eqnarray}
{\bm T}({\bm k})
= e^{-i\theta_1 {\bm \gamma}_3} e^{i\frac{\pi}{4} {\bm \gamma}_4} e^{\mu_1 {\bm \gamma}_1}
e^{-i\theta_2 {\bm \gamma}_{3}} \left(\begin{array}{cccc}
\cosh \mu_2 & 0 & -i \sinh \mu_2 & 0 \\
0 & \cosh \mu_3 & 0 & - i\sinh \mu_3 \\
i\sinh \mu_2 & 0 & \cosh \mu_2 & 0 \\
0 & i\sinh \mu_3 & 0 & \cosh \mu_3 \\
\end{array}\right),
\end{eqnarray}
where $\theta_1$, $\theta_2$, $\mu_1$, $\mu_2$ and $\mu_3$ are defined by
$a_0$, $a_{2}$, $a_{14}$, $a_{23}$ and $a_{45}$ as follows;
\begin{align}
\sin 2\theta_1 &= \frac{a_{23}}{\sqrt{a^2_{23} + a^2_2}}, \ \ \cos 2\theta_1 = \frac{a_2}{\sqrt{a^2_{23} + a^2_2}},
\label{eq4} \\
\sin 2\theta_2 &= \frac{\sqrt{a^2_{23}+a^2_2}\frac{a_{14}}{\sqrt{a^2_0 - a^2_{14}}}}{\sqrt{a^2_{45} + \Big(\sqrt{a^2_{23}+a^2_{2}}\frac{a_{14}}{\sqrt{a^2_0 - a^2_{14}}}\Big)^2 }}, \ \ \cos 2\theta_2 =\frac{a_{45}}{\sqrt{a^2_{45} + \Big(\sqrt{a^2_{23}+a^2_{2}}\frac{a_{14}}{\sqrt{a^2_0 - a^2_{14}}}\Big)^2 }}
, \label{eq5} \\
\sinh 2 \mu_1 & = \frac{a_{14}}{\sqrt{a^2_0 - a^2_{14}}}, \ \ \cosh 2\mu_1 = \frac{a_{0}}{\sqrt{a^2_0 - a^2_{14}}},
\label{eq6} \\
\sinh 2\mu_2 &= \frac{b_{24}}{\sqrt{(b_0+b_5)^2-b^2_{24}}}, \ \
\cosh 2\mu_2 = \frac{b_{0}+b_5}{\sqrt{(b_0+b_5)^2-b^2_{24}}}, \label{eq7} \\
\sinh 2\mu_3 &= \frac{-b_{24}}{\sqrt{(b_0-b_5)^2-b^2_{24}}}, \ \
\cosh 2\mu_3 = \frac{b_{0}-b_5}{\sqrt{(b_0-b_5)^2-b^2_{24}}}, \label{eq8}
\end{align}
with
\begin{eqnarray}
b_0 \equiv \sqrt{a^2_0 - a^2_{14}}, \
b_{24} \equiv \sqrt{a^2_{23} + a^2_2} \frac{a_{0}}{\sqrt{a^2_0 - a^2_{14}}}, \
b_{5} \equiv \sqrt{a^2_{45} + \Big(\sqrt{a^2_{23} + a^2_2} \frac{a_{14}}{\sqrt{a^2_0 - a^2_{14}}}\Big)^2}. \label{eq9}
\end{eqnarray}
In terms of the transformation, the BdG Hamiltonian is paraunitary equivalent to a diagonal matrix,
\begin{eqnarray}
{\bm T}^{\dagger}({\bm k})\cdot {\bm H}_{\rm BdG}({\bm k}) \cdot {\bm T}({\bm k}) = \left(\begin{array}{cccc}
E_{+}({\bm k}) & & & \\
& E_{-}({\bm k}) & & \\
& & E_{+}(-{\bm k}) & \\
& & & E_{-}(-{\bm k}) \\
\end{array}\right), \ \ {\bm T}^{\dagger}({\bm k}) \cdot {\bm \sigma}_3 \otimes {\bm \tau}_0
\cdot {\bm T}({\bm k}) = {\bm \sigma}_3 \otimes {\bm \tau}_0, \nn
\end{eqnarray}
where the two magnon energy bands are even functions in ${\bm k}$;
\begin{eqnarray}
E_{\pm} ({\bm k}) \equiv \sqrt{\big(b_0({\bm k}) \pm b_5({\bm k})\big)^2 - b^2_{24}({\bm k})}. \label{energy}
\end{eqnarray}
Note that $H_Z$ is sufficiently large that the lower magnon band $E_{-}({\bm k})$ is fully gapped;
$E_{-}({\bm k})>0$ for all ${\bm k}$. This also guarantees that
$a^2_{0}-a^2_{14}>0$ and $b_0 > b_5$. In the absence of the next-nearest neighbor
dipolar interaction, $J^{\prime}=0$ ($a_{23}=0$), the two bands form a band touching with a massless
Dirac dispersion at ${\bm k}=(0,\pi)$ and $(\pi,0)$, where $a_{45}=a_{2}=b_5=0$. The Dirac dispersions
acquire a finite mass for non-zero next-nearest neighbor dipolar interaction, and the sign of the mass
is determined by that of $J^{\prime}$. Due to this mass acquisition, the upper magnon band ($E_{+}({\bm k})$)
and the lower magnon band ($E_{-}({\bm k})$) carry Chern integers $\pm 1$ respectively
for $J^{\prime} > 0$. When $J^{\prime}$ changes its sign from positive to negative,
these two integers change into $\mp 1$ respectively.
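As a quick numerical cross-check of these statements, one can evaluate $E_{\pm}({\bm k})$ directly from the coefficients in Eqs.~(\ref{eq1})-(\ref{eq3}) and Eq.~(\ref{energy}); the following sketch (parameter values as in Fig.~\ref{s-fig2}(a)) prints the Dirac mass gap at ${\bm k}=(0,\pi)$, which closes as $J^{\prime}\to 0$:
\begin{verbatim}
import numpy as np

JS, DS, HZ, JpS = 1.0, 2.2, 15.8, 0.35    # values of Fig. S2(a)

def bands(kx, ky):
    a0  = HZ - DS - 4*JS - JpS*(np.cos(kx) + np.cos(ky)) - 4*JpS
    a2  = -6*JS*np.sin(kx/2)*np.sin(ky/2)
    a14 = DS
    a23 = 3*JpS*(np.cos(kx) - np.cos(ky))
    a45 = 2*JS*np.cos(kx/2)*np.cos(ky/2)
    b0  = np.sqrt(a0**2 - a14**2)
    b24 = np.sqrt(a23**2 + a2**2) * a0 / b0
    b5  = np.sqrt(a45**2 + (a23**2 + a2**2) * a14**2 / (a0**2 - a14**2))
    # E_pm(k) = sqrt((b0 +/- b5)^2 - b24^2)
    return np.sqrt((b0 + b5)**2 - b24**2), np.sqrt((b0 - b5)**2 - b24**2)

Ep, Em = bands(0.0, np.pi)
print(Ep - Em)   # Dirac mass gap at k = (0, pi), proportional to J'
\end{verbatim}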
To calculate the Chern integer for the upper magnon band
directly~\cite{s-thouless82,s-kohmoto85,s-shindou13a,s-engelhardt15,s-furukawa15,s-xu16},
we look into the first column of the paraunitary matrix ${\bm T}({\bm k})$, which is nothing but the periodic part
of the Bloch wavefunction for the upper magnon band,
\begin{eqnarray}
{\bm T}({\bm k}) = \left(\begin{array}{cccc}
\frac{u_{1,+}({\bm k})}{N_+} &\cdots & \cdots & \cdots \\
\frac{u_{2,+}({\bm k})}{N_+} & \cdots & \cdots &\cdots \\
\frac{v_{1,+}({\bm k})}{N_+} & \cdots & \cdots & \cdots \\
\frac{v_{2,+}({\bm k})}{N_+} & \cdots & \cdots & \cdots \\
\end{array}\right). \label{T}
\end{eqnarray}
where $u_{1,+}$ and $u_{2,+}$ ($v_{1,+}$ and $v_{2,+}$) are connected with each other
by the $C_4$ rotation ($(k_x,k_y) \rightarrow
(-k_y,k_x)$, which exchanges the A and B sublattices). $u_{j,+}$ and $v_{j,+}$ are connected with each other
by a particle-hole transformation, which is generic in any quadratic boson Hamiltonian;
${\bm \sigma}_1 \cdot {\bm H}_{\rm BdG}({\bm k}) \cdot
{\bm \sigma}_1 = {\bm H}^{*}_{\rm BdG}(-{\bm k})$. $u_{1,+}({\bm k})$ is given by
\begin{align}
u_{1,+}({\bm k}) &= \cos (\theta_1 + \theta_2)
\cosh \mu_1 \cosh \mu_2 + \sin (\theta_1 - \theta_2)
\sinh \mu_1 \sinh \mu_2 \nn \\
& \hspace{2cm} + i \big( \sin (\theta_1 - \theta_2)
\cosh \mu_1 \cosh \mu_2 + \cos (\theta_1 + \theta_2)
\sinh \mu_1 \sinh \mu_2\big). \label{u1}
\end{align}
while the other three are obtained from this by the $C_4$ rotation or by the
particle-hole transformation. ${\bm T}({\bm k})$ is given by a proper normalization; $N^2_+ \equiv
u^2_{1,+} + u^2_{2,+} -v^2_{1,+} - v^2_{2,+}$. By using Eqs~.(\ref{eq1}-\ref{eq9}), one can see that
$u_{1,+}$ (and also $v_{1,+}$) has a zero only at ${\bm k}=(0,\pi)$, while $u_{2,+}$ and $v_{2,+}$ have a zero at
${\bm k}=(\pi,0)$. Thus, we expand $u_{1,+}({\bm k})$ with respect to small
${\bm q}$ with ${\bm k}\equiv (0,\pi)+{\bm q}$;
\begin{eqnarray}
u_{1,+}((0,\pi)+{\bm q}) = {\bm X}\cdot {\bm q} + i {\bm Y}\cdot {\bm q} + {\cal O}(q^2),
\end{eqnarray}
where ${\bm X}\equiv (-a,-b)$, ${\bm Y} \equiv (a,-b)$. $a$ and $b$ are calculated as follows,
\begin{align}
a &= \frac{d\theta_1}{d k_x}|_{{\bm k}=(0,\pi)} \!\
\Big(\cosh \mu_1 \cosh \mu_2
- \sinh \mu_1 \sinh \mu_2\Big) = \frac{J}{4J^{\prime}} \!\ \Big(\cosh \mu_1 \cosh \mu_2
- \sinh \mu_1 \sinh \mu_2\Big) > 0 \nn \\
b &= \frac{d\theta_2}{d k_y}|_{{\bm k}=(0,\pi)} \!\
\Big(\cosh \mu_1 \cosh \mu_2
+ \sinh \mu_1 \sinh \mu_2\Big) = \frac{JS \sqrt{a^2_0-a^2_{14}}}{3\sqrt{2}J^{\prime}S DS}
\Big(\cosh \mu_1 \cosh \mu_2
+ \sinh \mu_1 \sinh \mu_2\Big)> 0. \nn
\end{align}
for $J^{\prime}>0$ and $D>0$.
Note that ${\bm X}\times{\bm Y}=2ab>0$. Thus, a phase of $u_{1,+}({\bm k})$, i.e. $\theta_{1,+}({\bm k})$
with $u_{1,+}\equiv e^{i\theta_{1,+}}|u_{1,+}|$, acquires $+2\pi$ phase holonomy,
whenever ${\bm k}$ rotates once around $(0,\pi)$ anti-clockwise. This dictates that
the Chern integer for the upper band is $+1$ (that for the lower band is $-1$
due to the sum rule~\cite{s-shindou13a}).
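The winding argument above can also be verified numerically with a lattice (Fukui-Hatsugai-Suzuki-type) evaluation of the Chern integer, adapted to the bosonic BdG problem by using the paraunitary metric ${\bm \sigma}_3\otimes{\bm \tau}_0$ in all inner products. The following is a sketch, not a definitive implementation: the function \texttt{Hbdg}, assumed to return the 4$\times$4 matrix ${\bm H}_{\rm BdG}({\bm k})$ of Eq.~(\ref{bdg-k}), is a hypothetical input.
\begin{verbatim}
import numpy as np

S3 = np.diag([1.0, 1.0, -1.0, -1.0])   # sigma_3 (x) tau_0, the paraunitary metric

def upper_vec(H):
    # eigenvectors of S3 H; particle bands carry positive paraunitary norm
    w, V = np.linalg.eig(S3 @ H)
    idx = [i for i in range(4)
           if w[i].real > 0 and (V[:, i].conj() @ S3 @ V[:, i]).real > 0]
    i_up = max(idx, key=lambda i: w[i].real)       # upper magnon band
    v = V[:, i_up]
    return v / np.sqrt((v.conj() @ S3 @ v).real)   # normalize: psi^dag S3 psi = +1

def chern_upper(Hbdg, N=60):
    ks = 2*np.pi*np.arange(N)/N
    U = [[upper_vec(Hbdg(kx, ky)) for ky in ks] for kx in ks]
    F = 0.0
    for i in range(N):
        for j in range(N):
            a, b = U[i][j], U[(i+1) % N][j]
            c, d = U[(i+1) % N][(j+1) % N], U[i][(j+1) % N]
            link = (a.conj() @ S3 @ b) * (b.conj() @ S3 @ c) \
                 * (c.conj() @ S3 @ d) * (d.conj() @ S3 @ a)
            F += np.angle(link)                    # gauge-invariant plaquette flux
    return F / (2*np.pi)                           # -> +1 for J' > 0 in this model
\end{verbatim}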
The non-zero topological integers for these two magnon bands result
in a topological chiral magnon edge mode within the band
gap~\cite{s-halperin82,s-hatsugai93,s-shindou13a,s-engelhardt15,s-furukawa15,s-xu16}.
The sign of the integer dictates that the mode has a chiral
dispersion with anti-clockwise rotation when
viewed from the $+z$ direction, when the out-of-plane field is along the $+z$ direction (a ``right-handed''
chiral mode).
So far, we have assumed that $J^{\prime}_1=J^{\prime}_2=J^{\prime}$. For
$J^{\prime}_2=0.8 J^{\prime}_1$, we confirmed numerically that the same
band gap with the chiral edge mode persists (Fig.~\ref{s-fig2}).
\begin{figure}[t]
\centering
\includegraphics[width=0.90\textwidth]{sfig2.eps}
\caption{(color online) Energy band dispersions of magnon states as a function of the
wave vector $k$, calculated with open/periodic boundary conditions along the $y$/$x$-direction.
Black colored dispersions are for bulk modes, and red colored dispersions are
for the chiral edge modes; those at $k<0$ are localized at $y=M (>0)$ and those at $k>0$ are localized
at $y=0$. (a) $J^{\prime}_1S=J^{\prime}_{2}S=0.35$, $H_Z=15.8$, $DS=2.2$, $JS=1.0$,
(b) $J^{\prime}_1S=0.35, J^{\prime}_{2}S=0.28$, $H_Z=15.8$, $DS=2.2$, $JS=1.0$.}
\label{s-fig2}
\end{figure}
\section{A phase boundary between the quantum magnon Hall regime and
conventional magnon localized regime}
The phase diagram in the main text (Fig.~3) comprises the quantum magnon
Hall regime and the conventional magnon localized regime. The boundary between
these two regions is identified as a scale-invariant point of the two-terminal
conductance calculated with the periodic boundary condition, $G_p$. Fig.~\ref{s-fig3}
shows $G_p$ as a function of the single-particle
(magnon) energy $E$ for several $W$. We thereby found two such scale-invariant
points (one at $E={\cal E}_1(W)$, specified by a black dotted line in Fig.~\ref{s-fig3},
and the other at $E={\cal E}_2(W)$, by a red dotted line). For ${\cal E}_1(W)<E<{\cal E}_{2}(W)$,
an ``edge conductance'' characterized by $G_o-G_p$ has
a tendency to take the quantized value ($1/h$) in the thermodynamic limit
($G_o$ is the conductance along the $x$-direction
with the open boundary condition along the $y$-direction).
For $E<{\cal E}_1(W)$ or $E>{\cal E}_2(W)$, the edge conductance goes to zero.
From these observations, we regard the former region as the quantum magnon Hall regime
and the latter as the conventional magnon localized regime.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth]{sfig3.eps}
\caption{(color online) Two-terminal conductance as a function of the magnon state energy $E$ for
several $W$ ((a) $W=1.3$ and (b) $W=2.1$), calculated for an $L\times L$ system.
Solid lines with different colors denote the conductance along the $x$-direction with
periodic boundary condition along the $y$-direction $G_p$; $L=20$ (black), $L=30$ (red), $L=40$ (blue).
Broken lines with different colors denote $G_o-G_p$, where $G_o$ is the conductance with open
boundary condition along the $y$-direction. The other parameters are the same as those given
in the caption of Figs.~1 and 4 in the main text. Two direct transition points are identified
as scale-invariant points of $G_p$; a red colored dotted line for $E={\cal E}_2(W)$ and a black colored
dotted line for $E={\cal E}_1(W)$. Note that, for ${\cal E}_1(W)<E<{\cal E}_2(W)$, $G_o-G_p$ has a
tendency to take the quantized value ($1/h$) in the thermodynamic limit. }
\label{s-fig3}
\end{figure}
\section{microwave antennas experiment}
The two-terminal magnon conductance calculated in the main text can be measured with a
standard microwave setup commonly used for spin wave experiments~\cite{s-serga10}.
The experiment consists of two microstrip microwave antennas attached to
the two-dimensional square-lattice spin ice system (Fig.~\ref{s-fig4}).
The two antennas are spatially separated from each other by less than a spin coherence length,
over which a spin wave propagates without energy dissipation. Note that the spin
coherence length in ferromagnetic insulators such as YIG can be over millimeters,
while it is at most on the order of several micrometers in ferromagnetic metals.
The role of the first antenna is spin wave excitation and
that of the second antenna is its detection. An a.c. electric current with a
frequency in the microwave regime (let us call this the `external frequency') is introduced
in the first antenna (`input signal'). The current locally excites a spin wave with the same
external frequency. The spin wave propagates through the magnonic crystal system, and,
after a certain time, the spin wave reaches the second antenna, where an a.c. electric
current is induced (`output signal').
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{sfig4.eps}
\caption{A schematic picture for microwave antennas experiment. Two blue-colored
bars denote the coplanar waveguide. The gray-colored plate denotes the two-dimensional
patterned ferromagnetic film (square-lattice spin ice).}
\label{s-fig4}
\end{figure}
The two-terminal magnon conductance studied in the main text
corresponds to a transmission ratio between the
output electric current and the input current. The ratio can be
obtained as a function of the external frequency within the microwave regime~\cite{s-serga10}.
When the frequency and the disorder
strength are chosen inside the quantum magnon Hall
regime, the transmission ratio is finite (see Fig.~3 of the main text; the
frequency corresponds to the energy of magnon states in the figure).
In particular, the ratio is dominated by chiral edge spin wave transport
when the distance between the two antennas is longer than the localization length.
Namely, the bulk spin wave excited by the first antenna dies off quickly, before it
reaches the second antenna, due to its finite localization length. Meanwhile,
the chiral edge spin wave
excited by the first antenna travels along the edge without being backscattered.
When the frequency and the disorder are in the conventional magnon localized regime,
the transmission ratio drops dramatically. The
ratio becomes exponentially small if the localization length is much
shorter than the distance between the two antennas. Accordingly, the quantum phase
transition from the quantum magnon Hall regime to the conventional magnon localized
regime can be experimentally measured through the dramatic reduction of the
transmission ratio as a function of either the external frequency or the disorder
strength.
Note that the distance between the two antennas must be shorter
than the finite spin coherence length (Fig.~\ref{s-fig4}).
The finite distance between the two antennas
may result in a blurred change of the transmission ratio at
the phase transition point. Nonetheless, a spin ice model made out of
a ferromagnetic insulator such as YIG allows a very large distance between the two
antennas, e.g. 8 mm in YIG~\cite{s-serga10}. Since a typical localization length would be at most on the
order of the micrometer scale~\cite{s-evers15}, the very large spin coherence
length in YIG may even enable us to study the critical properties of the
quantum phase transition.
\section{thermal magnon Hall conductivity in generic disordered quantum magnon Hall systems and
its relation to the thermal magnon Hall conductivity in the clean limit}
In the main text, we have studied only a model with two magnon bands.
A realistic material may have more than two magnon bands with non-zero
quantized Chern integers. Our study,
as well as the established knowledge on the interplay between localization effects and
quantum Hall physics~\cite{s-prange}, suggests that even small disorder localizes all
these bulk magnon bands except for delocalized bulk states at the
respective band centers (Fig.~\ref{s-fig5}(a,b)). A pair of two delocalized bulk states
bounds a mobility gap, inside which a topological chiral edge mode lives (Fig.~\ref{s-fig5}(b)).
For the two-band model studied in the main text,
the bulk delocalized states at $E={\cal E}_{2}$ and $E={\cal E}_1$
encompass the mobility gap, inside which the chiral edge mode lives.
As in Fig.~3 of the main text, the edge mode disappears when the pair of
delocalized bulk states falls into the same energy.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{sfig5.eps}
\caption{Schematic picture of bulk magnon bands and chiral edge modes in the
clean limit (a,c) and with disorder (b,d,e,f). The Chern integers for each magnon band
are shown in (a), namely $-1,+1,+1,+1,-2$ (from below). A right/left-handed chiral edge
mode is depicted by a red line with an up/down-headed arrow, respectively.
Gray shaded regions denote extended bulk states, while the white regions
in the bands correspond to mobility gaps. (c,d) The $n$-th band with
${\rm Ch}(n) = l$, $\sum^{n-1}_{j=1}{\rm Ch}(j)=-m$ and $l>m>0$.
(e) The $n$-th band with ${\rm Ch}(n) = l$, $\sum^{n-1}_{j=1}{\rm Ch}(j)=m$ and
$l>0$, $m>0$, in the presence of disorder.}
\label{s-fig5}
\end{figure}
For the generic situation described above, we can employ the same argument as
in the main text, to derive an edge contribution to the thermal Hall conductivity,
\begin{eqnarray}
\kappa_{xy} = - \frac{k^2_B T}{h} \sum_{j} \Big[C_2\big(g({\cal E}^{+}_j,T)\big) -
C_2\big(g({\cal E}^{-}_j,T)\big)\Big]. \label{eq11}
\end{eqnarray}
Here the summation is taken over chiral edge modes; the integer $j$ counts
chiral edge modes at finite frequency. ${\cal E}^{+}_j$ and ${\cal E}^{-}_j$ stand for
a pair of two energies by which the $j$-th chiral edge mode is bounded (Fig.~\ref{s-fig5}(b)).
We define ${\cal E}^{+}_j > {\cal E}^{-}_j$ when the $j$-th chiral edge mode is
right-handed, while ${\cal E}^{-}_j > {\cal E}^{+}_j$ when the mode is left-handed.
$C_2(x) \equiv \int^{x}_0 dt \ln \big((1+t)/t\big)^2$ and $g(E,T)$ is the
Bose distribution function.
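A numerical sketch of this edge formula, Eq.~(\ref{eq11}), is given below; the mobility-edge pairs are hypothetical inputs, and units with $k_B=h=1$ are chosen for illustration (the endpoint singularity of the $C_2$ integrand at $t=0$ is integrable):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def C2(x):
    # C2(x) = int_0^x log((1+t)/t)^2 dt, with C2(0) = 0 and C2(inf) = pi^2/3
    if x <= 0.0:
        return 0.0
    return quad(lambda t: np.log((1.0 + t)/t)**2, 0.0, x, limit=200)[0]

def kappa_xy(edge_pairs, T, kB=1.0, h=1.0):
    # edge_pairs: list of (E_plus_j, E_minus_j) bounding each chiral edge mode
    g = lambda E: 1.0/np.expm1(E/(kB*T))       # Bose distribution function
    s = sum(C2(g(Ep)) - C2(g(Em)) for Ep, Em in edge_pairs)
    return -(kB**2)*T/h*s

# one right-handed edge mode bounded by mobility edges E- = 1, E+ = 2:
print(kappa_xy([(2.0, 1.0)], T=100.0))   # approaches (E+ - E-) = 1 at large T
\end{verbatim}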
The above expression is qualitatively consistent with the thermal magnon Hall
conductivity in the clean limit, which was previously obtained based on the linear response
theory~\cite{s-matsumoto11a,s-matsumoto11b,s-matsumoto14,s-qin11,s-qin12};
\begin{eqnarray}
\kappa_{xy} = - \frac{k^2_B T}{\hbar} \sum_{n}\int \frac{d^2{\bm k}}{(2\pi)^2}
\Omega_{n}({\bm k}) \!\ \Big[C_2\big(g({\cal E}_n({\bm k}),T)\big) - \frac{\pi^2}{3} \Big].
\label{eq12}
\end{eqnarray}
Here ${\cal E}_n({\bm k})$ and $\Omega_{n}({\bm k})$ stand for
the $n$-th magnon energy band and $n$-th band Berry's curvature, respectively.
${\bm k}\equiv (k_x,k_y)$ denotes the two-dimensional crystal momentum. The Chern integer
is defined for each band as an integral of the curvature in the first Brillouin zone (B.Z.);
\begin{eqnarray}
{\rm Ch}(n) \equiv \int_{\rm B.Z.}\!\ \Omega_{n}({\bm k}) \!\ \frac{d^2{\bm k}}{2\pi}. \nonumber
\end{eqnarray}
When all bulk magnon bands are
fully gapped: ${\cal E}_{n}({\bm k}) > 0$ for all ${\bm k}$ and for all $n$,
the sum of the Chern integers over the bands is zero~\cite{s-shindou13a};
$\sum_{n} {\rm Ch}(n) = 0.$ Thereby, Eq.~(\ref{eq12}) reduces to
\begin{eqnarray}
\kappa_{xy} = -\frac{k^2_B T}{h} \sum_{n}\int \frac{d^2{\bm k}}{2\pi}
\!\
\Omega_{n}({\bm k}) \!\ C_2\big(g\big({\cal E}_n({\bm k}),T\big)\big).
\label{eq13}
\end{eqnarray}
Eq.~(\ref{eq13}) becomes identical to Eq.~(\ref{eq11}) when
the energy band width of each bulk magnon band
is much smaller than $k_B T$.
In this limit, ${\cal E}_{n}({\bm k})$ on the right hand side of Eq.~(\ref{eq13})
can be replaced by the band center energy $\overline{{\cal E}_{n}}$;
\begin{align}
\lim_{\Delta \ll k_BT} \kappa_{xy} &= -
\frac{k^2_B T}{h} \sum_{n}\int \frac{d^2{\bm k}}{2\pi}
\!\
\Omega_{n}({\bm k}) \!\ C_2\big(g\big(\overline{{\cal E}_n},T\big)\big)
= - \frac{k^2_B T}{h} \sum_{n} {\rm Ch}(n) \!\ C_2\big(g\big(\overline{{\cal E}_n},T\big)\big).
\label{eq14}
\end{align}
The equivalence between Eq.~(\ref{eq11}) and Eq.~(\ref{eq14}) can be
seen with the help of the bulk-edge correspondence~\cite{s-halperin82,s-hatsugai93,s-shindou13a}.
For example, consider (i) the $n$-th bulk magnon band whose band
center energy is $\overline{{\cal E}_n}$ and
${\rm Ch}(n) = l$ and $\sum^{n-1}_{j=1}{\rm Ch}(j)=-m$ with $l>m>0$.
The bulk-edge correspondence dictates that
$m$ pieces of right-handed chiral edge modes enter into the bulk band
from below and $(l-m)$ pieces of left-handed chiral edge modes enter into
the band from above (Fig.~\ref{s-fig5}(c)). In the presence of small disorders,
all the bulk states in the $n$-th band are localized except for the delocalized
states at the band center $\overline{{\cal E}_n}$. Thereby, the delocalized bulk states
at the band center terminate all the chiral edge modes;
$\overline{{\cal E}_n}$ bounds the $m$ pieces of right-handed
chiral edge modes from above and the $(l-m)$ pieces of left-handed chiral edge
modes from below (Fig.~\ref{s-fig5}(d)).
Let us consider another example: (ii) the $n$-th band with
${\rm Ch}(n) = l$, $\sum^{n-1}_{j=1}{\rm Ch}(j)=m$ and $l>0$, $m>0$.
In this case, the correspondence tells that $m$ pieces of left-handed
chiral edge modes pass by the band center energy $\overline{{\cal E}_n}$,
while $l$ pieces of left-handed chiral modes are terminated by the delocalized
bulk states at $\overline{{\cal E}_n}$; $\overline{{\cal E}_n}$ bounds the latter
$l$ pieces of modes from below (Fig.~\ref{s-fig5}(e)). By considering the
other cases as well and putting them together, we can readily rewrite
Eq.~(\ref{eq14}) into Eq.~(\ref{eq11}) in the small band width limit.
Using $C_2(g(E=0+))=\frac{\pi^2}{3}$, we can further
generalize the argument so far to a case with completely flat zero-energy
bands, ${\cal E}_{n}({\bm k})=0$ for all ${\bm k}$ and for some $n$, giving
a consistency between Eq.~(\ref{eq11}) and Eq.~(\ref{eq12}) too.
The thermal Hall conductivity can be used to
confirm the presence/absence of topological chiral edge modes in finite frequency
regimes. For example, the thermal magnon Hall conductivity in the high temperature
limit goes to a {\it non-zero} constant value ! Moreover, the value is given by
a sum of those mobility gaps which bound topological chiral edge modes;
\begin{eqnarray}
\lim_{T\rightarrow \infty}\kappa_{xy} =
\frac{k_B}{h} \sum_{j} \big({\cal E}^{+}_{j}-{\cal E}^{-}_j\big). \label{eq17}
\end{eqnarray}
Here the summation is over the edge modes;
${\cal E}^{+}_{j}$ and ${\cal E}^{-}_j$ bound the $j$-th chiral edge mode as a pair.
Note that ${\cal E}^{+}_j>{\cal E}^{-}_j$ for a right-handed chiral edge mode
and ${\cal E}^{+}_j<{\cal E}^{-}_j$ for a left-handed mode; a right-/left-handed
chiral mode contributes a positive/negative thermal Hall conductivity,
respectively. Nonetheless, there is no `topological' reason which requires the
sum in Eq.~(\ref{eq17}) to be zero. The non-zero $\kappa_{xy}$ in the high
temperature limit is quite unconventional. This feature clearly
distinguishes the presence of chiral edge modes from their absence
in actual experimental systems.
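As a quick numerical reading of Eq.~(\ref{eq17}) (the mobility-gap
energies below are made up purely for illustration, in units where
$k_B/h=1$):
\begin{verbatim}
# each chiral edge mode j contributes E_j^+ - E_j^-: positive for a
# right-handed mode, negative for a left-handed one
edge_bounds = [(0.2, 0.9), (1.4, 1.1)]   # illustrative (E^-, E^+) pairs
print(sum(ep - em for em, ep in edge_bounds))   # 0.7 - 0.3 = 0.4 != 0
\end{verbatim}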
\section{Introduction}
The sequence prediction problem is the task of predicting the next symbol, $x_n$ after observing $x_1 x_2 \cdots x_{n-1}$.
Solomonoff induction \cite{Sol64a,Sol64b} solves this problem by taking inspiration from Occam's razor and Epicurus' principle of multiple
explanations. These ideas are formalised in the field of Kolmogorov complexity, in particular by the universal a priori semi-measure $\mathbf M$.
Let $\mu(x_n|x_1\cdots x_{n-1})$ be the true (unknown) probability of seeing $x_n$ having already observed $x_1\cdots x_{n-1}$.
The celebrated result of Solomonoff \cite{Sol64a} states that if $\mu$ is computable then
\eqn{\label{eqn-sol}
\lim_{n\to\infty} \left[\mathbf M(x_n|x_1\cdots x_{n-1}) - \mu(x_n | x_1\cdots x_{n-1})\right] = 0 \text{ with } \mu\text{-probability } 1
}
That is, $\mathbf M$
can learn the true underlying distribution from which the data is sampled with probability 1. Solomonoff induction is arguably the gold
standard predictor, universally solving many (passive) prediction problems \cite{Hut04,Hut07, Sol64a}.
However, Solomonoff induction makes no guarantees if $\mu$ is not computable. This would not be problematic if it were unreasonable to
predict sequences sampled from incomputable $\mu$, but this is not the case. Consider the sequence below, where every
even bit is the same as the preceding odd bit, but where the odd bits may be chosen arbitrarily.
\eqn{
\label{eqn-intro-1} \text{00 11 11 11 00 11 00 00 00 11 11 00 00 00 00 00 11 11}
}
Any child will quickly learn the pattern that each even bit is the same as the preceding odd bit and will
correctly predict the even bits. If Solomonoff induction is to be considered a truly intelligent predictor then it too should be
able to predict the even bits. More generally, it should be able to detect any computable sub-pattern.
It is this question, first posed in \cite{Hut04,Hut09open} and which has resisted attempts by experts for 6 years, that we address.
At first sight, this appears to be an esoteric question, but consider the following problem. Suppose you are given a sequence of pairs,
$x_1 y_1 x_2 y_2 x_3 y_3\cdots$
where $x_i$ is the data for an image (or feature vector) of a character and $y_i$ the corresponding ascii code (class label) for that character.
The goal of online classification is to construct a predictor
that correctly predicts $y_i$ given $x_i$ based on the previously seen training pairs. It is reasonable to assume that there is a relatively
simple pattern to generate $y_i$ given $x_i$ (humans and computers seem to find simple patterns for character recognition). However, it is not necessarily reasonable to assume there exists a simple, or even computable,
underlying distribution generating the training data $x_i$. This problem is precisely what gave rise to discriminative learning \cite{LR06}.
It turns out that there exist sequences with even bits equal to preceding odd bits on which the conditional distribution of $\mathbf M$
fails to converge to $1$ on the even bits.
On the other hand, it is known that $\mathbf M$ is a defective measure, but may be normalised to a proper measure, $\mathbf M_{norm}$.
We show that this normalised version {\it does} converge on any recursive sub-pattern
of any sequence, such as that in Equation (\ref{eqn-intro-1}). This outcome is unanticipated since (all?) other results in the field
are independent of normalisation \cite{Hut04,Hut07,LV08, Sol64a}. The proofs are completely different to the standard proofs of
predictive results.
\section{Notation and Definitions}
We use similar notation to \cite{Gacs83,Gacs08,Hut04}. For a more comprehensive introduction to Kolmogorov complexity and
Solomonoff induction see \cite{Hut04,Hut07, LV08, ZL70}.
\subsubsect{Strings}
A finite binary string $x$ is a finite sequence $x_1x_2x_3\cdots x_n$ with $x_i \in \mathcal B = \left\{0, 1\right\}$. Its length is denoted $\ell(x)$.
An infinite binary string $\omega$ is an infinite sequence $\omega_1\omega_2\omega_3\cdots$.
The empty string of length zero is denoted $\epsilon$.
$\mathcal B^n$ is the set of all binary strings of length $n$. $\mathcal B^*$ is the set of all finite binary strings. $\mathcal B^\infty$ is the
set of all infinite binary strings.
Substrings are denoted $x_{s:t} := x_s x_{s+1}\cdots x_{t-1}x_{t}$ where $s,t \in \N$ and $s \leq t$. If $s > t$ then $x_{s:t} = \epsilon$.
A useful shorthand is $x_{<t} := x_{1:t-1}$.
Strings may be concatenated.
Let $x,y \in \mathcal B^*$ of length $n$ and $m$ respectively. Let $\omega \in \mathcal B^\infty$. Then,
\eq{
xy &:= x_1x_2\cdots x_{n-1}x_{n}y_1y_2\cdots y_{m-1}y_m \\
x\omega &:= x_1x_2\cdots x_{n-1}x_{n}\omega_1\omega_2\omega_3\cdots
}
For $b \in \mathcal B$, $\neg b = 0$ if $b = 1$ and $\neg b = 1$ if $b = 0$.
We write
$x \sqsubseteq y$ if $x$ is a prefix of $y$. Formally, $x \sqsubseteq y$ if $\ell(x) \leq \ell(y)$ and $x_i = y_i$ for all $1 \leq i \leq \ell(x)$.
$x \sqsubset y$ if $x \sqsubseteq y$ and $\ell(x) < \ell(y)$.
\subsubsect{Complexity}
Here we give a brief introduction to Kolmogorov complexity and the associated notation.
\begin{definition}[Inequalities]
Let $f, g$ be real valued functions. We write
$f(x) \stackrel{\times}\geq g(x)$
if there exists a constant $c > 0$ such that $f(x) \geq c \cdot g(x)$ for all $x$. $f(x) \stackrel{\times}\leq g(x)$ is defined similarly.
$f(x) \stackrel{\times}= g(x)$ if $f(x) \stackrel{\times}\leq g(x)$ and $f(x) \stackrel{\times}\geq g(x)$.
\end{definition}
\begin{definition}[Measures]
We call $\mu:\mathcal B^* \to [0,1]$ a {\it semimeasure} if $\mu(x) \geq \sum_{b\in\mathcal B} \mu(xb)$ for all $x \in \mathcal B^*$, and a probability measure if equality
holds and $\mu(\epsilon) = 1$. $\mu(x)$ is the $\mu$-probability that a sequence starts with $x$. $\mu(b | x) := {\mu(xb) \over \mu(x)}$ is the
probability of observing $b \in \mathcal B$ given that $x \in \mathcal B^*$ has already been observed.
A function $P:\mathcal B^* \to [0,1]$ is a {\it semi-distribution} if $\sum_{x\in\mathcal B^*} P(x) \leq 1$ and a probability distribution if equality holds.
\end{definition}
\begin{definition}[Enumerable Functions]
A real valued function $f:A \to \R$ is {\it enumerable} if there
exists a computable function $f:A\times\N \to \mathbb Q$ satisfying
$\lim_{t\to\infty} f(a, t) = f(a)$ and $f(a, t+1) \geq f(a, t)$ for all $a \in A$ and $t \in \N$.
\end{definition}
\begin{definition}[Machines]
A Turing machine $L$ is a recursively enumerable set (which may be finite) containing pairs of finite binary strings $(p^1, y^1), (p^2, y^2), (p^3, y^3),\cdots$.
$L$ is a {\it prefix machine} if the set $\left\{p^1, p^2,p^3\cdots\right\}$ is prefix free (no program is a prefix of any other).
It is a {\it monotone machine} if for all $(p, y), (q, x) \in L$ with $\ell(x) \geq \ell(y)$, $p \sqsubseteq q \implies y \sqsubseteq x$.
We define $L(p)$ to be the set of strings output by program $p$. This is different for monotone and prefix machines.
For prefix machines, $L(p)$ contains only one element, $y \in L(p)$ if $(p, y) \in L$. For monotone machines, $y \in L(p)$ if there
exists $(p, x) \in L$ with $y \sqsubseteq x$ and there does not exist a $(q, z) \in L$ with $q \sqsubset p$ and $y \sqsubseteq z$.
For both machines $L(p)$ represents the output of machine $L$ when given input $p$.
If $L(p)$ does not exist then we say $L$ does not halt on input $p$.
Note that for monotone machines it is possible for the same program to output multiple strings. For example $(1, 1), (1, 11), (1, 111), (1, 1111), \cdots$
is a perfectly legitimate monotone Turing machine. For prefix machines this is not possible. Also note that if $L$ is a monotone machine
and there exists an $x \in \mathcal B^*$ such that $x_{1:n} \in L(p)$ and $x_{1:m} \in L(p)$ then $x_{1:r} \in L(p)$ for all $n \leq r \leq m$.
\end{definition}
\begin{definition}[Complexity] Let $L$ be a prefix or monotone machine then define
\eq{
\lambda_L(y) &:= \sum_{p : y \in L(p)} 2^{-\ell(p)} & C_L(y) &:= \min_{p \in \mathcal B^*} \left\{ \ell(p) : y \in L(p) \right\}
}
If $L$ is a prefix machine then we write $\mathbf m_L(y) \equiv \lambda_L(y)$. If $L$ is a monotone machine then we write $\mathbf M_L(y) \equiv \lambda_L(y)$.
Note that if $L$ is a prefix machine then $\lambda_L$ is an enumerable semi-distribution while if $L$ is a monotone machine, $\lambda_L$ is
an enumerable semi-measure. In fact, every enumerable semi-measure (or semi-distribution) can be represented via some machine $L$ as $\lambda_L$.
\end{definition}
For prefix/monotone machine $L$ we write $L_t$ for the first $t$ program/output pairs in the recursive
enumeration of $L$, so $L_t$ will be a finite set containing at most $t$ pairs.\footnote{$L_t$ will contain exactly $t$ pairs unless $L$ is finite,
in which case it will contain $t$ pairs until $t$ is greater than the size of $L$. This annoyance will never be problematic.}
The set of all monotone (or prefix) machines is itself recursively enumerable \cite{LV08},\footnote{Note the
enumeration may include repetition, but this is unimportant in this case.} which allows one to define a
universal monotone machine ${U_M}$
as follows. Let $L^i$ be the $i$th monotone machine in the recursive enumeration of monotone machines.
\eq{
(i'p, y) \in {U_M} \Leftrightarrow (p, y) \in L^i
}
where $i'$ is a prefix coding of the integer $i$. A universal prefix machine, denoted ${U_P}$, is defined in a similar way. For details see \cite{LV08}.
\begin{theorem}[Universal Prefix/Monotone Machines]\label{thm_universal}
For the universal monotone machine ${U_M}$ and universal prefix machine ${U_P}$,
\eq{
\mathbf m_{U_P}(y) &> c_L \mathbf m_L(y) \text{ for all } y\in \mathcal B^* & \mathbf M_{U_M}(y) &> c_L \mathbf M_L(y) \text{ for all } y\in\mathcal B^*
}
where $c_L > 0$ depends on $L$ but not $y$.
\end{theorem}
For a proof, see \cite{LV08}. As usual, we will fix reference universal prefix/monotone machines ${U_P}$, ${U_M}$ and drop the subscripts by letting,
\eq{
\mathbf m(y) &:= \mathbf m_{U_P}(y) \equiv \sum_{p : y \in {U_P}(p)} 2^{-\ell(p)} & \mathbf M(y) &:= \mathbf M_{U_M}(y) \equiv \sum_{p : y\in {U_M}(p)} 2^{-\ell(p)} \\
K(y) &:= C_{U_P}(y) \equiv \min_{p \in \mathcal B^*} \left\{\ell(p) : y\in{U_P}(p)\right\} & Km(y) &:= \min_{p\in\mathcal B^*} \left\{\ell(p) : y\in{U_M}(p)\right\}
}
The choice of reference universal Turing machine is usually\footnote{See \cite{HM07} for a subtle exception. All the results in this paper are independent of universal Turing machine.} unimportant since a different choice varies $\mathbf m, \mathbf M$ by only a multiplicative
constant, while $K, Km$ are varied by additive constants.
For natural numbers $n$ we define $K(n)$ by $K(\left<n\right>)$ where $\left<n\right>$ is the binary representation of $n$.
$\mathbf M$ is not a proper measure, $\mathbf M(x) > \mathbf M(x0) + \mathbf M(x1)$ for all $x \in \mathcal B^*$, which means that $\mathbf M(0|x) + \mathbf M(1|x) < 1$, so $\mathbf M$
assigns a non-zero probability that the sequence will end. This is because there are monotone programs $p$ that halt, or enter infinite loops.
For this reason Solomonoff introduced a normalised version, $\mathbf M_{norm}$ defined as follows.
\begin{definition}[Normalisation]
\eq{
\mathbf M_{norm}(\epsilon) &:=1 & \mathbf M_{norm}(y_n | y_{<n}) \equiv {\mathbf M_{norm}(y_{1:n}) \over \mathbf M_{norm}(y_{<n})}:= {\mathbf M(y_{1:n}) \over \mathbf M(y_{<n}0) + \mathbf M(y_{<n}1)}.
}
\end{definition}
This normalisation is not unique, but is philosophically and technically the most attractive and was used and defended by Solomonoff.
Historically, most researchers have accepted the defective $\mathbf M$ for technical convenience. As mentioned, the difference seldom matters,
but in this paper it is somewhat surprisingly crucial. For a discussion of normalisation, see \cite{LV08}.
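As a schematic illustration, the normalised conditional is a one-line
function of any semimeasure. Since the universal $\mathbf M$ is only lower
semicomputable, the callable below is a stand-in for a finite-time
approximation $\mathbf M(\cdot, t)$; the uniform measure in the demo is a
toy sanity check (normalisation changes nothing there because that measure
is already proper).
\begin{verbatim}
def M_norm_cond(M, x: str, b: str) -> float:
    """Normalised conditional M_norm(b | x) built from a semimeasure M."""
    return M(x + b) / (M(x + "0") + M(x + "1"))

uniform = lambda y: 2.0 ** (-len(y))      # a proper (computable) measure
print(M_norm_cond(uniform, "0110", "1"))  # 0.5
\end{verbatim}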
\begin{theorem}\label{thm_kolmogorov}The following are results in Kolmogorov complexity. Proofs for all can be found in \cite{LV08}.
\begin{enumerate}
\item $\mathbf m(x) \stackrel{\times}= 2^{-K(x)}$
\item $2^{-K(xb)} \stackrel{\times}= 2^{-K(x\neg b)}$
\item $\mathbf M(x) \stackrel{\times}\geq \mathbf m(x)$
\item If $P$ is an enumerable semi-distribution, then $\mathbf m(y) \stackrel{\times}\geq P(y)$
\item If $\mu$ is an enumerable semi-measure, then $\mathbf M(y) \stackrel{\times}\geq \mu(y)$
\end{enumerate}
\end{theorem}
Note the last two results are equivalent to Theorem \ref{thm_universal} since every enumerable semi-(measure/distribution) is
generated by a monotone/prefix machine in the sense of Theorem \ref{thm_universal} and vice-versa.
Before proceeding to our own theorems we need a recently proven result in algorithmic information theory.
\begin{theorem}[Lempp, Miller, Ng and Turetsky, 2010, unpublished, private communication]\label{thm_gap}
$\lim_{n\to\infty} {\mathbf m(\omega_{<n}) \over \mathbf M(\omega_{<n})} = 0$, for all $\omega \in \mathcal B^\infty$.
\end{theorem}
\section{$\mathbf M_{norm}$ Predicts Selected Bits}
The following Theorem is the main positive result of this paper. It shows that any computable sub-pattern of a sequence will
eventually be predicted by $\mathbf M_{norm}$.
\begin{theorem}\label{thm_positive}
Let $f:\mathcal B^*\to \mathcal B \cup \left\{\epsilon\right\}$ be a total recursive function and $\omega \in \mathcal B^\infty$ satisfying $f(\omega_{<n}) = \omega_n$
whenever $f(\omega_{<n}) \neq \epsilon$.
If $f(\omega_{<n_i}) \neq \epsilon$ is defined for an infinite sequence $n_1, n_2, n_3, \cdots$ then
$\lim_{i\to\infty} \mathbf M_{norm}(\omega_{n_i} | \omega_{<n_i}) = 1$.
\end{theorem}
Essentially the Theorem is saying that if there exists a computable predictor $f$ that correctly predicts the next bit every time it tries
(i.e.\ when $f(\omega_{<n}) \neq \epsilon$) then $\mathbf M_{norm}$ will eventually predict the same bits as $f$.
By this we mean that if you constructed a predictor $f_{\mathbf M_{norm}}$ defined by $f_{\mathbf M_{norm}}(\omega_{<n}) = \operatornamewithlimits{arg\,max}_{b\in\mathcal B} \mathbf M_{norm}(b|\omega_{<n})$, then
there exists an $N$ such that $f_{\mathbf M_{norm}}(\omega_{<n}) = f(\omega_{<n})$ for all $n > N$ where $f(\omega_{<n}) \neq \epsilon$.
For example, let $f$ be defined by
\eq{
f(x) = \begin{cases}
x_{\ell(x)} & \text{if } \ell(x) \text{ odd} \\
\epsilon & \text{otherwise}
\end{cases}
}
Now if $\omega \in \mathcal B^\infty$ satisfies $\omega_{2n} = f(\omega_{<2n}) = \omega_{2n-1}$ for all $n \in \N$
then Theorem \ref{thm_positive} shows that $\lim_{n\to\infty} \mathbf M_{norm}(\omega_{2n}|\omega_{<2n}) = 1$. It says nothing
about the predictive qualities of $\mathbf M_{norm}$ on the odd bits, on which there are no restrictions.
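This example predictor is trivially implementable; the sketch below (with
None playing the role of $\epsilon$) is only an illustration, not part of
the formal development.
\begin{verbatim}
def f(x: str):
    """Repeat the preceding odd bit on even positions, else abstain."""
    if len(x) % 2 == 1:       # ell(x) odd <=> the next position is even
        return x[-1]
    return None               # epsilon: no prediction

print(f("0"), f("00"), f("001"))   # 0 None 1
\end{verbatim}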
The proof essentially relies on using $f$ to show that monotone programs for $\omega_{<n_i} \neg\omega_{n_i}$ can be converted to prefix programs.
This is then used to show that $\mathbf M(\omega_{<n_i} \neg\omega_{n_i}) \stackrel{\times}= \mathbf m(\omega_{<n_i}\neg\omega_{n_i})$. The result will then
follow from Theorem \ref{thm_gap}.
Theorem \ref{thm_positive} insists that $f$ be totally recursive and that $f(\omega_{<n}) = \epsilon$ if $f$ refrains
from predicting. One could instead allow $f$ to be partially recursive and simply not halt to avoid making a prediction. The proof below breaks
down in this case and we suspect that Theorem \ref{thm_positive} will become invalid if $f$ is permitted to be only partially recursive.
\begin{proof}[Proof of Theorem \ref{thm_positive}]
We construct a machine $L$ from ${U_M}$ consisting of all programs that produce output that $f$ would not predict. We then show that
these programs essentially form a prefix machine. Define $L$ by the following process
\begin{enumerate}
\item $L := \emptyset$ and $t := 1$.
\item Let $(p, y)$ be the $t$th pair in ${U_M}$.
\item Let $i$ be the smallest natural number such that $y_i \neq f(y_{<i}) \neq \epsilon$. That is, $i$ is the position at which $f$ makes its first
mistake when predicting $y$. If $f$ makes no prediction errors then $i$ doesn't exist.\footnote{This is where
the problem lies for partially recursive prediction functions. Computing the smallest $i$ for which $f$ predicts incorrectly is incomputable
if $f$ is only partially recursive, but computable if it is totally recursive. It is this distinction that allows $L$ to be recursively enumerable, and so
be a machine.}
\item If $i$ exists then $L := L \cup \left\{(p, y_{1:i}) \right\}$ (Note that we do not allow $L$ to contain duplicates).
\item $t := t+1$ and go to step 2.
\end{enumerate}
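The construction of $L$ can be rendered schematically in code. The stream
of pairs below stands in for the enumeration of ${U_M}$ (which, of course,
cannot be materialised in full), and $f$ is assumed total as in the
theorem:
\begin{verbatim}
def build_L(machine_pairs, f):
    """For each (p, y) in the enumeration, emit (p, y_{1:i}) where i is
    the position of f's first prediction error on y; skip y otherwise."""
    seen = set()
    for p, y in machine_pairs:          # stand-in for enumerating U_M
        for i in range(1, len(y) + 1):
            guess = f(y[:i - 1])        # f is total, so this always halts
            if guess is not None and guess != y[i - 1]:
                if (p, y[:i]) not in seen:      # no duplicates in L
                    seen.add((p, y[:i]))
                    yield p, y[:i]
                break                   # only the first error matters

# demo with the even-bit predictor sketched earlier:
f = lambda x: x[-1] if len(x) % 2 == 1 else None
print(list(build_L([("p1", "001011")], f)))    # [('p1', '0010')]
\end{verbatim}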
Since $f$ is totally recursive and ${U_M}$ is recursively enumerable, the process above shows that $L$ is recursively enumerable. It is easy
to see that $L$ is a monotone machine. Further, if $(p, y), (q, x) \in L$ with $p \sqsubseteq q$ then $y = x$. This follows since
by monotonicity we would have that $y \sqsubseteq x$, but $f(x_{<\ell(y)}) = f(y_{<\ell(y)}) \neq y_{\ell(y)} = x_{\ell(y)}$ and by steps 3 and 4 in
the process above we have that $\ell(x) = \ell(y)$.
Recall that $L_t$ is the $t$th enumeration of $L$ and contains $t$ elements. Define $\bar L_t \subseteq L_t$
to be the largest prefix free set of shortest programs. Formally, $(p, y) \in \bar L_t$ if there does not exist a $(q, x) \in L_t$
such that $q \sqsubset p$. For example, if $L_t = (1, 001), (11, 001), (01, 11110), (010, 11110)$ then $\bar L_t = (1, 001), (01, 11110)$. If
we now added $(0, 11110)$ to $L_t$ to construct $L_{t+1}$ then $\bar L_{t+1}$ would be $(1, 001), (0, 11110)$.
Since $L_t$ is finite, $\bar L_t$ is easily computable from $L_t$. Therefore the following function is computable.
\eq{
P(y, t) := \sum_{p : (p, y) \in \bar L_t} 2^{-\ell(p)} \geq 0.
}
Now $\bar L_t$ is prefix free, so by Kraft's inequality $\sum_{y\in\mathcal B^*} P(y, t) \leq 1$ for all $t \in \N$.
We now show that $P(y, t + 1) \geq P(y, t)$ for all $y \in \mathcal B^*$ and $t\in \N$ which proves that $P(y) = \lim_{t\to\infty} P(y, t)$ exists
and is a semi-distribution.
Let $(p, y)$ be the program/output pair in $L_{t+1}$ but not in $L_t$. To see how $P(\cdot, t)$ compares to $P(\cdot, t+1)$ we need to
compare $\bar L_{t}$ and $\bar L_{t+1}$. There are three cases:
\begin{enumerate}
\item There exists a $(q, x) \in L_t$ with $q \sqsubset p$. In this case $\bar L_{t+1} = \bar L_t$.
\item There does not exist a $(q, x) \in L_t$ such that $p \sqsubset q$. In this case $(p, y)$ is simply added to
$\bar L_t$ to get $\bar L_{t+1}$ and so $\bar L_t \subset \bar L_{t+1}$. Therefore $P(\cdot, t+1) \geq P(\cdot, t)$ is clear.
\item There does exist a $(q, x) \in \bar L_t$ such that $p \sqsubset q$. In this case $\bar L_{t+1}$ differs from $\bar L_t$ in that
it contains $(p, y)$ but not $(q, x)$. Since $p \sqsubset q$ we have that $y = x$. Therefore
$P(y, t+1) - P(y, t) = 2^{-\ell(p)} - 2^{-\ell(q)} > 0$ since $p \sqsubset q$. For other values, $P(\cdot, t) = P(\cdot, t+1)$.
\end{enumerate}
Note that it is not possible that $p = q$ since then $x = y$ and duplicates are not added to $L$.
Therefore $P$ is an enumerable semi-distribution. By Theorem \ref{thm_kolmogorov} we have
\eqn{
\label{eqn-pos1} \mathbf m(\omega_{<n_i}\neg\omega_{n_i}) \stackrel{\times}\geq P(\omega_{<n_i}\neg\omega_{n_i})
}
where the constant multiplicative fudge factor in the $\stackrel{\times}\geq$ is independent of $i$.
Suppose $\omega_{<n_i}\neg\omega_{n_i} \in {U_M}(p)$.
Therefore there exists a $y$ such that $\omega_{<n_i}\neg\omega_{n_i} \sqsubseteq y$ and $(p, y)\in{U_M}$. By parts 2 and 3 of the process above,
$(p, \omega_{<n_i}\neg\omega_{n_i})$ is added to $L$. Therefore there exists a $T \in \N$
such that $(p, \omega_{<n_i}\neg\omega_{n_i}) \in L_t$ for all $t \geq T$.
Since $\omega_{<n_i}\neg\omega_{n_i} \in {U_M}(p)$, there does not exist a $q \sqsubset p$ with $\omega_{<n_i}\neg\omega_{n_i} \in {U_M}(q)$. Therefore
eventually, $(p, \omega_{<n_i}\neg\omega_{n_i}) \in \bar L_t$ for all $t \geq T$. Since every program in ${U_M}$ for $\omega_{<n_i}\neg\omega_{n_i}$
is also a program in $L$, we get
\eq{
\lim_{t\to\infty} P(\omega_{<n_i}\neg\omega_{n_i}, t) \equiv P(\omega_{<n_i}\neg\omega_{n_i}) = \mathbf M(\omega_{<n_i} \neg\omega_{n_i}).
}
Next,
\eqn{
\label{eqn-pos2} \mathbf M_{norm}(\neg\omega_{n_i} | \omega_{<n_i}) &\equiv {\mathbf M(\omega_{<n_i}\neg\omega_{n_i}) \over \mathbf M(\omega_{<n_i}\omega_{n_i}) + \mathbf M(\omega_{<n_i}\neg\omega_{n_i})} \\
\label{eqn-pos3} &\stackrel{\times}\leq {\mathbf m(\omega_{<n_i}\neg\omega_{n_i}) \over \mathbf M(\omega_{1:n_i})} \\
\label{eqn-pos4} &\stackrel{\times}= {\mathbf m(\omega_{1:n_i}) \over \mathbf M(\omega_{1:n_i})}
}
where Equation (\ref{eqn-pos2}) follows by the definition of $\mathbf M_{norm}$. Equation (\ref{eqn-pos3}) follows from Equation (\ref{eqn-pos1}) and algebra. Equation (\ref{eqn-pos4}) follows since $\mathbf m(xb) \stackrel{\times}= 2^{-K(xb)} \stackrel{\times}= 2^{-K(x\neg b)} \stackrel{\times}= \mathbf m(x\neg b)$, which is Theorem \ref{thm_kolmogorov}.
However, by Theorem \ref{thm_gap}, $\lim_{i\to\infty} {\mathbf m(\omega_{<n_i}) \over \mathbf M(\omega_{<n_i})} = 0$ and so $\lim_{i\to\infty} \mathbf M_{norm}(\neg\omega_{n_i} | \omega_{<n_i}) = 0$. Therefore $\lim_{i\to\infty} \mathbf M_{norm}(\omega_{n_i} | \omega_{<n_i}) = 1$
as required.
\end{proof}
We have remarked already that Theorem \ref{thm_positive} is likely not valid if $f$ is permitted to be a partial recursive function that
halts only on those inputs for which it makes a prediction. However, there is a class of predictors larger than the totally recursive ones of Theorem
\ref{thm_positive}, which $\mathbf M_{norm}$ still learns.
\begin{theorem}\label{thm_positive2}
Let $f:\mathcal B^*\to \mathcal B \cup \left\{\epsilon\right\}$ be a partial recursive function and $\omega \in \mathcal B^\infty$ satisfying
\begin{enumerate}
\item $f(\omega_{<n})$ is defined for all $n$.
\item $f(\omega_{<n}) = \omega_n$ whenever $f(\omega_{<n}) \neq \epsilon$.
\end{enumerate}
If $f(\omega_{<n_i}) \in \mathcal B$ for an infinite sequence $n_1, n_2, n_3, \cdots$ then
\eq{
\lim_{i\to\infty} \mathbf M_{norm}(\omega_{n_i} | \omega_{<n_i}) = 1.
}
\end{theorem}
The difference between this result and Theorem \ref{thm_positive} is that $f$ need only be defined on all prefixes of at least one $\omega \in \mathcal B^\infty$
and not everywhere in $\mathcal B^*$. This allows for a slightly broader class of predictors. For example, let $\omega = p^1 b^1 p^2 b^2 p^3 b^3 \cdots$ where
$p^i$ is some prefix program that outputs at least one bit and $b^i$ is the first bit of that output. Now there exists a computable $f$
such that $f(p^1 b^1 \cdots p^{i-1} b^{i-1} p^i) = b^i$ for all $i$ and $f(\omega_{<n}) = \epsilon$ whenever $\omega_n \neq b^i$ for some $i$ ($f$
only tries to predict the outputs). By Theorem \ref{thm_positive2}, $\mathbf M_{norm}$ will correctly predict $b^i$.
The proof of Theorem \ref{thm_positive2} is almost identical to that of Theorem \ref{thm_positive}, but with one additional subtlety.\\
{\it Proof sketch. }
The proof follows that of Theorem \ref{thm_positive} until the construction of $L$. This breaks down because step 3
is no longer computable since $f$ may not halt on some string that is not a prefix of $\omega$. The modification is to run steps 2-4
in parallel for all $t$, adding $(p, y_{1:i})$ to $L$ only once it has been proven that $f(y_{<i}) \neq y_i$ and that $f(y_{<k})$ halts for all $k < i$,
and either chooses not to predict (outputs $\epsilon$), or predicts correctly. Since $f$ halts on all
prefixes of $\omega$, this does not change $L$ for any programs we care about and the remainder of the proof goes through identically.
It should be noted that this new class of predictors is still less general than allowing $f$ to be an arbitrary partial recursive predictor. For
example, a partial recursive $f$ can predict
the ones of the halting sequence, while choosing not to predict the zeros (the non-halting programs). It is clear this cannot be modified into
a computable $f$ predicting both ones and zeros, or predicting ones and outputting $\epsilon$ rather than zero, as this would solve the halting problem.
\section{$\mathbf M$ Fails to Predict Selected Bits}
The following theorem is the corresponding negative result that while the conditional distribution of $\mathbf M_{norm}$ converges to $1$ on
recursive sub-patterns, $\mathbf M$ can fail to do so.
\begin{theorem}\label{thm_negative}
Let $f:\mathcal B^*\to\mathcal B\cup\left\{\epsilon\right\}$ be the total recursive function defined by,
\eq{
f(z) := \begin{cases}
z_{\ell(z)} & \text{if } \ell(z) \text{ odd} \\
\epsilon & \text{otherwise}
\end{cases}
}
There exists an infinite string $\omega \in \mathcal B^\infty$ with $\omega_{2n} = f(\omega_{<2n}) \equiv \omega_{2n-1}$ for all $n \in \N$ such that
\eq{
\liminf_{n\to\infty} \mathbf M(\omega_{2n} | \omega_{<2n}) < 1.
}
\end{theorem}
The proof requires some lemmas.
\begin{lemma}\label{lem_M}$\mathbf M(xy)$ can be bounded as follows.
\eqn{
\label{eqn-lemM} 2^{K(\ell(x))}\mathbf M(y) \stackrel{\times}\geq \mathbf M(xy) &\stackrel{\times}\geq \mathbf M(y)2^{-K(x)}.
}
\end{lemma}
\begin{proof}
Both inequalities are proven relatively easily by standard methods as used in \cite{LV08} and elsewhere. Nevertheless we present them as a warm-up
to the slightly more subtle proof later.
Now construct a monotone machine $L$, which we should think of as taking two programs as input. The first is a prefix program $p$, whose output
we view as a natural number $n$; the second is a monotone program. We then simulate the monotone machine and strip the first $n$ bits from
its output. $L$ is formally defined as follows.
\begin{enumerate}
\item $L := \emptyset$, $t := 1$
\item Let $(p, n), (q, y)$ be the $t$th pair of program/outputs in ${U_P}\times {U_M}$, which is enumerable.
\item If $\ell(y) \geq n$ then add $(pq, y_{n+1:\ell(y)})$ to $L$
\item $t := t+1$ and go to step 2
\end{enumerate}
By construction, $L$ is enumerable and is a monotone machine. Note that if $xy \in {U_M}(q)$ and $\ell(x) \in {U_P}(p)$ then $y \in L(pq)$.
Now,
\eqn{
\label{eqn-lemM-1} \mathbf M(y) \stackrel{\times}\geq \mathbf M_L(y) &\equiv \sum_{r : y \in L(r)} 2^{-\ell(r)} \geq \sum_{q, p : xy \in {U_M}(q), \ell(x) \in {U_P}(p)} 2^{-\ell(pq)} \\
\label{eqn-lemM-2} &=\sum_{q : xy \in {U_M}(q)} 2^{-\ell(q)} \sum_{p : \ell(x) \in {U_P}(p)} 2^{-\ell(p)} \equiv \mathbf M(xy) \mathbf m(\ell(x)) \\
\label{eqn-lemM-3} &\stackrel{\times}= \mathbf M(xy) 2^{-K(\ell(x))}
}
where Equation (\ref{eqn-lemM-1}) follows by Theorem \ref{thm_universal}, definitions and because if $xy \in {U_M}(q)$ and $\ell(x) \in {U_P}(p)$ then $y \in L(pq)$.
Equation (\ref{eqn-lemM-2}) follows by algebra and definitions, and Equation (\ref{eqn-lemM-3}) by Theorem \ref{thm_kolmogorov}.
The second inequality is proved similarly. We define a machine $L$ as follows,
\begin{enumerate}
\item $L = \emptyset$, $t:=1$
\item Let $(q, x), (r, y)$ be the $t$th element in ${U_P}\times {U_M}$, which is enumerable.
\item Add $(qr, xy)$ to $L$
\item $t:=t+1$ and go to step 2
\end{enumerate}
It is easy to show that $L$ is monotone by using the properties of ${U_P}$ and ${U_M}$. Now,
\eq{
\mathbf M(xy) \stackrel{\times}\geq \mathbf M_L(xy) &\equiv \sum_{p : xy \in L(p)} 2^{-\ell(p)} \geq \sum_{q, r : x \in {U_P}(q), y \in {U_M}(r)} 2^{-\ell(qr)} \\
&= \sum_{q : x \in {U_P}(q)} 2^{-\ell(q)} \sum_{r : y \in {U_M}(r)} 2^{-\ell(r)} \equiv \mathbf m(x) \mathbf M(y) \stackrel{\times}= 2^{-K(x)} \mathbf M(y).
}
\end{proof}
\begin{lemma}\label{lem_gap}
There exists an $\omega \in \mathcal B^\infty$ such that
\eq{
\liminf_{n\to\infty} \left[\mathbf M(0|\omega_{<n}) + \mathbf M(1|\omega_{<n}) \right] = 0.
}
\end{lemma}
\begin{proof}
First we show that for each $\delta > 0$ there exists a $z \in \mathcal B^*$ such that $\mathbf M(0|z) + \mathbf M(1|z) < \delta$. This result is already known
and is left as an exercise (4.5.6) with a proof sketch in \cite{LV08}. For completeness, we include a proof. Recall that $\mathbf M(\cdot, t)$ is
the function approximating $\mathbf M(\cdot)$ from below. Fixing an $n$, define $z \in \mathcal B^*$ inductively as follows.
\begin{enumerate}
\item $z := \epsilon$
\item Let $t$ be the first natural number such that $\mathbf M(zb, t) > 2^{-n}$ for some $b \in \mathcal B$.
\item If $t$ exists then $z := z\neg b$ and repeat step 2. If $t$ does not exist then $z$ is left unchanged (forever).
\end{enumerate}
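The process can be mirrored in code. Here \texttt{M\_approx} is a stand-in
for the enumeration $\mathbf M(\cdot, t)$ from below, and the cap
\texttt{t\_max} is an artificial assumption: with the genuine $\mathbf M$
the search over $t$ need not terminate.
\begin{verbatim}
def build_z(M_approx, n, t_max=100_000):
    """Greedy construction of z with M(z0), M(z1) small at the end."""
    z = ""
    while True:
        hit = None
        for t in range(1, t_max + 1):    # first t with M(zb, t) > 2^-n
            for b in "01":
                if M_approx(z + b, t) > 2.0 ** (-n):
                    hit = b
                    break
            if hit is not None:
                break
        if hit is None:                  # no such t: z stays unchanged
            return z
        z += "0" if hit == "1" else "1"  # extend z with the negated bit

# toy check with a proper measure: extension stops at length n - 1
print(build_z(lambda y, t: 2.0 ** (-len(y)), n=4))   # '111'
\end{verbatim}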
Note that $z$ must be finite since each time it is extended, $\mathbf M(zb, t) > 2^{-n}$. Therefore $\mathbf M(z\neg b, t) < \mathbf M(z, t) - 2^{-n}$ and so
each time $z$ is extended, the value of $\mathbf M(z, t)$ decreases by at least $2^{-n}$, so eventually $\mathbf M(zb, t) < 2^{-n}$ for all $b \in \mathcal B$. Now once $z$
is no longer being extended ($t$ does {\it not} exist in step 3 above) we have
\eqn{
\label{eqn-lem1} \mathbf M(z0) + \mathbf M(z1) &\leq 2^{1 - n}.
}
However we can also show that $\mathbf M(z) \stackrel{\times}\geq 2^{-K(n)}$. The intuitive idea is that the process above requires only the value of $n$, which can be
encoded in $K(n)$ bits. More formally, let $p$ be such that $n \in {U_P}(p)$ and note that the following set is
recursively enumerable (but not recursive) by the process above.
\eq{
L_{p} := (p, \epsilon), (p, z_{1:1}), (p, z_{1:2}), (p, z_{1:3}), \cdots, (p, z_{1:\ell(z) - 1}), (p, z_{1:\ell(z)}).
}
Now take the union of all such sets, which is a) recursively enumerable since ${U_P}$ is, and b) a monotone machine because ${U_P}$ is a prefix machine.
\eq{
L := \bigcup_{(p, n) \in {U_P}} L_{p}.
}
Therefore
\eqn{
\label{eqn-lem2} \mathbf M(z) \stackrel{\times}\geq \mathbf M_L(z) \geq 2^{-K(n)}
}
where the first inequality is from Theorem \ref{thm_universal} and the second follows since if $n^*$ is the program of length $K(n)$
with ${U_P}(n^*) = n$ then $(n^*, z_{1:\ell(z)}) \in L$. Combining Equations (\ref{eqn-lem1}) and (\ref{eqn-lem2}) gives
\eq{
\mathbf M(0|z) + \mathbf M(1|z) &\stackrel{\times}\leq 2^{1 - n + K(n)}.
}
Since this tends to zero as $n$ goes to infinity,\footnote{An integer $n$ can easily be encoded in $2 \log n$ bits, so $K(n) \leq 2 \log n + c$ for some $c > 0$ independent of $n$.} for each $\delta > 0$ we can construct a $z\in\mathcal B^*$ satisfying $\mathbf M(0|z) + \mathbf M(1|z) < \delta$, as required.
For the second part of the proof, we construct $\omega$ by concatenation.
\eq{\omega := z^1 z^2z^3 \cdots
}
where $z^n \in \mathcal B^*$ is chosen such that,
\eqn{
\label{eqn-lem0} \mathbf M(0|z^n) + \mathbf M(1|z^n) < \delta_n
}
with $\delta_n$ to be chosen later. Now,
\eqn{
\label{eqn-lem3} \mathbf M(b|z^1\cdots z^n) &\equiv {\mathbf M(z^1\cdots z^n b) \over \mathbf M(z^1\cdots z^n) } \\
\label{eqn-lem4} &\stackrel{\times}\leq \left[{2^{K(\ell(z^1\cdots z^{n-1})) + K(z^1\cdots z^{n-1})} }\right]{\mathbf M(z^n b) \over \mathbf M(z^n)} \\
\label{eqn-lem5} &\equiv \left[2^{K(\ell(z^1\cdots z^{n-1})) + K(z^1\cdots z^{n-1})} \right] \mathbf M(b|z^n)
}
where Equation (\ref{eqn-lem3}) is the definition of conditional probability. Equation (\ref{eqn-lem4}) follows by
applying Lemma \ref{lem_M} with $x = z^1z^2\cdots z^{n-1}$ and $y = z^n$ or $z^n b$.
Equation (\ref{eqn-lem5}) is again the definition of conditional probability. Now let
\eq{
\delta_n = {2^{-n} \over 2^{K(\ell(z^1 \cdots z^{n-1})) + K(z^1\cdots z^{n-1})}}.
}
Combining this with Equations (\ref{eqn-lem0}) and (\ref{eqn-lem5}) gives
\eq{
\mathbf M(0 | z^1 \cdots z^n) + \mathbf M(1|z^1 \cdots z^n) \stackrel{\times}\leq 2^{-n}.
}
Therefore,
\eq{
\liminf_{n\to\infty} \left[\mathbf M(0|\omega_{<n}) + \mathbf M(1|\omega_{<n})\right] = 0
}
as required.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_negative}]
Let $\bar\omega\in\mathcal B^\infty$ be defined by $\bar \omega_{2n} := \bar\omega_{2n-1} := \omega_n$ where $\omega$ is the string defined in the previous lemma.
Recall ${U_M} := \left\{(p^1, y^1), (p^2, y^2), \cdots\right\}$ is the universal monotone machine. Define monotone machine $L$ by the
following process,
\begin{enumerate}
\item $L = \emptyset$, $t = 1$
\item Let $(p, y)$ be the $t$th element in the enumeration of ${U_M}$
\item Add $(p, y_1 y_3 y_5 y_7 \cdots)$ to $L$
\item $t := t+ 1$ and go to step 2.
\end{enumerate}
Therefore if $\bar\omega_{<2n} \in {U_M}(p)$ then $\omega_{1:n} \in L(p)$. By identical reasoning as elsewhere,
\eqn{
\label{eqn-neg1} \mathbf M(\omega_{1:n}) \stackrel{\times}\geq \mathbf M(\bar\omega_{<2n}).
}
In fact, $\mathbf M(\omega_{1:n}) \stackrel{\times}= \mathbf M(\bar\omega_{<2n})$, but this is unnecessary.
Let $P := \left\{p : \exists b \in \mathcal B \text{ s.t } \omega_{1:n}b \in {U_M}(p)\right\}$ and $Q := \left\{p : \omega_{1:n} \in {U_M}(p)\right\} \supset P$.
Therefore
\eq{
1 - \mathbf M(0 | \omega_{1:n}) - \mathbf M(1 | \omega_{1:n}) &= 1 - {{\sum_{p\in P} 2^{-\ell(p)}} \over {\sum_{q \in Q} 2^{-\ell(q)}}} \\
&= {{\sum_{p \in Q - P} 2^{-\ell(p)}} \over {\sum_{q \in Q} 2^{-\ell(q)}}}.
}
Now let $\bar P := \left\{p : \exists b \in \mathcal B \text{ s.t } \bar\omega_{<2n}b\in {U_M}(p)\right\}$ and
$\bar Q := \left\{p : \bar\omega_{<2n} \in {U_M}(p)\right\}
\supset \bar P$.
Define monotone machine $L$ by the following process
\begin{enumerate}
\item $L = \emptyset$, $t := 1$
\item Let $(p, y)$ be the $t$th program/output pair in ${U_M}$
\item Add $(p, y_1y_1 y_2 y_2 \cdots y_{\ell(y)-1} y_{\ell(y)-1} y_{\ell(y)})$ to $L$
\item $t := t + 1$ and go to step 2.
\end{enumerate}
Let $p \in Q - P$. Therefore $\omega_{1:n} \in {U_M}(p)$ and $\omega_{1:n}b\notin {U_M}(p)$ for any $b \in \mathcal B$. Therefore $\bar\omega_{<2n} \in L(p)$
while $\bar\omega_{<2n}b \notin L(p)$ for any $b \in \mathcal B$. Now there exists an $i$ such that $L$ is the $i$th machine in the enumeration of
monotone machines, $L^i$.
Therefore, by the definition of the universal monotone machine ${U_M}$ we have ${U_M}(i'p) = L^i(p) = L(p)$, so that $\bar\omega_{<2n} \in {U_M}(i'p)$ while $\bar\omega_{<2n}b \notin {U_M}(i'p)$
for any $b \in \mathcal B$. Therefore $i'p \in \bar Q - \bar P$ and so,
\eqn{
\label{eqn-neg1.5} \sum_{q \in \bar Q - \bar P} 2^{-\ell(q)} \geq
\sum_{p : i'p \in \bar Q - \bar P} 2^{-\ell(i'p)} \geq
\sum_{p \in Q - P} 2^{-\ell(i'p)} \stackrel{\times}= \sum_{p \in Q - P} 2^{-\ell(p)}.
}
Therefore
\eqn{
\label{eqn-neg2} 1 - \mathbf M(0|\bar\omega_{<2n}) - \mathbf M(1 | \bar \omega_{<2n}) &\equiv {\sum_{p \in \bar Q - \bar P} 2^{-\ell(p)} \over \mathbf M(\bar\omega_{<2n})} \\
\label{eqn-neg3} &\stackrel{\times}\geq {\sum_{p \in Q- P} 2^{-\ell(p)} \over \mathbf M(\omega_{1:n})} \\
\label{eqn-neg4} &\equiv 1 - \mathbf M(0|\omega_{1:n}) - \mathbf M(1|\omega_{1:n})
}
where Equation (\ref{eqn-neg2}) follows from the definition of $\bar P$, $\bar Q$ and $\mathbf M$. Equation (\ref{eqn-neg3}) by (\ref{eqn-neg1.5}) and
(\ref{eqn-neg1}). Equation (\ref{eqn-neg4}) by the definition of $P, Q$ and $\mathbf M$.
Therefore by Lemma \ref{lem_gap} we have
\eq{
\limsup_{n\to\infty} \left[ 1 - \mathbf M(0|\bar\omega_{<2n}) - \mathbf M(1|\bar\omega_{<2n})\right] \stackrel{\times}\geq
\limsup_{n\to\infty} \left[ 1 - \mathbf M(0|\omega_{1:n}) - \mathbf M(1|\omega_{1:n})\right] = 1.
}
Therefore $\liminf_{n\to\infty} \mathbf M(\bar\omega_{2n}|\bar\omega_{<2n}) < 1$
as required.
\end{proof}
Note that $\lim_{n\to\infty} \mathbf M(\bar\omega_{2n}|\bar\omega_{<2n}) \neq 0$; in fact, one can show that there exists a $c > 0$ such that
$\mathbf M(\bar\omega_{2n}|\bar\omega_{<2n}) > c$ for all $n \in \N$. In this sense $\mathbf M$ can still be used to predict in the same way as $\mathbf M_{norm}$,
but it will never converge as in Equation (\ref{eqn-sol}).
\section{Discussion}
\subsubsect{Summary}
Theorem \ref{thm_positive} shows that if an infinite sequence contains a computable sub-pattern then the normalised universal semi-measure $\mathbf M_{norm}$
will eventually predict it. This means that Solomonoff's normalised version of induction is effective in the classification example given
in the introduction. Note that we have only proven the binary case, but we expect the proof to go through identically for an arbitrary finite
alphabet.
On the other hand, Theorem \ref{thm_negative} shows that plain $\mathbf M$ can fail to predict such structure in the sense that the conditional distribution
need not converge to $1$ on the true sequence. This is because $\mathbf M$ is not a proper measure, so $\mathbf M(0|x) + \mathbf M(1|x)$ need not converge to one.
These results are surprising since (all?) other predictive results, including Equation (\ref{eqn-sol}) and many others in \cite{Hut04,Hut07,LV08,Sol64a},
do not rely on normalisation.
\subsubsect{Consequences}
We have shown that $\mathbf M_{norm}$ can predict recursive structure in infinite strings that are incomputable (even stochastically so).
These results give hope that a Solomonoff inspired algorithm may be effective at online classification, even when the training data is
given in a completely unstructured way. Note that while $\mathbf M$ is enumerable and $\mathbf M_{norm}$ is only approximable,\footnote{A function $f$ is
approximable if there exists a computable function $f(\cdot, t)$ with $\lim_{t\to\infty} f(\cdot, t) = f(\cdot)$. Convergence need not be monotonic.}
both the conditional distributions are only approximable, which means it is no harder to predict using $\mathbf M_{norm}$ than $\mathbf M$.
\subsubsect{Open Questions}
A number of open questions were encountered in writing this paper.
\begin{enumerate}
\item Extend Theorem \ref{thm_positive} to the stochastic case where a sub-pattern is generated
stochastically from a computable distribution rather than merely a computable function. It seems likely that a different approach
will be required to solve this problem.
\item Another interesting question is to strengthen the result by proving a convergence rate. It may be possible to prove that, under
the same conditions as Theorem \ref{thm_positive}, $\sum_{i=1}^\infty \left[1 - \mathbf M_{norm}(\omega_{n_i}|\omega_{<n_i})\right] \stackrel{\times}\leq K(f)$ where
$K(f)$ is the (prefix) complexity of the predicting function $f$. Again, if this is even possible, it will likely require a different approach.
\item Prove or disprove the validity of Theorem \ref{thm_positive} when the totally recursive prediction function $f$ (or the modified predictor of Theorem \ref{thm_positive2}) is replaced by
a partially recursive function.
\end{enumerate}
\subsubsect{Acknowledgements}
We thank Wen Shao and reviewers for valuable feedback on earlier drafts and the Australian Research Council for support under grant DP0988049.
\section{Introduction}
In this note we study the quark correlations inside the nucleon - forming a
diquark \cite{Jaffe} - in the context of elastic pion-proton scattering at low
momentum transfer.
The interest in this process is a consequence of Ref. \cite{bb2} where
elastic proton-proton scattering, assuming the proton is composed of a quark and
a diquark, was discussed. We found that (i) it was possible to fit very
precisely the ISR elastic $pp$ data \cite{elastic} even up to $-t\approx3$
GeV$^{2}$ and (ii) the diquark turned out to be rather large, comparable to
the size of the proton. Moreover, we found that the quark-diquark model of the
nucleon in the wounded \cite{bbc} constituent model \cite{bb1} allows one to
explain very well the RHIC data \cite{phobos} on particle production in the
central rapidity region.
Given the above arguments, it is interesting to explore the model for another
process. The natural one is elastic pion-proton scattering. Following
\cite{bb2} we consider the proton to be composed of two constituents - a quark
and a diquark. As far as the pion is concerned we consider two cases. The
first one treats the pion as an object composed of two constituent quarks, the
second one treats the pion as a single object i.e. an object without
constituent quark structure.
For both cases we evaluate the inelastic pion-proton cross-section,
$\sigma(b)$, at a given impact parameter $b$. Then, from the unitarity
condition we obtain the elastic amplitude\footnote{We ignore the real part of
the amplitude.}
\begin{equation}
t_{el}(b)=1-\sqrt{1-\sigma(b)}, \label{unitarity}%
\end{equation}
and consequently the elastic amplitude in momentum transfer representation:%
\begin{equation}
T(\Delta)=\int t_{el}(b)e^{i\vec{\Delta}\cdot\vec{b}}d^{2}b.
\end{equation}
With this normalization one can evaluate the total cross section:%
\begin{equation}
\sigma_{tot}=2T(0), \label{tot}%
\end{equation}
and elastic differential cross section ($t\simeq-|\Delta|^{2}$):%
\begin{equation}
\frac{d\sigma}{dt}=\frac{1}{4\pi}|T(\Delta)|^{2}. \label{elas}%
\end{equation}
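For a radially symmetric profile the two-dimensional Fourier transform
reduces to a Hankel transform, $T(\Delta)=2\pi\int_{0}^{\infty}t_{el}(b)J_{0}(\Delta b)\,b\,db$,
which is straightforward to evaluate numerically. The sketch below is
purely illustrative: the grid sizes and the $\hbar c$ unit bookkeeping are
our own assumptions, not taken from the text.
\begin{verbatim}
import numpy as np
from scipy.special import j0        # Bessel J_0 for the Hankel transform

HBARC = 0.1973                      # GeV*fm; converts Delta [1/fm] <-> GeV

def elastic_observables(sigma_b, b_max=5.0, nb=4000):
    """sigma_tot and dsigma/dt from an inelastic profile sigma(b),
    following Eqs. (1)-(4); b in fm, so sigma_tot is in fm^2 (= 10 mb)."""
    b = np.linspace(0.0, b_max, nb)
    t_el = 1.0 - np.sqrt(np.clip(1.0 - sigma_b(b), 0.0, 1.0))
    T = lambda d: 2.0 * np.pi * np.trapz(t_el * j0(d * b) * b, b)
    sigma_tot = 2.0 * T(0.0)
    # -t in GeV^2 -> Delta in 1/fm; 1/HBARC^2 turns fm^4 into fm^2/GeV^2
    dsigma_dt = lambda mt: (T(np.sqrt(mt) / HBARC) ** 2
                            / (4.0 * np.pi * HBARC ** 2))
    return sigma_tot, dsigma_dt
\end{verbatim}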
Our strategy is to adjust the parameters of the model so that it best fits the
data for the elastic pion-proton cross-section. In this way the model can provide
some information on the details of proton and pion structure at small momentum transfer.
\section{Pion as a quark-quark system}
We follow closely the method presented in \cite{bb2} where the elastic and
inelastic proton-proton collision was studied. Consequently, the inelastic
pion-proton cross-section at a fixed impact parameter $b$, $\sigma(b)$, is
given by:%
\begin{equation}
\sigma(b)=\int d^{2}s_{q}d^{2}s_{d}d^{2}s_{q1}d^{2}s_{q2}D_{p}(s_{q}%
,s_{d})D_{\pi}(s_{q1},s_{q2})\sigma(s_{q},s_{d};s_{q1},s_{q2};b),
\label{sigma}%
\end{equation}
where $D_{p}(s_{q},s_{d})$ and $D_{\pi}(s_{q1},s_{q2})$ denote the
distribution of quark ($s_{q}$) and diquark ($s_{d}$) inside the proton and
the distribution of quarks ($s_{q1}$,$s_{q2}$) inside the pion, respectively.
$\sigma(s_{q},s_{d};s_{q1},s_{q2};b)$ is the probability of inelastic
interaction at fixed impact parameter $b$ and frozen transverse positions of
all constituents. The schematic view of this process is shown in Fig.
\ref{fig1}.
Using the Glauber \cite{Glauber} and Czyz-Maximon \cite{cm} expansions we
have:\footnote{Here and in the following we assume that all constituents act
independently.}%
\begin{align}
1-\sigma(s_{q},s_{d};s_{q1},s_{q2};b) & =[1-\sigma_{qq}(b+s_{q1}%
-s_{q})][1-\sigma_{qq}(b+s_{q2}-s_{q})]\times\nonumber\\
\times & [1-\sigma_{qd}(b+s_{q1}-s_{d})][1-\sigma_{qd}(b+s_{q2}-s_{d})],
\end{align}
where $\sigma_{ab}(s)$ ($ab$ denotes $qq$ or $qd$) are inelastic differential
cross-sections of the constituents. \begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{fig1.eps}
\end{center}
\caption{Pion-proton scattering in the quark-diquark model. Pion as a
quark-quark system.}%
\label{fig1}%
\end{figure}
Following \cite{bb2} we parametrize $\sigma_{ab}(s)$ using simple Gaussian
forms:%
\begin{equation}
\sigma_{ab}(s)=A_{ab}e^{-s^{2}/R_{ab}^{2}}. \label{pab}%
\end{equation}
We constrain the radii $R_{ab}$ by the condition: $R_{ab}^{2}=R_{a}^{2}%
+R_{b}^{2}$ where $R_{a}$ denotes the quark or diquark's radius.\footnote{We
assume the quark radius to be the same in the proton and pion. We have
checked, however, that allowing the $R_{q}$'s to vary independently we are led
to the same conclusions.}
From (\ref{pab}) we obtain the total inelastic cross sections: $\sigma
_{ab}=\pi A_{ab}R_{ab}^{2}$. Following \cite{bb2} we assume that the ratios of
cross-sections satisfy the condition: $\sigma_{qd}/\sigma_{qq}=2$, which allows one
to evaluate $A_{qd}$ in terms of $A_{qq}$.
For the distribution of the constituents inside the proton we take a Gaussian
with radius $R$:%
\begin{equation}
D_{p}(s_{q},s_{d})=\frac{1+\lambda^{2}}{\pi R^{2}}e^{-(s_{q}^{2}+s_{d}%
^{2})/R^{2}}\delta^{2}(s_{d}+\lambda s_{q}), \label{Dp}%
\end{equation}
where the parameter $\lambda$ has the physical meaning of the ratio of the
quark and diquark masses $\lambda=m_{q}/m_{d}$ (the delta function guarantees
that the center-of-mass of the system moves along a straight line). One
expects $1/2\leq\lambda\leq1$.
For the distribution of quarks inside the pion we take a Gaussian with radius
$d$:
\begin{equation}
D_{\pi}(s_{q1},s_{q2})=\frac{1}{\pi d^{2}}e^{-(s_{q1}^{2}+s_{q2}^{2})/2d^{2}%
}\delta^{2}(s_{q1}+s_{q2}). \label{Dpi}%
\end{equation}
This allows one to define the effective pion radius $R_{\pi}$:
\begin{equation}
R_{\pi}^{2}=d^{2}+R_{q}^{2}.
\end{equation}
Now the calculation of $\sigma(b)$, given by (\ref{sigma}), reduces to
straightforward Gaussian integrations. The relevant formula is given in the
Appendix. Introducing this result into the general formulae given in Section
$1$ one can evaluate the total and elastic differential pion-proton cross-sections.
Our strategy is to adjust the parameters of the model so that it fits the data
best. We have analyzed the data for elastic $\pi^{+}p$ scattering at two
incident momenta $p_{lab}=$ $100$ GeV and $200$ GeV \cite{pion}. An example of
our calculation is shown in Fig. \ref{pip1}, where the differential cross
section $d\sigma/dt$ at $p_{lab}=200$ GeV, evaluated from the model, is
compared with data \cite{pion}.\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.4]{pip1.eps}
\end{center}
\caption{The model compared to data \cite{pion} on differential cross section
for elastic $\pi^{+}p$ scattering at $p_{lab}=200$ GeV (not all points
plotted, for clarity). Pion as a quark-quark system.}%
\label{pip1}%
\end{figure}
From Fig. \ref{pip1} one sees that it is possible to fit very precisely the
data up to $-t\approx1$ GeV$^{2}$. However, the model, with the pion as a
two-quark system, predicts the diffractive minimum which is not seen in the
data. In the next section we show that assuming the pion to be a single entity
(no constituent quark structure) we are able to remove this problem.
The relevant values of the parameters are given in Table \ref{Tab_a}%
.\footnote{The model is almost insensitive to the value of $\lambda$ (provided
that $1/2\leq$ $\lambda\leq1$).} \begin{table}[h]
\begin{center}%
\begin{tabular}
[c]{|c|c|c|c|c|c|}\hline\hline
$p_{lab}$ [GeV] & $R_{q}$ [fm] & $R_{d}$ [fm] & $R$ [fm] & $R_{\pi}$ [fm] &
$A_{qq}$\\\hline
$200$ & 0.26 & 0.82 & 0.28 & 0.44 & 1\\\hline
\end{tabular}
\end{center}
\caption{The parameters of the model at the incident momentum $p_{lab}=200$
GeV. Pion as a quark-quark system.}%
\label{Tab_a}%
\end{table}
It is intriguing to notice that the values of the most interesting parameters,
$R_{q}$ and $R_{d}$, are not far from those obtained in \cite{bb2} where
elastic $pp$ scattering was studied in a similar approach. Again we observe that
the diquark is rather large.
\section{Pion as a single entity}
In the present section we assume that the pion interacts as a single entity
i.e. the pion has no constituent quark structure. The schematic view of the
pion-proton scattering in this approach is shown in Fig. \ref{fig2}.
The inelastic pion-proton cross-section $\sigma(b)$ at a fixed impact
parameter $b$ reads:%
\begin{equation}
\sigma(b)=\int d^{2}s_{q}d^{2}s_{d}D_{p}(s_{q},s_{d})\sigma(s_{q},s_{d};b),
\end{equation}
with $D_{p}(s_{q},s_{d})$ given by (\ref{Dp}) and $\sigma(s_{q},s_{d};b)$
expressed by:
\begin{equation}
1-\sigma(s_{q},s_{d};b)=[1-\sigma_{q\pi}(b-s_{q})][1-\sigma_{d\pi}(b-s_{d})].
\end{equation}
In analogy to the previous approach, the inelastic differential quark-pion,
$\sigma_{q\pi}(s)$, and diquark-pion, $\sigma_{d\pi}(s)$, cross sections are
parametrized using simple Gaussian forms:
\begin{equation}
\sigma_{a\pi}(s)=A_{a\pi}e^{-s^{2}/R_{a\pi}^{2}}. \label{p_a_pi}%
\end{equation}
In this case we constrain the radii $R_{a\pi}$ by the condition: $R_{a\pi}%
^{2}=R_{a}^{2}+R_{\pi}^{2}$ where $R_{a}$ denotes the quark or diquark's
radius and $R_{\pi}$ denotes the pion's radius.\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{fig2.eps}
\end{center}
\caption{Pion-proton scattering with the pion as a single entity.}%
\label{fig2}%
\end{figure}
This gives:%
\begin{align}
\sigma(b) & =\frac{A_{q\pi}x}{x+r}e^{-b^{2}/(x+r)}+\frac{A_{d\pi}%
y}{y+\lambda^{2}r}e^{-b^{2}/(y+\lambda^{2}r)}\\
& -\frac{A_{q\pi}A_{d\pi}xy}{xy+yr+\lambda^{2}xr}e^{-b^{2}\tfrac
{x+y+(1+\lambda)^{2}r}{xy+yr+\lambda^{2}xr}},\nonumber
\end{align}
where $x=R_{q}^{2}+R_{\pi}^{2},$ $y=R_{d}^{2}+R_{\pi}^{2}$ and $r=R^{2}%
/(1+\lambda^{2})$. Introducing this result into the general formulae given in
Section $1$ one can evaluate the total and elastic differential pion-proton cross-sections.
From (\ref{p_a_pi}) we deduce the total inelastic cross sections:
$\sigma_{a\pi}=\pi A_{a\pi}R_{a\pi}^{2}$. As before we demand that the ratios
of cross-sections satisfy the condition: $\sigma_{d\pi}/\sigma_{q\pi}=2$, which
allows one to evaluate $A_{d\pi}$ in terms of $A_{q\pi}$.
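The closed-form profile above is then easy to tabulate. In the sketch below
the radii and $A_{q\pi}$ are taken from Table \ref{Tab_b} at $200$ GeV,
while $\lambda=0.5$ is an arbitrary choice within the insensitive range
quoted earlier; feeding the profile into the quadrature sketch of Section 1
yields $\sigma_{tot}$ and $d\sigma/dt$.
\begin{verbatim}
import numpy as np

def sigma_b_single_entity(b, Rq=0.25, Rd=0.79, R=0.28, Rpi=0.52,
                          Aq=0.75, lam=0.5):
    """Inelastic profile sigma(b) for the pion as a single entity;
    all radii and b in fm."""
    x = Rq**2 + Rpi**2
    y = Rd**2 + Rpi**2
    r = R**2 / (1.0 + lam**2)
    Ad = 2.0 * Aq * x / y                  # from sigma_dpi/sigma_qpi = 2
    t1 = Aq * x / (x + r) * np.exp(-b**2 / (x + r))
    t2 = Ad * y / (y + lam**2 * r) * np.exp(-b**2 / (y + lam**2 * r))
    den = x * y + y * r + lam**2 * x * r
    t3 = (Aq * Ad * x * y / den
          * np.exp(-b**2 * (x + y + (1.0 + lam)**2 * r) / den))
    return t1 + t2 - t3

# e.g., with the quadrature helper sketched in Section 1:
# sig_tot, dsdt = elastic_observables(sigma_b_single_entity)
\end{verbatim}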
It turns out that the model in this form works very well indeed, i.e., it is
possible to fit the data very precisely even up to $-t\approx3$ GeV$^{2}$. We
have analyzed the data at two incident momenta of $100$ and $200$ GeV
\cite{pion}. The results of our calculations are shown in Fig. \ref{pip2}%
.\begin{figure}[h]
\begin{center}
\includegraphics[scale=1.4]{pip2.eps}
\end{center}
\caption{Pion acts as a single entity. The model compared to data \cite{pion}
(not all points plotted, for clarity) on differential cross section for
elastic $\pi^{+}p$ scattering at $p_{lab}=100$ GeV (rescaled by a factor
$10^{-2}$) and $200$ GeV.}%
\label{pip2}%
\end{figure}
The relevant values of the parameters are given in Table \ref{Tab_b}. Again
the most interesting observation is the large size of the diquark, comparable
to the size of the proton. \begin{table}[h]
\begin{center}%
\begin{tabular}
[c]{|c|c|c|c|c|c|}\hline\hline
$p_{lab}$ [GeV] & $R_{q}$ [fm] & $R_{d}$ [fm] & $R$ [fm] & $R_{\pi}$ [fm] &
$A_{q\pi}$\\\hline
$100$ & 0.25 & 0.79 & 0.28 & 0.50 & 0.80\\\hline
$200$ & 0.25 & 0.79 & 0.28 & 0.52 & 0.75\\\hline
\end{tabular}
\end{center}
\caption{The parameters of the model at two incident momenta $p_{lab}=100$ GeV
and $200$ GeV. Pion acts as a single entity. }%
\label{Tab_b}%
\end{table}
\section{ Discussion and conclusions}
In conclusion, it was shown that the constituent quark-diquark structure of
the proton can account very well for the data on elastic $\pi^{+}p$
scattering. The detailed confrontation with data allows one to determine the
parameters characterizing the proton and pion structure. We confirm the large
size of the diquark, while the pion seems to interact as a single entity, i.e.,
without constituent quark structure.
Several comments are in order.
(a) We compared the model only to elastic $\pi^{+}p$ scattering data, however,
there is no statistically significant difference between $\pi^{+}p$ and
$\pi^{-}p$ data \cite{pion} at any $t$ value (at least up to $-t\approx3$
GeV$^{2}$).
(b) The pion seems to interact as a single entity. This suggests that during
a pion-nucleus collision the pion produces the same number of particles no
matter how many inelastic collisions it undergoes.\footnote{We thank Robert
Peschanski for this observation.}
\bigskip
\textbf{Acknowledgements}
We thank Andrzej Bialas for suggesting this investigation and illuminating
discussions. Discussions with Stephane Munier, Robert Peschanski, Michal
Pra{\-}sza{\-}lo{\-}wicz and Samuel Wallon are also highly appreciated.
\section{Appendix}
The problem is to calculate the following integral:%
\begin{gather}
\frac{r_{1}r_{2}}{\pi^{2}}\int d^{2}sd^{2}s^{\prime}e^{-r_{1}s^{2}}%
e^{-r_{2}s^{\prime2}}e^{-y_{1}(b+\lambda s-s^{\prime})^{2}}e^{-y_{2}(b+\lambda
s+s^{\prime})^{2}}\times\\
\times e^{-x_{1}(b-s-s^{\prime})^{2}}e^{-x_{2}(b-s+s^{\prime})^{2}%
}=\frac{r_{1}r_{2}}{\Omega}e^{-b^{2}\Gamma/\Omega},\nonumber
\end{gather}
where:%
\begin{align}
\Gamma & =(1+\lambda)^{2}\left[ 4x_{1}x_{2}(y_{1}+y_{2})+4y_{1}y_{2}%
(x_{1}+x_{2})+r_{2}(x_{1}+x_{2})(y_{1}+y_{2})\right] \nonumber\\
& +4r_{1}(x_{1}+y_{1})(x_{2}+y_{2})+r_{1}r_{2}(x_{1}+x_{2}+y_{1}+y_{2}),
\end{align}%
\begin{align}
\Omega & =(1-\lambda)^{2}(x_{1}y_{2}+x_{2}y_{1})+(1+\lambda)^{2}(x_{1}%
y_{1}+x_{2}y_{2})+\lambda^{2}[r_{2}(y_{1}+y_{2})+4y_{1}y_{2}]\nonumber\\
& +r_{1}(x_{1}+x_{2}+y_{1}+y_{2}+r_{2})+r_{2}(x_{1}+x_{2})+4x_{1}x_{2}.
\end{align}
Other needed integrals can be obtained by putting some of the $x_{1},$
$x_{2},$ $y_{1}$ or $y_{2}=0$.
\section{Introduction}
Typically, the most important goal of an agent in a reinforcement learning (RL) setting is to maximize the cumulative reward over the long run, based on the immediate reward it receives at each timestep from the environment. Consequently, the reward is the main source of information allowing the algorithm to decide whether a given interaction with its environment was positive or not. In a way, the reward plays a similar role for an agent as biological sensations do for humans~\cite{sutton2018reinforcement}.
Oftentimes, for each RL problem there exists some prespecified goal function to maximize in the designer's mind. Generally, in simple environments, it is straightforward to translate the desired intention (our goal) into the reward signal. The process of designing a reward signal usually comes down to trying many variants of the settings and observing the algorithm's improvement on the goal along the way \cite{sutton2018reinforcement}. Yet, one might argue that as we move towards more difficult environments, especially those that modern deep reinforcement learning (DRL) tries to study, this process can quickly become unpredictable and lead to surprising results \cite{muszynski2017happiness}. Additionally, RL algorithms have been shown to be able to ``hack" their own goal functions to deliver the reward in unintended and unexpected ways \cite{amodei2016concrete}.
As a result, in this work we concentrate on exploring the space of all possible reward signals in an RL setting in order to bypass hand-engineering of the reward signal. We propose an algorithm for evolving the reward signal at each timestep of the agent's interaction with the environment in order to optimize $N$ goals that are of interest to the designer of the system. We demonstrate that the algorithm can work in the domain of the game of Pong when trying to learn the goals of winning, losing and cooperating.
We argue that a number of the goal and reward signal pairs found by the algorithm are non-obvious and probably would not have been specified a priori. We also find differences in the training performance between our approach and typical score-based reward signals hand-coded in Pong. Concretely, for some of the goals the evolutionary approach yielded better stability of training or an overall better performance on a given goal.
\section{Background} \label{sec:background}
Reinforcement learning (RL) is an area of machine learning that describes the learner as an agent in an environment \cite{sutton2018reinforcement}.
Mathematically, the setting is usually formalized as a \emph{Markov decision process (MDP)}.
The goal of the agent, i.e., the goal of learning in an MDP, is to find a policy $\pi$ that maximizes the cumulative reward by best mapping states to action selection probabilities. The cumulative reward is described by the sum of the immediate scalar reward $r$ over $t$ timesteps: $\sum_{1}^{t} r_t$.
In the case when the model of the environment is not available, RL can be used to find $\pi$. \emph{Q-learning} \cite{dayan1992q} is arguably the most famous example of an RL algorithm. It is a model-free temporal difference algorithm. In Q-learning, a value function $Q(s,a)$ is calculated over state-action pairs. It is updated based on the immediate reward and the discounted expected future reward in the following way:
\begin{align}
Q(s_t, a_t) \gets & Q(s_t, a_t) + \alpha [r_{t+1} + \nonumber \\ &+ \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t)]
\end{align}
In the update rule above, $\gamma$ is the discount factor for future rewards, and $\alpha \in [0,1]$ is the learning rate that determines how quickly $Q$ is updated based on new reward information.
Q-learning is proven to converge to the optimal policy $\pi^*$, given a “sufficient” number of updates for each state-action pair and a decreasing learning rate $\alpha$.
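A minimal tabular rendering of this update rule (the state/action types and
hyper-parameter defaults below are illustrative, not prescribed by the
text):
\begin{verbatim}
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step; Q maps (state, action) -> value."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)               # zero-initialised value table
q_update(Q, s=0, a="right", r=1.0, s_next=1, actions=["left", "right"])
print(Q[(0, "right")])               # 0.1
\end{verbatim}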
The growing amount of data and much better technology to work with it have both contributed greatly to the recent reemergence of interest in neural networks. As a result, neural networks have grown from ``shallow" models of a few layers to ``deep" models that have quickly shown impressive practical power in the realm of image recognition and beyond \cite{lecun2015deep,schmidhuber2015deep}.
Moreover, it turned out that in complex environments (with a large number of states) it is possible to approximate the action-value function by a deep neural network with parameters $\theta$. This approach is known as the Deep Q-Network algorithm (DQN) \cite{mnih2015}, and it has given birth to Deep Reinforcement Learning (DRL). Concretely, in DQN $Q_\theta^*$ is obtained by minimizing the loss function:
\begin{equation}
L_{t}(\theta) = E_{h_{t},a_{t},r_{t},h_{t+1}}[(y_t - Q_\theta^\pi(h_t, a_t))^2],
\end{equation}
where the experience replay tuple is $h_t = (s_t, a_t, s_{t+1}, r_{t+1})$, the target is $y_t = r_{t+1} + \gamma \max_{a'}Q_\theta^\pi(h_{t+1}, a')$,
and where $\pi$ is an $\epsilon$-greedy policy that takes the action $\arg\max_{a}Q^\pi(h_{t}, a)$ with probability $1 - \epsilon$, or
takes a random action otherwise.
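For concreteness, one loss evaluation on a replay batch can be sketched as follows (our own PyTorch illustration; \texttt{q\_net} is assumed to be any network mapping a batch of states to per-action Q-values, the batch tensors are assumed to come from a replay buffer, and the separate target network and terminal-state masking of \cite{mnih2015} are omitted for brevity):
\begin{verbatim}
import torch
import torch.nn.functional as F

def dqn_loss(q_net, states, actions, rewards,
             next_states, gamma=0.99):
    # Q_theta(h_t, a_t) for the actions actually taken
    q_sa = q_net(states).gather(
        1, actions.unsqueeze(1)).squeeze(1)
    # y_t = r_{t+1} + gamma * max_a' Q_theta(h_{t+1}, a')
    with torch.no_grad():
        y = rewards + gamma * q_net(next_states).max(dim=1).values
    return F.mse_loss(q_sa, y)
\end{verbatim}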
The methods presented above focus on estimating value functions in order to solve RL problems. The problem can also be approached with evolutionary computation \cite{sutton2018reinforcement,eiben2015introduction}. Its main building block is an \emph{evolutionary algorithm}, whose main components are: representation (definition of individuals), evaluation function (or fitness function), population, parent selection mechanism, variation operators (recombination and mutation), and survivor selection mechanism (replacement) \cite{eiben2015introduction}.
In this work, we concentrate on using DQN as an approach to learn each goal based on the rewards that are generated by using a subset of elements of the evolutionary algorithm (see Algorithm \ref{alg:re}).
\section{Related work} \label{sec:relWork}
We present the literature connected to our work through two lenses: approaches to learning a reward signal specifically through evolution, and evolutionary methods with a focus on deep reinforcement learning.
\subsubsection{Reward evolution}
The most closely related works to that presented in this paper are those concentrating on the evolution of the reward. The observation leading to this approach is that the reward can be seen as one of the variables in the general RL setting. As a result, it can be optimized, and evolutionary algorithms are often a good choice for optimization tasks. A proposed solution can then be evaluated against the designer's goal and the worst performing solutions eliminated, leading to an improved result.
To the best of our knowledge, the work by Singh et al. [2009] is the closest in spirit to ours. The work places the design of a reward function in an evolutionary context. The paper formulates an optimal reward function given a fitness function and some distribution of environments. It then uses exhaustive search over the parameter space of the reward function. In the experiment, the authors find that the reward found through this process can differ from the one intended by the fitness function, but can still be advantageous \cite{singh2009rewards}.
Our work differs in that we present the evolution of the reward in the complex domain of the Pong game, in an environment with an opponent present. Whereas Singh et al. [2009] concentrate on experiments in a setting with predefined features that define the combinatorial space of reward functions, we consider mutations of the reward at each timestep. While Singh et al. [2009] present an emergence of reward functions, our approach additionally shows an emergence of complex behaviours of an agent.
Singh et al. [2009] extended their work with the focus on the problem of intrinsic versus extrinsic motivation \cite{singh2010intrinsically}. More recently, the framework presented by Singh et al. [2009] has been expanded to the space of reward functions spanned by a given set of feature states and multi-objective evaluation function \cite{grunitzki2017flexible}.
Genetic Programming has also been used to evolve a population of reward functions \cite{niekum2010evolved,niekum2011evolution,niekum2010genetic}. There, the search for reward functions is performed over the space of programmes generating the rewards. The approach is proposed as a method to ``alleviate the difficulty of scaling reward function search" and to replace the exhaustive search used by Singh et al. [2009].
Another approach has been to use evolutionary computation to evolve a population of agents' behaviours, allowing them to inherit the rewards as a secondary optimization process \cite{ackley1991interactions}. The rewards are mutated, hence the fitness can increase over time. There are two neural networks involved. The first, an evaluation network, is fixed and uses inherited weights to produce a scalar reward. The second, an action network, is also inherited, but trainable after initialization.
\subsubsection{Evolutionary methods in deep reinforcement learning}
Lastly, we broadly describe the context of using evolutionary methods specifically in a deep reinforcement learning setting.
Perhaps the first study to demonstrate the applicability of evolutionary methods in the context of complex environments was by Salimans et al. \cite{salimans2017evolution}. It showed that evolution strategies, given enough computational power, suffice to learn to perform well in the domain of Atari games. Research mixing the worlds of DRL and evolution has quickly followed. Ideas from evolutionary computation have mostly been applied to DRL by creating populations of agents which undergo mutations and increase the diversity of the strategies created for the agents. Then, when agents interact, the process of elimination guarantees high quality of the resulting solution. Such an approach was part of an algorithm mastering the difficult domain of Starcraft II \cite{alphastarblog}, and the evolutionary aspects of that system have also been investigated \cite{arulkumaran2019alphastar}. Another study drawing on ideas from evolution in the DRL setting is Jaderberg et al. \cite{jaderberg2018human}, with an objective of ``the emergence of complex cooperative agents." The goal is achieved by applying a variant of population-based training \cite{jaderberg2017population} to DRL agents.
Recently, reward search has been automated by adding an additional, evolutionary layer over the standard RL layer to find the reward that maximizes the task objective (through proxy rewards \cite{chiang2018learning}, parameterized versions of the standard environment rewards) in a continuous control task \cite{faust2019evolving}.
To the best of our knowledge, ours is the first study to concentrate specifically on the problem of learning a reward signal in a complex, multi-agent setting through an evolutionary algorithm without any prespecified features used as combinations to mutate the reward function. We create a population of agents by training them separately on different mutations of the reward signal. The reward signal is not prespecified, but its evolution allows us to achieve a set of complex and diverse behaviours.
\begin{algorithm}[tbp]
\caption{Reward signal evolution algorithm}
\label{alg:re}
\begin{algorithmic}[1]
\STATE initialize learning algorithm $A$
\STATE initialize $i$ goal functions $G_i(r)$
\STATE initialize $j$ unique random reward sequences $r_j$
\STATE store $\textbf{r} = \{i \Vert r_j\}_{i{\times}j}$
\STATE burn-in: train $A$ on $r_j$ for $M$ timesteps
\REPEAT
\STATE train $A(r_j)$ until $M$ timesteps
\STATE test $A(r_j)$ for $m$ timesteps
\STATE calculate $G_i(r_j)$ on test data
\FOR{$i$}
\STATE remove $\arg\min_{r_j} \bar G_i(r_j)$ from $\textbf{r}$
\STATE add $p$ unique random mutations of $r_j$ to $\textbf{r}$
\ENDFOR
\STATE $M := M + M$
\UNTIL{$convergence$}
\end{algorithmic}
\end{algorithm}
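To complement the pseudocode, the following Python skeleton shows how Algorithm \ref{alg:re} is instantiated for the Pong experiment of Section \ref{sec:Experiment} (a simplified sketch rather than our exact training harness; \texttt{train(signal)} and \texttt{evaluate(agent, goal)} are assumed wrappers around DQN training and the test phase, with \texttt{train} resuming each signal's agent across iterations):
\begin{verbatim}
import random

def evolve_rewards(train, evaluate, n_goals=3, init_j=3, p=2):
    space = [f"{b:03b}" for b in range(8)]      # all 2^3 signals
    random.shuffle(space)
    pop, unused = space[:init_j], space[init_j:]
    rows = [list(pop) for _ in range(n_goals)]  # row i tracks goal i
    while any(len(row) > 1 for row in rows):
        signals = {s for row in rows for s in row}
        agents = {s: train(s) for s in signals} # next 1M steps each
        for i, row in enumerate(rows):
            if len(row) > 1:                    # eliminate the worst
                row.remove(min(row,
                               key=lambda s: evaluate(agents[s], i)))
        mutants, unused = unused[:p], unused[p:]
        for row in rows:
            row.extend(mutants)                 # add p new mutations
    return [row[0] for row in rows]             # one winner per goal
\end{verbatim}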
\section{Reward signal evolution}
In this section we concentrate on the details of our approach. If we think about the interplay between the number of reward signals and goal functions, we can distinguish the following three cases: a) $1$ to $1$; b) $n$ to $1$; c) $n$ to $N$.
Usually, in RL we have a situation in which there is one predefined reward signal, designed with one assumed goal in mind; we call it the $1$ to $1$ case. One can also have a situation where many reward signals ($n$) are explored to find the one that best optimizes our goal ($n$ to $1$). The last combination is the one in which we have $n$ reward signal candidates and use them to optimize for many different goals ($n$ to $N$). Algorithm \ref{alg:re} presented in this section covers all these cases and can be seen as a general framework of sorts.
Concretely, the proposed approach (see Algorithm \ref{alg:re}) requires us to choose our learning algorithm, one or more goal functions we are interested in (these can be seen as fitness functions), and one or more unique random reward sequences (here, the goal is a function representing our intended behaviour of the agent, whereas the reward is the instantaneous scalar quantity received by the agent from the environment at each timestep). The algorithm of choice is then trained on the initial reward signals, and its performance on the specified goal functions is checked periodically. At that point two things happen: first, we eliminate the rewards leading to the worst result for each of the goals, and second, we add the required number of new reward signal mutations to the population. In the example provided in Section \ref{sec:Experiment}, convergence means that we have eliminated all the possible reward signal mutations and are left with one winner per goal. In general, one could create almost infinitely many reward signals. That can be seen as the \emph{mutation} phase. Once no new reward signals are mutated, the algorithm enters the \emph{elimination} phase, until we are left with the top $k$ solutions that we require.
Moreover, we distinguish the following ways to choose $r_j$ in Algorithm \ref{alg:re}:
\begin{itemize}
\item \textbf{Fixed} - the typical way for setting a reward signal in RL
\item \textbf{Semi-evolutionary} - the algorithm is free to mutate the reward within a set of rules specified by an expert (e.g. based on the position of the ball in a Pong game, or on interactions with specific elements of the environment)
\item \textbf{Evolutionary} - the algorithm is free to mutate the reward at any timestep $t$.
\end{itemize}
Finally, we note that the evolutionary approach for choosing $r_j$ can be seen as a \emph{reward sampling scheme}, sampling from the distribution of all possible reward signals.
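To illustrate the difference between the semi-evolutionary and evolutionary schemes, consider the following sketches (our own hypothetical helpers): the former flips the 0/1 reward attached to one expert-defined rule (here, a screen region encoded as one character of a bit string), while the latter may flip the reward at an arbitrary timestep of a stored reward sequence.
\begin{verbatim}
import random

def mutate_semi(signal):
    # flip one region's reward, e.g. '001' -> '011'
    i = random.randrange(len(signal))
    return signal[:i] + str(1 - int(signal[i])) + signal[i+1:]

def mutate_full(reward_seq):
    # flip the reward at one arbitrary timestep t
    t = random.randrange(len(reward_seq))
    reward_seq[t] = 1 - reward_seq[t]
    return reward_seq
\end{verbatim}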
\section{Experiment}\label{sec:Experiment}
\subsection{Set-up}
To illustrate how Algorithm \ref{alg:re} could work in practice, we chose the Pong game environment, as shown in Figure \ref{fig:pong_regions}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.2\textwidth]{pong_regions2}
\caption{The Pong game screen showing the three rectangular regions (a1, a2 and a3, separated by vertical red lines) in which the reward signal can be mutated (in each region it can take a value of either 0 or 1).}
\label{fig:pong_regions}
\end{figure}
In all experiments we concentrate on controlling the paddle on the right side of the screen.
We use DQN \cite{mnih2015} as the base algorithm $A$ in Algorithm \ref{alg:re}.
Typically, RL algorithms have one reward function to optimize for. Here, we instead specify three goal functions ($i = 3$) that the discovered reward signals should optimize. Concretely, the goals are:
\begin{itemize}
\item \textbf{Winning}, maximizing the cumulative number of points scored by the agent ($G_1(r)$)
\item \textbf{Losing}, maximizing the cumulative number of points lost by the agent ($G_2(r)$)
\item \textbf{Cooperation}, maximizing the time spent playing until the point is lost by either of the players ($G_3(r)$).
\end{itemize}
We use the semi-evolutionary way to choose $r_j$ by introducing \emph{mutation regions} to lower the computational cost of the algorithm and have a better intuition and control of the obtained results.
To generate the reward sequences $r_j$, the algorithm can choose a value of either 0 or 1 in each of the reward regions $a1$, $a2$, $a3$ depicted in Figure \ref{fig:pong_regions}. That means the reward can take one value when the ball is behind the left paddle (region $a1$), another when it is between the paddles ($a2$), and a third when it has passed the right paddle ($a3$).
The three regions, combined with a 0 or 1 reward value in each of them, yield a total of $2^3 = 8$ unique reward signals that the algorithm can discover through mutations. We set the initial value of $j$ to 3. Furthermore, we initialize the parameters $M$ and $m$ to the values of 1 million and 100,000, respectively. That means that we perform the elimination and mutation step of the algorithm after each 1 million timesteps, based on the test results from a further 100,000 timesteps. Lastly, after each elimination, we add two new unique reward signals to the population ($p = 2$).
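The resulting reward emission for a given 3-bit signal can be sketched as follows (our own illustration; the pixel coordinates standing in for the paddle positions are hypothetical):
\begin{verbatim}
LEFT_PADDLE_X, RIGHT_PADDLE_X = 16, 144  # hypothetical bounds

def region_reward(signal, ball_x):
    # signal: e.g. '001' emits reward 1 only when the ball
    # has passed the right paddle (region a3)
    if ball_x < LEFT_PADDLE_X:       # region a1
        region = 0
    elif ball_x <= RIGHT_PADDLE_X:   # region a2
        region = 1
    else:                            # region a3
        region = 2
    return int(signal[region])
\end{verbatim}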
Our aim in choosing this experiment was, first, to verify whether the proposed algorithm works at all. Second, limiting the number of possible mutations allows us to perform an exhaustive search over the space of reward signals and thus obtain a better comparison with the typical score-based baseline. We also acknowledge the complexity and scalability issues of the proposed approach, which are typical for evolutionary approaches; however, those were not the main focus of our work at this stage.
\subsection{Results}
First, we randomly generated three reward signals, and obtained the following values of $r_j$: $r_1 = 000$, $r_2 = 001$, and $r_3 = 011$. As an example, $r_2 = 001$ corresponds to the reward signal of 1 when the ball passes the paddle on the right, and 0 otherwise (see Figure \ref{fig:pong_regions}). The DQN algorithm was trained on each of the reward signals $r_j$ for 1 million timesteps, as a burn-in phase. Then, the algorithm entered its main loop. After the next 1 million timesteps of training, each $A(r_j)$ was tested for 100,000 timesteps. The values of $G_i(r_j)$ were calculated, and the reward signals corresponding to the lowest value for each goal were eliminated from $\textbf{r}$. Lastly, two unique reward signal mutations were added: $r_4 = 101$ and $r_5 = 110$. Hence, the state of $\textbf{r}$ after the first iteration was given by:
\begin{equation}
\textbf{r} =
\begin{Bmatrix}
1 \Vert \cancel{000} & 001 & 011 &\aug& 101 & 110\\
2 \Vert 000 & 001 & \cancel{011} &\aug& 101 & 110\\
3 \Vert \cancel{000} & 001 & 011 &\aug& 101 & 110
\end{Bmatrix}
\end{equation}
The algorithm continued to run for six more iterations. We present the steps of that process in a graphical form in Figures 2-4.
The figures capture the creation of each new mutation of the reward signal, its score for a given goal, and the reward signals eliminated along the way. Ties (cases where different reward signals led to the same outcome for a given goal) were decided at random.
Additionally, we present each of the iterations in a matrix form, to show the outcome of each loop in more detail:
\begin{equation}
\begin{split}
\textbf{r} = &
\begin{Bmatrix}
1 \Vert \cancel{000} & 001 & 011 &\aug& 101 & 110\\
2 \Vert 000 & 001 & \cancel{011} &\aug& 101 & 110\\
3 \Vert \cancel{000} & 001 & 011 &\aug& 101 & 110
\end{Bmatrix}
\\
\xrightarrow{3M}&
\begin{Bmatrix}
1 \Vert \cancel{001} & 011 & 101 & 110 &\aug& 010 & 100\\
2 \Vert 000 & 001 & 101 & \cancel{110} &\aug& 010 & 100\\
3 \Vert \cancel{001} & 011 & 101 & 110 &\aug& 010 & 100
\end{Bmatrix}
\\
\xrightarrow{4M}&
\begin{Bmatrix}
1 \Vert 011 & \cancel{101} & 110 & 010 & 100 &\aug& 111\\
2 \Vert 000 & 001 & 101 & 010 & \cancel{100} &\aug& 111\\
3 \Vert 011 & \cancel{101} & 110 & 010 & 100 &\aug& 111
\end{Bmatrix}
\\
\xrightarrow{5M}&
\begin{Bmatrix}
1 \Vert 011 & 110 & 010 & 100 & \cancel{111}\\
2 \Vert 000 & 001 & 101 & \cancel{010} & 111\\
3 \Vert 011 & 110 & 010 & 100 & \cancel{111}
\end{Bmatrix}
\\
\xrightarrow{6M}&
\begin{Bmatrix}
1 \Vert \cancel{011} & 110 & 010 & 100\\
2 \Vert 000 & 001 & 101 & \cancel{111}\\
3 \Vert \cancel{011} & 110 & 010 & 100
\end{Bmatrix}
\\
\xrightarrow{7M}&
\begin{Bmatrix}
1 \Vert 110 & \cancel{010} & 100\\
2 \Vert 000 & \cancel{001} & 101\\
3 \Vert 110 & 010 & \cancel{100}
\end{Bmatrix}
\\
\xrightarrow{8M}&
\begin{Bmatrix}
1 \Vert \cancel{110} & 100\\
2 \Vert 000 & \cancel{101}\\
3 \Vert 110 & \cancel{010}
\end{Bmatrix}
\end{split}
\end{equation}
Eventually, the surviving reward signal combinations for each of the goals were:
\begin{itemize}
\item \textbf{Winning}: $r_7 = 100$
\item \textbf{Losing}: $r_1 = 000$
\item \textbf{Cooperation}: $r_5 = 110$
\end{itemize}
\begin{figure}[t]
\centering
\scalebox{0.55}{
\begin{tikzpicture}
\begin{axis}[xlabel=timesteps (in million), ylabel=won, legend pos=outer north east]
\addplot[blue, mark=x] table[x=M, y=000] {win.dat};
\addlegendentry{000}
\addplot[red, mark=o] table[x=M, y=001] {win.dat};
\addlegendentry{001}
\addplot[green, mark=*] table[x=M,y=010] {win.dat};
\addlegendentry{010}
\addplot[cyan, mark=-] table[x=M,y=100] {win.dat};
\addlegendentry{100}
\addplot[violet, mark=|] table[x=M,y=011] {win.dat};
\addlegendentry{011}
\addplot[magenta, mark=triangle] table[x=M,y=101] {win.dat};
\addlegendentry{101}
\addplot[gray, mark=+] table[x=M,y=110] {win.dat};
\addlegendentry{110}
\addplot[orange, mark=oplus] table[x=M,y=111] {win.dat};
\addlegendentry{111}
\end{axis}
\end{tikzpicture}
}
\caption{Survival and fitness of a given reward signal when the goal is \textbf{winning}.}
\bigskip
\scalebox{0.55}{
\begin{tikzpicture}
\begin{axis}[xlabel=timesteps (in million), ylabel=lost, legend pos=outer north east]
\addplot[blue, mark=x] table[x=M, y=000] {lose.dat};
\addlegendentry{000}
\addplot[red, mark=o] table[x=M, y=001] {lose.dat};
\addlegendentry{001}
\addplot[green, mark=*] table[x=M,y=010] {lose.dat};
\addlegendentry{010}
\addplot[cyan, mark=-] table[x=M,y=100] {lose.dat};
\addlegendentry{100}
\addplot[violet, mark=|] table[x=M,y=011] {lose.dat};
\addlegendentry{011}
\addplot[magenta, mark=triangle] table[x=M,y=101] {lose.dat};
\addlegendentry{101}
\addplot[gray, mark=+] table[x=M,y=110] {lose.dat};
\addlegendentry{110}
\addplot[orange, mark=oplus] table[x=M,y=111] {lose.dat};
\addlegendentry{111}
\end{axis}
\end{tikzpicture}
}
\caption{Survival and fitness of a given reward signal when the goal is \textbf{losing}.}
\bigskip
\scalebox{0.55}{
\begin{tikzpicture}
\begin{axis}[xlabel=timesteps (in million), ylabel=co-op, legend pos=outer north east]
\addplot[blue, mark=x] table[x=M, y=000] {coop.dat};
\addlegendentry{000}
\addplot[red, mark=o] table[x=M, y=001] {coop.dat};
\addlegendentry{001}
\addplot[green, mark=*] table[x=M,y=010] {coop.dat};
\addlegendentry{010}
\addplot[cyan, mark=-] table[x=M,y=100] {coop.dat};
\addlegendentry{100}
\addplot[violet, mark=|] table[x=M,y=011] {coop.dat};
\addlegendentry{011}
\addplot[magenta, mark=triangle] table[x=M,y=101] {coop.dat};
\addlegendentry{101}
\addplot[gray, mark=+] table[x=M,y=110] {coop.dat};
\addlegendentry{110}
\addplot[orange, mark=oplus] table[x=M,y=111] {coop.dat};
\addlegendentry{111}
\end{axis}
\end{tikzpicture}
}
\caption{Survival and fitness of a given reward signal when the goal is \textbf{cooperation}.}
\end{figure}
\subsection{Comparisons and sensitivity of the results}
Here, we concentrate on analysing the sensitivity of the results presented in the previous section and on comparisons with baselines.
Table \ref{table:algo_results} shows the intermediate score on each of the goals for each of the reward signals every 1 million timesteps. This allows us to evaluate the sensitivity of the presented solutions, as we can investigate what could have happened had the algorithm made different choices along the way. The table also includes the scores at the said 1-million checkpoints for three baseline DQN algorithms ('b100', 'b010', 'b001' in Table \ref{table:algo_results}) and a random play baseline ('rand' in Table \ref{table:algo_results}). The random play baseline takes random actions at each timestep: with equal probability it can move up, move down, or stay in the same place. The baseline DQN algorithms have the same architecture as the DQN used in Algorithm \ref{alg:re}; however, the reward signals on which they learn differ. The rewards for the baseline DQNs are based on the score. This means that 'b100' receives a reward of 1 only when it scores a point, 'b001' only when it loses a point, and 'b010' receives a reward of 1 as long as neither of the players scores. The baseline algorithms 'b100', 'b010', 'b001' are closest, it would seem, to the semi-evolved algorithms that used '100', '010', and '001' as their respective reward signals. The main difference is in the frequency of the timeframes for which the algorithm receives the reward.
The results of random play, as shown in Table \ref{table:algo_results}, are rather consistent across all three proposed goal functions.
To analyze the results in Table \ref{table:algo_results} in greater detail, we look at each of the goals separately.
\begin{table}[t]
\centering
\caption{Performance of DQN on the goals of winning, losing and cooperating for all evolved signal combinations, the baseline score-based signals, and the random play baseline at each 1M checkpoint between 2M and 9M timesteps, when no signal elimination is performed. Additionally, average values of the score obtained per goal across the whole training session are provided.}
\label{table:algo_results}
\scalebox{0.73}{
\begin{tabular}{lrrrrrrrrr}
\toprule
& \multicolumn{1}{l}{} & \multicolumn{1}{l}{2M} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{3M} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{4M} & \multicolumn{1}{l}{} \\
$r_j$ & \multicolumn{1}{l}{won} & \multicolumn{1}{l}{lost} & \multicolumn{1}{l}{co-op} & \multicolumn{1}{l}{won} & \multicolumn{1}{l}{lost} & \multicolumn{1}{l}{co-op} & \multicolumn{1}{l}{won} & \multicolumn{1}{l}{lost} & \multicolumn{1}{l}{co-op} \\ \midrule
000 & 0 & 1366 & 73.17 & 0 & 1375 & 72.66 & 0 & 1372 & 72.84 \\
001 & 27 & 1328 & 73.78 & 0 & 1376 & 72.47 & 0 & 1359 & 73.54 \\
010 & 79 & 624 & 142.06 & 79 & 541 & 160.97 & 68 & 624 & 144.33 \\
100 & 312 & 586 & 111.05 & 141 & 523 & 150.04 & 284 & 553 & 119.26 \\
011 & 0 & 1123 & 88.90 & 2 & 1235 & 80.72 & 0 & 1369 & 73.01 \\
101 & 0 & 1380 & 72.42 & 0 & 1311 & 76.21 & 0 & 1380 & 72.42 \\
110 & 137 & 579 & 139.36 & 138 & 416 & 180.09 & 66 & 504 & 175.07 \\
111 & 0 & 1275 & 78.41 & 2 & 1369 & 72.90 & 1 & 1385 & 72.11 \\
b100 & 62 & 773 & 119.54 & 0 & 1311 & 76.24 & 256 & 895 & 86.64 \\
b010 & 152 & 855 & 99.18 & 244 & 344 & 169.92 & 110 & 644 & 132.37 \\
b001 & 0 & 1378 & 72.50 & 0 & 1378 & 72.47 & 0 & 1392 & 71.92 \\
rand & 62 & 997 & 94.24 & 75 & 965 & 96.07 & 62 & 1006 & 93.46 \\ \midrule
& & & & & & & & & \\
& \multicolumn{1}{l}{} & \multicolumn{1}{l}{5M} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{6M} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{7M} & \multicolumn{1}{l}{} \\
$r_j$ & \multicolumn{1}{l}{won} & \multicolumn{1}{l}{lost} & \multicolumn{1}{l}{co-op} & \multicolumn{1}{l}{won} & \multicolumn{1}{l}{lost} & \multicolumn{1}{l}{co-op} & \multicolumn{1}{l}{won} & \multicolumn{1}{l}{lost} & \multicolumn{1}{l}{co-op} \\ \midrule
000 & 3 & 1120 & 88.84 & 0 & 1373 & 72.68 & 0 & 1372 & 72.79 \\
001 & 1 & 1383 & 72.14 & 0 & 1386 & 72.10 & 51 & 1202 & 79.72 \\
010 & 163 & 277 & 226.97 & 54 & 481 & 186.40 & 99 & 339 & 227.83 \\
100 & 341 & 552 & 111.79 & 223 & 431 & 152.05 & 394 & 434 & 120.65 \\
011 & 13 & 743 & 132.07 & 15 & 1302 & 75.81 & 0 & 1310 & 76.23 \\
101 & 0 & 1359 & 73.51 & 0 & 1370 & 72.92 & 0 & 1369 & 72.90 \\
110 & 343 & 272 & 161.76 & 192 & 405 & 167.16 & 206 & 294 & 199.16 \\
111 & 0 & 1383 & 72.19 & 41 & 1175 & 82.08 & 0 & 1231 & 81.20 \\
b100 & 433 & 304 & 135.44 & 550 & 355 & 110.37 & 577 & 267 & 118.3 \\
b010 & 262 & 412 & 148.15 & 64 & 547 & 163.49 & 102 & 509 & 163.11 \\
b001 & 0 & 1376 & 72.57 & 0 & 1373 & 72.78 & 0 & 1384 & 72.21 \\
rand & 72 & 990 & 94.06 & 49 & 986 & 96.54 & 68 & 933 & 99.84 \\ \midrule
& & & & & & & & & \\
& \multicolumn{1}{l}{} & \multicolumn{1}{l}{8M} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{9M} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{Avg.} & \multicolumn{1}{l}{} \\
$r_j$ & \multicolumn{1}{l}{won} & \multicolumn{1}{l}{lost} & \multicolumn{1}{l}{co-op} & \multicolumn{1}{l}{won} & \multicolumn{1}{l}{lost} & \multicolumn{1}{l}{co-op} & \multicolumn{1}{l}{won} & \multicolumn{1}{l}{lost} & \multicolumn{1}{l}{co-op} \\ \midrule
000 & 0 & 1382 & 72.27 & 0 & 1374 & 72.70 & 0 & 1342 & 74.74 \\
001 & 0 & 1375 & 72.62 & 0 & 1385 & 72.14 & 10 & 1349 & 73.56 \\
010 & 48 & 503 & 180.99 & 98 & 338 & 228.74 & 86 & 466 & 187.29 \\
100 & 494 & 499 & 100.44 & 505 & 385 & 112.05 & 337 & 495 & 122.17 \\
011 & 2 & 1381 & 72.23 & 0 & 1382 & 72.23 & 4 & 1231 & 83.90 \\
101 & 0 & 1380 & 72.40 & 0 & 1382 & 72.33 & 0 & 1366 & 73.14 \\
110 & 194 & 200 & 253.17 & 161 & 286 & 222.98 & 180 & 370 & 187.34 \\
111 & 0 & 1370 & 72.94 & 0 & 1381 & 72.38 & 6 & 1321 & 75.53 \\
b100 & 528 & 289 & 122.09 & 482 & 289 & 129.45 & 361 & 560 & 112.26 \\
b010 & 140 & 331 & 211.9 & 83 & 559 & 155.55 & 145 & 525 & 155.46 \\
b001 & 0 & 1392 & 71.76 & 0 & 1382 & 72.31 & 0 & 1382 & 72.32 \\
rand & 58 & 999 & 94.49 & 65 & 975 & 96.07 & 64 & 981 & 95.60 \\ \bottomrule
\end{tabular}
}
\end{table}
\subsubsection{Learning to lose}
Losing every single point seems to be the easiest goal to learn, as expected. The reward signal '001' led to a very good result on the goal of losing the maximum number of points, but on that goal its performance was close to that of many other reward signals that simply proved too difficult to learn, and similar to the score-based baseline 'b001'. Semi-evolved rewards '000' and '111' did not lead to any significant learning, even over 9 million timesteps of training, and as a result achieved a good performance on the goal of losing. The final results on all the goals for the said reward signals were close to those of '011' and '101', which also proved too confusing for the DQN to learn from based on the same pixel input.
\subsubsection{Learning to win}
Three out of the eight semi-evolved reward signals, namely '010', '100' and '110', led to an algorithm that was able to score more points than random play. It seems that as long as the reward is given for keeping the ball between the paddles, and there is no incentive to lose, the signal is good enough to learn to score some points. Interestingly, the signal '110' led to the second best performing algorithm on the goal of scoring as many points as possible. It was only topped by the perhaps ``obvious" reward signal choice of '100'. The baseline algorithm 'b100' proved better on average than '100' across the 8 test checkpoints; however, it suffered from less consistency, and even got stuck in a local extremum at 3 million timesteps (the algorithm kept the paddle in the lower part of the screen and only occasionally managed to hit the ball). This could be explained by the frequency of the timeframes for which the algorithm receives the reward during training. The performance of the algorithm trained on the '100' signal improved more consistently on its goal, and it achieved the highest score on $G_1$ of all the algorithms at 9 million timesteps. It also seems to have been learning faster than the baseline in the beginning.
\subsubsection{Learning to cooperate}
As far as cooperation is concerned, two reward signals led to a very similar average performance on that goal. Concretely, training an algorithm on either the reward signal '010' or '110' would give comparable performance. We find this interesting, as we would argue that '110' is not an obvious choice to set up a priori with cooperation in mind. Both reward signals led to better cooperation than the baseline 'b010'. It is worth noting that the reward signal '110' led not only to the highest average cooperation, but also to the third highest score on the ``winning" goal (including the baseline 'b100').
\subsubsection{Sensitivity}
Finally, Algorithm \ref{alg:re} led to the following choice of reward signals: '100', '000', '110' for $G_1$, $G_2$ and $G_3$, respectively. The results are not very sensitive to the choices between different reward signals along the way. This is because in the final round we always keep the solution leading to the best performance in that particular round, and in previous iterations we only eliminate the worst performing solution. This means that as long as the signal does not lead to the worst performance on a given goal consistently, it has a high probability of making it through to the next round. The signals '100', '000', '110' lead to a very good performance on their respective goals in each round and are the best in the final round, as a result they emerge as the winners of the algorithm unfailingly. Those signals can be seen as dominant strategies for their particular goals and they end up surviving no matter at which stage they were introduced. In comparison, due to its low performance at one of the milestones, the baseline signal 'b100' does not progress to the final elimination round 39\% of the time.
For clarity, we summarize the main results of the experiment in the list below:
\begin{itemize}
\item Algorithm \ref{alg:re} proved to work on the high-dimensional input space of pixels
\item many rewards can lead to a good performance on the goal of ``losing"
\item fewer reward signals lead to learning to cooperate
\item the goal of ``winning" proved the most difficult to learn, with the smallest number of reward signals leading to the desired outcome
\item our experiments give further evidence of the fact that there exist multiple reward signals to learn a given goal
\item interestingly, the performance during training of the baseline 'b100' seemed much less stable and predictable than for the algorithm trained on the reward signal '100', possibly due to the frequency of timesteps for which each algorithm ``experienced" the reward
\item lastly, the sensitivity of the results shows that it is not obvious at which point the training phase should stop, in order to have the agent performing at its best with respect to the goal.
\end{itemize}
\section{Discussion}
The problem of reward signal design is an interesting open area of research in RL, especially as we move from low- towards high-dimensional input spaces, where it is potentially easier to misspecify the reward and end up with unwanted behaviour of the algorithm.
As a possible approach for exploring the space of all possible reward signals, in this work we presented an algorithm that mutates the reward at each timestep of interaction with the environment. We showed that the proposed algorithm can be used to learn the complex behaviours of winning, cooperation and losing by specifying high-level goal functions, but without providing a concrete reward signal for any of them. The reward signals were discovered from scratch, and we found that often there is no unique reward signal that achieves a certain level of performance on a given goal. The number of reward signals leading to a given goal also varies, depending on the complexity of the goal.
The proposed algorithm is highly adaptable, i.e. the designer can change the learning algorithm used, modify the goals (including combinations of basic goals for a specific application), change the statistic used as the fitness function, control the number of mutations, etc.
There are many possible extensions of the presented approach. First, sampling the reward from a continuous rather than a discrete set of values seems worth investigating, as are ways of lowering the computational cost of the algorithm. Additionally, it has been observed that training using independent RL can lead to overfitting to the opponent's strategy \cite{lanctot2017unified,muszynski2017happiness}, and one way to overcome that problem is to use population-based training \cite{jaderberg2018human}. Hence, we argue that joining reward evolution with population-based training could improve such solutions to an even greater degree.
\section*{Acknowledgments}
This work was supported by Microsoft Research through its PhD Scholarship Programme.
\bibliographystyle{named}
\section{Introduction}
To accommodate the ever increasing traffic in the wireless spectrum, the FCC has legalized unlicensed access to a part of the licensed spectrum band, which is known as {\em secondary access}. However, secondary access will only proliferate when it is rendered profitable to license holders (primaries)\footnote{Primaries are unlikely to encourage a \lq\lq{}free ride\rq\rq{} where secondaries (unlicensed users) can access idle channels without paying for them. If primaries are not compensated, they may therefore create artificial activity by transmitting some noise to deny secondaries.}. We devise a framework to enable primaries to decide the price they would charge unlicensed users (secondaries). The price will depend on the transmission rate, which evolves randomly owing to the usage of subscribers of primaries and fading. A secondary receives a payoff from a channel depending on the transmission rate offered by the channel and the price quoted by the primary. Secondaries buy the channels which give them the highest payoff, which leads to a {\em competition} among primaries.
Price selection in oligopolies has been extensively investigated in economics as a non co-operative Bertrand game \cite{mwg} and its modifications \cite{Osborne, Kreps}. Price competition among wireless service providers has also been explored to a great extent \cite{Ileri, Mailespectrumsharing, Mailepricecompslotted, Xing, Niyatospeccrn, Niyatomultipleseller,Zhou,kavurmacioglu,yitan,duan,zhang,jia,yang,sengupta,kim}. We divide this genre of works into two parts: i) papers which model the price competition as an auction (\cite{sengupta,Xu}), and ii) papers which model the price competition as a non co-operative game (\cite{Ileri, Mailespectrumsharing, Mailepricecompslotted, Xing, Niyatospeccrn, Niyatomultipleseller, kavurmacioglu,jia,yang,yitan,zhang,lin,duan,kim}). We now distinguish our work with respect to these papers. Compared to the genre of work in the first category, our model is readily scalable and a central auctioneer is not required. Some papers in the second category \cite{jia,yang,yitan,zhang,duan,kim} considered the quality of primaries as a factor while selecting the price. But all of the papers mentioned above ignore two important properties which distinguish the spectrum oligopoly from standard oligopolies. First, a primary selects a price knowing only the transmission rate of its own channel; it is unaware of the transmission rates offered by the channels of its competitors. Thus, if a primary quotes a high price, it will earn a large profit if it sells its channel, but may not be able to sell at all; on the other hand, a low price will enhance the probability of a sale but may also fetch lower profits in the event of a sale. Second, the same spectrum band can be utilized simultaneously at geographically dispersed locations without interference, but it cannot be utilized simultaneously at interfering locations; this special feature, known as {\em spatial reuse}, adds another dimension to the strategic interaction, as a primary now has to cull a set of non-interfering locations, denoted as an {\em independent set}, at which to offer its channel, apart from selecting a price at every node of that set. We accommodate the first of these characteristics, the uncertainty of competition, in this paper. Building on the results we obtain here, we accommodate both characteristics in the sequel.
Some recent works (\cite{Gaurav1,Janssen,Kimmel,gauravjsac}) that consider uncertainty of competition and the impact of spatial reuse (\cite{Zhou} and \cite{gauravjsac}) assume that the commodity on sale can be in one of two states: available or otherwise. This assumption does not capture different transmission rates offered by available channels. A primary may now need to employ different pricing strategies and different independent set selection strategies for different transmission rates, while in the former case a single pricing and independent set selection strategy will suffice as a price needs not be quoted for an unavailable commodity. Our investigation seeks to contribute in this space.
We study the price competition in a spectrum oligopoly in a sequence of two papers. Overall, a primary has to select an independent set and a pricing strategy at each location of that set. In this paper we focus only on the pricing strategy of primaries by studying the game at a single location. The results we characterize provide indispensable tools for studying the joint decision problem involving multiple locations. In the sequel to this paper, we provide a framework for solving the joint decision problem, building on the single-location pricing strategy. In addition, the secondary market is initially likely to be introduced at geographically dispersed locations, which are unlikely to interfere with each other; thus, the price competition at each location reduces to the single-location pricing problem which we solve in this paper. The reduced problem turns out to be of independent interest as it exhibits certain desirable properties which no longer hold when there are multiple locations.
We formulate the price selection as a game in which each primary selects a price depending on the transmission rate its channel provides. Since prices can take real values, the strategy space is uncountably infinite, which precludes a guarantee of the existence of a Nash equilibrium (NE) strategy profile. Also, unlike when the strategy space is finite, standard algorithms for computing an NE strategy profile do not exist.
First, we consider that primaries interact only once (Section~\ref{sec:onenodegame}). We consider that the preference of the secondaries can be captured by a penalty function which associates a penalty value to each channel that is available for sale depending on its transmission rate and the price quoted. We show that for a large class of penalty functions, there exists a {\em unique} NE strategy profile, which we explicitly compute (Section~\ref{sec:computation}). In the sequel to this paper, we show that this is no longer the case when a primary owns a channel over multiple locations.
We show that the unique NE strategy profile is symmetric, i.e. the price selection strategies of all primaries are statistically identical. Our analysis reveals that primaries select prices in a manner such that the preference order of transmission rates is retained. This negates the intuition that prices ought to be selected so as to render all transmission rates equally preferable to a secondary. The analysis also reveals that the unique NE strategy profile consists of ``nice'' cumulative distributions in that they are continuous and strictly increasing; the former rules out pure strategy NEs and the latter ensures that the support sets are contiguous.
Subsequently, utilizing the explicit computation algorithm for the symmetric NE strategies, we analytically investigate the reduction in expected profit suffered under the unique symmetric NE pricing strategies as compared to the maximum possible value allowing for collusion among primaries (Section~\ref{sec:slnumerical}). Finally, we extend our one shot game at a single location to a repeated game where primaries interact with each other multiple times (Section~\ref{sec:repeatedgame}), and compute a subgame perfect Nash equilibrium (SPNE) in which a primary attains a payoff arbitrarily close to the payoff it would have obtained if primaries were to select prices jointly; thus, price competition does not lower the payoff.
{\em All proofs are relegated to Appendix}.
\section{System Model}
\label{sec:model}
We consider a spectrum market with $l$ ($l\geq 2$) primaries. Each primary owns a channel. The channels leased by primaries to secondaries constitute disjoint frequency bands. There are $m$ secondaries. We initially consider the case when primaries know $m$, and later generalize our results to a random, a priori unknown $m$ (Section~\ref{sec:random}).
The channel of a primary provides a certain transmission rate to a secondary who is granted access. The transmission rate (i.e. the Shannon capacity) depends on 1) the number of subscribers of a primary that are using the channel\footnote{The Shannon capacity \cite{cover} for user $i$ at a channel is equal to $\log\left(1+\dfrac{p_{i}h_i}{\sum_{j\neq i}p_jh_j+\sigma^2}\right)$ where $p_k$ is the power with which user $k$ is transmitting, $\sigma^2$ is the power of the white noise, and $h_k$ is the channel gain between transmitter and receiver, which depends on the propagation condition. If a secondary is using the channel, then $p_i, h_i$ in the numerator are the attributes associated with the secondary, while $p_j, h_j, j\neq i$ are those of the subscribers of the primaries. In general, the power $p_j$ is constant for subscriber $j$ of a primary, but the number of subscribers varies randomly over time. The power $p_i$ with which a secondary transmits may be a constant or may decrease with the number of subscribers of primaries in order to limit the interference caused to each subscriber. The above factors contribute to the random fluctuation in the capacity of a channel offered to a secondary. In our setting, $p_i, h_i$ are assumed to be the same across the secondaries for a channel, which we justify later. However, these values can be different for different channels.} and 2) the propagation condition of the radio signal. The transmission rate evolves randomly over time owing to the random fluctuations of the usage of subscribers of primaries and of the propagation condition\footnote{Referring to footnote 2, $h_k$ and $\sigma^2$ evolve randomly owing to the random scattering of particles in the ionosphere and troposphere; this phenomenon is also known as {\em fading}.}. We assume that in every time slot, the channel of a primary belongs to one of the states $0, 1, \ldots, n$\footnote{We discretize the available transmission rates into a fixed number of states $n$. This is a standard approximation of a continuous quantity \cite{fischer, Luo}. The corresponding inaccuracy becomes negligible as $n$ increases.}. State $i$ provides a lower transmission rate to a secondary than state $j$ if $i < j$, and state $0$ arises when the channel is not available for sale, i.e. secondaries cannot use the channel when it is in state $0$\footnote{Generally, a minimum transmission rate is required to send data. State $0$ indicates that the transmission rate is below that threshold owing to either the excessive usage of subscribers of primaries or the propagation condition.}. A channel is in state $i \geq 1$ w.p. $q_i>0$ and in state $0$ w.p. $1-q$, where $q = \sum_{i=1}^n q_i$, independent of the channel states of other primaries\footnote{At a given slot, the channel state differs across primaries mainly because of differences in i) the number of subscribers that are using the channel and ii) the propagation conditions. Since different primaries have different subscriber bases, their usage behaviors are largely uncorrelated. Also, channels of different primaries operate on different frequency bands and have different noise levels; thus, the propagation conditions are also uncorrelated across the channels.}. Thus, the states of the channels of the primaries are independent and identically distributed. We do not make any assumption on the relationship between $q_i$ and $l$, or between $q_i$ and $m$\footnote{Since each primary sells its channel to only one secondary, referring to footnotes 2 and 3, the transmission rate (and hence $q_i$) at a channel does not depend on $m$ (the secondary demand) in practice.}.
We assume
\begin{align}\label{prob}
q<1.
\end{align}
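As a small illustration of this channel model (a sketch we add for concreteness, with purely illustrative numbers), the i.i.d. states of the $l$ channels in a slot can be sampled as follows:
\begin{verbatim}
import random

def sample_states(l, q):
    # q = [q_1, ..., q_n]; state 0 (channel off the market)
    # has probability 1 - sum(q), positive since q < 1
    dist = [1 - sum(q)] + list(q)
    return [random.choices(range(len(dist)), weights=dist)[0]
            for _ in range(l)]

# e.g. sample_states(4, [0.3, 0.2, 0.1])
\end{verbatim}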
We assume that the transmission rate offered by the channel of a primary is the same for all secondaries. We justify this assumption in the following. We consider the setting where the secondaries are of one of the following types: i) service providers who do not lease spectrum from the FCC and serve end-users through secondary access, or ii) end-users who directly buy a channel from primaries. In the initial stages of deployment of the secondary market, secondaries will be of the first type. When the secondaries are of the first type, a primary does not know the transmission rate to the end-users who are subscribers of the service provider. A primary measures the channel qualities across different positions in the locality (e.g. a cell) and considers the average as the channel quality that an end-user subscribed to a secondary service provider will get at that location. This average will be identical across different end-users subscribed to different secondary service providers, and hence the channel quality is identical across the secondaries.
If the secondaries are of the second type, then the primary may know the transmission rate that an end-user will attain, which may be different at different positions. However, if a primary needs to select a price for each position in the location, then it needs to compute the transmission rate at each possible position in that location (e.g. a cell). Thus, the computation and storage requirements for the primary would be large. Such a position-based pricing scheme would also not be attractive to end-users, since they may perceive it as discriminatory when the price changes as their position changes within a location (e.g. a cell). Thus, such a position-based pricing scheme may not be practically implementable. Hence, a primary estimates the channel quality and decides the price for the estimated channel quality, considering that the channel quality does not vary significantly across the location. This is because end-users who are interested in buying the channel from a primary at a location most likely have similar propagation paths: for example, secondary users who buy the channels are most often present in buildings (e.g. a shopping complex, an office or a residential area). The distance from the base station of the primary to the end-users is also similar because the end-users are close to each other in a location. Thus, the path-loss component will also be similar. Hence, the channel quality is considered to be identical across the secondaries.
Though the quality of a channel is identical across secondaries, the quality can vary across channels. A primary can estimate the transmission rate by sending a pilot test signal at different positions within the location and then applying standard estimation techniques \cite{estimation}.
\subsection{Penalty functions of Secondaries and Strategy of Primaries}
Each primary selects a price for its channel if it is available for sale.
We formulate the decision problem of primaries as a non-cooperative game. A primary selects a price with the knowledge of the state of its channel, but without knowing the states of the other channels; a primary however knows $l, m, n, q_1, \ldots, q_n$.
Secondaries are passive entities. They select channels depending on the price and the transmission rate a channel offers. We assume that the preference of secondaries can be represented by a penalty function. If a primary selects a price $p$ at channel state $i$, then the channel incurs a penalty $g_i(p)$ for all secondaries. As the name suggests, a secondary prefers a channel with a lower penalty. Since lower prices should induce lower penalties, we assume that each $g_i(\cdot)$ is strictly increasing; therefore, $g_i(\cdot)$ is invertible. A primary selects a price for its available channel, but since there is a one-to-one relation between the price and the penalty at each state, we can equivalently consider that primaries select penalties instead. For a given price, a channel of higher transmission rate must induce a lower penalty; thus, $g_i(p)<g_j(p)$ if $i>j$. We can also consider the desirability of a channel for a secondary to be the negative of the penalty. A secondary will not buy any channel whose desirability falls below a certain threshold, equivalently, whose penalty exceeds a certain threshold. We consider that this threshold is the same for each secondary and denote it by $v$, i.e. no secondary will buy any channel whose penalty exceeds $v$. Secondaries have the same penalty function and the same upper bound for the penalty value ($v$); thus, secondaries are statistically identical.
Primary $i$ chooses its penalty using an arbitrary probability distribution function (d.f.) $\psi_{i,j}(\cdot)$\footnote{Probability distribution refers to the cumulative distribution function (c.d.f.). The c.d.f. of a random variable $X$ is the function $G(x)=P(X\leq x) \ \forall x\in \Re$ \cite{df}.}
when its channel is in state $j \geq 1$. If $j = 0$ (i.e., the channel is unavailable), primary $i$ chooses a penalty of $v+1$: this is equivalent
to considering that such a channel is not offered for sale as no secondary
buys a channel whose penalty exceeds $v$.
\begin{defn}
A strategy of primary $i$ for state $j\geq 1$, $\psi_{i,j}(\cdot)$, provides the penalty distribution it uses when the channel state is $j\geq 1$. $S_i=(\psi_{i,1},\ldots,\psi_{i,n})$ denotes the strategy of primary $i$, and $(S_1,\ldots,S_l)$ denotes the strategy profile of all primaries (players).
$S_{-i}$ denotes the strategy profile of primaries other than $i$.
\end{defn}
\subsection{Payoff of Primaries}
We denote $f_i(\cdot)$ as the inverse of $g_i(\cdot)$. Thus, $f_i(x)$ denotes the price when the penalty is $x$ at channel state $i$. We assume that $g_i(\cdot)$ is continuous; thus $f_i(\cdot)$ is continuous and strictly increasing. Also, $f_i(x)<f_j(x)$ for each $x$ and $i<j$. Each primary incurs a transition cost $c>0$ when it leases its channel to a secondary, and therefore never selects a price lower than $c$.
If primary $i$ selects a penalty $x$ when its channel state is $j$, then its payoff is
\begin{equation}\label{eq:uij}
\begin{cases} f_j(x)-c & \text{if the primary sells its channel,}\\
0 & \text{otherwise.}
\end{cases}
\end{equation}
Note that if $Y$ is the number of channels offered for sale whose penalties are upper bounded by $v$, then those with the $\min(Y, m)$ lowest penalties are sold, since secondaries select channels in increasing order of penalty. Ties among channels with identical penalties are broken randomly and symmetrically among the primaries.
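To make the allocation rule concrete, the following sketch (our own illustration) determines which primaries sell in a slot, given the quoted penalties:
\begin{verbatim}
import random

def sold_channels(penalties, v, m):
    # penalties: one quoted penalty per primary (v + 1 encodes
    # an unavailable channel, as described above)
    on_sale = [(x, random.random(), i)  # random key breaks ties
               for i, x in enumerate(penalties) if x <= v]
    on_sale.sort()                      # increasing penalty
    return [i for _, _, i in on_sale[:m]]
\end{verbatim}
A primary $i$ in the returned set with channel state $j$ and quoted penalty $x$ then earns $f_j(x)-c$; all others earn $0$.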
\begin{defn}\label{defn:expectedpayoff}
$u_{i,j}(\psi_{i,j},S_{-i})$ is the expected payoff when primary $i$\rq{}s channel is at state $j$ and selects strategy $\psi_{i,j}(\cdot)$ and other primaries use strategy $S_{-i}$.
\end{defn}
\subsection{Nash Equilibrium}
We use the Nash equilibrium (NE) as the solution concept, which we define below.
\begin{defn}\cite{mwg}
A \emph{Nash equilibrium} $(S_1, \ldots, S_l)$ is a strategy profile such that no primary can improve its expected profit by unilaterally deviating from its strategy. So, with $S_i=(\psi_{i,1},\ldots,\psi_{i,n})$, $(S_1, \ldots, S_l)$ is an NE if for each primary $i$ and channel state $j$
\begin{align}
u_{i,j}(\psi_{i,j},S_{-i})\geq u_{i,j}(\tilde{\psi}_{i,j},S_{-i}) \ \forall \ \tilde{\psi}_{i, j}.
\end{align}
An NE $(S_1, \ldots, S_l)$ is a \emph{symmetric NE} if $S_i = S_j$ for all $i, j.$
\end{defn}
An NE is asymmetric if $S_i\neq S_j$ for some $i,j\in \{1,\ldots,l\}$.
Note that if $m\geq l$, then primaries select the highest penalty $v$ with probability $1$. This is because, when $m\geq l$, the channel of a primary will always be sold. Henceforth, we consider $m<l$.
\subsection{A Class of Penalty Functions}\label{sec:a class}
Since $g_i(\cdot)$ is strictly increasing in $p$ and $g_i(p)>g_j(p)$ for $i<j$, we focus on penalty functions of the form $g_i(p)=h_1(p)-h_2(i)$, where $h_1(\cdot)$ and $h_2(\cdot)$ are strictly increasing in their respective arguments.
Note that $-g_i(p)$ may be considered as the utility that a secondary would get at channel state $i$ when the price is set at $p$. Such utility functions are commonly assumed to be concave \cite{song}, which is possible only if $g_i(\cdot)$ is convex in $p$, i.e. $h_1(\cdot)$ is convex. It is easy to show that $g_i(p)=h_1(p)-h_2(i)$ with $h_1(\cdot)$ convex satisfies the following property:
\textbf{Assumption 1}
\begin{align}\label{con1}
\dfrac{f_i(y)-c}{f_j(y)-c}<\dfrac{f_i(x)-c}{f_j(x)-c} \ \mbox{ for all } x>y > g_i(c), i<j.
\end{align}
Moreover, when $g_i(p)=h_1(p)/h_2(i)$, the inequality in (\ref{con1}) is satisfied for certain convex functions $h_1(\cdot)$ such as $h_1(p)=p^r$ ($r\geq 1$) and $h_1(p)=\exp(p)$. In addition, there is a large set of functions that satisfy (\ref{con1}), such as $g_i(p) = \zeta\left(p -h_2(i)\right)$ and $g_i(p) = \zeta\left(p/h_2(i)\right)$, where $\zeta(\cdot)$ is continuous and strictly increasing. Moreover, functions $g_i(\cdot)$ whose inverses are of the form $f_i(x)=h(x)+h_2(i)$ or $f_i(x) = h(x)\cdot h_2(i)$, where $h(\cdot)$ is {\em any} strictly increasing function, satisfy Assumption 1. Henceforth, we only consider penalty functions which satisfy (\ref{con1}).
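As a quick sanity check (a worked example we add for concreteness, not part of the original derivation), take $f_i(x)=h(x)+h_2(i)$ and $i<j$. Then $f_j(x)-f_i(x)=h_2(j)-h_2(i)=:\Delta>0$ is constant in $x$, so
\begin{align*}
\frac{f_i(x)-c}{f_j(x)-c}=1-\frac{\Delta}{f_j(x)-c},
\end{align*}
which is strictly increasing in $x$ for $x>g_i(c)$ (where $f_j(x)>f_i(x)>c$ and $f_j(\cdot)$ is strictly increasing); hence $\frac{f_i(y)-c}{f_j(y)-c}<\frac{f_i(x)-c}{f_j(x)-c}$ whenever $x>y>g_i(c)$, which is exactly (\ref{con1}).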
We mostly consider $g_i(\cdot)$s which do not depend on $l,m,n, q_1,\ldots,q_n$ (e.g. Figs.~\ref{fig:dist},~\ref{fig:asymptotic_rne},~\ref{fig:efficiency}). But in some cases we also consider that $g_i(\cdot)$ is a function of $n$ (e.g. Fig.~\ref{fig:example_nvaries}).
\section{One Shot Game: Structure, Computation, Uniqueness and Existence of NE}
\label{sec:onenodegame}
First, we identify key structural properties of an NE (should it exist). Next, we show that these properties lead to a unique strategy profile, which we explicitly compute; thus the symmetric NE is unique should it exist. We finally prove that the strategy profile resulting from the above structural properties is indeed an NE, thereby establishing existence.
\vspace{-0.5cm}
\subsection{Structure of an NE}\label{sec:singlestructure}
We provide some important properties that any NE $\left(S_1, \ldots, S_l\right)$ (where $S_i=\{\psi_{i,1},\ldots,\psi_{i,n}\}$) must satisfy. First, we show that each primary must use the same strategy (Theorem~\ref{thm:noasymmetricNE}). In the sequel to this paper, we show that this is no longer the case when there are multiple locations; in fact, there may be multiple asymmetric NEs in that setting. We then show that under the NE strategy profile a primary selects lower penalty values when the channel quality is high (Theorem~\ref{thm2}). We also show that the $\psi_{i,j}(\cdot)$ are continuous and have contiguous support sets (Theorems~\ref{thm1} and \ref{thm3}).
\begin{thm}\label{thm:noasymmetricNE}
Each primary must use the same strategy i.e. $\psi_{i,j}(\cdot)=\psi_{k,j}(\cdot)$ $\forall i,k\in \{1,\ldots,l\}$ and $j\in \{1,\ldots,n\}$.
\end{thm}
Theorem~\ref{thm:noasymmetricNE} implies that an NE strategy profile cannot be asymmetric. Since the channel statistics are identical and the payoff functions are identical for all primaries, this game is symmetric. Given that the game is symmetric, one might expect only symmetric NE strategies. Although there are symmetric games in which asymmetric NEs do exist \cite{Fey}, we are able to rule them out in our setting using the assumptions that naturally arise in practice, namely those stated in Sections II.A and II.B, and Assumption 1, which is satisfied by a large class of functions that are likely to arise in practice (Section II.E). Thus, a significance of Theorem~\ref{thm:noasymmetricNE} is that it holds for a large class of penalty functions which are likely to arise in practice. However, we show that an asymmetric NE may exist in the absence of Assumption 1 (Section~\ref{sec:asymmetricNE}).
Now, we point out another significance of the above theorem. In a symmetric game it is difficult to implement an asymmetric NE. For example, with two players, if $(S_1,S_2)$ is an NE with $S_1\neq S_2$, then $(S_2,S_1)$ is also an NE owing to the symmetry of the game. The realization of such an NE is only possible when one player knows whether the other is using $S_1$ or $S_2$. But coordination among players is infeasible a priori, as the game is non co-operative. Thus, Theorem~\ref{thm:noasymmetricNE} entails that we can avoid such complications in this game. We show through Lemma~\ref{lm:computation} and Theorem~\ref{thm4} that this game has a unique symmetric NE. Thus, Theorem~\ref{thm:noasymmetricNE}, Lemma~\ref{lm:computation} and Theorem~\ref{thm4} together entail that there exists a unique NE strategy profile.
Since every primary uses the same strategy, \textbf{we drop the indices corresponding to primaries and represent the strategy of any primary as $S = (\psi_{1}(\cdot), \psi_2(\cdot),\ldots,\psi_n(\cdot))$, where $\psi_{i}(\cdot)$ denotes the strategy at channel state $i$.} Thus, we can represent a strategy profile in terms of only one primary.
\begin{defn}\label{defn:phi}
$\phi_j(x)$ is the expected profit of a primary whose channel is in state $j$ and which selects
a penalty $x$.\footnote{Note that $\phi_j(x)$ depends on the strategies of the other primaries; to keep the notation simple, we do not make this explicit.}
\end{defn}
\begin{thm}
\label{thm1}
$\psi_{i}(\cdot), i\in \{1,\ldots,n\}$ is a continuous probability distribution. The function $\phi_j(\cdot)$ is continuous.
\end{thm}
The above theorem implies that $\psi_i(\cdot)$ does not have a jump at any penalty value, i.e., no penalty value is chosen with positive probability. We now justify this property intuitively. There are uncountably many penalty values, and $\psi_i(\cdot)$ can have jumps at only some of them. Intuitively, there is no inherent asymmetry among the penalty values within the interval $(g_i(c),v)$, i.e., at all penalty values except the endpoints of the interval $[g_i(c),v]$; thus, a primary does not prioritize any of those values. Now we explain intuitively why $\psi_i(\cdot)$ does not have a jump at the endpoints either. First, at penalty $g_i(c)$ a primary gets a payoff of $0$ when the channel state is $i$, whereas the payoff at any penalty value greater than $g_i(c)$ is positive; thus $\psi_i(\cdot)$ does not prioritize the penalty value $g_i(c)$. On the other hand, if a primary selects penalty $v$ with positive probability, then the rest would select slightly lower penalties in order to enhance their sales, and the probability that the primary sells its channel decreases. Thus, $\psi_i(\cdot)$ also does not have a jump at $v$.
Note that if a primary used a deterministic NE strategy at channel state $i$, then $\psi_{i}(\cdot)$ would have a jump from $0$ to $1$ at the selected penalty value; such a $\psi_i(\cdot)$ is {\em not} continuous. Thus, the above theorem rules out any deterministic NE strategy. The fact that $\phi_j(\cdot)$ is continuous has an important technical consequence: it guarantees the existence of the best response penalty in Definition~\ref{br} stated in Section~\ref{sec:computation}.
\begin{defn}\label{dlu}
We denote the lower and upper endpoints of the support set\footnote{The support set of a probability distribution is the smallest closed set such that the probability of its complement is $0.$\cite{df}} of $\psi_i(.)$ as $L_i$ and $U_i$ respectively i.e.
\begin{equation*}\label{n77}
L_i=\inf\{x: \psi_{i}(x)>0\}.
\end{equation*}
\begin{equation*}\label{n77a}
U_i=\inf\{x: \psi_i(x)=1\}.
\end{equation*}
\end{defn}
We next show that primaries select higher penalty when the transmission rate is low. More specifically, we show that upper endpoint of the support set of $\psi_{i}(\cdot)$ is upper bounded by the lower endpoint of $\psi_{i-1}(\cdot)$.
\begin{thm}
\label{thm2}
$U_i\leq L_{j}$, if $j<i$.
\end{thm}
Theorem~\ref{thm2} may appear counterintuitive. We prove it using the assumptions stated in Section II. In particular, we rely on Assumption 1, which is satisfied by a large class of penalty functions (Section~\ref{sec:a class}). Thus, the significance of Theorem~\ref{thm2} is that this counterintuitive structure holds for a large class of penalty functions. However, in Section~\ref{sec:counterexample2} we show that Theorem~\ref{thm2} need not hold in the absence of Assumption 1.
Fig.~\ref{fig:dist} illustrates $L_i$s and $U_i$s in an example scenario (The distribution $\psi_i(\cdot)$ in Fig.~\ref{fig:dist} is plotted using (\ref{c5})).
\begin{figure*}
\begin{minipage}{.32\linewidth}
\begin{center}
\includegraphics[width=65mm, height=40mm]{bar_psi_1_2.png}
\vspace*{-.1cm}
\end{center}
\end{minipage}\hfill
\begin{minipage}{.32\linewidth}
\begin{center}
\includegraphics[width=60mm, height=40mm]{bar_psi_2.png}
\end{center}
\end{minipage}\hfill
\begin{minipage}{.32\linewidth}
\begin{center}
\includegraphics[width=60mm, height=40mm]{bar_psi_3.png}
\end{center}
\end{minipage}
\caption{\small The figure on the left shows the d.f.s $\psi_i(\cdot), i=1,\ldots,3$ as functions of the penalty for an example setting: $v=100, c=1, l=21, m=10, n=3, q_1=q_2=q_3=0.2$ and $g_i(x)=x-i^3$. Note that the support sets of the $\psi_i(\cdot)$s are disjoint, with $L_3=17.2766$, $U_3=17.345=L_2$, $U_2=22.864=L_1$, and $U_1=100=v$. The figures in the center and on the right show the d.f.s $\psi_2(\cdot)$ and $\psi_3(\cdot)$ respectively, using different scales compared to the left figure.}
\label{fig:dist}
\vspace*{-.6cm}
\end{figure*}
In general, a continuous NE penalty selection distribution may not be contiguous, i.e., its support set may be a union of multiple disjoint closed intervals. Thus, the support set of $\psi_{i}(\cdot)$ may be of the form $[a_1,b_1]\cup\ldots\cup [a_d,b_d]$ with $b_k<a_{k+1}, k\in\{1,\ldots,d-1\}$, $a_1=L_i$ and $b_d=U_i$. In this case, $\psi_i(\cdot)$ is strictly increasing in each $[a_k,b_k], k\in \{1,\ldots,d\}$, but it is constant in the \lq\lq{}gap\rq\rq{} between consecutive intervals, i.e., $\psi_i(\cdot)$ is constant in the interval $[b_{k-1},a_k]$, $k\in\{2,\ldots,d\}$. We rule out the above possibility in the following theorem, i.e., the support set of $\psi_i(\cdot)$ consists of a single closed interval $[L_i,U_i]$, which also establishes that $\psi_{i}(\cdot)$ is strictly increasing from $L_i$ to $U_i$. In the following theorem we also rule out any \lq\lq{}gap\rq\rq{} between the support sets of the different $\psi_i(\cdot)$, $i=1,\ldots,n$.
\begin{thm}
\label{thm3}
The support set of $\psi_i(\cdot), i=1,\ldots,n$ is $[L_i, U_i]$, with $U_i=L_{i-1}$ for $i=2,\ldots,n$ and $U_1=v$.
\end{thm}
Theorem~\ref{thm2} states that $L_{i-1}\geq U_i$. Theorem~\ref{thm3} further confirms that $L_{i-1}=U_{i}$, i.e., there is no \lq\lq{}gap\rq\rq{} between the support sets. Theorem~\ref{thm3} also implies that there is no \lq\lq{}gap\rq\rq{} within a support set. We now explain the intuition behind this result. If there are points $a$ and $b$ in the support set such that the primaries do not select any penalty in the interval $(a,b)$, then a primary can get a strictly higher payoff at any penalty in $(a,b)$ than at $a$ or just below $a$. Thus, a primary would select penalties at or just below $a$ with probability $0$, which implies that $a$ cannot be in the support set of an NE strategy profile. We prove Theorem~\ref{thm3} using the above insight and Theorem~\ref{thm2}.
Figure~\ref{fig:dist} illustrates d.f. $\psi_{i}(\cdot)$ for an example scenario.
\vspace{-0.5cm}
\subsection{Computation, Uniqueness and Existence}\label{sec:computation}
We now show that the structural properties of an NE identified in
Theorems~\ref{thm:noasymmetricNE}-\ref{thm3} are satisfied by a unique strategy profile,
which we explicitly compute (Lemma~\ref{lm:computation}). This proves the uniqueness of a NE subject to the existence. We show that the strategy profile is indeed an NE in Theorem~\ref{thm4}.
We start with the following definitions.
\begin{defn}\label{br}
A \emph{best response} penalty for a channel in state $j\geq 1$ is $x$ if and only if
\begin{equation}\label{eq:bestresponse}
\phi_j(x)=\underset{y\in \Re}{\text{sup}}\phi_j(y).
\end{equation}
Let $u_{j,max} = \phi_j(x)$ for a best response $x$\footnote{Since $\phi_j(\cdot)$ is continuous by Theorem~\ref{thm1} and the penalty must be selected within the interval $[g_j(c),v]$, the supremum in (\ref{eq:bestresponse}) is attained, i.e., $\phi_j(x)=u_{j,max}$ for some $x$.} for state $j$, $j \geq 1$, i.e., $u_{j,max}$ is the maximum expected profit that a primary earns when its channel is in state $j$, $j \geq 1$.
\end{defn}
\begin{rmk}
In an NE strategy profile a primary selects only best response penalties with positive probability. Thus, if a primary selects $x$ with positive probability at channel state $i$, then the expected payoff to the primary at $x$ must be $u_{i,max}$.
\end{rmk}
\begin{defn}\label{defn:w}
Let $w(x)$ ($w_i$, respectively) be the probability of at least $m$ successes out of $l-1$ independent Bernoulli trials, each of which occurs with probability $x$ ($\sum_{k=i}^{n}q_k$, respectively). Thus,
\begin{eqnarray}
w(x)&=&\sum_{i=m}^{l-1}\dbinom{l-1}{i}x^i(1-x)^{l-i-1}.\label{d4}\\
w_i&=&w\left(\sum_{j=i}^{n}q_j\right) \quad \text{for } i=1,\ldots,n \quad \text{and } w_{n+1}=0\label{d5}.
\end{eqnarray}
\end{defn}
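For concreteness, the following Python sketch (purely illustrative; the helper names are our own) computes $w(x)$ of Definition~\ref{defn:w} as a binomial tail probability, together with the induced constants $w_i$:
\begin{verbatim}
from math import comb

def w(x, l, m):
    # tail of a Binomial(l-1, x) distribution: at least m successes
    return sum(comb(l - 1, i) * x**i * (1 - x)**(l - 1 - i)
               for i in range(m, l))

def w_const(i, q, l, m):
    # w_i = w(q_i + ... + q_n); by convention w_{n+1} = 0
    # q is a 0-indexed list [q_1, ..., q_n], states are 1-indexed
    return 0.0 if i == len(q) + 1 else w(sum(q[i - 1:]), l, m)

# Example: the setting of Fig. 1 (l=21, m=10, q_1=q_2=q_3=0.2)
print([w_const(i, [0.2, 0.2, 0.2], 21, 10) for i in range(1, 5)])
\end{verbatim}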
The following lemma provides the explicit expression for the maximum expected payoff that a primary can get under an NE strategy profile.
\begin{lem} \label{lu} For $1 \leq i \leq n$,
\begin{eqnarray}u_{i,max} & = & p_i-c,\nonumber\\
\mbox{where } p_i& =& c+(f_i(L_{i-1})-c)(1-w_i),
\label{n51}\\
\mbox{and } L_i&=&g_i\left(\dfrac{p_i-c}{1-w_{i+1}}+c\right),\quad L_{0}=v.\label{n52}
\end{eqnarray}
\end{lem}
\begin{rmk}\label{rmk:expectedpayoff}
Expected payoff obtained by a primary under an NE strategy profile at channel state $i$ is given by $p_i-c$.
\end{rmk}
\begin{rmk}
Starting from $L_0=v$, we obtain $p_1$ (from (\ref{n51})) and using $p_1$ we obtain $L_1$ by (\ref{n52}) which we use to obtain $p_2$ from (\ref{n51}). Thus, recursively we obtain $p_i$ and $L_i$ for $i=1,\ldots,n$.
\end{rmk}
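The recursion of the above remark is straightforward to implement. The following Python sketch is an illustration, reusing \texttt{w\_const} from the earlier sketch and instantiating the example penalty functions $g_i(x)=x-i^3$ of Fig.~\ref{fig:dist}, whose inverses are $f_i(x)=x+i^3$ (these concrete choices are ours):
\begin{verbatim}
def compute_p_L(v, c, l, m, q, f, g):
    # L_0 = v; then alternately p_i = c + (f_i(L_{i-1}) - c)(1 - w_i)
    # and L_i = g_i((p_i - c)/(1 - w_{i+1}) + c), as in (5) and (6)
    n = len(q)
    L, p = {0: v}, {}
    for i in range(1, n + 1):
        p[i] = c + (f(i, L[i - 1]) - c) * (1 - w_const(i, q, l, m))
        L[i] = g(i, (p[i] - c) / (1 - w_const(i + 1, q, l, m)) + c)
    return p, L

f = lambda i, x: x + i**3   # inverse of the example g_i(x) = x - i**3
g = lambda i, x: x - i**3
p, L = compute_p_L(v=100, c=1, l=21, m=10, q=[0.2, 0.2, 0.2], f=f, g=g)
\end{verbatim}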
We now obtain expressions for $\psi_{i}(\cdot)$ using the expressions for $L_i$ and $p_i$. We use the fact that $w(\cdot)$ is strictly increasing and continuous in $[0,1]$.
\begin{lem} \label{lm:computation}
An NE strategy profile (if it exists) $\left(\psi_1(\cdot), \ldots, \psi_n(\cdot)\right)$ must
comprise: \begin{align}\label{c5}
\psi_i(x)= \begin{cases}
0, & \text{if } x<L_i\\
\dfrac{1}{q_i}\left(w^{-1}\left(\dfrac{f_i(x)-p_i}{f_i(x)-c}\right)-\sum_{j=i+1}^{n}q_j\right), & \text{if } L_{i}\leq x\leq L_{i-1}\\
1, & \text{if } x>L_{i-1},
\end{cases}
\end{align}
where $p_i$ and $L_i$, $i=0,\ldots,n$, are defined in (\ref{n51}) and (\ref{n52}).
\end{lem}
\begin{rmk}
Using (\ref{c5}) we can easily compute the strategy profile $(\psi_1(\cdot),\ldots,\psi_{n}(\cdot))$. Fig.~\ref{fig:dist} illustrates d.f. $\psi_i(\cdot)$ for an example scenario.
\end{rmk}
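As an illustration of how (\ref{c5}) can be evaluated in practice, the following Python sketch computes $w^{-1}$ by bisection, which is justified since $w(\cdot)$ is continuous and strictly increasing on $[0,1]$; the tolerance and the function names are our own assumptions:
\begin{verbatim}
def w_inv(y, l, m, tol=1e-12):
    # bisection: valid because w is continuous and strictly increasing
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if w(mid, l, m) < y else (lo, mid)
    return (lo + hi) / 2

def psi(i, x, p, L, q, c, f, l, m):
    # the candidate NE distribution at channel state i, penalty x
    if x < L[i]:
        return 0.0
    if x > L[i - 1]:
        return 1.0
    y = (f(i, x) - p[i]) / (f(i, x) - c)
    return (w_inv(y, l, m) - sum(q[i:])) / q[i - 1]
\end{verbatim}
One can check from (\ref{n51}) and (\ref{n52}) that this expression evaluates to $0$ at $x=L_i$ and to $1$ at $x=L_{i-1}$, consistently with Lemma~\ref{dcont}.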
The following lemma ensures that $\psi_i(\cdot)$ as defined in Lemma~\ref{lm:computation} is indeed a strategy profile.
\begin{lem}\label{dcont}
$\psi_i(\cdot)$ as defined in Lemma~\ref{lm:computation} is a strictly increasing and continuous probability distribution function.
\end{lem}
Fig.~\ref{fig:dist} illustrates continuous and strictly increasing $\psi_i(\cdot)$ for $i=1,\ldots, 3$ for an example setting.
Explicit computation in Lemma~\ref{lm:computation} shows that the NE strategy profile is unique, if it exists. There is a plethora of symmetric games \cite{mwg} in which no NE strategy profile exists. However, we establish that the strategy profile of the form (\ref{c5}) is indeed an NE.
\begin{thm}\label{thm4} Strategy profile as defined in Lemma~\ref{lm:computation} is an NE.
\end{thm}
Hence, we have shown that
\begin{thm}\label{singlelocation}
The strategy profile, in which each primary randomizes over the penalties in the range $[L_i,L_{i-1}]$ using the continuous probability distribution function $\psi_i(\cdot)$ (defined in Lemma~\ref{lm:computation}) when the channel state is $i$, is the unique NE strategy profile.
\end{thm}
\subsection{Random Demand}\label{sec:random}
Note that all our results readily generalize to a random number of secondaries $M$ with probability mass function (p.m.f.) $\Pr(M=m)=\gamma_m$. A primary does not know the exact realization of the number of secondaries, but it knows the p.m.f. We only have to redefine $w(x)$ as
\begin{eqnarray}
\sum_{k=0}^{\max(M)}\gamma_k\sum_{i=k}^{l-1}\dbinom{l-1}{i}x^i(1-x)^{l-1-i}\nonumber
\end{eqnarray}
and $w_{n+1}=\gamma_0$.
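For illustration, the redefined $w(x)$ can be computed along the lines of the earlier sketch; here \texttt{gamma} is the assumed p.m.f. of $M$, truncated at the maximal demand:
\begin{verbatim}
from math import comb

def w_random(x, l, gamma):
    # gamma[k] = Pr(M = k); the outer sum averages the binomial tails
    return sum(g_k * sum(comb(l - 1, i) * x**i * (1 - x)**(l - 1 - i)
                         for i in range(k, l))
               for k, g_k in enumerate(gamma))
\end{verbatim}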
\section{Performance Evaluations of NE Strategy Profile in Asymptotic Limit}\label{sec:slnumerical}
In this section, we analyze the reduction in payoff under the NE strategy profile due to competition.
First, we study the expected payoff that a primary obtains under the unique NE strategy profile in the asymptotic limit (Lemma~\ref{eff}). We then compare the expected payoff of primaries under the NE strategy profile with the payoff that primaries get when they collude (Lemma~\ref{thresh}). Finally, we investigate the asymptotic variation of the strategy profiles of primaries with $n$ in an example setting (Fig.~\ref{fig:example_nvaries}).
Recall from Remark~\ref{rmk:expectedpayoff} that expected payoff obtained by a primary under the unique NE strategy profile at channel state $i$ is given by $p_i-c$. Next,
\begin{defn}\label{payoff}
Let $R_{NE}$ denote the ex-ante expected profit of a primary at the Nash equilibrium. Then,
\begin{eqnarray}\label{eq:rne}
R_{NE}=\sum_{i=1}^{n}(q_i.(p_i-c)).
\end{eqnarray}
\end{defn}
Note that $l\cdot R_{NE}$ denotes the total ex-ante expected payoff obtained by the primaries at the NE strategy profile. We obtain
\begin{lem}\label{eff}
Let $c_{j}=g_j(c), j=1,\ldots,n$. When $l\rightarrow \infty$, then
\begin{eqnarray}
p_i-c\rightarrow \begin{cases} f_i(v)-c\quad & \text{if } (l-1)\sum_{j=1}^{n}q_j<m\nonumber\\
f_i(c_{k})-c \quad & \text{if } (l-1)\sum_{j=k+1}^{n}q_j<m\nonumber\\ & \quad <(l-1)\sum_{j=k}^{n}q_j, \quad 1\leq k< i\nonumber\\
0 \quad & \text{if } m< (l-1)\sum_{j=i}^{n}q_j.
\end{cases}
\end{eqnarray}
\end{lem}
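The case analysis of Lemma~\ref{eff} can be evaluated mechanically for given parameters. The following Python sketch (illustrative only; it returns \texttt{None} on the boundary cases with equalities, which the lemma's strict inequalities do not cover) makes the regimes explicit:
\begin{verbatim}
def asymptotic_payoff(i, v, c, l, m, q, f, g):
    # regimes of the lemma for the limit of p_i - c as l grows large
    if (l - 1) * sum(q) < m:
        return f(i, v) - c
    for k in range(1, i):                      # 1 <= k < i
        if (l - 1) * sum(q[k:]) < m < (l - 1) * sum(q[k - 1:]):
            return f(i, g(k, c)) - c           # f_i(c_k) - c, c_k = g_k(c)
    if m < (l - 1) * sum(q[i - 1:]):
        return 0.0
    return None  # boundary cases are not covered
\end{verbatim}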
\begin{figure*}
\begin{minipage}{.48\linewidth}
\begin{center}
\includegraphics[width=60mm, height=40mm]{asymptotic_expay.png}
\caption{\small This figure illustrates the variation of $p_i-c, i=1,\ldots,n$ with $m$ in an example setting: $l=51, n=3, v=100, c=1, q_1=q_2=q_3=0.2$ and $g_i(x)=x^2-i^3$. For $m\leq 5$, $p_i-c\approx 0$ for all $i$. For $5\leq m\leq 15$, $p_i-c\approx 0$ for $i=1,2$. For $15\leq m\leq 20$, $p_1-c\approx 0$. When $m$ exceeds $40$, $p_i-c, i=1,2,3$ closely match the highest possible expected value, as Lemma~\ref{eff} indicates.}
\label{fig:asymptotic_rne}
\vspace{-0.4cm}
\end{center}
\end{minipage}\hfill
\begin{minipage}{.48\linewidth}
\begin{center}
\includegraphics[width=60mm,height=40mm]{efficiency_single.png}
\caption{\small Variation of efficiency ($\eta$) with $m$ in an example setting: $g_i(x)=x^2-i^3$, $l=51, n=3, q_1=q_2=q_3=0.2, v=100$ and $c=1$. When $m\leq 5$, $\eta\approx0$. When $m\geq 35$, $\eta\approx 1$.}
\label{fig:efficiency}
\vspace{-0.4cm}
\end{center}
\end{minipage}
\end{figure*}
We illustrate Lemma~\ref{eff} using an example in Figure~\ref{fig:asymptotic_rne}.
Intuitively, competition increases as $m$ decreases, and primaries choose prices progressively closer to the lower limit $c$. Thus, the expected payoff $p_i-c, i=1,\ldots,n$ decreases as $m$ decreases. The above lemma reveals that, as $m$ becomes smaller, only those primaries whose channels provide higher transmission rates can have a strictly positive payoff, i.e., $p_i-c>0$ (Fig.~\ref{fig:asymptotic_rne}).
From Lemma~\ref{eff} and (\ref{eq:rne}) we readily obtain
\begin{cor}\label{cor:rne}
When $l\rightarrow \infty$, then,
\begin{eqnarray}
R_{NE}\rightarrow\begin{cases} \sum_{j=1}^{n}q_j.(f_{j}(v)-c)\quad &\mbox{If } (l-1)\sum_{j=1}^n q_j<m\nonumber\\
\sum_{j=i+1}^{n}q_j.(f_{j}(c_{i})-c) \quad &\mbox{If } (l-1)\sum_{j=i+1}^{n}q_j<m\nonumber\\ & \quad <(l-1)\sum_{j=i}^{n}q_j,\quad i\in\{1,\ldots,n-1\}\nonumber\\
0\quad &\mbox{If } m< (l-1)q_n.\nonumber\\\end{cases}
\end{eqnarray}
\end{cor}
Thus, asymptotically $R_{NE}$ decreases as $m$ decreases (Fig.~\ref{fig:asymptotic_rne}).
\begin{defn}\label{defn:eta}
Let $R_{OPT}$ be the maximum expected profit earned through collusive selection of
prices by the primaries. \emph{Efficiency} $\eta$ is defined as $\dfrac{l\cdot R_{NE}}{R_{OPT}}.$
\end{defn}
Efficiency is a measure of the reduction in the expected profit owing to competition.
The asymptotic behavior of $\eta$ is characterized by the following lemma.
\begin{lem}\label{thresh}
When $l\rightarrow \infty$, then
\begin{eqnarray}
\eta \rightarrow\begin{cases}
1 \quad &\mbox{If } (l-1)\sum_{j=1}^{n}q_j<m \nonumber\\
0 \quad & \mbox{If } m< (l-1)q_n.
\end{cases}
\end{eqnarray}
\end{lem}
We illustrate the variation of efficiency with $m$ using an example in Figure~\ref{fig:efficiency}. Intuitively, competition decreases as $m$ increases; thus primaries set their penalties close to the highest possible value for all states, which leads to high efficiency. On the other hand, competition becomes intense as $m$ decreases; thus $R_{NE}$ becomes very small, as Corollary~\ref{cor:rne} reveals. But if the primaries collude, they can maximize the aggregate payoff by offering only the channels of the highest states and selecting the highest penalties. Thus, efficiency becomes very small when $m$ is very small (Fig.~\ref{fig:efficiency}).
In practice, the transmission rates of an available channel constitute a continuum. We have discretized the transmission rates into multiple states for ease of analysis. However, the theory allows us to investigate numerically how the penalty distribution strategies behave in the asymptotic limit (Fig.~\ref{fig:example_nvaries}).
\begin{figure}
\includegraphics[width=100mm,height=40mm]{l_1l_n.png}
\caption{\small The figure shows the plot of $L_1$ and $L_n$ against $n$. We consider $v=10, c=0, l=21, m=10, q_1=\ldots=q_n=0.5/n$ and the following penalty function: $g_i(x)=x-(r_{max}-r_{min})*i/n$, where $r_{max}$ denotes the maximum possible transmission rate at which a secondary can transmit and $r_{min}$ denotes the minimum transmission rate required to transmit the signal. Note that the penalty function is of the form $g_i(x)=h_1(x)-h_2(i)$, where $h_1(x)=x$ is a strictly increasing concave function of $x$, and $h_2(i)=(r_{max}-r_{min})*i/n$ is strictly increasing in $i$. Thus, $g_i(\cdot)$ satisfies Assumption 1. We have divided the available rate region equally into the $n$ states, and we take $h_2(i)$ to be the representative rate at state $i$. We consider $r_{max}=3.5$ and $r_{min}=0.5$.}
\label{fig:example_nvaries}
\vspace{-0.4cm}
\end{figure}
Fig.~\ref{fig:example_nvaries} reveals that $L_n$ increases with $n$ and eventually converges to a point which is strictly less than $v$. On the other hand, $L_1$ converges to $v$ as $n$ becomes large (Fig.~\ref{fig:example_nvaries}). Thus, the lower endpoints (and hence the upper endpoints, since $U_{j}=L_{j-1}$) of the penalty selection strategies converge to different points as $n$ becomes large.
\section{Repeated Game}\label{sec:repeatedgame}
We have so far considered the setting where primaries and secondaries interact only once. In practice, however, primaries and secondaries interact repeatedly. To analyze repeated interactions we consider a scenario where the one-shot game is repeated an infinite number of times. We characterize a subgame perfect Nash equilibrium under which the expected payoff of the primaries can be made arbitrarily close to the payoff that they would obtain if they colluded.
The one-shot game is played at time slots $t=0,1,2,\ldots$. Let $\phi_{i,t}$ denote the expected payoff at stage $t$ when the channel state is $i$. Hence, the payoff of a primary, when its channel state is $i$, is given by
\begin{align}
\phi_i=(1-\delta)\sum_{t=0}^{\infty}\delta^{t}\phi_{i,t}
\end{align}
where, $\delta\in(0,1)$ is the discount factor.
Since the NE constitutes a weak notion in repeated games \cite{mwg}, we will focus on the Subgame Perfect Nash Equilibrium (SPNE).
\begin{defn}\cite{mwg}
Strategy profile $(S_1,\ldots,S_n)$ constitutes a SPNE if the strategy profile prescribes an NE at every subgame.
\end{defn}
The unique NE of the one-shot game is also an SPNE of the repeated game; under it, the total expected payoff that a primary can get is $R_{NE}$ (Definition~\ref{payoff}). We have already shown that this payoff is not always efficient (Lemma~\ref{thresh}), i.e., $\eta\nrightarrow 1$. Here, we present an efficient SPNE (Theorem~\ref{spne}), provided that $\delta$ is sufficiently high.
Fix a sequence $\{\epsilon_i\}_{i=1}^{n}$ such that
\begin{eqnarray}
0=\epsilon_1<\epsilon_2<\ldots<\epsilon_n. \label{condn2}\\
\epsilon_i\leq v-L_{i-1},
\epsilon_i\leq v-g_i((f_i(v)-c)(1-w_i)+c).\label{condn3}
\end{eqnarray}
We provide a {\em Nash Reversion Strategy} such that a primary will get a payoff arbitrarily close to the payoff that it would obtain when all primaries collude.\\
\textbf{Strategy Profile ($SP_R$)}:
\textit{ Each primary selects penalty $v-\epsilon_i$, (where $\epsilon_i$ satisfies (\ref{condn2}) and (\ref{condn3})), when state of the channel is $i$, at $t=0$ and also at time slot $\tau=1,2,\ldots$ as long as all other primaries have chosen $v-\epsilon_j$, $j\in \{1,\ldots,n\}$, when their channel state is $j$ at all time slots $0,1,\ldots,\tau-1$. Otherwise, play the unique one shot game NE strategy $\psi_i(\cdot)$ (Lemma~\ref{lm:computation}).}
\begin{rmk}
Note that if everyone sticks to the strategy, then each primary selects a penalty of $v-\epsilon_i$ at every time slot when the channel state is $i$. Under the collusive setting, each primary selects penalty $v$. Thus, for every $\gamma>0$, we can choose sufficiently small $\epsilon_i, i=1,\ldots,n$ and sufficiently high $\delta$ such that the efficiency (Definition~\ref{defn:eta}) is at least $1-\gamma$.
\end{rmk}
Now we are ready to state the main result of this section.
\begin{thm}\label{spne}
Suppose $\{\epsilon_i\}_{i=1}^{n}$ are such that they satisfy (\ref{condn2}) and (\ref{condn3}). Then, there exists $\delta_{min}\in (0,1)$ such that for any discount factor $\delta\geq \delta_{min}$ the strategy profile $SP_R$ is a SPNE.
\end{thm}
\begin{rmk}
Thus, there exists a SPNE strategy profile (for sufficiently high $\delta$) where each primary obtains an expected payoff arbitrarily close to the payoff it would have obtained if all primaries collude.
\end{rmk}
\section{Pending Question: What happens when Assumption 1 is relaxed?}\label{sec:assumpnecessary}
We show that if penalty functions do not satisfy Assumption 1, then the system may have multiple NEs (Section~\ref{sec:multipleNEs}), asymmetric NEs (Section~\ref{sec:asymmetricNE}), and the strategy profile that we have characterized in (\ref{c5}) may not be an NE (Section~\ref{sec:counterexample2}).
\subsection{Multiple NEs}\label{sec:multipleNEs}
We first give a set of penalty functions which do not satisfy Assumption 1 and then we state a strategy profile which is an NE for this set of penalties along with the strategy profile that we have characterized in (\ref{c5}). \\
Let $f_i(\cdot)$ be such that
\begin{align}\label{mt1}
\dfrac{f_i(x)-c}{f_j(x)-c}=\dfrac{f_i(y)-c}{f_j(y)-c}\quad (x>y>g_i(c), i<j)
\end{align}
One example of such a family of functions is $g_i(x)=(x-c)^p/i$.
It can be easily verified that the strategy profile described in (\ref{c5}) is still an NE strategy profile under the above setting. We now provide another NE strategy profile.
First, we introduce some notation which will be used throughout this section.
\begin{eqnarray}
\bar{p}_i=(f_i(v)-c)(1-w_1)+c \label{mt3}\\
\bar{L}=g_1(\bar{p}_1)\label{mt4}
\end{eqnarray}
Now, we show that there exists a symmetric NE strategy profile in which a primary selects the same strategy for each state of its channel. This establishes that the system has multiple NEs.
Let us consider the following symmetric strategy profile where at channel state $i$ a primary's strategy is $\bar{\psi}_i(\cdot)=\bar{\psi}(\cdot)$ for $i=1,\ldots,n$, where
\begin{align}\label{dfmt}
\bar{\psi}(x)= \begin{cases}
0, & \text{if } x<\bar{L}\\
\dfrac{1}{\sum_{j=1}^{n}q_j}\,w^{-1}\left(1-\dfrac{\bar{p}_1-c}{f_1(x)-c}\right), & \text{if } \bar{L}\leq x\leq v\\
1, & \text{if } x>v
\end{cases}
\end{align}
First, we show that $\bar{\psi}(\cdot)$ is a probability d.f.
\begin{lem}\label{lm:strategymultiple}
$\bar{\psi}(\cdot)$ as defined in (\ref{dfmt}) is a probability distribution function.
\end{lem}
Note that in this strategy profile each primary selects the same strategy irrespective of the channel state. Next, we show that the strategy profile described in (\ref{dfmt}) is an NE strategy profile.
\begin{thm}\label{thm:multipleNE}
Consider the strategy profile where $\bar{\psi}_i(\cdot)=\bar{\psi}(\cdot)$, for $i=1,\ldots, n$. This strategy profile constitutes an NE.
\end{thm}
\subsection{Asymmetric NE}\label{sec:asymmetricNE}
If Assumption 1 is not satisfied, then there may exist asymmetric NEs, in contrast to the setting when Assumption 1 is satisfied. We again consider penalty functions of the type given in (\ref{mt1}).
We consider $n=2, l=2, m=1$ and $q_1=q_2$.
Here, we denote by $\psi_{i,j}(\cdot)$ the strategy of primary $i$, $i=1,2$, at channel state $j$, $j=1,2$. Let
\begin{align}\label{eq:asymce1}
\hat{L}=g_2((f_2(v)-c)(1-q_1-q_2)/(1-q_2)+c)
\end{align}
Note from (\ref{eq:asymce1}) that
\begin{align}
(f_2(\hat{L})-c)(1-q_2)=(f_2(v)-c)(1-q_1-q_2)\nonumber\\
(f_1(\hat{L})-c)(1-q_2)=(f_1(v)-c)(1-q_1-q_2) \quad (\text{from } (\ref{mt1}))\nonumber\\
(f_1(\hat{L})-c)(1-q_1)=(f_1(v)-c)(1-q_1-q_2)\quad (\text{since } q_1=q_2)
\end{align}
Next, define
\begin{align}
\hat{L}_{low}=g_1((f_1(\hat{L})-c)(1-q_2)+c)
\end{align}
Again using (\ref{mt1}) and the fact that $q_1=q_2$ we also obtain
\begin{align}
f_2(\hat{L}_{low})-c=(f_2(\hat{L})-c)(1-q_1)
\end{align}
Consider the following strategy profile
\begin{align}\label{eq:asymmne1}
\psi_{1,1}(x)& =1,\quad (x\geq v),\nonumber\\
& =\dfrac{1}{q_1}(1-q_2-\dfrac{(f_2(v)-c)(1-q_1-q_2)}{f_2(x)-c})\quad (v>x>\hat{L})\nonumber\\
& =0, \quad x\leq \hat{L}\nonumber\\
\psi_{1,2}(x)& =1, \quad (x\geq \hat{L}), \nonumber\\
& =\dfrac{1}{q_2}(1-\dfrac{(f_1(\hat{L})-c)(1-q_2)}{f_1(x)-c})\quad (\hat{L}>x>\hat{L}_{low})\nonumber\\
& =0,\quad x\leq \hat{L}_{low}
\end{align}
and
\begin{align}\label{eq:asymmne2}
\psi_{2,1}(x)& =1,\quad (x\geq \hat{L}),\nonumber\\
& =\dfrac{1}{q_1}(1-\dfrac{(f_2(\hat{L})-c)(1-q_1)}{f_2(x)-c})\quad \hat{L}_{low}<x<\hat{L}\nonumber\\
& =0,\quad x\leq \hat{L}_{low}\nonumber\\
\psi_{2,2}(x)&=1,\quad x\geq v\nonumber\\
& =\dfrac{1}{q_2}(1-q_1-\dfrac{(f_1(v)-c)(1-q_1-q_2)}{f_1(x)-c})\quad v>x>\hat{L}\nonumber\\
& =0, \quad x\leq \hat{L}
\end{align}
It is easy to discern that the above strategy profiles are continuous distribution functions. Also note that $\psi_{1,1}\neq \psi_{2,1}$ and $\psi_{1,2}\neq \psi_{2,2}$; hence the strategy profile is asymmetric. The following theorem confirms that the strategy profile we have just described is indeed an NE.
\begin{thm}\label{thm:asymmne}
The strategy profile $\psi_{1}=\{\psi_{1,1}(\cdot),\psi_{1,2}(\cdot)\}$ and $\psi_2=\{\psi_{2,1}(\cdot),\psi_{2,2}(\cdot)\}$ as described in (\ref{eq:asymmne1}) and (\ref{eq:asymmne2}) respectively is an NE.
\end{thm}
The above theorem confirms that there may exist an asymmetric NE when the penalty functions are of the form (\ref{mt1}).
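As a numerical sanity check of this construction (purely illustrative; the concrete choice $g_i(x)=x^2/i$, i.e. $f_i(y)=\sqrt{iy}$ with $c=0$, and the parameter values are our own, chosen so that (\ref{mt1}) and $q_1=q_2$ hold), one can verify in Python that the endpoints of (\ref{eq:asymmne1}) behave as stated:
\begin{verbatim}
from math import sqrt, isclose

c, v, q1, q2 = 0.0, 10.0, 0.3, 0.3            # n=2, l=2, m=1, q1=q2
f = lambda i, y: sqrt(i * y)                  # from g_i(x) = x**2 / i
g = lambda i, x: x**2 / i

L_hat = g(2, (f(2, v) - c) * (1 - q1 - q2) / (1 - q2) + c)
L_low = g(1, (f(1, L_hat) - c) * (1 - q2) + c)

psi11 = lambda x: (1 - q2 - (f(2, v) - c) * (1 - q1 - q2)
                   / (f(2, x) - c)) / q1
psi12 = lambda x: (1 - (f(1, L_hat) - c) * (1 - q2)
                   / (f(1, x) - c)) / q2

assert isclose(psi11(L_hat), 0.0, abs_tol=1e-9) and isclose(psi11(v), 1.0)
assert isclose(psi12(L_low), 0.0, abs_tol=1e-9) and isclose(psi12(L_hat), 1.0)
\end{verbatim}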
\subsection{Strategy Profile Described in (\ref{c5}) may not be an NE}\label{sec:counterexample2}
We first describe a set of penalty functions that do not satisfy (\ref{con1}), and then we show that the strategy profile that we have characterized in (\ref{c5}) is not an NE.\footnote{Note that in Theorem~\ref{thm4} we have shown that the strategy profile described in (\ref{c5}) is an NE when (\ref{con1}) is satisfied.}
Consider $n=2$, $c=0$; the penalty functions are as follows:
\begin{align}
g_1(x)=x, f_1(x)=x\label{a3e0}\\
g_2(x)=\log(x), f_2(x)=e^x\label{a3e}
\end{align}
Now, since $c=0$, we have
\begin{align}\label{a3e1}
\dfrac{f_1(x)-c}{f_2(x)-c}=\dfrac{x}{e^x}=F(x)\nonumber\\
\dfrac{dF(x)}{dx}=e^{-x}-xe^{-x}
\end{align}
From (\ref{a3e1}), it is clear that for $x>1$, $F(x)$ is strictly decreasing. Hence we have for $1<x<y$
\begin{align}\label{a3e3}
\dfrac{f_2(y)-c}{f_1(y)-c}>\dfrac{f_2(x)-c}{f_1(x)-c}
\end{align}
which contradicts (\ref{con1}).\\
Now, let $v=5$ and $l=20,m=10, q_1=0.2,q_2=0.4$. Under this setting, we obtain
\begin{thm}\label{thm:notaNE}
The strategy profile as defined in (\ref{c5}) is \textbf{not} an NE strategy profile.
\end{thm}
\begin{rmk}
Thus, the condition in (\ref{con1}) is also {\em necessary} for a NE strategy profile to be in the form (\ref{c5}).
\end{rmk}
In the example constructed above, however, an NE strategy profile may still exist. We now show that in certain cases there is a symmetric NE with $\tilde{L}_1<\tilde{L}_2$: when the channel state is $1$ and $2$, a primary chooses a penalty from the interval $[\tilde{L}_1,\tilde{L}_2]$ and $[\tilde{L}_2,v]$ respectively. (Note the difference: in the strategy profile described in Lemma~\ref{lm:computation}, we have $L_1>L_2$.)
Consider the following symmetric strategy profile where each primary selects strategy $\tilde{\psi}_i$ at channel state $i$, $i\in\{1,2\}$-
\begin{align}\label{a3e5}
\tilde{\psi}_i(x)= \begin{cases}
0, & \text{if } x<\tilde{L}_i\\
\dfrac{1}{q_i}\left(w^{-1}\left(\dfrac{f_i(x)-\tilde{p}_i}{f_i(x)-c}\right)-\sum_{j=1}^{i-1}q_j\right), & \text{if } \tilde{L}_{i}\leq x\leq \tilde{L}_{i+1}\\
1, & \text{if } x>\tilde{L}_{i+1}
\end{cases}
\end{align}
with $\tilde{L}_3=v$, $\tilde{L}_1>1$.\\
and
\begin{align}\label{a3e2}
\tilde{p}_2& =(f_2(v)-c)(1-w(q_1+q_2))\nonumber\\
\tilde{L}_2& =g_2(\dfrac{\tilde{p}_2}{1-w(q_1)})\nonumber\\
\tilde{L}_1& =g_1(f_1(\tilde{L}_2)(1-w(q_1)))\nonumber\\
\tilde{p}_1& =\dfrac{f_1(\tilde{L}_2)}{f_2(\tilde{L}_2)}*\tilde{p}_2
\end{align}
When $v=5$, $l=20, m=10, q_1=0.2, q_2=0.4$, we obtain $\tilde{p}_2=27.6185$, $\tilde{L}_2=3.3201$, $\tilde{p}_1=3.3148$, $\tilde{L}_1=3.3148$.\\
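These constants are easy to recompute; the following Python sketch (reusing $w$ from the earlier sketch, and subject to rounding) should reproduce the reported values:
\begin{verbatim}
from math import exp, log

v, c, l, m, q1, q2 = 5.0, 0.0, 20, 10, 0.2, 0.4
f1 = g1 = lambda x: x                 # g_1 = f_1 = identity
f2, g2 = (lambda x: exp(x)), (lambda x: log(x))

p2_t = (f2(v) - c) * (1 - w(q1 + q2, l, m))
L2_t = g2(p2_t / (1 - w(q1, l, m)))
L1_t = g1(f1(L2_t) * (1 - w(q1, l, m)))
p1_t = f1(L2_t) / f2(L2_t) * p2_t
print(p2_t, L2_t, L1_t, p1_t)  # approx. 27.6185, 3.3201, 3.3148, 3.3148
\end{verbatim}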
It is easy to show that $\tilde{\psi}_i(\cdot)$ as defined in (\ref{a3e5}) is a distribution function with $\tilde{\psi}_i(\tilde{L}_i)=0$ and $\tilde{\psi}_i(\tilde{L}_{i+1})=1$. Note that under $\tilde{\psi}_i(\cdot)$, a primary selects a penalty from the interval $[\tilde{L}_1,\tilde{L}_2]$ and $[\tilde{L}_2,v]$ when the channel state is $1$ and $2$ respectively. It remains to show that this is an NE strategy profile, which the following theorem confirms.
\begin{thm}\label{counterexample2}
Strategy profile $\tilde{\psi}_i(\cdot), i=1,\ldots,n$, as described in (\ref{a3e5}) is an NE.
\end{thm}
\section{Conclusion and Future Work}
We have analyzed a spectrum oligopoly market with primaries and secondaries where secondaries select a channel depending on the price quoted by a primary and the transmission rate a channel offers. We have shown that in the one-shot game there exists a unique NE strategy profile which we have explicitly computed. We have analyzed the expected payoff under the NE strategy profile in the asymptotic limit and compared it with the payoff that primaries would obtain when they collude. We have shown that under a repeated version of the game there exists a subgame perfect NE strategy profile where each primary obtains a payoff arbitrarily close to the payoff that it would have obtained if all primaries collude.
The characterization of an NE strategy profile in the settings i) where secondaries have different penalty functions and ii) where the demand of secondaries varies depending on the pricing strategy remains an open problem. The analytical tools and results that we have developed may provide the basis for a framework to solve these problems.
|
1,116,691,499,165 | arxiv | \section{Introduction and motivation}
Among experts in concurrency and category theory, Petri nets have always been informally regarded as presentations of free strict symmetric monoidal categories (FSSMCs). The intuition behind such claims is simple: Given a Petri net, we can use its set of places to generate the objects of a FSSMC. Similarly, each of its transitions is considered as a generating morphism in the corresponding category, with domain and codomain the monoidal product of its input/output places, respectively. In this setting, a marking of the net is just an object in the corresponding FSSMC, and any sequence of transition firings can be mapped to a morphism. An example of this can be seen in Figure~\ref{fig: execution alongside net}, where we made use of wiring diagrams~\cite{Selinger2010} to represent FSSMCs morphisms.
\begin{figure}[h!]
\centering
\scalebox{0.5}{
\begin{tikzpicture}
\pgfmathsetmacro\bS{5}
\pgfmathsetmacro\hkX{(\bS/3.5)}
\pgfmathsetmacro\kY{-1.5}
\pgfmathsetmacro\hkY{\kY*0.5}
\draw pic (m0) at (0,0) {netA={{1}/{1}/{2}/{0}/{}/{}/{}}};
\draw pic (m1) at (\bS,0) {netA={{0}/{2}/{2}/{0}/{$\blacktriangle$}/{}/{}}};
\draw pic (m2) at ({2 * \bS},0) {netA={{0}/{1}/{3}/{1}/{}/{$\blacktriangledown$}/{}}};
\draw pic (m3) at ({3 * \bS},0) {netA={{0}/{1}/{2}/{2}/{}/{}/{$\blacktriangledown$}}};
\begin{scope}[very thin]
\foreach \j in {1,...,3} {
\pgfmathsetmacro \k { \j * \bS - 1 };
\draw[gray,dashed] (\k,-4) -- (\k,-8.25);
\draw[gray] (\k,1) -- (\k,-4);
}
\end{scope}
\begin{scope}[shift={(0,-4)}, oriented WD, bbx = 1cm, bby =.4cm, bb min width=1cm, bb port sep=1.5]
\draw node [fill=\backgrnd,bb={1}{1}] (Tau) at (\bS -1,-1) {$t$};
\draw node [fill=\backgrnd,bb={1}{2}, ] (Mu) at ({2 * \bS - 1},-1) {$v$};
\draw node [fill=\backgrnd,bb={1}{1}] (Nu) at ({3 * \bS - 1},{2 * \kY}) {$u$};
\draw (-1,-1) -- node[above] {$p_1$} (0,-1)
-- node[above] {} (Tau_in1);
\draw (-1,-2) -- node[above] {$p_2$} (0,-2) -- (\bS-1, -2);
\draw (-1,-3) -- node[above] {$p_3$} (0,-3) -- (\bS-1, -3);
\draw (-1,-4) -- node[above] {$p_3$} (0,-4) -- (\bS-1, -4);
\draw (Tau_out1) -- node[above] {$p_2$} (Mu_in1);
\draw (\bS-1,-2) -- (2*\bS-1, -2);
\draw (\bS-1,-3) -- (2*\bS-1, -3);
\draw (\bS-1,-4) -- (2*\bS-1, -4);
\draw (Mu_out1) -- node[above] {$p_3$} (3*\bS-1, -0.725);
\draw (Mu_out2) -- node[above] {$p_4$} (3*\bS-1, -1.325);
\draw (2*\bS-1,-2) -- (3*\bS-1, -2);
\draw (2*\bS-1,-3) -- (Nu_in1);
\draw (2*\bS-1,-4) -- (3*\bS-1, -4);
\draw (3*\bS-1,-0.725) to (4*\bS-2, -1.325) -- node[above] {$p_3$} (4*\bS-1, -1.325);
\draw (3*\bS-1,-1.325) -- (3*\bS,-1.325) to (4*\bS-2, -4) -- node[above] {$p_4$} (4*\bS-1, -4);
\draw (3*\bS-1,-2) to (4*\bS-2, -0.725) -- node[above] {$p_2$} (4*\bS-1, -0.725);
\draw (Nu_out1) to (4*\bS-2, -3) -- node[above] {$p_4$} (4*\bS-1, -3);
\draw (3*\bS-1,-4) to (4*\bS-2, -2) -- node[above] {$p_3$} (4*\bS-1, -2);
\end{scope}
\begin{pgfonlayer}{background}
\filldraw [line width=4mm,join=round,\backgrnd]
((-1,1.5) rectangle (19,-8.5);
\end{pgfonlayer}
\end{tikzpicture}
}
\caption{An execution corresponding to a sequence of firings}\label{fig: execution alongside net}
\end{figure}
\noindent
This correspondence between Petri nets and free strict symmetric monoidal categories defines a \emph{process semantics} for nets: The FSSMC is interpreted as a deterministic version of its corresponding net, in which the history of every single token is tracked. This, in principle, could be implemented using a dependently typed language such as Idris~\cite{Brady2013} and, even more satisfactorily, such an implementation would allow one to map the FSSMC corresponding to a net to any other monoidal category representing a semantics, for instance the category $\textbf{Hask}$~\cite{HaskellWiki} of Haskell types and functions. The result is a procedure to formally compile a Petri net down to computations, with the net itself used as control to trigger the execution of algorithms and the passing of data between them. This is the approach adopted in defining the Statebox programming language~\cite{StateboxTeam2018}.
When focusing on implementation, this conceptual correspondence has to be made precise. In this paper we will review many approaches to the problem which have been pursued throughout the years, and highlight how all of them have to make compromises which are either computationally unfeasible or conceptually unsatisfying for our requirements. This is because the general strategy to link nets to FSSMCs has traditionally boiled down to the following:
\begin{itemize}
\item We want a correspondence from nets to FSSMCs;
\item We want said correspondence to be extendable to a functor;
\item We want said functor to have an adjoint.
\end{itemize}
These desiderata are sensible from a purely categorical point of view, as best exemplified by the unifying work carried out in~\cite{Master2019}, but they are not enough to guarantee that a practical and usable implementation is feasible. So, instead, we propose the following "wishlist" to guide our implementation efforts:
\begin{definition}[List of requirements]\label{def: requirements}
\leavevmode \\
\begin{enumerate}
\item We want to map each net to a free strict symmetric monoidal category, representing its possible executions. Free structures are particularly appealing for us since they are easier to handle computationally. Moreover, we do not want to change the net/FSSMC definitions too much to obtain a correspondence between the two, because we want to leverage the diagrammatic formalisms developed for both; see Requirement~\ref{item: mapping is useful for intuition}. \label{item: mapping net to FSSMC}
\item We want the mapping from nets to FSSMCs to be computationally feasible. \label{item: mapping is computationally feasible}
\item We want the FSSMCs corresponding to nets to represent computations in a \emph{meaningful way}. In particular, we want to be able to map net computations to other categories, translating the generating morphisms of a FSSMC into real operations (which can be pure functions, asynchronous calls etc.) via functors. Therefore, FSSMCs should not identify computations which are conceptually distinct. Moreover, any morphism in the FSSMC must correspond to a possible computation of the net. \label{item: mapping is faithful and full}
\item We want our mapping to be useful for intuition: Users should be able to program in a goal-oriented way using nets (as explained in~\cite{Genovese}), test against their properties with already developed tools~(e.g.~\cite{Sobocinski2013a}), and then map the net to its computations to establish links with a semantics. \label{item: mapping is useful for intuition}
\item We want users to be able to morph a net into another. Such morphisms should automatically be lifted to morphisms of net executions. We moreover want users to be able to ``tweak'' morphisms between executions directly in case the lifting provided is not satisfying for the task at hand. \label{item: morphisms are automatically lifted}
\end{enumerate}
\end{definition}
In the following sections we will point out how the two lists provided above are somewhat incompatible: If having a functorial correspondence from FSSMCs to nets doesn't pose any problem, going in the opposite direction begs for some choices to be made. These choices depend on the problem of linearizing multisets, and as we will see they cannot be globally extended to all nets without either quotienting computations which should be considered distinct, or adding further structure to the definition of nets, which lessens their appeal in applications. So, while an adjoint correspondence between nets and computations seems desirable from a categorical point of view, all attempts to obtain one render it meaningless for applications.
We will also point out how improving upon the current situation is not possible if the functorial mapping from nets to FSSMCs is to be preserved. Luckily, dropping it is not really a problem: We will show how having a functor from FSSMCs to nets is enough for implementation purposes. In the last section, we introduce \idrisct~\cite{StateboxTeamb}, a work-in-progress implementation of the content we covered.
\section{Preliminary definitions}
When things are seen in more detail, the correspondence between Petri nets and FSSMCs is not as precise as it seems: It fails in a particularly frustrating way which, as pointed out in~\cite{Baldan2003}, boils down to the fact that the inputs and outputs of transitions are multisets, and hence form commutative monoids, while the monoid of objects in a monoidal category is not commutative in general. To make these claims precise, we start by giving some definitions.
\subsection{Strings and multisets}
\begin{definition}
Let $S$ be a set. We denote with $\Msets{S}$ and $\Strings{S}$ the set of \emph{finite multisets} and the set of \emph{strings of finite length} over $S$, respectively. The \emph{length of a string} $s$ is denoted with $|s|$, while the empty string is denoted with $\epsilon$.
\end{definition}
\begin{fact}
Given a set $S$, $\Msets{S}$ is the \emph{free commutative monoid generated by $S$}, with the empty multiset as unit and multiset sum as multiplication. Similarly, $\Strings{S}$ is the \emph{free monoid generated by $S$}, with the empty string $\epsilon$ as unit and string concatenation as multiplication.
\end{fact}
\begin{fact}
Any function between sets $f:A \to B$ gives rise to a corresponding multiset homomorphism $\Mset{f}: \Msets{A} \to \Msets{B}$. Similarly, any function between sets $f:A \to B$ gives rise to a corresponding monoid homomorphism $\Strings{f}: \Strings{A} \to \Strings{B}$.
\end{fact}
Note that the opposite is not true in general: There are multiset (monoid) homomorphisms that are not determined by a function between their corresponding base sets. This motivates the following definition:
\begin{definition}
A multiset homomorphism $X: \Msets{A} \to \Msets{B}$ (respectively, monoid homomorphism $Y: \Strings{A} \to \Strings{B}$) is called \emph{grounded} if there is a function $f: A \to B$ such that $\Mset{f} = X$ (respectively $\Strings{f} = Y$).
\end{definition}
\begin{definition}
Given a set $S$, we define \emph{the multiplicity of $S$} as the homomorphism $\Multiplicity{S}: \Strings{S} \to \Msets{S}$ sending a string $s$ to the multiset associating to each element of $S$ its number of occurrences in $s$. When no ambiguity arises, we will use $\Multiplicity{}$ in place of $\Multiplicity{S}$.
\end{definition}
\begin{fact}
Any monoid homomorphism $X: \Strings{A} \to \Strings{B}$ gives rise to a corresponding multiset homomorphism $\Mset{X}: \Msets{A} \to \Msets{B}$ by setting $\Mset{X}(a) = \Multiplicity{B}(X(a))$ for $a \in A$, and extending to non-generators using freeness. If $X = \Strings{f}$ for some $f:A \to B$, then $\Mset{X} = \Mset{f}$.
\end{fact}
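As a down-to-earth illustration of these notions (our own encoding, not part of the formal development), strings can be represented as lists and finite multisets as counters, with $\Multiplicity{}$ given by counting occurrences:
\begin{verbatim}
from collections import Counter

def multiplicity(s):
    """mu_S: counts the occurrences of each generator in a string."""
    return Counter(s)

def lift(f, s):
    """The grounded monoid homomorphism induced by f: A -> B."""
    return [f(a) for a in s]

assert multiplicity(['p', 'q', 'p']) == Counter({'p': 2, 'q': 1})
assert multiplicity(lift(str.upper, ['p', 'q'])) == Counter({'P': 1, 'Q': 1})
\end{verbatim}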
\subsection{Petri nets}
Now we focus on Petri nets. Many -- often inequivalent -- definitions of Petri nets have been given throughout the years, so it makes sense to spell out which particular definition we are committing to. In the context of process semantics, the following one is the most popular:
\begin{definition}
A \emph{Petri net} $N$ is a tuple $\Net{N}$, where $\Pl{N}$ and $\Tr{N}$ are sets, called the set of \emph{places} and \emph{transitions} of $N$, respectively, while $\Pin{-}{N}$ and $\Pout{-}{N}$ are functions $\Tr{N} \to \Msets{\Pl{N}}$, representing the input/output places, respectively, connected to each transition.
\end{definition}
\begin{definition}
A \emph{morphism of Petri nets} $f:N \to M$ is specified by a couple $(\PlM{f}, \TrM{f})$, with $\PlM{f}: \Msets{\Pl{N}} \to \Msets{\Pl{M}}$ multiset homomorphism and $\TrM{f}: \Tr{N} \to \Tr{M}$ function such that
\begin{equation*}
\Pin{-}{N};\PlM{f} = \TrM{f};\Pin{-}{M} \qquad \Pout{-}{N};\PlM{f} = \TrM{f};\Pout{-}{M}
\end{equation*}
\end{definition}
Indeed, we get the following fact, which is proven by noting that multiset homomorphisms can be composed and that identity multiset homomorphisms can be lifted to identity net morphisms.
\begin{fact}
Petri nets and their morphisms form a category, denoted $\Petri$.
\end{fact}
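For illustration, the definition of Petri net above admits a very direct encoding; the following Python sketch (our own, hypothetical encoding) represents a net with the places and transitions of Figure~\ref{fig: execution alongside net}:
\begin{verbatim}
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    places: set
    # transitions: name -> (input multiset, output multiset)
    transitions: dict = field(default_factory=dict)

    def add_transition(self, name, pre, post):
        assert set(pre) | set(post) <= self.places
        self.transitions[name] = (Counter(pre), Counter(post))

net = PetriNet({'p1', 'p2', 'p3', 'p4'})
net.add_transition('t', ['p1'], ['p2'])
net.add_transition('v', ['p2'], ['p3', 'p4'])
net.add_transition('u', ['p3'], ['p4'])
\end{verbatim}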
The category $\Petri$ has been used, among others, in~\cite{Meseguer1990} and~\cite{Sassone1995}. Sometimes the definition of net morphism given above is too general, and begs for a suitable restriction. Most often, an interesting subcategory of $\Petri$ is the following one:
\begin{fact}
There is a subcategory of $\Petri$, which we denote as $\GPetri$, where objects are nets and morphisms are \emph{grounded} homomorphisms of nets.
\end{fact}
The restriction from $\Petri$ to $\GPetri$ has been used in~\cite{Baez2018}, where nets were studied using double categories.
\subsection{Free strict symmetric monoidal categories (FSMCs)}
Now we spell out what a free symmetric monoidal category is. Here too there is some confusion, since freeness can be imposed only on objects, or on both objects and morphisms. Let us clarify:
\begin{definition}\label{def: free on objects smc}
A \emph{free-on-objects, strict SMC (symmetric monoidal category)} is a strict symmetric monoidal category whose monoid of objects is freely generated.
\end{definition}
\begin{definition}\label{def: free smc}
A \emph{free strict SMC (FSSMC)} is a symmetric monoidal category whose monoid of objects is $\Strings{S}$ for some set of generators $S$, and whose morphisms are generated by the following introduction rules:
\begin{gather*}
\frac{s \in \Strings{S}}{\id{s}: s \to s}
\qquad
\frac{s,t \in \Strings{S}}{\sigma_{s,t}: {s \tensor t} \to {t \tensor s}}
\qquad
\frac{(\alpha, s,t) \in T}{\alpha: s \to t} \\\\
\frac{\alpha: A \to B, \,\, \alpha': A' \to B'}{\alpha \tensor \alpha': {A \tensor A'} \to {B \tensor B'}}
\qquad
\frac{\alpha: A \to B, \,\, \beta: B \to C}{\alpha;\beta: A \to C}
\end{gather*}
Where elements of the set $T$ are triples $(\alpha, s, t)$ with $s, t \in \Strings{S}$. Morphisms are quotiented by the following equations, for $\alpha: A \to B$, $\alpha': A' \to B'$, $\alpha'': A'' \to B''$, $\beta: B \to C$, $\beta':B' \to C'$, $\gamma: C \to D$:
\begin{align*}
\alpha ; \id{B} = &\,\,\alpha = \id{A} ; \alpha &
\quad (\alpha;\beta);\gamma &= \alpha;(\beta;\gamma)\\
\tensorUnit\tensor \alpha = &\,\,\alpha = \alpha \tensor \tensorUnit&
\quad (\alpha \tensor \alpha') \tensor \alpha'' &= \alpha \tensor (\alpha' \tensor \alpha'')\\
\id{A} \tensor \id{A'} &= \id{A \tensor A'} &
\quad (\alpha \tensor \alpha') ; (\beta \tensor \beta') &= (\alpha ; \beta) \tensor (\alpha' ; \beta')\\
\sigma_{A, A' \tensor A''} = (\sigma_{A,A'} &\tensor \id{A''}); (\id{A'} \tensor \sigma_{A,A''}) &
\quad \sigma_{A,A'};\sigma_{A',A} &= \id{A \tensor A'}\\
&\quad\qquad \mathrlap{\sigma_{A,A'};(\alpha' \tensor \alpha)= (\alpha \tensor \alpha');\sigma_{B,B'}}&
\end{align*}
\end{definition}
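To give a feel for how such a category can be handled computationally (a minimal sketch in Python of our own devising; the actual implementation discussed later is in Idris), the five introduction rules above become five constructors of a term algebra, with the listed equations to be imposed as a separate quotient:
\begin{verbatim}
from dataclasses import dataclass

class Morphism: pass

@dataclass
class Id(Morphism):      # id_s : s -> s
    s: tuple

@dataclass
class Sym(Morphism):     # sigma_{s,t} : st -> ts
    s: tuple
    t: tuple

@dataclass
class Gen(Morphism):     # a generator (alpha, s, t) in T
    name: str
    src: tuple
    tgt: tuple

@dataclass
class Tensor(Morphism):  # alpha (x) alpha'
    left: Morphism
    right: Morphism

@dataclass
class Comp(Morphism):    # alpha ; beta
    first: Morphism
    second: Morphism
\end{verbatim}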
Intuitively, a FSSMC is a symmetric monoidal category where morphisms do not satisfy any further equations. Since the monoid of objects is freely generated, we will often use the string notation $s_1 \dots s_n$ to denote $s_1\tensor \dots \tensor s_n$. Among FSSMCs there is a distinguished class of categories -- which will be the cause of all the negative results in this paper -- that deserves its own notation:
\begin{definition}
Given a set $S$ we denote with $\Sym{S}$ the category generated as in Definition~\ref{def: free smc}, where the monoid of objects is $\Strings{S}$ and the set of generating morphisms $T$ is empty.
\end{definition}
Finally, we have the following couple of results, easy to prove:
\begin{fact}
Free-on-objects strict SMCs and strict symmetric functors between them form a category, denoted $\FOSSMC$. Restricting the objects to FSSMCs, we obtain a subcategory $\FSSMC \subset \FOSSMC$. Restricting strict symmetric monoidal functors to be grounded homomorphisms on objects, we obtain further restrictions $\GFOSSMC \subset \FOSSMC$ and $\GFSSMC \subset \FSSMC$.
\end{fact}
\section{Past approaches and development implications}
We now review past approaches to the problem of finding a correspondence between nets and FSSMCs, focusing on how they relate to our requirements.
\subsection{The symmetric approach}
In~\cite{Meseguer1990}, probably the most influential paper describing Petri nets from a categorical standpoint, nets and their morphisms are axiomatized as the objects and morphisms of a category. In~\cite{Degano1996} this process is taken a step further, with each Petri net being mapped to its category of computations, which is strict symmetric monoidal. Since inputs and outputs of transitions are multisets, a linearization problem arises, because objects in a symmetric monoidal category do not commute in the general case. The authors solve the problem by imposing commutativity: In~\cite{Sassone1996} it has been proven that the mapping in~\cite{Degano1996} is equivalent to mapping each net $N$ to a category as in Definition~\ref{def: free smc}, with $T$ being the set of transitions of the net, and then quotienting by the following equations, for $s, s',t$ object generators, $s \neq t$ and $(\alpha, A \otimes s \otimes t \otimes B, A' \otimes s' \otimes s' \otimes B')$ in $T$:
\begin{equation*}
\sigma_{s,t} = \id{s,t}
\qquad
(\id{A} \otimes \sigma_{s,s} \otimes \id{B});\alpha = \alpha
\qquad
\alpha;(\id{A'} \otimes \sigma_{s',s'} \otimes \id{B'}) = \alpha
\end{equation*}
The idea is to completely annihilate symmetries, making morphisms totally indifferent to the causal relationships between tokens.
This approach is unsatisfying for many reasons: First of all, as proven in~\cite{Sassone1995}, annihilating symmetries implies that the correspondence from nets to categories cannot be made functorial in any straightforward way. More importantly the category of computations of a net, defined in this way, \emph{has no real computational meaning}: It is not possible to keep track of the causal flow of tokens, and given that symmetric monoidal categories are almost never commutative on objects, the idea of mapping computations to a semantics is shattered. In particular, commutativity on objects is not satisfied by functional programming languages when we consider them as categories with data types as objects and functions between them as morphisms: Here, the monoidal product amounts to taking tuples, which do not commute. Such a strategy then violates Requirement~\ref{item: mapping is faithful and full} in Definition~\ref{def: requirements}.
On the other hand, in~\cite{Baez2018} the authors worked with grounded morphisms -- hence in~$\GPetri$ -- and defined a double-categorical framework which allows the gluing together of nets. The idea is very interesting, since it can be employed in using Petri nets to write code modularly. Moreover, while the choice of working in~$\GPetri$ reduces the expressiveness of Petri net morphisms, it doesn't necessarily constitute a severe limitation from the applicative standpoint. The additional double-categorical structure allows us to obtain a mapping between nets and symmetric monoidal categories which is, this time, functorial in a double-categorical interpretation. Unfortunately, to make things work, the authors had to impose commutativity on places once again, violating Requirement~\ref{item: mapping is faithful and full}.
\subsection{The pre-net approach}
In~\cite{Baldan2003}, commutativity was dropped in favor of another strategy, namely weakening the definition of Petri net to the one of \emph{pre-net}, and showing how pre-nets present free strict symmetric monoidal categories. The advantage is that pre-nets can be functorially -- and adjointly -- mapped to FSSMCs. Pre-nets are essentially an ordered version of Petri nets where the order in which places enter/exit transitions has to be specified explicitly.
This approach works, but the requirement that the ordering play nice with net morphisms gets in the way of using Petri nets to model complicated processes. Specifically, there may not be any morphisms between two pre-nets, even if there are between their underlying nets. This is shown in Figure~\ref{fig: pre-net failing}, where we have adopted the graphical formalism for pre-nets introduced in~\cite{Baldan2003}.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.24\textwidth}\centering
\scalebox{0.5}{
\begin{tikzpicture}
\node [place,tokens=0] (1a) {};
\node [place,tokens=0] (1b) [below = 1.5cm of 1a] {};
\node [transition] (2a) [right = 1.5cm of 1a] {}
edge [pre] node[] {} (1a)
edge [pre] node[] {} (1b);
\node [transition] (2b) [right = 1.5cm of 1b] {}
edge [pre] node[] {} (1a)
edge [pre] node[] {} (1b);
\begin{pgfonlayer}{background}
\filldraw [line width=4mm,join=round,black!10]
(1a.north -| 1a.west) rectangle (2b.south -| 2b.east);
\end{pgfonlayer}
\end{tikzpicture}
}
\caption{}\label{fig: first net}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}\centering
\scalebox{0.5}{
\begin{tikzpicture}
\node [place,tokens=0] (1a) {};
\node [place,tokens=0] (1b) [below = 1.5cm of 1a] {};
\node [transition] (2a) [right = 1.5cm of 1a] {}
edge [pre] node[fill=black!10] {1} (1a)
edge [pre] node[fill=black!10, below left] {2} (1b);
\node [transition] (2b) [right = 1.5cm of 1b] {}
edge [pre] node[fill=black!10, above left] {2} (1a)
edge [pre] node[fill=black!10] {1} (1b);
\begin{pgfonlayer}{background}
\filldraw [line width=4mm,join=round,black!10]
(1a.north -| 1a.west) rectangle (2b.south -| 2b.east);
\end{pgfonlayer}
\end{tikzpicture}
}
\caption{}\label{fig: first pre-net}
\end{subfigure}
\\
\begin{subfigure}[t]{0.24\textwidth}\centering
\scalebox{0.5}{
\begin{tikzpicture}
\node [place,tokens=0] (1a) {};
\node [place,tokens=0] (1b) [below = 1.5cm of 1a] {};
\node [transition] (2a) [below right = 0.55cm and 1.5cm of 1a] {}
edge [pre] node[] {} (1a)
edge [pre] node[] {} (1b);
\begin{pgfonlayer}{background}
\filldraw [line width=4mm,join=round,black!10]
(1a.north -| 1a.west) rectangle (2b.south -| 2b.east);
\end{pgfonlayer}
\end{tikzpicture}
}
\caption{}\label{fig: second net}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}\centering
\scalebox{0.5}{
\begin{tikzpicture}
\node [place,tokens=0] (1a) {};
\node [place,tokens=0] (1b) [below = 1.5cm of 1a] {};
\node [transition] (2a) [below right = 0.55cm and 1.5cm of 1a] {}
edge [pre] node[fill=black!10] {1} (1a)
edge [pre] node[fill=black!10] {2} (1b);
\begin{pgfonlayer}{background}
\filldraw [line width=4mm,join=round,black!10]
(1a.north -| 1a.west) rectangle (2b.south -| 2b.east);
\end{pgfonlayer}
\end{tikzpicture}
}
\caption{}\label{fig: second pre-net}
\end{subfigure}
\caption{The net~\ref{fig: first net} is morphed to the net~\ref{fig: second net} by sending places to themselves and the two transitions in~\ref{fig: first net} to the one in~\ref{fig: second net}. There is no way to lift this to a morphism between pre-nets~\ref{fig: first pre-net} and~\ref{fig: second pre-net} without collapsing places.}\label{fig: pre-net failing}
\end{figure}
\noindent
Indeed, there is no functor from the category of Petri nets to the category of pre-nets, meaning that the ``specification of a net'' -- which is how pre-nets are interpreted in~\cite{Baldan2003} -- cannot be created on the fly in a way that respects morphisms of underlying nets. To understand why this is a problem, imagine the following scenario: A user draws a Petri net. To execute it, it is automatically mapped to its corresponding FSSMC. When the user runs the net, selecting which tokens have to be processed by which transition via the UI, the corresponding morphisms are composed in the FSSMC. Now the user decides to morph the net into another. It would be desirable to induce a functor between the corresponding FSSMCs, so that we could ``import'' all the histories from the former to the latter. Using pre-nets as a stepping stone between nets and FSSMCs we cannot, because the net transformation specified by the user may not correspond to a morphism of pre-nets. This violates Requirement~\ref{item: morphisms are automatically lifted} in Definition~\ref{def: requirements}, the only solution being to ask users to employ pre-nets directly. In doing so the graphical formalism becomes less intuitive and the allowed transformations greatly restricted. Even worse, it is not easy to intuitively understand when a net morphism is ``allowed'' and when it is not, with an obvious -- and substantial -- loss in applications. This violates Requirement~\ref{item: mapping is useful for intuition}.
\subsection{The quotient approach}
In~\cite{Sassone1995}, Petri nets are taken as they are, and it is the free strict symmetric monoidal categories that are modified to accommodate an adjunction. We consider this approach one of the most valid so far, and we generalized it in~\cite{Genovese2018}. Let us spell things out in detail:
\begin{definition}[From~\cite{Sassone1995}, Def. 3.5]\label{def: unordered executions}
Given a Petri net $N$, we map it to the free-on-objects, strict symmetric monoidal category $\FoldSym N$ defined as follows:
\begin{itemize}
\item The monoid of objects of $\FoldSym N$ is freely generated by $\Pl{N}$;
\item Morphisms are generated by the following introduction rules:
\begin{gather*}
\frac{s \in \Strings{\Pl{N}}}{\id{s}: s \to s}
\quad \frac{s,t \in \Strings{\Pl{N}}}{\sigma_{s,t}: {s \tensor t} \to {t \tensor s}}\\\\
\frac{t \in \Tr{N}}{t_{u,v}: u \to v} \quad \forall u,v.(\Multiplicity{}(u)= \Pin{t}{N} \wedge \Multiplicity{} (v)= \Pout{t}{N})\\\\
\frac{\alpha: A \to B, \,\, \alpha': A' \to B'}{\alpha \tensor \alpha': {A \tensor A'} \to {B \tensor B'}}
\qquad
\frac{\alpha: A \to B, \,\, \beta: B \to C}{\alpha;\beta: A \to C}
\end{gather*}
\item Morphisms are quotiented by the same axioms of Definition~\ref{def: free smc} plus the following one, for $p: u \to u'$, $q: v \to v'$ symmetries:
\begin{equation}\label{eq: Sassone axiom}
p;t_{u',v'} = t_{u,v};q
\end{equation}
\end{itemize}
\end{definition}
This strategy works, but it is computationally infeasible: For each transition $t$, we need to introduce as many generating morphisms as there are permutations of the input/output places of $t$. If for instance $t$ has $10$ different inputs and no outputs, we need to introduce $10!$ generating morphisms. Moreover, they have to be linked together by quotienting the category with supplementary equations, resulting in more computational overhead. (Quotients are difficult to deal with in code.) Although the quotienting overhead can be reduced (for instance using \texttt{postulate} in Idris), the superexponential explosion of generating morphisms is not avoidable. All this violates Requirement~\ref{item: mapping is computationally feasible}.
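For concreteness, consider a hypothetical transition $t$ with input places $\{a,b,c\}$ and output places $\{d,e\}$, all distinct (the mini-example is ours, chosen only for illustration). Definition~\ref{def: unordered executions} introduces one generating morphism $t_{u,v}$ for every couple of linearizations of inputs and outputs:
\begin{equation*}
t_{abc,de},\ t_{abc,ed},\ t_{acb,de},\ \dots \qquad 3!\cdot 2! = 12\ \text{generating morphisms in total}\ ,
\end{equation*}
all of which then have to be identified up to symmetries via axiom~\ref{eq: Sassone axiom}.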
Furthermore, axiom~\ref{eq: Sassone axiom} in Definition~\ref{def: unordered executions} makes $\FoldSym(N)$ non-free, violating Requirement~\ref{item: mapping net to FSSMC}. This depends on the fact that axiom~\ref{eq: Sassone axiom} doesn't just relate generating morphisms to other generating morphisms, but also to themselves, in case they have repeated entries in their sources and targets. Using wiring diagrams to represent morphisms, for a transition $t: s \otimes s \to v$ axiom~\ref{eq: Sassone axiom} entails that the diagrams in Figure~\ref{fig: sassone fails} are considered equal.
\begin{figure}[!h]
\centering
\begin{tikzpicture}[oriented WD, bb port length=10pt]
\node[bb={2}{1}] (X1) {$t$};
\draw[label]
node [yshift=2, right=2pt of X1_out1] {$v$}
;
\node[bb={2}{0}, transparent, left = 0.5cm of X1] (T1) {$t$};
\draw[label]
node [yshift=2, left=2pt of T1_in1] {$s$}
node [yshift=2, left=2pt of T1_in2] {$s$}
;
\draw (T1_in1') to (X1_in2);
\draw (T1_in2') to (X1_in1);
\node[right = 1.5cm of X1] (eq) {$=$};
\node[bb={2}{1}, right = 1.5cm of eq] (X2) {$t$};
\draw[label]
node [yshift=2, left=2pt of X2_in1] {$s$}
node [yshift=2, left=2pt of X2_in2] {$s$}
node [yshift=2, right=2pt of X2_out1] {$v$}
;
\begin{pgfonlayer}{background}
\filldraw [line width=4mm,join=round,\backgrnd]
([xshift=-5mm]X1.north -| T1_in1.west) rectangle ([xshift=5mm]X1.south -| X1_out1.east);
\filldraw [line width=4mm,join=round,\backgrnd]
([xshift=-5mm]X2.north -| X2_in1.west) rectangle ([xshift=5mm]X2.south -| X2_out1.east);
\end{pgfonlayer}
\end{tikzpicture}
\caption{According to Definition~\ref{def: unordered executions} these diagrams are equal, making the category non-free}\label{fig: sassone fails}
\end{figure}
\noindent
This is a problem when it comes to mapping net computations to a semantics: As tuples are not commutative in the general case, our functorial correspondence cannot be symmetric monoidal.
Finally, the notion of morphism between the categories $\FoldSym(N)$ is highly impractical, since morphisms between them are equivalence classes of functors. This equivalence collapses functors with radically different behaviors if we embrace the idea that symmetries are important to keep track of ``which-token-is-doing-what'' in a net. This is in contrast with Requirement~\ref{item: mapping is faithful and full}.
\section{The curse of linearization}
As we have seen, many approaches come very close to showing adjunctions or even equivalences between something that very closely resembles the category $\Petri$ and something that very closely resembles the category $\FSSMC$. However, no approach really nails down a ``computer-friendly'' correspondence between $\Petri$ and $\FSSMC$ themselves. Why is that? As we will see, the issue is that there is no canonical way to linearize multiset homomorphisms that is preserved by symmetric monoidal functors.
At the most basic level, to send transitions of a net to generators in a FSSMC we need to linearize its inputs and outputs from multisets to strings, hence we need to have a function $\Ordering{S}: \Msets{S} \to \Strings{S}$ such that $\Ordering{S};\Multiplicity{} = \id{\Msets{S}}$ for every set of generators $S$.
For obvious reasons, there is no canonical choice for such a function without imposing additional structure on the generating set and, according to our desiderata, we don't want this additional structure to ``get in the way'' when we use nets for practical purposes.
Luckily, there are many different ways to obtain $\Ordering{S}$ without having to impose unreasonable requirements on the net. We present one of the many possible approaches in Appendix~\ref{app: Appendix}, and for now we simply assume $\Ordering{}$ to be given. With it, we can cleanly linearize transition inputs/outputs and generate a FSSMC from a net.
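To make this concrete, here is a minimal sketch of one possible ordering function -- in Haskell rather than in the Idris of our implementation, and assuming a total order on the generators, which is exactly the kind of additional structure one would rather not impose on the net itself (Appendix~\ref{app: Appendix} shows how to avoid it):
\begin{verbatim}
import Data.List (sort)

-- A multiset over s, canonically represented as a sorted list;
-- a string over s is a list in which the order is meaningful.
type Multiset s = [s]
type Str s      = [s]

-- ordering: linearize a multiset into a string by sorting.
ordering :: Ord s => Multiset s -> Str s
ordering = sort

-- multiplicity: forget the order, returning the canonical
-- (sorted) representative of the underlying multiset.
multiplicity :: Ord s => Str s -> Multiset s
multiplicity = sort
\end{verbatim}
On canonical representatives \texttt{multiplicity . ordering} is the identity, which is precisely the required section property.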
\begin{definition}\label{def: free functor on objects}
Given a Petri net $N$, we map it to the FSSMC $\OFoldSym N$ generated as in Definition~\ref{def: free smc}, with $\Pl{N}$ as the set of object generators, and generating morphisms given by
\begin{equation*}
T := \suchthat{(t,\Ordering{N}(\Pin{t}{N}),\Ordering{N}(\Pout{t}{N}))}{t \in \Tr{N}}
\end{equation*}
\end{definition}
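Continuing the Haskell sketch above (the types are our own illustration, not the actual Idris code), Definition~\ref{def: free functor on objects} assigns to each net transition exactly one generating morphism, with source and target fixed once and for all by \texttt{ordering}:
\begin{verbatim}
-- A net transition: a name with input/output multisets of
-- places; a generator: a name with source/target strings.
data Transition p = Transition String (Multiset p) (Multiset p)
data Generator  p = Generator  String (Str p)      (Str p)

-- The generating morphisms of the FSSMC associated to a net.
generators :: Ord p => [Transition p] -> [Generator p]
generators ts =
  [ Generator n (ordering i) (ordering o)
  | Transition n i o <- ts ]
\end{verbatim}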
It is useful to compare our mapping with the one provided in Definition~\ref{def: unordered executions}: The main difference between the two is that the use of the ordering function $\Ordering{N}$ in Definition~\ref{def: free functor on objects} is replaced by a universally quantified statement over multiplicities using $\Multiplicity{}$ in Definition~\ref{def: unordered executions}. The computational overhead introduced by Definition~\ref{def: unordered executions} ultimately depends on the fact that the function $\Ordering{N}$, which provides a canonical choice for generating morphisms, cannot be defined without imposing further structure on the net. On the contrary, in Definition~\ref{def: free functor on objects} we are able to use it to reduce the number of generating morphisms introduced from $n! \cdot m!$ in the worst case to just $1$, while getting rid of quotients altogether and keeping the category free at the same time.
We conclude the section with the observation that the mapping from nets to FSSMCs is invertible, the proof of which is obvious from the definitions:
\begin{definition}\label{def: forgetful functor on objects}
To every FSSMC $\CategoryC$, we associate the net $\UnFoldSym \CategoryC$ defined as follows:
\begin{itemize}
\item Places of $\UnFoldSym \CategoryC$ are the generating objects of $\CategoryC$;
\item Transitions of $\UnFoldSym \CategoryC$ are the generating morphisms of $\CategoryC$;
\item $\Pin{t}{\UnFoldSym \CategoryC}$, for $t: A \to B$ generating morphism of $\CategoryC$, is defined as $\Multiplicity{}(A)$;
\item $\Pout{t}{\UnFoldSym \CategoryC}$, for $t: A \to B$ generating morphism of $\CategoryC$, is defined as $\Multiplicity{}(B)$.
\end{itemize}
\end{definition}
\begin{proposition}\label{prop: fold-unfold isomorphism}
For any net $N$, $\UnFoldSym \OFoldSym N$ is isomorphic to $N$. For each FSSMC $\CategoryC$, $\OFoldSym \UnFoldSym \CategoryC$ is isomorphic to $\CategoryC$.
\end{proposition}
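In our running Haskell sketch, the forgetful direction of Definition~\ref{def: forgetful functor on objects} is immediate:
\begin{verbatim}
-- Recover a net from an FSSMC presentation: each morphism
-- generator becomes a transition whose pre- and post-sets are
-- the multiplicities of its source and target strings.
unFold :: Ord p => [Generator p] -> [Transition p]
unFold gs =
  [ Transition n (multiplicity u) (multiplicity v)
  | Generator n u v <- gs ]
\end{verbatim}
Composing \texttt{generators} with \texttt{unFold} returns the list of transitions we started from, which is the content of Proposition~\ref{prop: fold-unfold isomorphism} at the level of this toy encoding.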
\subsection{Transition-preserving functors}
We are quite happy: There are ways to linearize the input/output places of Petri nets that allow us to put them in bijection with FSSMCs. We now need to find a suitable notion of morphism between FSSMCs which can be considered as the linearized counterpart of a net morphism. In our conceptual framework, morphism generators represent net transitions, while symmetries and identities represent the ``necessary bookkeeping'' to deterministically spell out the causal flow of tokens. So, it seems reasonable to restrict to functors that send generating morphisms to generating morphisms. We make this precise:
\begin{definition}
Let $\CategoryC, \CategoryD$ be FSSMCs. A strict monoidal functor $F:\CategoryC \to \CategoryD$ is called \emph{transition-preserving} if it maps generating morphisms in $\CategoryC$ to morphisms in $\CategoryD$ of the form $\sigma;t;\sigma'$, where $t$ is a generating morphism and $\sigma, \sigma'$ are symmetries.
\end{definition}
Notice how functors between FSSMCs as defined in~\cite{Baldan2003} are transition-preserving, as are representatives of the equivalence classes that are used in~\cite{Sassone1995} to define functors. It can easily be proven that our definition behaves well with respect to the categorical structure of $\FSSMC$, and in fact:
\begin{proposition}\label{prop: transition-preserving subcategory}
FSSMCs and transition-preserving functors form a subcategory of $\FSSMC$. Similarly, FSSMCs and grounded, transition-preserving functors form a subcategory of $\GFSSMC$. From now on we will use $\FSSMC$ and $\GFSSMC$ to denote such restricted categories.
\end{proposition}
\begin{proof}
Clearly, identity functors are transition-preserving. Now let $F:\CategoryC \to \CategoryD$ and $G: \CategoryD \to \CategoryE$ be transition-preserving functors. If $t_\CategoryC$ is a generator of $\CategoryC$, then it is mapped by $F$ to $\sigma_F;t_\CategoryD;\sigma'_F$ in $\CategoryD$. $t_\CategoryD$ is itself a generator, so it is mapped by $G$ to $\sigma_G; t_\CategoryE; \sigma'_G$. Putting all of this together, we find that $F;G$ maps the generator $t_\CategoryC$ to:
\begin{align*}
GFt_\CategoryC &= G(\sigma_F;t_\CategoryD;\sigma'_F)\\
&= G\sigma_F ; Gt_\CategoryD; G\sigma'_F\\
&= G\sigma_F; (\sigma_G;t_\CategoryE;\sigma'_G); G\sigma'_F\\
&= (G\sigma_F; \sigma_G);t_\CategoryE;(\sigma'_G; G\sigma'_F)
\end{align*}
Because $G$ is strict monoidal, $G\sigma_F$ and $G\sigma'_F$ are symmetries. Hence, $G\sigma_F;\sigma_G$ and $\sigma'_G;G\sigma'_F$ are symmetries as well, proving that $F;G$ is transition-preserving.
\end{proof}
The choice of restricting to transition-preserving functors does not break any of the requirements spelled out in Definition~\ref{def: requirements}: Non-transition-preserving functors do not have any interpretation as morphisms of net computations.
It is also worth noting that the isomorphism between $\CategoryC$ and $\OFoldSym \UnFoldSym \CategoryC$ of Proposition~\ref{prop: fold-unfold isomorphism} is transition-preserving, so we can still ``go back and forth'' from nets to computations, and vice-versa, in our restricted setting.
To be sure that our definition is sensible, we still have to check that we can lift net morphisms to transition-preserving functors. What we have in mind is something along the lines of the following proposition:
\begin{proposition}[Sketch]\label{def: free functor on morphisms}
Let $f: N \to M$ be a morphism of nets. $f$ induces a transition-preserving functor $\OFoldSym f: \OFoldSym N \to \OFoldSym M$ as follows:
\begin{itemize}
\item A generating object $s$ of $\OFoldSym N$ is a place in $N$, so we set $(\OFoldSym f)s = \Ordering{M} \PlM{f}(s)$;
\item We extend the mapping $\OFoldSym f$ to all objects by using the fact that the monoid of objects of $\OFoldSym N$ is free;
\item On morphisms, we send identities to identities and symmetries to symmetries. If $t: \Ordering{N}\Pin{t}{N} \to \Ordering{N}{\Pout{t}{N}}$ is a generator of $\OFoldSym N$, we send it to:
\begin{equation*}
(\OFoldSym f) \Ordering{N}\Pin{t}{N} \xrightarrow{\bar{\sigma}} \Ordering{M}\Pin{\TrM{f}t}{M} \xrightarrow{\TrM{f}t} \Ordering{M}\Pout{\TrM{f}t}{M} \xrightarrow{\bar{\sigma}'} (\OFoldSym f) \Ordering{N}\Pout{t}{N}
\end{equation*}
\item We extend $\OFoldSym f$ to all the remaining morphisms by (monoidal) composition.
\end{itemize}
If $f$ is grounded, so is $\OFoldSym f$.
\end{proposition}
The way we set things up in Proposition~\ref{def: free functor on morphisms} seems to be the only sensible thing to do. Nevertheless, there is a problem: For each generating morphism of $\OFoldSym N$, how do we define the symmetries $\bar{\sigma}$, $\bar{\sigma}'$? This is the key point that makes getting a functorial correspondence from nets to FSSMCs so difficult: Linearizing multisets to strings is easy, while linearizing multiset \emph{homomorphisms} to string homomorphisms is not! Before making this precise, we prove that there are choices of $\bar{\sigma}, \bar{\sigma}'$ for which Proposition~\ref{def: free functor on morphisms} holds.
\begin{proof}[Proof of Proposition~\ref{def: free functor on morphisms}]
On objects there is nothing to prove, since the mapping is obtained by applying freeness on the mapping on generators, which makes it monoidal by definition. This is also sufficient to prove the last statement: If $f$ is a grounded homomorphism of multisets on places, $\OFoldSym f$ is a grounded monoid homomorphism on objects.
On morphisms, identities are mapped to identities and symmetries to symmetries, so symmetry and identity preservation laws hold trivially. We need to check that the functor is well-defined on generating morphisms, hence proving:
\begin{equation*}
\Multiplicity{}((\OFoldSym f) \Ordering{N}\Pin{t}{N}) =
\Multiplicity{}(\Ordering{M}\Pin{\TrM{f}t}{M})
\quad
\Multiplicity{}(\Ordering{M}\Pout{\TrM{f}t}{M}) =
\Multiplicity{}((\OFoldSym f) \Ordering{N}\Pout{t}{N})
\end{equation*}
We focus on the first one, the proof of the second being analogous. $\Ordering{N}\Pin{t}{N}$ is an object of $\OFoldSym N$, and so it is a string $s_1 \dots s_n$. By definition,
\begin{align*}
(\OFoldSym f) \Ordering{N}\Pin{t}{N}
&= (\OFoldSym f) (s_1 \dots s_n) \\
&= (\OFoldSym f) s_1 \dots (\OFoldSym f) s_n \\
&= \Ordering{M} \PlM{f}(s_1) \dots \Ordering{M} \PlM{f}(s_n)
\end{align*}
Hence, taking multiplicities,
\begin{align*}
\Multiplicity{}((\OFoldSym f) \Ordering{N}\Pin{t}{N})
&= \Multiplicity{}(\Ordering{M} \PlM{f}(s_1) \dots \Ordering{M} \PlM{f}(s_n)) \\
&= \Multiplicity{}(\PlM{f}(s_1) \dots \PlM{f}(s_n)) \\
&= \PlM{f}\Multiplicity{}(s_1 \dots s_n) \\
&= \PlM{f}(\Pin{t}{N}) \\
&= \Pin{\TrM{f}t}{M} \\
&= \Multiplicity{}(\Ordering{M}\Pin{\TrM{f}t}{M})
\end{align*}
This is enough to guarantee that $(\OFoldSym f) \Ordering{N}\Pin{t}{N}$ and $\Ordering{M}\Pin{\TrM{f}t}{M}$ are permutations of one another, so there exists a symmetry $\bar{\sigma}$ between them. It also follows immediately that $\OFoldSym f$ is transition-preserving, by definition.
Preservation of monoidal products and compositions holds trivially since we defined them freely from generating morphisms, identities and symmetries.
\end{proof}
Proposition~\ref{def: free functor on morphisms} says that if for each generating morphism we can pick a symmetry -- any symmetry -- between $(\OFoldSym f) \Ordering{N}\Pin{t}{N}$ and $\Ordering{M}\Pin{\TrM{f}t}{M}$ on one hand, and between $\Ordering{M}\Pout{\TrM{f}t}{M}$ and $(\OFoldSym f) \Ordering{N}\Pout{t}{N}$ on the other, then we get a functor. Clearly we want this choice to respect functor composition. Alas, this is not possible without imposing further structure on the nets in a way that violates our requirements. We will demonstrate this by investigating the structure of symmetries in a FSSMC.
\subsection{Chasing symmetries}
\begin{proposition}\label{prop: different objects unique symmetry}
Let $\sigma: s \to s'$ be a morphism in $\Sym{S}$. If $s$ can be written as a monoidal product $s_1 \dots s_n$ of elements of $S$, with $s_i = s_j$ iff $i = j$, then for any other symmetry $\sigma': s \to s'$ we have $\sigma = \sigma'$.
\end{proposition}
\begin{proof}
Using the coherence theorem for string diagrams of symmetric monoidal categories~\cite{Selinger2010}, $\Sym{S}$ is a category where objects are wires and morphisms are identities and crossings. So, $\sigma:s \to s'$ can be represented as a bunch of wires, each going from an $s_i$ in $s$ to exactly one $s'_j$ in $s'$, with $s_i = s'_j$. Coherence for symmetric monoidal categories means that only connectivity matters, that is, two symmetries $\sigma, \sigma'$ are considered equal if the wiring of $\sigma$ can be topologically rearranged into the wiring of $\sigma'$, keeping the ends of the wires fixed in their positions, and without bending wires into a ``U'' shape.
Since $\Strings{S}$ is free and generated by $S$, the decomposition of $s$ into object generators $s_1 \dots s_n$ is unique. The same can be said of $s'$ with $s' = s'_1 \dots s'_m$, and $\sigma: s \to s'$ implies that $n = m$. Given that all $s_i$ are distinct, each $i$ has exactly one corresponding $j$ such that $s_i = s'_j$, and $\sigma$ must connect the two. This completes the proof, since any other $\sigma$ is forced to make the same connection between $s_i$ and $s'_j$ for an arbitrary $i$, and by coherence, connectivity of ends is the only thing that matters to rearrange a tangle of wires into another.
\end{proof}
Notice how the proposition above does not hold if we allow repeated generating objects in the decomposition of $s$. In fact, if $s = s_1 s_1$ for some generating object $s_1$, the morphisms below cannot be topologically deformed into one another:
\begin{figure}[!h]
\centering
\begin{tikzpicture}[oriented WD, bb port length=10pt]
\node[bb={2}{2}, transparent] (X1) {};
\draw[label]
node [yshift=2, left=2pt of X1_in1] {$s_1$}
node [yshift=2, left=2pt of X1_in2] {$s_1$}
node [yshift=2, right=2pt of X1_out1] {$s_1$}
node [yshift=2, right=2pt of X1_out2] {$s_1$}
;
\draw (X1_in1') to (X1_out2');
\draw (X1_in2') to (X1_out1');
\node[bb={2}{2}, transparent, right = 5cm of X1] (X2) {};
\draw[label]
node [yshift=2, left=2pt of X2_in1] {$s_1$}
node [yshift=2, left=2pt of X2_in2] {$s_1$}
node [yshift=2, right=2pt of X2_out1] {$s_1$}
node [yshift=2, right=2pt of X2_out2] {$s_1$}
;
\draw (X2_in1') to (X2_out1');
\draw (X2_in2') to (X2_out2');
\begin{pgfonlayer}{background}
\filldraw [line width=4mm,join=round,\backgrnd]
([xshift=-5mm]X1.north -| X1_in1.west) rectangle ([xshift=5mm]X1.south -| X1_out1.east);
\filldraw [line width=4mm,join=round,\backgrnd]
([xshift=-5mm]X2.north -| X2_in1.west) rectangle ([xshift=5mm]X2.south -| X2_out1.east);
\end{pgfonlayer}
\end{tikzpicture}
\end{figure}
\noindent
Focusing on the counterexample a bit more, we see that the problem with repeated generating objects is that they may or may not be swapped, and that these two choices are not equivalent. Luckily, there is a clean way to establish whether or not a symmetry swaps occurrences of the same object generator.
\begin{definition}
A morphism in $\Sym{S}$ is called a \emph{basic block} if it is of the form $\id{u} \tensor \sigma_{s_1,s_2} \tensor \id{t}$ for some objects $u,t$ and some object generators $s_1,s_2$.
\end{definition}
\begin{fact}
Using coherence conditions, any symmetry $\sigma: s \to s'$ can be written as a finite composition of basic blocks.
\end{fact}
\begin{definition}\label{def: lifting strings}
Let $S$ be a set. Given a string $s \in \Strings{S}$, denote with $s^l$ the string obtained as follows:
\begin{itemize}
\item Starting from the left, consider the $i$-th entry of $s$, call it $s_i$. It will be equal to some generator $t$.
\item If we have $s_i \neq s_j$ for all $j < i$, then set $s_i$ equal to the formal symbol $t^1$;
\item Otherwise, there are entries $j < i$ with $s_j = t = s_i$. Consider the largest such $j$. If $t$ has been replaced with the formal symbol $t^k$ at entry $j$, replace it with $t^{k+1}$ at entry $i$.
\end{itemize}
\end{definition}
\begin{example}
Consider the string $s = aababbccba$ on the set $\{a,b,c\}$. Applying Definition~\ref{def: lifting strings}, we get $s^l = a^1 a^2 b^1 a^3 b^2 b^3 c^1 c^2 b^4 a^4$.
\end{example}
$s^l$ is just a rewriting of $s$ in which all equal entries have been distinguished by enumeration. It follows trivially that $s^l$ is a string itself:
\begin{proposition}
If $s \in \Strings{S}$, then $s^l \in \Strings{{S^l}}$, where
\begin{equation*}
S^l := \suchthat{s^i}{s \in S \wedge i \in \naturals}
\end{equation*}
Moreover, there is an obvious functor $\Sym{S^l} \to \Sym{S}$ which collapses $s^i$ to $s$ for all $i \in \naturals$.
\end{proposition}
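The labelling procedure is a one-liner to implement; continuing our Haskell sketch, a labelled generator $t^k$ is represented as the pair \texttt{(t, k)}:
\begin{verbatim}
import qualified Data.Map as Map

-- Replace the i-th occurrence of each generator by the pair
-- (generator, running count), so equal entries become distinct.
label :: Ord s => [s] -> [(s, Int)]
label = go Map.empty
  where
    go _    []       = []
    go seen (x : xs) =
      let k = Map.findWithDefault 0 x seen + 1
      in  (x, k) : go (Map.insert x k seen) xs

-- label "aababbccba" ==
--   [('a',1),('a',2),('b',1),('a',3),('b',2),('b',3),
--    ('c',1),('c',2),('b',4),('a',4)]
\end{verbatim}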
We now extend the procedure we defined for strings to symmetries between them, as follows:
\begin{definition}\label{def: lifting symmetries}
Let $\sigma_1;\dots;\sigma_n: s \to s'$ be a composition of basic blocks in $\Sym{S}$. Denote by $\sigma_1^l;\dots;\sigma_n^l$ the symmetry obtained by using the following procedure:
\begin{itemize}
\item Suppose $\sigma_1 := \id{u} \tensor \sigma_{s_1,s_2} \tensor \id{t}$. Define $\sigma_1^l$ by replacing $s_1, s_2$ and every generating object in $u,t$ with their labelled symbols so that the source of $\sigma_1^l$ is $s^l$;
\item Apply the previous point to $\sigma_{i+1}$ such that the source of $\sigma_{i+1}^l$ coincides with the target of $\sigma_i^l$;
\item Repeat the step for every $i$.
\end{itemize}
\end{definition}
\begin{example}
Consider $\sigma_1 = \id{a} \tensor \sigma_{a,a} \tensor \id{b \tensor b}$ and $\sigma_2 = \id{a \tensor a \tensor a} \tensor \sigma_{b,b}$. Then $\sigma^l_1;\sigma^l_2$ is given by composing $\sigma^l_1 = \id{a^1} \tensor \sigma_{a^2, a^3} \tensor \id{b^1 \tensor b^2}$ and $\sigma^l_2 = \id{a^1 \tensor a^3 \tensor a^2} \tensor \sigma_{b^1, b^2}$.
\end{example}
Note how Definitions~\ref{def: lifting strings} and~\ref{def: lifting symmetries} are well-defined, since every object in a free symmetric monoidal category can be decomposed into generating objects in a unique way.
\begin{definition}
A symmetry $\sigma: s \to s'$ is called \emph{swap-free} if, for \emph{any} decomposition of $\sigma$ into basic blocks $\sigma = \sigma_1; \dots; \sigma_n$ and fixed labelings of generators $t^j, t^k$, there is a bijective correspondence between the sets:
\begin{equation*}
\suchthat{i}{\sigma_i^l := \id{u} \tensor \sigma_{t^j,t^k} \tensor \id{t}} \simeq \suchthat{i}{\sigma_i^l := \id{u'} \tensor \sigma_{t^k,t^j} \tensor \id{t'}}
\end{equation*}
\end{definition}
Intuitively, a symmetry is swap-free if crossings of wires with the same label can always be disentangled into identities. Indeed, it is easy to check that symmetries between objects which are products of distinct generators are always swap-free. Moreover, swap-free symmetries are well-behaved:
\begin{proposition}\label{prop: swap-free compose}
Composition of swap-free symmetries is swap-free.
\end{proposition}
\begin{proof}
Consider $\sigma, \tau$ composable and swap-free. Then, let $\sigma_1;\dots;\sigma_n;\tau_1;\dots;\tau_m$ be a decomposition of $\sigma;\tau$ into basic blocks. Consider $\sigma^l_1;\dots;\sigma^l_n;\tau^l_1;\dots;\tau^l_m$, and fix two labeled generators $t^j, t^k$. Then we have, applying some tedious but straightforward index rewriting to the $\tau$s:
\begin{align*}
&\suchthat{i}{\sigma_i^l := \id{u} \tensor \sigma_{t^j,t^k} \tensor \id{t}}
\sqcup
\suchthat{i}{\tau_i^l := \id{u} \tensor \sigma_{t^j,t^k} \tensor \id{t}}
\simeq\\
&\qquad\qquad \suchthat{i}{\sigma_i^l := \id{u'} \tensor \sigma_{t^k,t^j} \tensor \id{t'}}
\sqcup
\suchthat{i}{\tau_i^l := \id{u'} \tensor \sigma_{t^k,t^j} \tensor \id{t'}}
\end{align*}
where the bijection follows because the corresponding sets are pairwise bijective by hypothesis.
\end{proof}
On the contrary, non-swap-freeness is not preserved under composition, as the following example shows:
\begin{example}\label{ex: non-swap-free do not compose}
Consider the symmetry $\sigma_{a,a}$. It is clearly not swap-free, but it can be composed with itself, giving $\sigma_{a,a};\sigma_{a,a} = \id{a \tensor a}$ which is swap-free.
\end{example}
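For a fixed decomposition into basic blocks, swap-freeness can be checked mechanically. Continuing our Haskell sketch (and reusing \texttt{label} from above), we encode the basic block $\id{u} \tensor \sigma_{s_1,s_2} \tensor \id{t}$ by the position $|u|+1$ at which it swaps two adjacent wires; note that the definition quantifies over \emph{all} decompositions, so this checks a single witness only:
\begin{verbatim}
-- A symmetry as a list of basic blocks; block i swaps the
-- wires currently sitting at positions i and i+1 (1-based).
swapFree :: Ord s => [s] -> [Int] -> Bool
swapFree s0 blocks = balanced (go (label s0) blocks Map.empty)
  where
    go _   []       m = m
    go str (i : is) m =
      let (x, y) = (str !! (i - 1), str !! i)
          str'   = take (i - 1) str ++ [y, x] ++ drop (i + 1) str
      in  go str' is (Map.insertWith (+) (x, y) 1 m)
    -- only crossings between equally-labelled copies of the
    -- same generator matter for swap-freeness:
    balanced m = and
      [ c == Map.findWithDefault 0 (y, x) m
      | ((x@(g, _), y@(h, _)), c) <- Map.toList m, g == h ]

-- swapFree "aa" [1]    == False  -- sigma_{a,a}
-- swapFree "aa" [1, 1] == True   -- sigma_{a,a} ; sigma_{a,a}
\end{verbatim}
The two evaluations in the comments confirm Example~\ref{ex: non-swap-free do not compose}.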
\subsection{Goodbye, functoriality!}
The relevance of swap-free symmetries lies in the fact that ``counting how many times wires cross each other'' is the only property that, modulo coherence conditions for SMCs, allows us to distinguish different symmetries between objects.
In fact, if a FSSMC $\CategoryC$ has objects generated by a set $S$, then we can faithfully map $\Sym{S}$ to $\CategoryC$, and all the considerations we made on symmetries of $\Sym{S}$ can be transported to $\CategoryC$. By generalizing Proposition~\ref{prop: different objects unique symmetry}, given two objects there is at most one symmetry between them once we decide how -- if at all -- wires labeled in the same way have to cross. In particular, there is at most one swap-free symmetry between any two objects.
Proposition~\ref{prop: swap-free compose} and Example~\ref{ex: non-swap-free do not compose} show that swap-freeness is the only kind of wire-crossing behavior which is invariant under composition, and hence the only viable choice for $\bar{\sigma}$ and $\bar{\sigma}'$ in Definition~\ref{def: free functor on morphisms}. Unfortunately, this is when our valiant chase after adjunctions collides with reality: Our choice is not functorial! In fact:
\begin{theorem}\label{prop: swap-freeness is not functorial}
A transition-preserving functor preserves swap-free symmetries iff it is injective on objects.
\end{theorem}
\begin{proof}
We prove the contrapositive of one direction, which is easier. Suppose $F: \CategoryC \to \CategoryD$ is not injective on objects. Then there are generating objects $s_1 \neq s_2$ of $\CategoryC$ such that $Fs_1 = Fs_2$. Hence $\sigma_{s_1,s_2}$ is swap-free but $F\sigma_{s_1, s_2} = \sigma_{Fs_1, Fs_2} = \sigma_{Fs_1, Fs_1}$ is not, and $F$ doesn't preserve swap-free symmetries.
Conversely, suppose that $F$ is injective on objects. Then, if $\sigma$ is swap-free, applying Definition~\ref{def: lifting strings} we get, for any decomposition of $\sigma$ into basic blocks and each couple of labelled generators $t^j, t^k$, a bijection
\begin{equation*}
\suchthat{i}{\sigma_i^l := \id{u} \tensor \sigma_{t^j,t^k} \tensor \id{t}} \simeq \suchthat{i}{\sigma_i^l := \id{u'} \tensor \sigma_{t^k,t^j} \tensor \id{t'}}
\end{equation*}
Since $F$ is injective, all the objects in $\CategoryC$ are mapped to different objects in $\CategoryD$, and labelings are preserved just by rewriting generators, so we get $F(t^j) = (Ft)^j$, $F(t^k) = (Ft)^k$. From this we can extend the bijection above to
\begin{equation*}
\suchthat{i}{F\sigma_i^l := \id{Fu} \tensor \sigma_{(Ft)^j,(Ft)^k} \tensor \id{Ft}} \simeq \suchthat{i}{F\sigma_i^l := \id{Fu'} \tensor \sigma_{(Ft)^k,(Ft)^j} \tensor \id{Ft'}}
\end{equation*}
This is sufficient to prove swap-freeness of $F\sigma$, since all generating objects in $F\sigma$ are the image of generating objects of $\sigma$ through $F$.
\end{proof}
\begin{example}\label{ex: functoriality not working}
Suppose we require $\bar{\sigma}$ and $\bar{\sigma}'$ to be swap-free in Definition~\ref{def: free functor on morphisms}. Consider net morphisms $N \xrightarrow{f} M \xrightarrow{g} L$. By definition, if $f(t_N) = t_M$, then $\OFoldSym f$ will send the generator $t_N$ to $\sigma;t_M;\sigma'$, with $\sigma, \sigma'$ swap-free symmetries in $\OFoldSym M$. Again by definition, if $g(t_M) = t_L$, $\OFoldSym g$ will send $t_M$ to $\tau; t_L; \tau'$, with $\tau, \tau'$ swap-free in $\OFoldSym L$. Then the composition $\OFoldSym f ; \OFoldSym g$ will send $t_N$ to:
\begin{equation*}
(\OFoldSym g) \sigma; (\OFoldSym g) t_M; (\OFoldSym g) \sigma' = (\OFoldSym g) \sigma; \tau; t_L; \tau' ; (\OFoldSym g) \sigma'
\end{equation*}
If $g$ is not injective on places, $(\OFoldSym g) \sigma$, $(\OFoldSym g) \sigma'$ are not in general swap-free. Hence, we cannot conclude that $(\OFoldSym g) \sigma; \tau$ and $\tau';(\OFoldSym g) \sigma'$ are swap-free. On the other hand, $\OFoldSym(f;g)$ sends $t_N$ to $\rho;t_L;\rho'$ with $\rho, \rho'$ swap-free in $\OFoldSym L$, proving $\OFoldSym(f;g) \neq \OFoldSym f ; \OFoldSym g$.
\end{example}
The significance of Theorem~\ref{prop: swap-freeness is not functorial}, in light of Example~\ref{ex: functoriality not working}, is that if we want our requirements in Definition~\ref{def: requirements} to be satisfied we have to give up functoriality: A functor between FSSMCs is injective on objects only if its corresponding morphism between nets is, and restricting to injective net morphisms is unacceptable for our requirements. Hence, we conclude that there is no way to isolate ``nice'' ways to linearize multiset homomorphisms just from the topological properties of symmetries, at least not without violating Requirement~\ref{item: morphisms are automatically lifted} in our list.
Note also that we cannot hope for a non-invasive quick fix as we did in Definition~\ref{def: free functor on objects}, since in linearizing a multiset homomorphism we are forced to consider the relationship between its source and target: The only way to lift a net morphism to a functor between FSSMCs is to impose additional structure on the net \emph{which the morphisms are required to respect}. This is akin to the strategy adopted for pre-nets, which we have already deemed unsuitable for our purposes.
\subsection{A word on higher-categorical approaches}
One may be tempted to investigate a higher-categorical correspondence between nets and FSSMCs, replacing functors with pseudofunctors, and chasing a weak 2-adjunction instead. This approach does not work either. First, notice how natural transformations, which are the standard way to define 2-cells in our situation, do not really help: A natural transformation between functors $F,G: \CategoryC \to \CategoryD$ acts by selecting, for each object of $\CategoryC$, a morphism of $\CategoryD$. On the contrary, a transition-preserving functor acts by selecting a couple of symmetries in $\CategoryD$ for \emph{each} morphism generator in $\CategoryC$. This is to say that the idea of defining a natural transformation whose components are all symmetries between $\OFoldSym(f;g)$ and $\OFoldSym f ; \OFoldSym g$ won't work, since differences between the two are too granular to satisfy the required naturality conditions.
Another approach may be replacing natural transformations with endofunctors sending generators to permutations of themselves, and use such endofunctors to define 2-cells. This is the strategy that the authors pursued in trying both to preserve the adjunction and to have their requirements satisfied, before realizing this was not possible. Indeed, careful investigation of the categorical structure shows how, for these 2-cells to be effective in defining a pseudofunctorial correspondence between nets and FSSMCs, one needs to restrict to transition-preserving functors which are faithful, again violating Requirement~\ref{item: morphisms are automatically lifted}.
As a final note, the authors want to stress that implementing a 1-categorical version of the correspondence between nets and FSSMCs is already pushing the Idris compiler to its limits~\cite{Perone2019}. Hence, even supposing that it exists, the absence of tactics and other refined theorem-proving tools in all production-ready programming languages would indeed make implementing a 2-categorical version of our problem very difficult. Knowing this, the authors kept their investigations in the realm of higher category theory as limited as possible.
\section{A different approach}
Although the last section may seem to cast a somewhat pessimistic light on the question of how to relate nets and FSSMCs in an implementation-friendly way, all is not lost. In fact, we have the following result:
\begin{proposition}
There is a functor from $\FSSMC$ to $\Petri$, which can be restricted to a functor from $\GFSSMC$ to $\GPetri$.
\end{proposition}
\begin{proof}
We send each FSSMC $\CategoryC$ to $\UnFoldSym \CategoryC$. If $F: \CategoryC \to \CategoryD$ is a transition-preserving functor sending $t_\CategoryC$ to $\sigma;t_\CategoryD;\sigma'$, we define $\UnFoldSym F: \UnFoldSym \CategoryC \to \UnFoldSym \CategoryD$ by sending the transition $t_\CategoryC$ to the transition $t_\CategoryD$. On places, we just set $\UnFoldSym F p = \Multiplicity{} F p$. Functoriality is a straightforward check.
Restricting to grounded FSSMCs, if $F:\CategoryC \to \CategoryD$ is grounded then for each generating object $s$ in $\CategoryC$, $F s$ is a generating object in $\CategoryD$. By definition, so is $\Multiplicity{} F s$, straightforwardly proving the claim.
\end{proof}
As expected, $\UnFoldSym$ acts by forgetting information, effectively delinearizing mappings of strings into homomorphisms of multisets.
\begin{figure}[!h]
\centering
\scalebox{0.6}{
\begin{tikzpicture}
\node (c) {$\xleftarrow{\UnFoldSym}$};
\node (1a) [above left = 1cm and 1cm of c]{$N$};
\node (2a) [above right = 1cm and 1cm of c] {$\CategoryC$};
\node (1b) [below = 2.7cm of 1a] {$M$};
\node (2b) [below = 2.7cm of 2a] {$\CategoryD$};
\draw[->, bend left] (1a) to node[midway, above] {$\OFoldSym$} (2a);
\draw[->, bend left] (1b) to node[midway, above] {$\OFoldSym$} (2b);
\draw[<-, bend right] (1a) to node[midway, above] {$\UnFoldSym$} (2a);
\draw[<-, bend right] (1b) to node[midway, above] {$\UnFoldSym$} (2b);
\draw[->] (1a) to node[midway, left] {$f$} (1b);
\draw[->] (2a) to node[midway, right] {$F$} (2b);
\end{tikzpicture}
}
\caption{The mappings between nets and FSSMCs.}\label{fig: Petri-FSSMC mapping}
\end{figure}
\noindent
Putting everything together, the situation is as in Figure~\ref{fig: Petri-FSSMC mapping}. There is a way to go \emph{back and forth} from nets to FSSMCs, and a way to functorially map FSSMC morphisms to net morphisms. This suggests the following course of action: In our implementation, we \emph{always} manipulate FSSMCs. The user can nevertheless draw Petri nets, which are automatically mapped to their corresponding FSSMC through $\OFoldSym$. When the user wants to fetch information regarding some Petri net $N$, the corresponding FSSMC $\OFoldSym N$ is retrieved instead, and $\UnFoldSym \OFoldSym N$ is displayed.
Moreover, the user can specify a morphism between nets $f: N \to M$ by selecting how places and transitions are to be mapped. By applying Proposition~\ref{def: free functor on morphisms}, we can lift this information (\emph{non-functorially!}) to a functor $\OFoldSym N \to \OFoldSym M$ which maps $t_\CategoryC$ to $\sigma;t_\CategoryD;\sigma'$ where $\TrM{f}(t_\CategoryC) = t_\CategoryD$ and $\sigma, \sigma'$ are swap-free.
Notice that a transition-preserving functor $F: \CategoryC \to \CategoryD$ can be fully specified by providing a mapping between objects, a mapping between the set of generators of $\CategoryC$ and $\CategoryD$ and, for each $t_\CategoryC$ generator of $\CategoryC$, a couple $(\sigma, \sigma')$ of suitable symmetries in $\CategoryD$. The strategy highlighted in the paragraph above provides a way to automatically infer suitable couples $(\sigma, \sigma')$ for each generator $t_\CategoryC$ by providing a net morphism. \emph{In our setting, this can be seen as analogous to type inference}. Having obtained this list, the user can then further tweak each couple to suit particular needs. All in all, the user \emph{never} has to specify a morphism of nets, and instead works with FSSMCs from the start. However, morphisms between nets \emph{can} be used to automatically provide a template from which a morphism between their corresponding FSSMCs is refined and specified.
\subsection{Implementation advantages}
Notice how the strategy summarized above respects all requirements of Definition~\ref{def: requirements}:
\begin{itemize}
\item All the categorical structures involved are free. Moreover, we are able to employ the definition of Petri net and FSSMC naively, with no additional structure curbing the expressiveness of both paradigms;
\item Mapping FSSMCs to Petri nets is as computationally feasible as just implementing FSSMCs, since nearly all the operations between nets and FSSMCs amount to forgetting some information;
\item FSSMCs represent net computations in a meaningful way: Each morphism in the FSSMC can be mapped to a sequence of transition firings in the corresponding net, and for each sequence of transition firings there is at least one corresponding morphism in the FSSMC. Moreover, each FSSMC can be functorially mapped to any other symmetric monoidal category, providing the functorial mapping to a semantics as we wanted;
\item Our mapping is useful for intuition, since we are able to naively leverage the graphical calculus of Petri nets on one hand, and string diagrams to represent net executions on the other;
\item Nets can be morphed into each other. Moreover, the user is able to tweak the morphism between corresponding net histories directly by specifying a transition-preserving functor between executions.
\end{itemize}
Another great advantage of using FSSMCs directly is that \emph{manipulating strings is way easier than manipulating multisets in a development environment}. This is because countless tools and data structures, such as lists, have been developed to deal with strings over the years, mainly to perform operations on text. To see the extent to which this is true, consider the method to parse the information defining a net from a string shown in Figure~\ref{fig: string to net}:
\begin{figure}[!h]
\centering
\includegraphics[height=100px]{img/StringToNet}
\caption{A way to convert a string to a Petri net, and vice-versa.}\label{fig: string to net}
\end{figure}
\noindent
We start with a list of natural numbers, where each number uniquely labels a place in the net. The number $0$ is special, and is used to split the list into sublists. Starting from the left, sublists are paired in couples, each couple defining input and output places of a transition. As the figure shows, this is enough information to construct a Petri net.
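Here is a sketch of this parsing step in our running Haskell notation (the concrete example list in the comment is ours, chosen only to illustrate the convention, and need not match the one in the figure):
\begin{verbatim}
-- Split a list of place labels at each occurrence of the
-- reserved separator 0.
splitOnZero :: [Int] -> [[Int]]
splitOnZero xs = case break (== 0) xs of
  (chunk, [])       -> [chunk]
  (chunk, _ : rest) -> chunk : splitOnZero rest

-- Pair consecutive sublists into (inputs, outputs) couples,
-- one couple per transition.
pairUp :: [[Int]] -> [([Int], [Int])]
pairUp (u : v : rest) = (u, v) : pairUp rest
pairUp _              = []

parse :: [Int] -> [([Int], [Int])]
parse = pairUp . splitOnZero

-- parse [1,2,0,3,0,3,0,4,5] == [([1,2],[3]), ([3],[4,5])]
\end{verbatim}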
Such a procedure is fairly straightforward, but we might notice how such a list could also be used to define the symmetric monoidal category corresponding to the net. Lists are in fact naturally ordered, and each sublist in a pair can be used to define the source and target of a generating morphism: All in all, programming languages \emph{do prefer} working with FSSMCs, which is where many of the computational advantages of our strategy come from.
\subsection{Status of the implementation}
Having focused on implementation throughout this document, we should address whether such an implementation actually exists. Happily, this is indeed the case: Quite recently, we released \idrisct~\cite{StateboxTeamb}, an Idris library that provides formally verified definitions of categorical concepts such as category, functor, and natural transformation. In particular, with \idrisct it is already possible to build symmetric monoidal categories by specifying a list of generators in the very same format shown in Figure~\ref{fig: string to net}.
The functorial mapping from $\FSSMC$ to $\Petri$ has been worked out on a separate repository, which will be made public soon. The user interface code for visualizing nets and their executions in the frontend is also being worked on. The most challenging part of that is interconnecting the Idris code developed in \idrisct with more flexible languages, such as Purescript~\cite{PureScript}, used in the frontend and middleware.
Another direction of development consists in functorially linking \idrisct with \texttt{typedefs}~\cite{StateboxTeama}, a language-agnostic type construction language based on polynomials. Such a functor will enable the mapping of net executions to computations, thus fully realising Requirement~\ref{item: mapping is faithful and full}.
\section{Conclusion and future work}
In this paper, we reviewed some of the approaches pursued in linking Petri nets with free strict symmetric monoidal categories, traditionally interpreted as their categories of computations. After fixing a list of requirements, we argued that chasing an adjunction between the two may not be the most fruitful strategy if one's goal is to implement such a correspondence in code. As an alternative, we showed how a functor from free strict symmetric monoidal categories to Petri nets is enough to obtain a conceptually meaningful implementation that fully satisfies our requirements.
We consider this important since such an implementation has immediate practical implications, in that it enables the direct use of Petri nets in software development.
Future work will be focused mainly on fully implementing the strategy highlighted in this paper. On the purely categorical side, we would like to investigate whether the results recently published in~\cite{Balco2019} have any impact in representing Petri net computations.
\printbibliography
\newpage
|
1,116,691,499,166 | arxiv | \section{Introduction}\label{sec:intro}
In cryptographic protocols such as key distribution and randomness expansion, it is often possible to guarantee that an adversary's knowledge about the secret $N$ held by honest players is bounded. The relevant quantity in many settings is the adversary's guessing probability of the secret $N$ given all his knowledge. However, the objective is usually not to create a secret that is only partly private but rather to create a (possibly smaller) secret that is almost perfectly private. The process of transforming a partly private string $N$ into a string $M$ that is almost uniformly random from the adversary's point of view is called privacy amplification \cite{BBR88, BBCM95}. In order to perform privacy amplification, we apply to $N$ a function chosen at random from a set of functions $\{ f_s \}$ that has the property of being a randomness extractor. Randomness extractors are by now a standard tool used in many classical and quantum protocols. They are for example an essential ingredient in quantum key distribution and device independent randomness expansion protocols~\cite{Renner05,Vazirani12}. For such applications, it has been only relatively recently realized~\cite{Renner05} that it is crucial to explicitly consider quantum adversaries. It is by no means obvious that a quantum adversary also satisfying the guessing probability constraint on $N$ would not be able to have additional knowledge about the output $M$. In fact, as explained below, we know of an extractor construction that becomes useless against quantum adversaries~\cite{Gavinsky07}.
We believe that in the same way as communication complexity and Bell inequalities (multi prover games), the setting of randomness extractors provides a beautiful framework for studying the power and limitations of a quantum memory compared to a classical one.
Here we argue that the theory of operator spaces, sometimes also called ``quantized functional analysis'', provides a natural arena for studying this question. This theory has already been successfully applied in the context of understanding Bell inequality violations, see~\cite{Junge10,JP11} and references therein.
This document is structured as follows. In the next two subsections we define (quantum-proof) randomness extractors (Section~\ref{sec:intro_extractor}) and condensers (Section~\ref{sec:condenser_intro}), and give a summary of the known results that are relevant to our discussion. In Section~\ref{sec:overview} we present an overview of our results (leaving out all the proofs). This is then followed by some open questions stated in Section~\ref{sec:conclusions}. For the main body of the paper, we start with basic preliminaries on the theory of normed spaces and operator spaces (Section~\ref{sec:preliminaries}). In Section~\ref{sec:extractors} we prove our results about quantum-proof extractors. The last section is devoted to the proofs of our results concerning condensers (Section~\ref{sec:condenser}). Some technical arguments are deferred to the Appendices (Appendix~\ref{app:missing}--\ref{app:intersection}).
\subsection{Randomness extractors}\label{sec:intro_extractor}
Extractors map a weakly random system into (almost) uniform random bits, with the help of perfectly random bits called the seed. We use $N=2^{n}$ to denote the input system (consisting of strings of $n$ bits), $M=2^{m}$ (bit-strings of length $m$) to denote the output system, and $D=2^{d}$ ($d$ bits) to denote the seed system. Note that in a slight abuse of notation, we use the same letters for the actual system as well as its dimension as a linear vector space. An extractor is then a family of functions $\{f_1, \dots, f_{D}\}$ with $f_s : N \to M$ satisfying the following property. For any random variable on $N$ with
\begin{align}
H_{\min}(N):=-\log p_{\mathrm{guess}}(N)\geq k\ ,
\end{align}
where $p_{\mathrm{guess}}(N)$ denotes the maximal probability of guessing the input, and an independent and uniform seed $U$, the random variable $f_U(N)$ has a distribution which is $\varepsilon$-close in total variation distance to the uniform distribution $\upsilon_M$ on $M$. For us, it is more convenient to state the definition in terms of probability distributions. For this we associate to the functions $f_s$ an $M\times N$ matrix $F_s$ where the entry $(y,x)$ is equal to one if $f_s(x) = y$ and zero otherwise. With this notation, for any probability distribution $P_{N}$ of a random variable on $N$, $F_s(P_{N})$ is the distribution of $f_s(N)$. That is, a $(k,\varepsilon)$-extractor satisfies the following property. For all input probability distributions $P_{N}$ with $H_{\min}(N)_{P}\geq k$,
\begin{align}\label{eq:intro_weakmin}
\big\|\frac{1}{D}\cdot\sum_{s=1}^{D}F_s(P_N)-\upsilon_{M}\big\|_{\ell_1}\leq\varepsilon\ .
\end{align}
Extractors satisfying this definition are also referred to as weak extractors. An important special case of extractors are strong extractors~\cite{Nissan96}, for which the seed is part of the output, i.e., the output space has the form $M = D \times M'$ and $f_s(x)=(s,f'_s(x))$ for some function $f'_s:N\to M'$ (with $M'=2^{m'}$). This means that the invested randomness, the seed $D$, is not used up and can safely be reused later. The condition~\eqref{eq:intro_weakmin} then reads
\begin{align}\label{eq:intro_strongmin}
\frac{1}{D}\cdot\sum_{s=1}^{D}\big\|F_s'(P_N)-\upsilon_{M'}\big\|_{\ell_1}\leq\varepsilon\ .
\end{align}
Such objects are needed for privacy amplification, since the eavesdropper is allowed to know which function is applied.
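As an executable restatement of condition~\eqref{eq:intro_weakmin} -- a brute-force sketch of our own in Haskell, feasible only for toy sizes -- note that the left-hand side of~\eqref{eq:intro_weakmin} is convex in $P_N$, and the extreme points of the set of distributions with $H_{\min}(N)_{P}\geq k$ are (for $2^{k}$ integer) the flat distributions on subsets of size $2^{k}$; it therefore suffices to evaluate the distance on flat sources:
\begin{verbatim}
-- l1 distance between the seed-averaged output distribution
-- and uniform, for an input source flat on a given support.
extractorError
  :: Int            -- output alphabet size M
  -> [Int -> Int]   -- the family { f_s : N -> M }
  -> [Int]          -- support of the flat input source
  -> Double
extractorError m fs support =
  sum [ abs (p y - 1 / fromIntegral m) | y <- [0 .. m - 1] ]
  where
    d   = fromIntegral (length fs)
    sz  = fromIntegral (length support)
    p y = fromIntegral
            (length [ () | f <- fs, x <- support, f x == y ])
          / (d * sz)
\end{verbatim}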
We now briefly discuss the parameters for which strong extractors exist. Typically we want to maximize the output length $m$ and minimize the seed length $d$. Radhakrishnan and Ta-Shma~\cite{Radhakrishnan00} show that every strong $(k,\varepsilon)$-extractor necessarily has
\begin{align}\label{eq:extractor_converse}
m\leq k-2\log(1/\varepsilon)+O(1)\quad\mathrm{and}\quad d\geq\log(n-k)+2\log(1/\varepsilon)-O(1)\ .
\end{align}
Using the probabilistic method one can show that random functions achieve these bounds up to constants~\cite{Sipser88,Radhakrishnan00}. There exists a strong $(k,\varepsilon)$-extractor with
\begin{align}\label{eq:extractor_prob}
m=k-2\log(1/\varepsilon)-O(1)\quad\mathrm{and}\quad d=\log(n-k)+2\log(1/\varepsilon)+O(1)\ .
\end{align}
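To give a feel for these numbers, consider the illustrative instantiation (the numbers are ours)
\begin{align*}
n=10^{6}\ ,\quad k=n/2\ ,\quad\varepsilon=2^{-40}\quad\Longrightarrow\quad m=k-80-O(1)\ ,\quad d=\log(n-k)+80+O(1)\approx 99\ ,
\end{align*}
that is, roughly a hundred uniform seed bits suffice to extract close to half a million bits that are $2^{-40}$-close to uniform.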
Probabilistic constructions are interesting, but for applications we usually need explicit extractors. Starting with the work by Nisan and Ta-Shma~\cite{Nisan:1999kn} and followed by Trevisan's breakthrough result~\cite{Trevisan99} there has been a lot of progress in this direction, and there are now many constructions that almost achieve the converse bounds in~\eqref{eq:extractor_converse} (see the review articles~\cite{Shaltiel02,Vadhan07}).
For applications in classical and quantum cryptography (see, e.g., \cite{Renner05,Koenig12}) and for constructing device independent randomness amplification and expansion schemes (see, e.g., \cite{Chung14,Miller14,Brandao13,Coudron13}) it is important to find out if extractor constructions also work when the input source is correlated to another (possibly quantum) system $Q$. That is, we would like that for all classical-quantum input density matrices $\rho_{QN}$ with conditional min-entropy
\begin{align}
H_{\min}(N|Q)_{\rho}:=-\log p_{\mathrm{guess}}(N|Q)_{\rho}\geq k\ ,
\end{align}
where $p_{\mathrm{guess}}(N|Q)$ denotes the maximal probability of guessing $N$ given $Q$, the output is uniform and independent of $Q$,\footnote{Other notions for weaker quantum adversaries have also been discussed in the literature, e.g., in the bounded storage model (see~\cite[Section 1]{De12} for a detailed overview).}
\begin{align}\label{eq:q_extractor}
\big\|\frac{1}{D}\sum_{s=1}^{D} (\id_Q \otimes F_s)(\rho_{QN})-\rho_{Q}\otimes\upsilon_{M}\big\|_{1}\leq\varepsilon\ ,
\end{align}
and $\|\cdot\|_{1}$ denotes the trace norm (the quantum extension of the $\ell_1$-norm). Similarly, the corresponding condition for quantum-proof strong extractors reads
\begin{align}\label{eq:qs_extractor}
\frac{1}{D}\sum_{s=1}^{D} \big\|(\id_Q \otimes F'_s)(\rho_{QN})-\rho_{Q}\otimes\upsilon_{M'}\big\|_{1}\leq\varepsilon\ .
\end{align}
K\"onig and Terhal~\cite[Proposition 1]{Koenig08} observed that if we restrict the system $Q$ to be classical with respect to some basis $\{\ket{e}\}_{e\in Q}$ then every $(k,\varepsilon)$-extractor as in~\eqref{eq:intro_weakmin} is also a $\big(k+\log(1/\varepsilon),2\varepsilon\big)$-extractor in the sense of~\eqref{eq:q_extractor} (and the analogue statement for strong extractors is a special case of this). That is, even when the input source is correlated to a classical system $Q$, every extractor construction still works (nearly) equally well for extracting randomness. However, if $Q$ is quantum no such generic reduction is known and extractor constructions that also work for quantum $Q$ are called quantum-proof.\footnote{Note that the dimension of $Q$ is unbounded and that it is a priori unclear if there exist any extractor constructions that are quantum-proof (even with arbitrarily worse parameters). Furthermore, and in contrast to some claims in the literature, we believe that the question to what extent extractors are quantum-proof is already interesting for weak extractors. In particular, if weak extractors were perfectly quantum-proof then strong extractors would be perfectly quantum-proof as well (and we know that the latter is wrong~\cite{Gavinsky07}).} Examples of (approximately) quantum-proof extractors include:
\begin{itemize}
\item Spectral $(k,\varepsilon)$-extractors are quantum-proof $(k,2\sqrt{\varepsilon})$-extractors~\cite[Theorem 4]{Berta14}. This includes in particular two-universal hashing~\cite{Renner05,Tomamichel11}, two-wise independent permutations~\cite{Szehr11}, as well as sample and hash based constructions~\cite{Koenig11}.
\item One bit output strong $(k,\varepsilon)$-extractors are quantum-proof strong $(k+\log(1/\varepsilon),3\sqrt{\varepsilon})$-extractors~\cite[Theorem 1]{Koenig08}.
\item Strong $(k,\varepsilon)$-extractors constructed along Trevisan's ideas~\cite{Trevisan99} are quantum-proof strong $\big(k+\log(1/\varepsilon),3\sqrt{\varepsilon}\big)$-extractors~\cite[Theorem 4.6]{De12} (see also~\cite{Ben-Aroya12}).
\end{itemize}
We emphasize that all these stability results are specifically tailored proofs that make use of the structure of the particular extractor constructions. In contrast to these findings it was shown by Gavinsky {\it et al.}~\cite[Theorem 1]{Gavinsky07} that there exists a valid (though contrived) strong extractor for which the decrease in the quality of the output randomness has to be at least $\varepsilon\mapsto \Omega(m'\varepsilon)$.\footnote{Since the quality of the output randomness of Gavinsky {\it et al.}'s construction is bad to start with, the decrease $\varepsilon\mapsto \Omega(m'\varepsilon)$ for quantum $Q$ already makes the extractor fail completely in this case.} As put forward by Ta-Shma~\cite[Slide 84]{TaShma13}, this raises the question of whether the separation found by Gavinsky {\it et al.} is maximal, that is:
\begin{center}
\begin{minipage}{0.92\textwidth}
{\it Is every $(k,\varepsilon)$-extractor a quantum-proof $\big(O(k+\log(1/\varepsilon)),O(m\sqrt{\varepsilon})\big)$-extractor or does there exist an extractor that is not quantum-proof with a large separation, say $\varepsilon\mapsto (2^{m}\varepsilon)^{\Omega(1)}$?}
\end{minipage}
\end{center}
We note that such a stability result would make every extractor with reasonable parameters (approximately) quantum-proof.
\subsection{Randomness condensers}\label{sec:condenser_intro}
In the general theory of pseudorandomness, one interesting generalization of extractors is given by condensers. These objects were first defined in~\cite{RR99,RSW00} as an intermediate step in order to build extractors. For condensers the output is not necessarily close to the uniform distribution (as is the case for extractors), but only close to a distribution with high min-entropy $k'$. More precisely, a (weak) $(k\to_{\varepsilon}k')$-condenser is a family of functions $\{f_1, \dots, f_{D}\}$ with $f_s:N\to M$ such that for all input probability distributions $P_{N}$ with $H_{\min}(N)_{P}\geq k$
\begin{align}\label{eq:w_condenser}
H_{\min}^{\varepsilon}(M)_{\frac{1}{D}\sum_{s=1}^{D}F_{s}(P)}\geq k'\ ,
\end{align}
with the smooth min-entropy
\begin{align}
H_{\min}^{\varepsilon}(N)_{P}:=\sup_{R}H_{\min}(N)_{R}\ ,
\end{align}
and the supremum over all probability distributions $R_{N}$ such that $\|R_{N}-P_{N}\|_{\ell_1}\leq\varepsilon$. Observe that when $k'=\log M$, this is exactly the condition for being a (weak) $(k,\varepsilon)$-extractor. The reason this definition is called a condenser is that we want constructions with $M < N$, so that the entropy is condensed into a smaller probability space. For the special case of strong condensers the output space has the form $M = D \times M'$ and $f_s(x)=(s,f'_s(x))$ for some function $f'_s:N\to M'$, and the condition~\eqref{eq:w_condenser} then reads
\begin{align}
H_{\min}^{\varepsilon}(M'D)_{\frac{1}{D}\sum_{s=1}^{D}F'_{s}(P)\otimes\proj{s}}\geq k'\ .
\end{align}
A (weak) quantum-proof condenser is defined as follows. For all classical-quantum input density matrices $\rho_{QN}$ with conditional min-entropy $H_{\min}(N|Q)_{\rho}\geq k$, the output should be close to a distribution with high conditional min-entropy $k'$
\begin{align}\label{eq:qcondenser}
H_{\min}^{\varepsilon}(M|Q)_{\frac{1}{D}\sum_{s=1}^{D}(\id\otimes F_{s})(\rho)}\geq k'\ ,
\end{align}
with the smooth conditional min-entropy
\begin{align}
H_{\min}^{\varepsilon}(N|Q)_{\rho}:=\sup_{\sigma}H_{\min}(N|Q)_{\sigma}\ ,
\end{align}
and the supremum over all classical-quantum density matrices $\sigma_{QN}$ such that $\|\sigma_{QN}-\rho_{QN}\|_{1}\leq\varepsilon$. Similarly, the corresponding condition for quantum-proof strong condensers reads
\begin{align}
H_{\min}^{\varepsilon}(M'D|Q)_{\frac{1}{D}\sum_{s=1}^{D}(\id\otimes F'_{s})(\rho)\otimes\proj{s}}\geq k'\ .
\end{align}
We note that the works~\cite{Koenig11,Dupuis13,Wullschleger14} can be understood as results about quantum-proof condensers. One reason why we would like to understand to what extent condensers are quantum-proof is that the best known explicit extractor constructions are built from condensers (see the review article~\cite{Vadhan07}).
As an extractor is a special case of a condenser with full output entropy $k' = m$, one can understand the construction of Gavinsky {\it et al.}~\cite[Theorem 1]{Gavinsky07} also as a valid randomness condenser that is not quantum-proof. But a condenser has the output entropy as an additional parameter, so it is natural to ask whether this construction is a quantum-proof condenser with slightly worse parameters, e.g., $k + c'' \to_{c \varepsilon} k' - c'$ for some constants $c, c'$ and $c''$. Note that when $c' > 0$, this does not correspond to an extractor condition anymore, as the output is only required to have large smooth min-entropy. It turns out that even if we relax the condition on the output min-entropy slightly, the condenser is still not quantum-proof. The reason is that the quantum attack given in~\cite{Gavinsky07} allows the adversary to correctly guess $\gamma$ bits of the output with a memory of size $O(\gamma \log n)$. Thus the smooth min-entropy of the output can be at most roughly $k' - \gamma$.
\section{Overview of results}\label{sec:overview}
\subsection{Extractors}\label{sec:extr_results}
A linear vector space which is equipped with a norm is called a normed space (we restrict ourselves to finite-dimensional spaces). Special examples are the linear space $\C^n$, which can be equipped with the $\ell_1$-norm, the sum of all absolute values of vector entries, or the $\ell_\infty$-norm, the largest absolute value of vector entries. Both norms are useful for studying extractors, as the first norm encodes the normalization constraint (the inputs are probability distributions), while the second is just the exponential of (minus) the min-entropy: $\|P\|_{\ell_\infty}=2^{-H_{\min}(N)_{P}}$. Linear maps between normed spaces are naturally equipped with norms, defined as the maximum norm of any output, given that the input had bounded norm. Of course, the norms on the input and the output spaces can be different. Our first result is that the extractor condition~\eqref{eq:intro_weakmin} can be rewritten as a condition on the norm of a linear map. In the expression~\eqref{eq:intro_weakmin}, observe that
\begin{align}
P_{N} \mapsto \frac{1}{D} \sum_{s=1}^{D} F_s(P_N)
\end{align}
is a linear map. In addition, as $P_N$ is a probability distribution, we can write $\nu_M = (\mathbf{1}^{T} P_N) \nu_M$, where $\mathbf{1}^T = (1, \dots ,1) \in \bR^N$ is the vector with all ones. As a result,
\begin{align}
P_{N} \mapsto \frac{1}{D} \sum_{s=1}^{D} F_s(P_N) - (\mathbf{1}^{T} P_N) \nu_M
\end{align}
is a linear function in the input distribution. We can associate to an extractor $\mathrm{Ext}=\{f_s\}_{s\in D}$ a linear map from $N$ to $M$ given by
\begin{align}\label{eq:delta_map}
\Delta[\mathrm{Ext}](P_N):=\frac{1}{D}\sum_{s=1}^{D}F_s(P_{N})- (\mathbf{1}^{T} P_{N})\nu_{M}\ .
\end{align}
Using this notation, the extractor condition can be written as follows: for all distributions $P_{N}$ with $\|P_{N}\|_{\ell_1}=1$ and $H_{\min}(N)_{P}\geq k$, $\|\Delta[\mathrm{Ext}](P)\|_{\ell_1}\leq\varepsilon$. In order to capture the input constraints on $P_{N}$ we now define for $k\in(0,\infty)$ the $\cap\{2^{k}\ell_{\infty},\ell_{1}\}$-norm as
\begin{align}
\|P_{N}\|_{\cap\{2^{k}\ell_\infty,\ell_1\}}:=\max\Big\{2^{k}\|P_{N}\|_{\ell_\infty},\|P_{N}\|_{\ell_1}\Big\}\ .
\end{align}
We then get the bounded norm
\begin{align}
\left\|\Delta[\mathrm{Ext}]:\cap\{2^{k}\ell_{\infty},\ell_1\}\to\ell_1\right\|\equiv\|\Delta[\mathrm{Ext}]\|_{\cap\{2^{k}\ell_\infty,\ell_1\}\to \ell_1}:=\sup_{\|P_{N}\|_{\cap\{2^{k}\ell_\infty,\ell_1\}}\leq1}\|\Delta[\mathrm{Ext}](P_{N})\|_{\ell_1}\ ,
\end{align}
that gives the following alternative characterization for extractors.
\begin{restatable}{proposition}{extToNorm}\label{prop:ext-to-norm}
Let $\mathrm{Ext} = \{f_s\}_{s\in D}$ and $\Delta[\mathrm{Ext}]$ as defined in~\eqref{eq:delta_map}. Then, we have
\begin{align}
\left\|\Delta[\mathrm{Ext}]:\cap\{2^{k}\ell_{\infty},\ell_1\}\to\ell_1\right\|\leq\varepsilon\quad&\Rightarrow\quad\mathrm{Ext}\,\mathrm{is}\,\mathrm{a}\,(k, \varepsilon)\mbox{-}\,\mathrm{extractor}\label{eq:extractor_1}\\
\mathrm{Ext}\,\mathrm{is}\,\mathrm{a}\,(k-1,\varepsilon)\mbox{-}\,\mathrm{extractor}\quad&\Rightarrow\quad\left\|\Delta[\mathrm{Ext}]:\cap\{2^k\ell_\infty,\ell_1\}\to\ell_1\right\|\leq3\varepsilon\ .\label{eq:extractor_2}
\end{align}
\end{restatable}
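To build some intuition for the quantities involved, the following minimal numerical sketch (not part of the formal development; all parameters are hypothetical toy choices, and the random function family \texttt{f} merely stands in for an actual extractor construction) evaluates the extractor error by brute force. Since $P\mapsto\|\Delta[\mathrm{Ext}](P)\|_{\ell_1}$ is convex, it suffices to check flat distributions, i.e., uniform distributions on subsets of size $2^{k}$:
\begin{verbatim}
# Minimal sketch with toy parameters: brute-force evaluation of the
# extractor error over flat distributions (uniform on 2^k-element sets).
# The random family f is a hypothetical stand-in, not a real construction.
import itertools
import numpy as np

n, d, m, k = 4, 2, 2, 2
N, D, M, K = 2**n, 2**d, 2**m, 2**k

rng = np.random.default_rng(0)
f = rng.integers(0, M, size=(D, N))   # f[s, x] = f_s(x)

def error_on(P):
    # l1 distance between (1/D) * sum_s F_s(P) and the uniform distribution
    out = np.zeros(M)
    for s in range(D):
        np.add.at(out, f[s], P / D)
    return np.abs(out - 1.0 / M).sum()

eps = max(error_on(np.bincount(list(S), minlength=N) / K)
          for S in itertools.combinations(range(N), K))
print("Ext is a (k, eps)-extractor with eps =", eps)
\end{verbatim}
The same routine applied to the map $\Delta[\mathrm{Ext}]_{S}$ below, i.e., keeping the seed as part of the output, evaluates the strong extractor error.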
Note that strong extractors are covered by this as a special case. To make this explicit we associate to a strong extractor $\mathrm{Ext}=\{f_s\}_{s\in D}$ the linear map
\begin{align}\label{eq:delta_mapstrong}
\Delta[\mathrm{Ext}]_{S}(P_N):=\frac{1}{D}\sum_{s=1}^{D}F_s(P_{N})\otimes\proj{s}_{D}-(\mathbf{1}^{T} P_{N})\nu_{M'}\otimes\nu_{D}\ ,
\end{align}
and the corresponding statement can then be read off by replacing $\Delta[\mathrm{Ext}]$ with $\Delta[\mathrm{Ext}]_{S}$ in Proposition~\ref{prop:ext-to-norm}.
The theory of normed spaces is often convenient in classical probability theory. However, if the systems of interest are in addition correlated with quantum systems, we have more structure available. A natural norm on quantum systems is the $\infty$-norm, the largest singular value. Hence we start by modeling a classical system as a normed space, and a quantum system as complex-valued matrices equipped with the $\infty$-norm. If we allow for correlations between the two, we have to define norms on their tensor product, fulfilling reasonable compatibility requirements. The framework of operator spaces axiomatizes such scenarios: an operator space is a normed space equipped with a sequence of norms describing possible correlations to quantum systems. If we now study linear maps between normed spaces, we can naturally consider these maps to be maps between operator spaces by letting them act trivially on the quantum part. Of course, the norms of the linear maps might change, since we now also allow for correlations to the quantum part (at the input as well as at the output). The associated norm, defined as the supremum with respect to quantum systems of any dimension, is called the completely bounded norm, or just $\mathrm{cb}$-norm.
From this discussion, it is reasonable to expect that the property of being quantum-proof can be modeled as a $\mathrm{cb}$-norm constraint. Indeed, there exist operator space structures defined on the normed spaces described in Proposition~\ref{prop:ext-to-norm} such that the corresponding completely bounded norm
\begin{align}
\left\|\Delta[\mathrm{Ext}]:\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{ 2^{k}\ell_{\infty},\ell_1\}\to\ell_1\right\|_{\mathrm{cb}}\equiv\|\Delta[\mathrm{Ext}]\|_{\mathrm{cb},\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}\to \ell_1}
\end{align}
captures the property of being quantum-proof.
\begin{restatable}{theorem}{extToNormQuantum}\label{thm:ext-to-norm-quantum}
Let $\mathrm{Ext}=\{f_s\}_{s\in D}$ and $\Delta[\mathrm{Ext}]$ as defined in~\eqref{eq:delta_map}. Then, we have
\begin{align}
\left\|\Delta[\mathrm{Ext}]:\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{ 2^{k}\ell_{\infty},\ell_1\}\to\ell_1\right\|_{\mathrm{cb}} \leq \varepsilon\quad&\Rightarrow\quad\mathrm{Ext}\,\mathrm{is}\,\mathrm{a}\,\mathrm{quantum}\mbox{-}\mathrm{proof}\,(k,2\varepsilon)\mbox{-}\,\mathrm{extractor}\label{eq:quantum_ext1}\\
\mathrm{Ext}\,\mathrm{is}\,\mathrm{a}\,\mathrm{quantum}\mbox{-}\,\mathrm{proof}\,(k-1, \varepsilon)\mbox{-}\mathrm{extractor}\quad&\Rightarrow\quad\left\|\Delta[\mathrm{Ext}]:\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^k\ell_\infty, \ell_1\}\to\ell_1\right\|_{\mathrm{cb}} \leq 8\varepsilon\ .\label{eq:quantum_ext2}
\end{align}
\end{restatable}
Again the special case of strong extractors just follows by replacing $\Delta[\mathrm{Ext}]$ with $\Delta[\mathrm{Ext}]_{S}$ in Theorem~\ref{thm:ext-to-norm-quantum}. We note that we use the notation $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^k \ell_{\infty}, \ell_1\}$ for the operator space corresponding to the norm $\cap\{2^k \ell_{\infty}, \ell_1\}$. The reason is that there is a natural operator space associated with the norm $\cap\{2^k \ell_{\infty}, \ell_1\}$ for which one would use the same name, but the norm we consider here is slightly different.
\subsubsection*{Applications}
We conclude that the ratio between the bounded norm $\cap\{2^{k}\ell_{\infty},\ell_{1}\} \to \ell_{1}$ and its completely bounded extension in Theorem~\ref{thm:ext-to-norm-quantum} can be used to quantify to what extent extractors are quantum-proof. This type of comparison is of course very well studied in the theory of operator spaces. As a first straightforward application, we can use dimension dependent bounds for the maximal possible ratios between the completely bounded norm and the bounded norm,
\begin{align}
\|\cdot\|_{\mathrm{cb},\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}\to \ell_1}\leq\sqrt{2^{m}}\|\cdot\|_{\cap\{2^{k}\ell_\infty,\ell_1\}\to \ell_1}
\end{align}
and with Proposition~\ref{prop:ext-to-norm} and Theorem~\ref{thm:ext-to-norm-quantum} we find that every $(k,\varepsilon)$-extractor is a quantum-proof $(k+1,6\sqrt{2^{m}}\varepsilon)$-extractor.\footnote{We should point out that showing a similar bound, where the quantum error is upper bounded by $2^m$ multiplied by the classical error, can be obtained with a basic argument (not making use of any operator space theory).} If $M$ is small this bound is non-trivial for weak extractors, but for strong extractors (for which the seed $D$ is part of the output $M=D\times M'$) the bound becomes useless. However, using an operator version of the Cauchy-Schwarz inequality due to Haagerup we find the following bound for strong extractors.
\begin{restatable}{theorem}{smallerrorent}\label{thm:small-error-ent}
Let $\mathrm{Ext}=\{f_s\}_{s\in D}$, and $\Delta[\mathrm{Ext}]_{S}$ as defined in~\eqref{eq:delta_mapstrong}. Then, we have
\begin{align}
\left\|\Delta[\mathrm{Ext}]_{S}:\cap\{2^{k}\ell_{\infty},\ell_1\}\to\ell_1\right\|\leq\varepsilon\,\Rightarrow\,\left\|\Delta[\mathrm{Ext}]_{S}:\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k+\log(1/\varepsilon)}\ell_{\infty},\ell_1\}\to\ell_1\right\|_{\mathrm{cb}}\leq4\sqrt{2^{m'}}\sqrt{2\varepsilon}\ .
\end{align}
\end{restatable}
We get by Proposition~\ref{prop:ext-to-norm} that every strong $(k,\varepsilon)$-extractor is a quantum-proof strong $(k+\log(2/\varepsilon),12\sqrt{2^{m'}}\sqrt{2\varepsilon})$-extractor. This extends the result of K\"onig and Terhal for $m'=1$~\cite[Theorem 1]{Koenig08} to arbitrary output sizes. The bound is useful as long as the output size is small, but of course does not match Ta-Shma's conjecture, which asks for an error dependence of $O(m'\sqrt{\varepsilon})$.
We can also analyze the simpler bounded norm $2^{k}\ell_{\infty}\to\ell_{1}$ instead of $\cap\{2^{k}\ell_{\infty},\ell_{1}\}\to \ell_{1}$. Grothendieck's inequality~\cite[Corollary 14.2]{Pis12} then shows that the ratio between the bounded norm $2^{k}\ell_{\infty}\to\ell_{1}$ and its completely bounded extension is at most Grothendieck's constant $K_{G}<1.8$:
\begin{align}
\|\cdot\|_{\mathrm{cb},2^{k}\ell_{\infty}\to\ell_{1}}\leq K_{G}\|\cdot\|_{2^{k}\ell_{\infty}\to\ell_{1}}\ .
\end{align}
This gives the following bound.\footnote{Interestingly this also implies that extractors for non-normalized inputs are automatically quantum-proof (e.g., spectral extractors~\cite{Berta14}), and hence that the $\ell_{1}$-normalization condition on the input is crucial for studying to what extent extractors are quantum-proof.}
\begin{restatable}{theorem}{highMinEntExt}\label{thm:high-min-ent-ext}
Let $\mathrm{Ext} = \{f_s\}_{s\in D}$ and $\Delta[\mathrm{Ext}]$ as defined in~\eqref{eq:delta_map}. Then, we have
\begin{align}
\left\|\Delta[\mathrm{Ext}]:\cap\{2^{k}\ell_{\infty},\ell_1\}\to\ell_1\right\|\leq\varepsilon\,\Rightarrow\,\left\|\Delta[\mathrm{Ext}]:\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_{\infty},\ell_1\}\to\ell_1\right\|_{\mathrm{cb}}\leq K_{G}2^{n-k}\varepsilon\ .
\end{align}
\end{restatable}
Hence, we get by Proposition~\ref{prop:ext-to-norm} and Theorem~\ref{thm:high-min-ent-ext} that every $(k,\varepsilon)$-extractor is a quantum-proof $(k+1,6K_{G}2^{n-k}\varepsilon)$-extractor. This applies to weak and strong extractors equally since the statement is independent of the output size. So in particular, if the input min-entropy is very high, e.g., $k\geq n-\log n$, we get a quantum-proof $(k+1,6K_{G}n\varepsilon)$-extractor. Hence, very high min-entropy extractors are always (approximately) quantum-proof.\footnote{This result tightly matches the extractor construction of Gavinsky {\it et al.}~\cite{Gavinsky07}. In fact, their construction is an extractor for $k \geq n - n^{c}$ with error $\varepsilon = n^{-c'}$ for some constants $c,c'$, and it fails to be quantum-proof even if $k=n-O(\log n)$ with constant $\varepsilon$. Theorem~\ref{thm:high-min-ent-ext} says that if the error in the classical case were slightly smaller, for example super-polynomially small, then the extractor would have been quantum-proof.} We emphasize again that Theorem~\ref{thm:small-error-ent} and Theorem~\ref{thm:high-min-ent-ext} only make use of the definition of extractors, and not of any potential structure of extractor constructions.
\begin{table}[ht]
\centering
\begin{tabular}{l|ll}
Strong $(k,\varepsilon)$-extractor & Quantum-proof & with parameters\\
\hline\hline
Probabilistic constructions & ? & \\
\hline
Spectral (e.g., two-universal hashing) & $\checkmark$~~~\cite[Thm.~4]{Berta14} & $(k,c\sqrt{\varepsilon})$\\
\hline
Trevisan based & $\checkmark$~~~\cite[Thm.~4.6]{De12} & $(k+\log(1/\varepsilon),c\sqrt{\varepsilon})$\\
\hline
One-bit output & $\checkmark$~~~\cite[Thm.~1]{Koenig08} &$(k+\log(1/\varepsilon),c\sqrt{\varepsilon})$\\
\hline
Small output & $\checkmark$~~~[Thm.~\ref{thm:small-error-ent}] & $(k+\log(2/\varepsilon),c\sqrt{2^m}\sqrt{2\varepsilon})$\\
\hline
High entropy & $\checkmark$~~~[Thm.~\ref{thm:high-min-ent-ext}] & $(k+1,c2^{n-k}\varepsilon)$\\
\hline
\end{tabular}
\caption{Stability results for strong extractors: input $N=2^{n}$, output $M=2^{m}$, seed $D=2^{d}$, min-entropy $k$, error parameter $\varepsilon$, and $c$ represents possibly different constants.
}
\label{table:stability}
\end{table}
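To make the spectral row of Table~\ref{table:stability} concrete, here is a minimal sketch (hypothetical toy parameters, and Monte-Carlo sampling over a subset of seeds rather than the full seed set) of a two-universal hashing construction: the family $h_{A}(x)=Ax$ over $\mathrm{GF}(2)$, with the seed given by a binary $m\times n$ matrix $A$. The leftover hash lemma bounds the error on any min-entropy-$k$ source by $\frac{1}{2}\sqrt{2^{m-k}}$:
\begin{verbatim}
# Minimal sketch with toy parameters: two-universal hashing h_A(x) = A x
# over GF(2), with the strong-extractor error on a flat source estimated
# by Monte-Carlo sampling of seeds A (the full seed set has size 2^(m*n)).
import numpy as np

n, m, k = 6, 2, 4
N, M, K = 2**n, 2**m, 2**k
rng = np.random.default_rng(0)

xs_bits = np.array([[(x >> i) & 1 for i in range(n)] for x in range(N)])
source = rng.choice(N, size=K, replace=False)  # a flat source on 2^k points

num_seeds, total = 500, 0.0
for _ in range(num_seeds):
    A = rng.integers(0, 2, size=(m, n))
    hashed = (xs_bits[source].dot(A.T) % 2).dot(1 << np.arange(m))
    out = np.bincount(hashed, minlength=M) / K
    total += np.abs(out - 1.0 / M).sum()   # l1 distance to uniform

print("estimated error:", total / num_seeds)
print("leftover hash bound:", 0.5 * np.sqrt(2.0 ** (m - k)))
\end{verbatim}
Here the strong extractor error is the average over seeds of the per-seed $\ell_{1}$ distance to uniform, which equals the $\ell_{1}$ distance of the joint output-seed distribution to $\upsilon_{M'}\otimes\upsilon_{D}$.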
\subsection{Condensers}\label{sec:cond_results}
The framework of normed spaces and operator spaces also allows us to analyze to what extent condensers are quantum-proof. As for extractors, we associate to a condenser $\mathrm{Con}=\{f_{s}\}_{s\in D}$ a linear map from $N$ to $M$ given by
\begin{align}\label{eq:cond_map}
[\mathrm{Con}](P_{N}):=\frac{1}{D}\sum_{s=1}^{D}F_{s}(P_{N})\ .
\end{align}
The input constraint for condensers is the same as for extractors, and in order to characterize the output constraint~\eqref{eq:qcondenser} we define the norm
\begin{align}
\|P_{N}\|_{\Sigma\{2^{k'}\ell_\infty,\ell_1\}}:=\inf\left\{2^{k'}\|Q_{1}\|_{\ell_\infty}+\|Q_{2}\|_{\ell_1}:Q_{1}+Q_{2}=P_{N}\right\}\ .
\end{align}
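As an illustration of how this norm captures the output condition~\eqref{eq:qcondenser}, suppose that a probability distribution $P_{M}$ is $\varepsilon$-close in $\ell_{1}$-norm to a distribution $P'_{M}$ with $H_{\min}(M)_{P'}\geq k'$. Writing $P_{M}=P'_{M}+(P_{M}-P'_{M})$ then gives
\begin{align}
\|P_{M}\|_{\Sigma\{2^{k'}\ell_\infty,\ell_1\}}\leq2^{k'}\|P'_{M}\|_{\ell_\infty}+\|P_{M}-P'_{M}\|_{\ell_1}\leq1+\varepsilon\ .
\end{align}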
The bounded norm
\begin{align}
\|[\mathrm{Con}]:\cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma\{2^{k'}\ell_\infty,\ell_1\} \|\equiv\|[\mathrm{Con}]\|_{\cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma\{2^{k'}\ell_\infty,\ell_1\}}:=\sup_{\|P_{N}\|_{\cap\{2^{k}\ell_\infty,\ell_1\}}\leq1}\|[\mathrm{Con}](P_{N})\|_{\Sigma\{2^{k'}\ell_\infty,\ell_1\}}
\end{align}
then gives the norm characterization for condensers stated in the following proposition.
\begin{restatable}{proposition}{condToNorm}\label{prop:cond-to-norm}
Let $\mathrm{Con}=\{f_{s}\}_{s\in D}$ and $[\mathrm{Con}]$ as defined in~\eqref{eq:cond_map}. Then, we have
\begin{align}
\|[\mathrm{Con}]:\cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma\{2^{k'}\ell_\infty,\ell_1\} \|\leq\varepsilon\quad&\Rightarrow\quad\mathrm{Con}\,\mathrm{is}\,\mathrm{a}\,\left(k\to_{\varepsilon}k'+\log(1/\varepsilon)\right)\mbox{-}\,\mathrm{condenser}\label{eq:condenser_1}\\
\mathrm{Con}\,\mathrm{is}\,\mathrm{a}\,\left(k-1\to_{\varepsilon}k'+\log(1/\varepsilon)\right)\mbox{-}\,\mathrm{condenser}\quad&\Rightarrow\quad\|[\mathrm{Con}]:\cap\{2^{k}\ell_\infty,\ell_1\} \to \Sigma\{2^{k'}\ell_\infty, \ell_1\} \| \leq 6\varepsilon\ .\label{eq:condenser_2}
\end{align}
\end{restatable}
Note that strong condensers are covered by this as a special case. To make this explicit we associate to a strong condenser $\mathrm{Con}=\{f_s\}_{s\in D}$ the linear map
\begin{align}\label{eq:cond_strongmap}
[\mathrm{Con}]_{S}(P_{N}):=\frac{1}{D}\sum_{s=1}^{D}F_{s}(P_{N})\otimes\proj{s}_{D}\ ,
\end{align}
and the corresponding statement can then be read off by replacing $[\mathrm{Con}]$ with $[\mathrm{Con}]_{S}$ in Proposition~\ref{prop:cond-to-norm}. Again, there exist operator space structures defined on the normed spaces described in Proposition~\ref{prop:cond-to-norm} such that the corresponding completely bounded norm
\begin{align}
\norm{[\mathrm{Con}]:\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma\{2^{k'}\ell_\infty,\ell_1\}}_\mathrm{cb}\equiv\|[\mathrm{Con}]\|_{\mathrm{cb},\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma\{2^{k'}\ell_\infty,\ell_1\}}
\end{align}
captures the property of being quantum-proof.
\begin{restatable}{theorem}{condToNormQuantum}\label{thm:cond-to-norm-quantum}
Let $\mathrm{Con}=\{f_{s}\}_{s\in D}$ and $[\mathrm{Con}]$ as defined in~\eqref{eq:cond_map}. Then, we have
\begin{align}
\norm{[\mathrm{Con}]:\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma\{2^{k'}\ell_\infty,\ell_1\}}_\mathrm{cb}\leq\frac{\varepsilon}{4}\quad\Rightarrow\quad\mathrm{Con}\,\mathrm{is}\,\mathrm{a}\,\mathrm{quantum}\mbox{-}\mathrm{proof}\left(k\to_{\varepsilon}k'+\log(1/\varepsilon)\right)\mbox{-}\,\mathrm{condenser}\ ,\label{eq:quantumcondenser1}
\end{align}
\begin{align}
\mathrm{Con}\,\mathrm{is}\,\mathrm{a}\,\mathrm{quantum}\mbox{-}\mathrm{proof}\,(k-1\to_{\varepsilon}k'+\log(1/\varepsilon))\mbox{-}\,\mathrm{condenser}\Rightarrow\quad\norm{[\mathrm{Con}]:\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma\{2^{k'}\ell_\infty,\ell_1\}}_\mathrm{cb}\leq8\varepsilon\ .\label{eq:quantumcondenser2}
\end{align}
\end{restatable}
The special case of strong condensers again follows by replacing $[\mathrm{Con}]$ with $[\mathrm{Con}]_{S}$ in Theorem~\ref{thm:cond-to-norm-quantum}. Note that even though an extractor is just a condenser with full output entropy $k'=m$, our norm characterization for condensers cannot directly be used to characterize extractors, because there is a loss of $\log(1/\varepsilon)$ in the output entropy $k'$ (see Proposition~\ref{prop:cond-to-norm} and Theorem~\ref{thm:cond-to-norm-quantum}). For this reason we gave a separate norm characterization for extractors in Section~\ref{sec:extr_results}.
\subsubsection*{Applications}
It is known that condensers are closely related to graph-theoretic problems~\cite{Umans07}, and here we make exactly such a connection (one that differs from the previously studied connections). Using the bounded norm characterization in Proposition~\ref{prop:cond-to-norm}, we can show that evaluating the performance of a condenser corresponds to an instance of a well-studied combinatorial problem, called the bipartite densest subgraph problem. For this we think of condensers as bipartite graphs $G=(N,M,V\subset N \times M,D)$ with left index set $N$, right index set $M$, edge set $V$ of left degree $D$, and neighbor function $\Gamma: N \times D \to M$. The identification with the usual definition of condensers $\mathrm{Con}=\{f_{s}\}_{s\in D}$ is simply by setting
\begin{align}\label{eq:neighbor_graph}
\Gamma(\cdot,s):=f_{s}(\cdot)
\end{align}
for every value of the seed $s\in D$.
\begin{restatable}{proposition}{bipartitegraph}\label{prop:bipartite_graph}
Let $\mathrm{Con}=\{f_{s}\}_{s\in D}$, $[\mathrm{Con}]$ as defined in~\eqref{eq:cond_map}, $G=(N,M,V,D)$ be the bipartite graph as defined in~\eqref{eq:neighbor_graph}, and $\mathrm{Dense}(G,2^{k},2^{k'})$ be the optimal value of the quadratic program
\begin{center}
\begin{minipage}[t]{0.90\textwidth}
\begin{align}
\mathrm{Dense}(G,2^{k},2^{k'}):= \text{maximize}\quad & \sum_{(x,y) \in V} f_x \, g_y \label{eq:conddensestsubgraph}\\
\text{subject to}\quad & f_x, g_y \in \{0,1\}\\
& \sum_x f_x \leq 2^{k}\\
& \sum_y g_y \leq 2^{k'}\ .
\end{align}
\end{minipage}
\end{center}
Then, we have
\begin{align}
\|[\mathrm{Con}]:\cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma\{2^{k'}\ell_\infty,\ell_1\} \|\leq\varepsilon\quad\Leftrightarrow\quad\mathrm{Dense}(G,2^{k},2^{k'})\leq2^{k+d}\varepsilon\ .
\end{align}
\end{restatable}
Hence, we try to find the subgraph of $G$ with the largest number of edges, having $2^{k}$ vertices on the left and $2^{k'}$ vertices on the right. The norm condition for being a condenser (Proposition~\ref{prop:cond-to-norm}) then just says that the size of the edge set of every such subgraph has to be bounded by $2^{k}D\varepsilon$. This is exactly an instance of the bipartite densest $(2^{k},2^{k'})$-subgraph problem. The (bipartite) densest subgraph problem is well studied in theoretical computer science, but its hardness remains elusive. However, it is known that the usual semidefinite programming (SDP) relaxations possess a large integrality gap for random graphs~\cite{FS97,Bhaskara:2012wn}. Interestingly, we can show that the densest subgraph SDP relaxations are not only an upper bound on the quadratic program~\eqref{eq:conddensestsubgraph} characterizing the bounded norm of condensers, but also on the completely bounded norm of condensers.
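For small instances the quadratic program~\eqref{eq:conddensestsubgraph} can be evaluated directly. The following minimal sketch (hypothetical toy sizes; the random neighbor function is a stand-in for a real condenser construction) brute-forces $\mathrm{Dense}(G,2^{k},2^{k'})$, counting parallel edges, one per seed value, with multiplicity:
\begin{verbatim}
# Minimal sketch with toy sizes: brute-force the bipartite densest
# (2^k, 2^k')-subgraph value of the graph induced by a condenser family
# f[s, x] = Gamma(x, s). The random family f is a hypothetical stand-in.
import itertools
import numpy as np

n, d, m, k, kp = 3, 2, 3, 2, 2
N, D, M, K, Kp = 2**n, 2**d, 2**m, 2**k, 2**kp

rng = np.random.default_rng(1)
f = rng.integers(0, M, size=(D, N))  # neighbor function Gamma(x, s) = f[s, x]

best = 0
for L in itertools.combinations(range(N), K):        # 2^k left vertices
    for R in itertools.combinations(range(M), Kp):   # 2^k' right vertices
        Rset = set(R)
        edges = sum(int(f[s, x]) in Rset for x in L for s in range(D))
        best = max(best, edges)

# Proposition: bounded norm <= eps  iff  Dense(G, 2^k, 2^k') <= 2^(k+d) * eps
print("Dense =", best, " 2^(k+d) =", K * D)
\end{verbatim}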
Seeing condensers as bipartite graphs also allows us to define a Bell inequality (two-player game) such that the classical value characterizes the condenser property and the entangled value characterizes the quantum-proof condenser property (see~\cite{Brunner14} for a review article about Bell inequalities / two-player games). Starting from the bipartite graph $G=(N,M,V,D)$ as defined in~\eqref{eq:neighbor_graph}, we use operator space techniques by Junge~\cite{Junge:2006wt} to define a two-player game $(G;2^{k},2^{k'})$ with classical value $\omega(G;2^{k},2^{k'})$ and entangled value $\omega^{*}(G;2^{k},2^{k'})$, that has the following properties.
\begin{restatable}{theorem}{condensergame}\label{thm:condensergame}
Let $\mathrm{Con}=\{f_{s}\}_{s\in D}$, $[\mathrm{Con}]$ as defined in~\eqref{eq:cond_map}, $G=(N,M,V,D)$ be the bipartite graph as defined in~\eqref{eq:neighbor_graph}, and $(G;2^{k},2^{k'})$ be the two-player game defined in Section~\ref{sec:two_player}. Then, there is a constant $c>0$ such that
\begin{align}\label{eq:condensergame_b}
\omega(G;2^{k},2^{k'})\leq2^{-k'}\cdot\|[\mathrm{Con}]:\cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma\{2^{k'}\ell_\infty,\ell_1\} \|\leq c\cdot\omega(G;2^{k},2^{k'})\ ,
\end{align}
as well as
\begin{align}\label{eq:condensergame_cb}
\omega^{*}(G;2^{k},2^{k'})\leq2^{-k'}\cdot\norm{[\mathrm{Con}]:\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma\{2^{k'}\ell_\infty,\ell_1\}}_\mathrm{cb}\leq c\cdot\omega^{*}(G;2^{k},2^{k'})\ .
\end{align}
Furthermore, the amount of entanglement used by the players corresponds to the dimension of the quantum side information system $Q$.
\end{restatable}
Theorem~\ref{thm:condensergame} shows that for every condenser construction there is a corresponding Bell inequality, and that the degree to which this inequality is violated by quantum mechanics characterizes how quantum-proof the condenser construction is (and vice versa). So in particular, fully quantum-proof condensers (which includes quantum-proof extractors) have a corresponding Bell inequality that is not violated by quantum mechanics.
\section{Open problems}\label{sec:conclusions}
We showed how the theory of operator spaces provides a useful framework for studying the behavior of randomness extractors and condensers in the presence of quantum adversaries. However, many questions remain open, and we believe that the following are particularly interesting to explore:
\begin{itemize}
\item The main question of whether there exist good classical extractors that are not quantum-proof with a large gap remains open. More precisely, we would like to understand the classical/quantum separation better by finding tighter upper bounds (as in Theorem~\ref{thm:small-error-ent} and Theorem~\ref{thm:high-min-ent-ext}) as well as tighter lower bounds (as in~\cite{Gavinsky07}) on the size of the gap.
\item Is it possible to give an upper bound on the dimension of the quantum adversary that is sufficient to consider? This is also a natural question in the norm formulation (Theorem~\ref{thm:ext-to-norm-quantum} and Theorem~\ref{thm:cond-to-norm-quantum}) and the Bell inequality formulation (Theorem~\ref{thm:condensergame}). In the first case it translates into the question of at what dimension the completely bounded norm saturates, and in the latter case into the question of how much entanglement is needed to achieve the optimal entangled value.
\item Given our new connection to Bell inequalities (Theorem~\ref{thm:condensergame}) it would be interesting to explore the corresponding Bell inequalities of quantum-proof extractor constructions (since these cannot be violated by quantum mechanics).
\item What other explicit extractor constructions are quantum-proof? This includes variations of Trevisan's constructions as, e.g., listed in~\cite[Section 6]{De12}, but also condenser based constructions~\cite{RR99,RSW00}. Here, the motivation is that all of these constructions have better parameters than any known quantum-proof construction.
\item Operator space techniques might also be useful for analyzing fully quantum and quantum-to-classical randomness extractors as described in~\cite{Berta11_5,Berta12_2,Berta14}.
\end{itemize}
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Quantum information}
In quantum theory, a system is described by an inner-product space, which we denote here by letters like $N, M, Q$.\footnote{In the following all spaces are assumed to be finite-dimensional.} Note that we use the same symbol $Q$ to label the system, the corresponding inner-product space, and also the dimension of the space. Let $\mathrm{Mat}_Q(S)$ be the vector space of $Q \times Q$ matrices with entries in $S$. Whenever $S$ is not specified, it is assumed to be the set of complex numbers $\bC$, i.e., we write $\mathrm{Mat}_Q(\bC)=:\mathrm{Mat}_{Q}$. The state of a system is defined by a positive semidefinite operator $\rho_{Q}$ with trace $1$ acting on $Q$. The set of states on system $Q$ is denoted by $\mathcal{S}(Q) \subset \mathrm{Mat}_Q(\bC)$. The inner-product space of a composite system $QN$ is given by the tensor product of the inner-product spaces $Q \otimes N=:QN$. From a joint state $\rho_{QN} \in \mathcal{S}(QN)$, we can obtain the marginal on the system $Q$ by taking the partial trace over the $N$ system, $\rho_Q := {\rm Tr}_{N}[\rho_{QN}]$. A state $\rho_{QN}$ on $QN$ is called quantum-classical (with respect to some basis) if it can be written as $\rho_{QN} = \sum_{x} \rho_{x} \otimes \proj{x}$ for some basis $\{\ket{x}\}$ of $N$ and some positive semidefinite operators $\rho_x$ acting on $Q$. We denote the maximally mixed state on system $N$ by $\upsilon_N$.
To measure the distance between two states, we use the trace norm $\| A \|_{1} := {\rm Tr}[\sqrt{A^{*}A}]$, where $A^{*}$ is the conjugate transpose of $A$. In the special case when $A$ is diagonal, $\| A \|_{1}$ becomes the familiar $\ell_1$ norm of the diagonal entries. Moreover, the Hilbert-Schmidt norm is defined as $\|A\|_{2}:=\sqrt{{\rm Tr}[A^{*}A]}$, and when $A$ is diagonal this becomes the usual $\ell_2$ norm. Another important norm we use is the operator norm, or the largest singular value of $A$, denoted by $\|A\|_{\infty}$. When $A$ is diagonal, this corresponds to the familiar $\ell_{\infty}$ norm of the diagonal entries. For a probability distribution $P_N \in \mathcal{S}(N)$, $\| P_N \|_{\ell_\infty}$ corresponds to the optimal probability with which $P_N$ can be guessed. We write
\begin{align}
H_{\min}(N)_P:= -\log \| P_N \|_{\ell_\infty}\ ,
\end{align}
the min-entropy of $P_{N}$. More generally, the conditional min-entropy of $N$ given $Q$ is used to quantify the uncertainty in the system $N$ given the system $Q$; operationally, $2^{-H_{\min}(N|Q)_{\rho}}$ is the optimal probability of guessing $N$ given access to the quantum system $Q$. The conditional min-entropy is defined as
\begin{align}
H_{\min}(N|Q)_{\rho}:=-\log\min_{\sigma_{Q}\in\mathcal{S}(Q)}\big\|(\1_{N}\otimes\sigma_{Q}^{-1/2})\rho_{NQ}(\1_{N}\otimes\sigma_{Q}^{-1/2})\big\|_{(\infty;\infty)}\ ,
\end{align}
with generalized inverses, and $\|\cdot\|_{(\infty;\infty)}$ the operator norm on $\mathrm{Mat}_{Q}\otimes\mathrm{Mat}_{N}=:\mathrm{Mat}_{QN}$. Note that in the special case where the system $Q$ is trivial, we have $H_{\min}(N)_{\rho}=-\log\|\rho_{N}\|_{\infty}$. In fact, the general case also corresponds to a norm: we have
\begin{align}
H_{\min}(N|Q)_{\rho}=-\log\|\rho_{QN}\|_{(1;\infty)}\ ,
\end{align}
where the norm $\| \cdot \|_{(1;\infty)}$ on $\mathrm{Mat}_{QN}$ is defined as
\begin{align}\label{eq:def-one-infty}
\| A \|_{(1;\infty)} := \inf \Big\{ \| B_1 \|_2 \| C \|_{(\infty;\infty)} \| B_2 \|_2 : A = (B_1 \otimes \1_{N}) C (B_2 \otimes \1_{N}); B_1, B_2 \in \mathrm{Mat}_{Q}\Big\}\ .
\end{align}
A proof of this is given in the Appendix as Proposition~\ref{prop:min-ent-norm}.
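As a simple example that is used repeatedly, consider a flat distribution $P_{N}$, i.e., the uniform distribution on a subset of size $2^{k}$. It satisfies
\begin{align}
\|P_{N}\|_{\ell_\infty}=2^{-k}\quad\text{and hence}\quad H_{\min}(N)_{P}=k\ ,
\end{align}
and for integer $2^{k}$ such flat distributions are exactly the extreme points of the set of distributions with min-entropy at least $k$.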
\subsection{Normed spaces}\label{sec:spaces}
A vector space $E$ together with a norm $\|\cdot\|_{E}$ defines a normed space denoted by $\left(E,\|\cdot\|_{E}\right)$. On the dual space $E^{*}$ the dual norm $\|\cdot\|_{E^{*}}$ is defined as
\begin{align}
\|f\|_{E^{*}}:=\sup_{\|e\|_{E}\leq1}|f(e)|\ ,
\end{align}
and hence $(E^{*},\|\cdot\|_{E^{*}})$ is again a normed space. We have that $E$ is isomorphic to $E^{**}$ since we restrict to finite-dimensional spaces.
The vector space of linear operators from $E$ to $F$ is denoted by $\mathrm{Lin}(E,F)$ and for normed spaces $E$ and $F$ there is a natural norm induced on $\mathrm{Lin}(E,F)$ defined by
\begin{align}\label{eq:bounded}
\|u\|_{E\to F}:=\sup_{\|x\|_E \leq 1} \|u(x)\|_F\ ,
\end{align}
where $u\in\mathrm{Lin}(E,F)$. This norm is called the bounded norm and we also use the notation
\begin{align}
\|u:E\to F \|:= \|u\|_{E \to F}\ .
\end{align}
For the corresponding normed space we write $B(E,F):=(\mathrm{Lin}(E,F),\|\cdot\|_{E\to F})$. Note that every $u:E\to F$ has finite bounded norm (since we restrict to finite-dimensional spaces). The dual of the $\|\cdot\|_{E \to F}$ norm is the $\|\cdot\|_{F^* \to E^*}$ norm. We define the adjoint map $u^* : F^* \to E^*$ as $(u^*(f^*))(e) = f^*(u(e))$ for $e \in E$ and $f^* \in F^*$. It is simple to check that we have
\begin{align}\label{eq:dual-norm-map}
\|u\|_{E \to F} = \|u^*\|_{F^* \to E^*}\ .
\end{align}
The norms we are interested in are constructed by combining the $\ell_{\infty}$ norm and the $\ell_1$ norm. The following gives a general way of combining two norms. For $\|\cdot\|_{\alpha}$ and $\|\cdot\|_{\beta}$ norms on a vector space $E$, we define the $\cap$-norm as
\begin{align}
\| x \|_{\cap\{\alpha,\beta\}}:=\max\big\{\| x \|_{\alpha}, \| x \|_{\beta}\big\}\ ,
\end{align}
where $x\in E$. The dual norm then takes the following form.
\begin{proposition}\label{prop:cap-sigma}
For the norm $\|\cdot\|_{\cap\{\alpha,\beta\}}$ on a vector space $E$, the corresponding dual norm $\|\cdot\|_{\Sigma\{\alpha^{*},\beta^{*}\}}$ on the dual space $E^{*}$ takes the form
\begin{align}
\| y \|_{\Sigma\{\alpha^{*},\beta^{*}\}} = \inf\big\{\| y_1 \|_{\alpha^*} + \| y_2 \|_{\beta^*} : y_1 + y_2 = y \big\}\ ,
\end{align}
where $y\in E^{*}$.
\end{proposition}
\begin{proof}
For a given $y \in E^*$, we write the dual convex program to $\| y \|_{\Sigma\{\alpha^{*},\beta^{*}\}} = \inf \{ \| y_1 \|_{\alpha^*} + \| y_2 \|_{\beta^*} : y = y_1 + y_2 \}$ as
\begin{align}
\sup_{x \in E} \inf_{y_1 y_2} \| y_1 \|_{\alpha^*} + \| y_2 \|_{\beta^*} + (y - y_1 - y_2)(x)\ .
\end{align}
But observe that $\inf_{y_1} \| y_1 \|_{\alpha^*} - y_1(x) = 0$ if $\| x \|_{\alpha} = \sup_{y_1} \{ y_1(x) : \| y_1 \|_{\alpha^*} \leq 1 \} \leq 1$ and $\inf_{y_1} \| y_1 \|_{\alpha^*} - y_1(x) = - \infty$ otherwise. Thus the dual convex program can be written as
\begin{align}
\sup_{x \in E} \{ y(x) : \|x\|_{\alpha} \leq 1, \| x \|_{\beta} \leq 1\}\ ,
\end{align}
which is the definition of the dual norm of the norm $\|\cdot\|_{\cap\{\alpha,\beta\}}$. We conclude by observing that strong duality holds in this case.
\end{proof}
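Applied to the norms relevant for extractors and condensers, with $\alpha=2^{k}\ell_{\infty}$ and $\beta=\ell_{1}$, Proposition~\ref{prop:cap-sigma} shows that the dual norm of $\cap\{2^{k}\ell_{\infty},\ell_{1}\}$ is
\begin{align}
\|y\|_{\Sigma\{2^{-k}\ell_1,\ell_\infty\}}=\inf\big\{2^{-k}\|y_{1}\|_{\ell_1}+\|y_{2}\|_{\ell_\infty}:y_{1}+y_{2}=y\big\}\ ,
\end{align}
where we used that the dual of $2^{k}\ell_{\infty}$ is $2^{-k}\ell_{1}$ and the dual of $\ell_{1}$ is $\ell_{\infty}$.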
\subsection{Operator spaces}\label{sec:operator_spaces}
For the convenience of the reader we recall a few basic facts about the theory of operator spaces, sometimes also referred to as ``quantized Banach spaces''. For a more in-depth treatment we refer the reader to Pisier's book~\cite{Pis03}. An operator space is a normed space $(E,\|\cdot\|_E)$ together with a sequence of norms $\|\cdot\|_{M_Q(E)}$ on the spaces $\mathrm{Mat}_Q(E)$ satisfying the consistency conditions of the following definition (with $\|\cdot\|_{E}= \|\cdot\|_{M_1(E)}$).
\begin{definition}[Operator space]\label{def:operator_space}
An (abstract) operator space is a vector space $E$ together with a sequence of norms $\|\cdot\|_{M_Q(E)}$ on $\mathrm{Mat}_Q(E)$ such that for $Q\in\mathbb{N}$:
\begin{enum}
\item For all $Q'\in\mathbb{N}$, $x \in \mathrm{Mat}_Q(E)$ and $x' \in \mathrm{Mat}_{Q'}(E)$\ ,
\begin{align}
\left\|
\left( \begin{array}{cc}
x & 0 \\
0 & x' \\
\end{array}
\right)
\right\|_{M_{Q+Q'}(E)} = \max\left\{ \|x\|_{M_Q(E)}, \|x'\|_{M_{Q'}(E)} \right\}\ .\label{eq:cond-direct-sum}
\end{align}
\item For all $\alpha, \beta \in \mathrm{Mat}_Q$ and $x \in \mathrm{Mat}_Q(E)$\ ,
\begin{align}\label{eq:cond-mult}
\| \alpha \cdot x \cdot \beta \|_{M_Q(E)} \leq \| \alpha \|_{\infty} \| x \|_{M_Q(E)} \| \beta \|_{\infty}\ ,
\end{align}
where the notation $\alpha \cdot x$ refers to usual multiplication of matrices.
\end{enum}
We write $\left(E,\|\cdot\|_{M_Q(E)}\right)$ for the operator space structure.
\end{definition}
The most important example of an operator space is $M_N$ which is the space $\mathrm{Mat}_N$ together with the norms
\begin{align}
\|\cdot\|_{M_Q(M_N)}:=\|\cdot\|_{(\infty;\infty)}\ ,
\end{align}
the usual operator norms on $\mathrm{Mat}_{QN}$. It is easy to verify that the two conditions~\eqref{eq:cond-direct-sum} and~\eqref{eq:cond-mult} are satisfied. Alternatively we could also define an operator space (concrete operator space) on a normed space $(E,\|\cdot\|_{E})$ by seeing $(E,\|\cdot\|_{E})$ as a subspace of $(\mathrm{Mat}_N,\|\cdot\|_{\infty})$ and the norms $\|\cdot\|_{M_Q(E)}$ induced by the operator norms $\|\cdot\|_{(\infty;\infty)}$ on $\mathrm{Mat}_{QN}$.
Since Ruan~\cite{Rua88} proved that every abstract operator space can be realized as a concrete operator space (see also~\cite[Chapter 2.2]{Pis03}), these two definitions are really equivalent.\footnote{In general, one needs to embed $E$ into the space of bounded operators on an infinite dimensional Hilbert space.} For example, consider the subspace $\mathrm{Diag}_N\subset\mathrm{Mat}_N$ of diagonal matrices. It is then immediate to deduce that for this concrete operator space $E$, the norm $\|\cdot\|_{M_1(E)}$ is simply the $\ell_{\infty}$ norm of the diagonal vector. As another example, consider the subspace $C_N$ ($R_{N}$) of matrices such that only the first column (row) contains non-zero elements. This defines a concrete operator space on $\bC^N$; the norm $M_1(C_N) = M_1(R_N)$ is simply the $\ell_2$ norm of the vector. Note that even though as normed spaces $C_{N}$ and $R_{N}$ are both isomorphic to $(\bC^N, \ell_2)$, the aforementioned operator space structures are different. In fact, a given normed space has in general many possible operator space structures. However, for many normed spaces that we are interested in, there is one ``natural'' operator space structure. For example, for the normed space $(\mathrm{Mat}_N,\|\cdot\|_{\infty})$ there is the natural operator space structure $M_N$ defined above. Also, for the normed space $(\mathrm{Mat}_N,\|\cdot\|_{1})$ there is a natural operator space structure as the operator space dual of $M_N$. We will discuss this in detail after the definition of the operator space dual (Definition~\ref{def:opertordual} and Proposition~\ref{prop:s1-os}).
The bounded norm as in~\eqref{eq:bounded} is fundamental in understanding linear maps between normed spaces. The analogous norm for linear maps between operator spaces is the completely bounded norm (because completely bounded maps are the morphisms in the category of operator spaces). In the quantum information literature, the completely bounded norm usually refers specifically to maps from the operator space $M_N$ to $M_{M}$. The dual norm is called the diamond norm, first introduced in quantum information by Kitaev~\cite{Kit97}. Here we are concerned with the completely bounded norm between more general operator spaces.
\begin{definition}[Completely bounded norm]
For operator spaces $E$ and $F$ the completely bounded norm of $u\in\mathrm{Lin}(E,F)$ is defined as
\begin{align}\label{eq:cb-norm}
\|u\|_{\mathrm{cb},E\to F}:=\sup_{Q}\|u_{Q}\|_{M_Q(E)\to M_Q(F)}\ ,
\end{align}
where for $\{x_{ij}\}_{ij} \in \mathrm{Mat}_Q(E)$, $u_Q(\{x_{ij}\}_{ij}):=\{u(x_{ij})\}_{ij}$, or simply $u_Q = \1_Q \otimes u$. We also use the notation
\begin{align}
\|u:E\to F\|_{\mathrm{cb}}:= \|u\|_{\mathrm{cb},E\to F}\ .
\end{align}
\end{definition}
Hence $\left(\mathrm{Lin}(E,F),\|\cdot\|_{\mathrm{cb}, E \to F}\right)$ is also a normed space. Note that every $u:E\to F$ has finite completely bounded norm (since we restrict to finite-dimensional spaces), however $\|\cdot\|_{E \to F}$ is in general smaller than $\|\cdot\|_{\mathrm{cb}, E \to F}$. Later, we are interested in upper bounding this ratio for particular operator spaces and maps. For general $E,F$ and $u:E\to F$ of rank $M$ it is known that the ratio of the completely bounded to the bounded norm is at most $M/2^{1/4}$~\cite[Theorem 7.15]{Pis03}.
It is in general not true that we can restrict the supremum in~\eqref{eq:cb-norm} to a fixed finite $Q$. However, in the case that the target operator space is $M_N$, we have~\cite[Proposition 12]{Pis03} (or~\cite[Theorem 5]{Wat05})
\begin{align}\label{eq:cb-norm-to-mn}
\|u\|_{\mathrm{cb},E\to M_N}=\|u_N\|_{M_N(E)\to M_N(M_N)}\ .
\end{align}
This raises the question of whether there are specific operator space structures such that all bounded norms are also completely bounded. Such structures do in fact exist, and are called minimal and maximal operator space structures.
\begin{theorem}\cite[Chapter 3]{Pis03}
Let $(E,\|\cdot\|_{E})$ be a normed space. Then there exist two (in general different) operator spaces $\max(E)$ and $\min(E)$, such that we have for all operator spaces $F$, $u\in\mathrm{Lin}(F,E)$ and $v\in\mathrm{Lin}(E,F)$,
\begin{align}
\norm{u}_{\mathrm{cb},F \to \min(E)} = \norm{u}_{F \to E} \text{ and } \norm{v}_{\mathrm{cb},\max(E) \to F} = \norm{v}_{E \to F}\ .
\end{align}
\end{theorem}
If $E$ and $F$ are operator spaces, then we can define a natural operator space structure on $\mathrm{Lin}(E,F)$ called $CB(E,F)$. To see this, observe that we can see $\mathrm{Mat}_Q(\mathrm{Lin}(E,F))$ as elements of $\mathrm{Lin}(E,\mathrm{Mat}_Q(F))$. Thus for $x \in \mathrm{Mat}_Q(\mathrm{Lin}(E,F))$, we define
\begin{align}
\| x \|_{M_Q(CB(E,F))}:= \| x \|_{\mathrm{cb}, E \to \mathrm{Mat}_Q(F)}\ .
\end{align}
It is then simple to verify the two conditions~\eqref{eq:cond-direct-sum} and~\eqref{eq:cond-mult}.
By taking $F = \bC$, this allows us to define the notion of a dual operator space.
\begin{definition}[Operator space dual]\label{def:opertordual}
For an operator space $E$, the dual operator space denoted $E^*$ is defined as
\begin{align}
\| x \|_{M_Q(E^*)}:= \| x \|_{\mathrm{cb}, E \to M_Q}\ ,
\end{align}
where $x \in \mathrm{Mat}_Q(E^*)$ is viewed as an element of $\mathrm{Lin}(E,\mathrm{Mat}_Q)$.
\end{definition}
For $u\in\mathrm{Lin}(F,E)$ we have for the adjoint $u^*\in\mathrm{Lin}(E^{*},F^{*})$ that $\| u \|_{\mathrm{cb}} = \| u^* \|_{\mathrm{cb}}$. We also have $E^{**}=E$ with $M_Q(E^{**})=M_Q(E)$ since we restrict to finite-dimensional spaces. If we consider the norm for $Q = 1$, then $\|\cdot\|_{E^{*}}$ corresponds to the dual norm of $\|\cdot\|_{E}$ (this follows since $\|\cdot\|_{\mathrm{cb},E\to\mathbb{C}}=\|\cdot\|_{E\to\mathbb{C}}$ by, e.g., \cite[Proposition 1.10]{Pis03}). However, this is not the case for $Q\geq 2$, that is, $M_Q(E^*)$ on $\mathrm{Mat}_{Q}(E^{*})$ is in general not the dual norm of $M_Q(E)$ on $\mathrm{Mat}_{Q}(E)$.
As an example let us now consider the dual operator space of $M_N$, called the trace class operator space $S^1_N$.
\begin{proposition}\label{prop:s1-os}
The sequence of norms for the trace class operator space $S^1_N$ are given by
\begin{align}\label{eq:l1-os}
\|x\|_{M_Q(S^1_N)}=\|x\|_{(\infty;1)}:=\sup\big\{\|(A\otimes\1_{N})x(B\otimes\1_{N})\|_{(1;1)}:\|A\|_2,\|B\|_2 \leq1;A,B\in\mathrm{Mat}_Q\big\}\ ,
\end{align}
where $x \in \mathrm{Mat}_Q(\mathrm{Mat}_N)$, and $\|\cdot\|_{(1;1)}$ the trace norm on $\mathrm{Mat}_{QN}$.
\end{proposition}
We observe that the norm $\|\cdot\|_{(\infty;1)}$ is the dual norm of the norm $\|\cdot\|_{(1;\infty)}$ characterizing the conditional min-entropy~\eqref{eq:def-one-infty}; see Proposition~\ref{prop:duality-one-infty}.
\begin{proof}
In order to compute the dual norm, we first explicitly map $x \in \mathrm{Mat}_Q(\mathrm{Mat}_N)$ to an element of $\mathrm{Lin}(\mathrm{Mat}_N,\mathrm{Mat}_Q)$. For this, we see $x$ as the Choi matrix of a map $u \in \mathrm{Lin}(\mathrm{Mat}_N,\mathrm{Mat}_Q)$ defined by
\begin{align}
{\rm Tr}\left[du(c)\right]={\rm Tr}\left[x\left(d\otimes c^T\right)\right]
\end{align}
for any $c\in\mathrm{Mat}_N,d\in\mathrm{Mat}_Q$, where $c^T$ denotes the transpose of $c$ in the standard basis. Using the definition of operator space dual (Definition~\ref{def:opertordual}), we have that
\begin{align}
\|x\|_{M_Q(S^1_N)}=\|u:M_N\to M_Q\|_{\mathrm{cb}}=\|u:M_Q(M_N)\to M_Q(M_Q)\|\ ,
\end{align}
where we used~\eqref{eq:cb-norm-to-mn} to simplify the completely bounded norm. Continuing we get,
\begin{align}
\| x \|_{M_Q(S^1_N)}
&= \sup\left\{ \| (\1_Q \otimes u)(z) \|_{(\infty;\infty)} : z \in \mathrm{Mat}_{QN}, \| z \|_{(\infty;\infty)} \leq 1\right\} \\
&=\sup\left\{{\rm Tr}\left[\left(\1_Q\otimes u\right)(z)ab^{*}\right]:z\in\mathrm{Mat}_{QN},\|z\|_{(\infty;\infty)}\leq1;a,b\in\C^{Q^2},\|a\|_2,\|b\|_2\leq1\right\}\ .
\end{align}
A straightforward calculation (Lemma~\ref{lem:calc-vect-to-matr}) shows that
\begin{align}
{\rm Tr}\left[ (\1_Q \otimes u)(z) a b^{*} \right]= {\rm Tr}\left[ (\bar{B} \otimes \1_{N}) x (A^T \otimes \1_{N}) z^{T} \right]\ ,
\end{align}
where $A:= \sum_{ij} a_{ij} \ket{i} \bra{j}$ and $B:= \sum_{ij} b_{ij} \ket{i} \bra{j}$. To conclude, we use that $\| z^T \|_{(\infty;\infty)} = \| z \|_{(\infty;\infty)}$, $\|A^T \|_2 = \| a \|_2$, $\| \bar{B} \|_2 = \| b \|_2$, and that the trace norm is the dual norm of the operator norm.
\end{proof}
We can now define the analogs of the $\cap$- and $\Sigma$-norms for operator spaces. If $M_Q(\alpha)$ and $M_Q(\beta)$ denote two operator space structures on the same vector space $V$, then we denote by $\opnorm{\cdot}_{\alpha^*}$ and $\opnorm{\cdot}_{\beta^*}$ the sequences of norms on the dual space $V^{*}$ giving rise to the operator space duals.
\begin{definition}[$\cap$-norm]\label{def:cap-sigma-os}
Let $E$ and $F$ be two operator spaces on the same vector space $V$. Then we define a new operator space $\cap\{E,F\}$ by the sequence of norms
\begin{align}
\norm{x}_{M_Q(\cap\{E, F\})}:= \max\left\{ \norm{x}_{M_Q(E)},\norm{x}_{M_Q(F)}\right\}\ .
\end{align}
The operator space dual of $\cap\{E,F\}$ is denoted $\Sigma\{E^*, F^*\}$.
\end{definition}
One might now think that $M_Q(\Sigma\{E^*,F^*\})$ is equal to the $\Sigma$-norm of the norms $M_Q(E^*)$ and $M_Q(F^*)$. This is almost the case, but only up to a factor of $2$; a more detailed discussion of the $\cap$- and $\Sigma$-operator space structures can be found in~\cite[Chapter 2.7]{Pis03}.
\section{Extractors}\label{sec:extractors}
\subsection{Extractor property as a bounded norm}
Here we characterize extractors in terms of the bounded norm $\cap\{2^{k}\ell_{\infty},\ell_1\}\to\ell_1$.
\extToNorm*
\begin{proof}
We first prove~\eqref{eq:extractor_1}. For any probability distribution $P$ with min-entropy at least $k$ we have $\| P \|_{\ell_1} = 1$ as well as $2^{k}\| P \|_{\ell_\infty} \leq 1$. Hence, $\| P \|_{\cap\{2^{k} \ell_{\infty}, \ell_1\}} \leq 1$ and by the definition of the bounded norm this implies the claim $\|\Delta[\mathrm{Ext}](P)\|_{\ell_1}\leq\varepsilon$.
To prove~\eqref{eq:extractor_2}, note that the supremum defining the bounded norm is attained at an extreme point of the unit ball of $\cap\{2^{k}\ell_\infty,\ell_1\}$, and for $2^{k}\leq N$ these extreme points satisfy $\|P\|_{\ell_1}=1$. So consider a vector $P$ with $\|P\|_{\ell_1}=1$ and $\| P \|_{\ell_\infty} \leq 2^{-k}$. Then let $P^+(x) = \max\{P(x), 0\}$ and $P^-(x) = \max\{-P(x), 0\}$, and note that $\| P^+ \|_{\ell_\infty}, \| P^{-} \|_{\ell_\infty} \leq 2^{-k}$ as well as $\|P^{+}\|_{\ell_1}+\|P^{-}\|_{\ell_1}=1$. As the extractor property only applies to normalized distributions, we extend $P^+$, $P^-$ to probability distributions $\bar{P}^+ = P^+ + (1-\|P^+\|_{\ell_1})\upsilon_N$ and $\bar{P}^- = P^- +(1-\|P^-\|_{\ell_1}) \upsilon_N$. Now observe that $\|\bar{P}^+\|_{\ell_\infty}\leq\| P\|_{\ell_\infty}+\frac{1}{N}\leq2^{-(k-1)}$, and the same bound holds for $\|\bar{P}^-\|_{\ell_\infty}$. Thus, we have
\begin{align}
\| \Delta[\mathrm{Ext}](P) \|_{\ell_1}
&= \| \Delta[\mathrm{Ext}](P^+) - \Delta[\mathrm{Ext}](P^-) \|_{\ell_1} \\
&\leq \| \Delta[\mathrm{Ext}](P^+) \|_{\ell_1} + \| \Delta[\mathrm{Ext}](P^-) \|_{\ell_1} \\
&\leq \| \Delta[\mathrm{Ext}](\bar{P}^+) \|_{\ell_1} + (1-\|P^+\|_{\ell_1}) \| \Delta[\mathrm{Ext}](\upsilon_N) \|_{\ell_1}+\| \Delta[\mathrm{Ext}](\bar{P}^-) \|_{\ell_1} + (1-\|P^-\|_{\ell_1}) \| \Delta[\mathrm{Ext}](\upsilon_N) \|_{\ell_1}\\
&\leq \| \Delta[\mathrm{Ext}](\bar{P}^+) \|_{\ell_1} + \| \Delta[\mathrm{Ext}](\bar{P}^-) \|_{\ell_1} + \| \Delta[\mathrm{Ext}](\upsilon_N) \|_{\ell_1}\\
&\leq 3\varepsilon\ ,
\end{align}
where we used the fact that $\bar{P}^+$, $\bar{P}^-$ and $\upsilon_N$ have min-entropy at least $k-1$.
\end{proof}
\subsection{Quantum-proof as a completely bounded norm}\label{sec:norms-ext}
Recall that the relevant norm for extractors is a maximum of two norms denoted $\cap\{2^k\ell_\infty,\ell_1\}$. Our objective is to extend this norm to matrices so that it gives an operator space structure and so that the requirement of quantum-proof extractors is captured. We discussed operator space structures for the $\ell_\infty$ norm as well as for the $\ell_1$ norm. Moreover, it is simple to define an operator space structure for the maximum of two norms, also called the intersection norm because the corresponding unit ball is the intersection of the two unit balls. However, it turns out that, because of positivity issues, this norm is not the most suitable for our purpose. In order to introduce the norm we consider, it is useful to first express the norms defined above in terms of factorizations.
\begin{definition}[Intersection norms for extractors]\label{def:intersection_norms}
We define the two operator space structures $\cap\{KM_N, S^1_N\}$ and $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{KM_N, S^1_N\}$ on $\mathrm{Mat}_N$. For $x\in\mathrm{Mat}_Q(\mathrm{Mat}_N)$, we define
\begin{align}\label{eq:def-cap}
\| x \|_{M_Q\left(\cap\{KM_N, S^1_N\}\right)} := \max\left\{ K\| x \|_{(\infty;\infty)}, \| x \|_{(\infty;1)}\right\}\ ,
\end{align}
and
\begin{align}\label{eq:def-cap-cap}
\| x \|_{M_Q\left(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{KM_N, S^1_N\}\right)} := \inf \Big\{\max\left\{ \sqrt{K} \| a \|_{(\infty;\infty)}, \| \Gamma (a) \|_{\infty} \right\}\cdot\max\left\{ \sqrt{K} \| b \|_{(\infty;\infty)}, \| \Gamma(b) \|_{\infty} \right\} : x = a b^*\Big\}\ ,
\end{align}
where $\Gamma\in\mathrm{Lin}(\mathrm{Mat}_{QN},\mathrm{Mat}_Q(N^{2}))$ is defined as
\begin{align}\label{eq:def-gamma}
\Gamma( \ket{i} \bra{j} \otimes \ket{k} \bra{\ell} ) = \ket{i} \bra{j} \otimes \bra{k} \otimes \bra{\ell} \ .
\end{align}
\end{definition}
It might appear that the norm $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap$ is rather artificial, but it can in fact be constructed from basic operator spaces (the row and column operator spaces) and a fundamental operator space tensor product called the Haagerup tensor product. For details on this we refer to Appendix~\ref{app:intersection}. As we always consider the intersection norms for $K = 2^k$, i.e., for $2^{k}$ times the $\infty$-norm together with the $1$-norm, we use the shorthand
\begin{align}
\cap:=\cap\left\{2^k M_N, S^1_N\right\}\quad\mathrm{and}\quad\cap\cdot\;\cap:=\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\left\{2^k M_N, S^1_N\right\}\ .
\end{align}
Observe that for extractors, we only consider matrices in $\mathrm{Mat}_Q(N)$. In this setting, we would write the norms as
\begin{align}
\cap=\cap\left\{2^k\ell_{\infty}, \ell_1\right\}\quad\mathrm{and}\quad\cap\cdot\;\cap=\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\left\{2^k\ell_{\infty}, \ell_1\right\}\ .
\end{align}
The norms $\cap$ and $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap$ are actually closely related and this can be seen from the following lemma, where we write the $(\infty;\infty)$ and $(\infty;1)$ norms as an infimum over all possible factorizations.
\begin{lemma}\label{lem:factorization-norms}
Let $x \in \mathrm{Mat}_{QN}$. Then, we have
\begin{align}\label{eq:factorization-infty}
\| x \|_{(\infty; \infty)} = \inf \big\{ \| a \|_{(\infty; \infty)} \| b \|_{(\infty; \infty)} : x = a b^* \text{ and } a, b \in \mathrm{Mat}_{QN} \big\}\ ,
\end{align}
as well as
\begin{align}\label{eq:factorization-infty-one}
\| x \|_{(\infty; 1)} = \inf \big\{ \| \Gamma (a) \|_{\infty} \| \Gamma(b) \|_{\infty} : x = a b^* \text{ and } a, b \in \mathrm{Mat}_{QN} \big\}\ ,
\end{align}
where $\Gamma\in\mathrm{Lin}(\mathrm{Mat}_{QN},\mathrm{Mat}_Q(N^{2}))$ as defined in~\eqref{eq:def-gamma}.
\end{lemma}
Note that in contrast, the factorization in the $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap$-norm is restricted to be the same one for the two norms.
\begin{proof}
For one direction in~\eqref{eq:factorization-infty}, we have for any $x = a b^*$ that
\begin{align}
\| x \|_{(\infty;\infty)} \leq \| a \|_{(\infty;\infty)} \| b^* \|_{(\infty;\infty)}\ .
\end{align}
For the other direction, we write a polar decomposition of $x = UP$ with $U$ unitary and $P$ positive semidefinite. Then let $a = U \sqrt{P}$, $b = \sqrt{P}$, and hence $\| a \|_{(\infty;\infty)} = \| b \|_{(\infty;\infty)} = \| P \|_{(\infty;\infty)}^{1/2}$ which gives $\| a \|_{(\infty;\infty)} \| b \|_{(\infty;\infty)} = \| x \|_{(\infty;\infty)}$.
For~\eqref{eq:factorization-infty-one}, we use the definition of the $(\infty;1)$-norm in terms of the operator space dual of $M_N$. For that we see $x$ as the Choi matrix of $u\in\mathrm{Lin}(\mathrm{Mat}_N,\mathrm{Mat}_{Q})$, defined by
\begin{align}\label{eq:choi-matrix}
{\rm Tr}\left[d u(c)\right] = {\rm Tr}\left[ x \left(d \otimes c^T\right)\right]\ .
\end{align}
We then have $\| x \|_{(\infty; 1)} = \| u \|_{\mathrm{cb}, \infty \to \infty}$. Next we show the following useful claim: $x = ab^*$ is equivalent to $u(z) = \hat{a} (z \otimes \1_{NQ}) \hat{b}^{*}$ for all $z \in \mathrm{Mat}_N$, where $\hat{a} = \Gamma(a)$ and $\hat{b} = \Gamma(b)$. For this, let us write $x = \sum_{ii'kk'} x_{ii'kk'} \ket{i} \bra{i'} \otimes \ket{k} \bra{k'}$. Then $x = a b^*$ translates to $x_{ii'pq} = \sum_{j \ell} a_{ijp\ell} b^*_{i' j q \ell}$ for all $ii'pq$. On the other hand, we have that ${\rm Tr}[ \ket{i'} \bra{i} u(\ket{p} \bra{q})] = {\rm Tr}[ x \ket{i'} \bra{i} \otimes \ket{q} \bra{p} ] = x_{ii'pq}$. We explicitly write:
\begin{align}
\hat{a} = \sum_{ijk\ell} a_{ijk\ell} \ket{i} \bra{j} \otimes \bra{k} \otimes \bra{\ell} \qquad \hat{b}^* = \sum_{ijk\ell} b^*_{ijk\ell} \ket{j}\bra{i} \otimes \ket{k} \otimes \ket{\ell}
\end{align}
As a result,
\begin{align}
\hat{a} \left(\ket{p} \bra{q} \otimes \1_{NQ}\right) \hat{b}^* &= \sum_{ii' j \ell } a_{ijp\ell} b^*_{i'jq\ell} \ket{i} \bra{i'}\ ,
\end{align}
which proves the claim. To finish the proof for~\eqref{eq:factorization-infty-one}, we use~\cite[Theorem 5]{Wat09} which states that\footnote{We refer to Watrous' paper as the notation and proof are quantum information friendly, but such a result is well known in operator space theory and holds in more generality, see e.g., \cite[Theorem 1.6]{Pis03}.}
\begin{align}
\| u \|_{\mathrm{cb}, \infty \to \infty}
&= \inf\left\{ \| \hat{a} \|_{\infty} \| \hat{b} \|_{\infty} : u(\beta) = \hat{a} \left(\beta \otimes \1_{NQ}\right) \hat{b}^*\right\}\ .
\end{align}
In~\cite{Wat09}, things are written in the dual form: the diamond norm of $u^*$ is considered, and $\hat{a}^*$ and $\hat{b}^*$ form a Stinespring pair for the channel $u^*$ in the sense that $u^*(\beta) = {\rm Tr}_{NQ}[\hat{a}^* \beta \hat{b}]$. Another point is that we can assume that the output dimensions of $\hat{a}$ and $\hat{b}$ are $NQ^2$. This proves~\eqref{eq:factorization-infty-one}.
\end{proof}
We now provide a simple bound on the $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap$-norm.
\begin{proposition}\label{prop:new-old-norm}
Let $x \in \mathrm{Mat}_Q (\mathrm{Mat}_N)$. Then, we have
\begin{align}\label{eq:new-old-norm1}
\| x \|_{M_Q(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap)} \geq \| x \|_{M_Q(\cap)}\ ,
\end{align}
and if $x \geq 0$, we have
\begin{align}
\| x \|_{M_Q(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap)} = \| x \|_{M_Q(\cap)}\ .
\end{align}
Moreover, for $Q = 1$, i.e., $x \in \mathrm{Mat}_N$, we have $\| x \|_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap} = \| x \|_{\cap}$.
\end{proposition}
\begin{proof}
The inequality~\eqref{eq:new-old-norm1} follows directly from Definition~\ref{def:intersection_norms} and Lemma~\ref{lem:factorization-norms}. When $x \geq 0$, the corresponding map $u\in\mathrm{Lin}(\mathrm{Mat}_N,\mathrm{Mat}_Q)$, as defined in~\eqref{eq:choi-matrix}, is completely positive. This implies that the completely bounded norm of $u$ is given by $\| u \|_{\mathrm{cb}, \infty \to \infty} = \| u(\1) \|_{\infty}$. We also know that for a completely positive map we can find a representation $u(\beta) = \hat{a} \left(\beta \otimes \1_{NQ}\right) \hat{a}^{*}$. This implies that $\| x \|_{(\infty;1)} = \| u \|_{\mathrm{cb}, \infty \to \infty} = \| \hat{a} \hat{a}^* \|_{\infty}$, and then we get
\begin{align}
\| x \|_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap}\leq\left(\max\left\{ \sqrt{2^k} \| a \|_{(\infty;\infty)}, \| \hat{a} \|_{\infty} \right\}\right)^2=\max \left\{2^k\| x \|_{(\infty;\infty)}, \| x \|_{(\infty;1)} \right\}\ .
\end{align}
For $x \in \mathrm{Mat}_N$, we perform a polar decomposition of $x$, $x = UP$ where $U$ is a unitary matrix and $P$ is positive semidefinite. Then let $a = U \sqrt{P}$, $b = \sqrt{P}$, and hence $\| a \|_{(\infty;\infty)} = \| b \|_{(\infty;\infty)} =\| P \|_{(\infty;\infty)}^{1/2}$. Moreover, we have $\| \Gamma(a) \|_{\infty} = \| a \|_2 = \| b \|_2 = \| \sqrt{P} \|_2 = \| P \|_1^{1/2}$, and we finally get $\| x \|_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap} \leq \max\big\{ \sqrt{2^k} \| P \|_{\infty}^{1/2}, \| P \|_{1}^{1/2} \big\}^2 = \max\big\{ 2^k \| x \|_{\infty}, \| x \|_{1} \big\} = \| x \|_{\cap}$, which together with~\eqref{eq:new-old-norm1} proves the claim.
\end{proof}
\begin{proposition}\label{prop:decomposition-positive}
Let $x \in \mathrm{Mat}_Q(\mathrm{Mat}_N)$. Then, we have
\begin{align}
x = x_1 - x_2 + i x_3 - i x_4\quad\mathrm{with}\quad x_i \geq 0\quad\mathrm{and}\quad\| x_i \|_{M_Q(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap)} \leq \| x \|_{M_Q(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap)}\ .
\end{align}
\end{proposition}
\begin{proof}
Let $x = ab^*$ be a factorization achieving the minimum of~\eqref{eq:def-cap-cap} with
\begin{align}
\max\left\{ \sqrt{2^k} \| a \|_{(\infty;\infty)}, \| \Gamma(a) \|_{\infty} \right\} = \max\left\{ \sqrt{2^k} \| b \|_{(\infty;\infty)}, \| \Gamma(b) \|_{\infty} \right\}\ ,
\end{align}
which can be achieved by scaling $a$ and $b$. We define $x_1 = \frac{1}{4} (a+b) (a+b)^*, x_2 = \frac{1}{4} (a - b) (a-b)^*, x_3 = \frac{1}{4} (a+ib) (a+ib)^*$ and $x_4 = \frac{1}{4}(a-ib)(a-ib)^*$. It is simple to verify that $x = x_1 - x_2 + i x_3 - i x_4$. For the norms, we have
\begin{align}
\| x_1 \|_{M_Q(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap)}
&\leq \frac{1}{4} \max\left\{ \sqrt{2^k} \| a + b\|_{(\infty;\infty)}, \| \Gamma(a+b)\|_{\infty} \right\}^2 \\
&\leq \frac{1}{4} \max\left\{ \sqrt{2^k} (\| a \|_{(\infty;\infty)} + \| b \|_{(\infty;\infty)}), \| \Gamma(a) \|_{\infty} + \| \Gamma(b) \|_{\infty} \right\}^2 \\
&\leq \frac{1}{4} \left(\max\left\{ \sqrt{2^k} \| a \|_{(\infty;\infty)}, \| \Gamma(a) \|_{\infty} \right\} + \max\left\{ \sqrt{2^k} \| b \|_{(\infty;\infty)}, \| \Gamma(b) \|_{\infty} \right\} \right)^2 \\
&= \| x \|_{M_Q(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap)}\ .
\end{align}
The same argument also holds for $x_i$ with $i \in \{2,3,4\}$.
\end{proof}
We now have all the operator spaces at hand that are relevant for extractors: the operator space $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^k \ell_{\infty}, \ell_1\}$ as in~\eqref{eq:def-cap-cap} for the input conditions and the trace class operator space $S_{N}^{1}$ with $(\infty;1)$ as in~\eqref{eq:l1-os} for the output condition.
\extToNormQuantum*
\begin{proof}
We first prove~\eqref{eq:quantum_ext1}. For $\rho_{QN}\in\mathcal{S}(QN)$ with $H_{\min}(N|Q)_{\rho}\geq k$ we have $\|\rho_{QN}\|_{(1;\infty)} \leq 2^{-k}$ as well as $\|\rho_{QN}\|_{(1;1)} \leq 1$. Hence, we get for $\sigma_{Q}\in\mathcal{S}(Q)$ that
\begin{align}
\| \Delta[\mathrm{Ext}](\rho_{QN})\|_{(1;1)} &= \| \sigma^{1/2}_Q\Delta[\mathrm{Ext}](\sigma^{-1/2}_Q \rho_{QN}\sigma^{-1/2}_Q)\sigma^{1/2}_Q\|_{(1;1)}\label{eq:cb_extractor1}\\
&\leq \| \Delta[\mathrm{Ext}](\sigma^{-1/2}_Q \rho_{QN} \sigma^{-1/2}_Q) \|_{(\infty;1)} \\
&\leq \| \Delta[\mathrm{Ext}] \|_{\mathrm{cb}} \| \sigma^{-1/2}_Q \rho_{QN} \sigma_Q^{-1/2} \|_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap} \\
&= \| \Delta[\mathrm{Ext}] \|_{\mathrm{cb}} \cdot \max\left\{ 2^k \| \sigma^{-1/2}_Q \rho_{QN} \sigma^{-1/2}_Q \|_{(\infty;\infty)},\|\sigma^{-1/2}_Q \rho_{QN} \sigma^{-1/2}_Q \|_{(\infty;1)}\right\}\ ,
\end{align}
where we used the simple expression for the $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap$-norm of positive operators (Proposition~\ref{prop:new-old-norm}). We now apply the previous inequality to
\begin{align}
\sigma_{Q}:=\frac{\omega_{Q}+\rho_Q}{2}\quad\mathrm{with}\quad\rho_{QN} \leq 2^{-k} \omega_{Q} \otimes \1_N
\end{align}
where $\omega_{Q}\in\mathcal{S}(Q)$ is a state achieving the conditional min-entropy $H_{\min}(N|Q)_{\rho}\geq k$. Then, we have $2^k \rho_{QN} \leq \omega_Q \otimes \1_N \leq 2 \sigma_Q \otimes \1_N$, which means
\begin{align}
2^{k}\|\sigma^{-1/2}_Q \rho_{QN} \sigma^{-1/2}_Q \|_{(\infty;\infty)} \leq 2\ .
\end{align}
In addition, as $\rho_Q \leq 2 \sigma_Q$ we get
\begin{align}
\| \sigma^{-1/2}_Q \rho_{QN} \sigma^{-1/2}_Q \|_{(\infty;1)}= \| \sigma^{-1/2}_Q \rho_Q \sigma^{-1/2}_Q \|_{\infty}\leq 2\ .\label{eq:cb_extractor2}
\end{align}
Taking~\eqref{eq:cb_extractor1}--\eqref{eq:cb_extractor2} together proves~\eqref{eq:quantum_ext1}.
We now prove~\eqref{eq:quantum_ext2}. Let $z\in\mathrm{Mat}_Q(N)$ with $\| z \|_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap} \leq 1$. By definition of the $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap$-norm (Definition~\ref{def:intersection_norms}), there exists a factorization $z = a b^*$ such that
\begin{align}
\max\left\{\sqrt{2^{k}}\|a\|_{(\infty; \infty)}, \| \Gamma(a) \|_{\infty} \right\}=\max \left\{ \sqrt{2^{k}} \| b \|_{(\infty; \infty)},\|\Gamma(b) \|_{\infty} \right\} \leq 1\ .
\end{align}
We then define $\hat{z}\in\mathrm{Mat}_{2QN}$ as
\begin{align}\label{eq:zhat}
\hat{z}:=\begin{pmatrix}
aa^* & b a^* \\
a b^* & b b^*
\end{pmatrix}
=\left( a \otimes \ket{0} \bra{0} + b \otimes \ket{1} \bra{0} \right) \cdot \left( a \otimes \ket{0} \bra{0} + b \otimes \ket{1} \bra{0} \right)^*\ ,
\end{align}
and estimate
\begin{align}
\| \hat{z} \|_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap} &\leq\left(\max\left\{ \sqrt{2^{k}} \| a \otimes \proj{0} + b \otimes \ket{0} \bra{1} \|_{(\infty;\infty)}, \| \Gamma(a) \otimes \proj{0} + \Gamma(b) \otimes \ket{0} \bra{1} \|_{\infty}\right\}\right)^2 \\
&\leq\left(2\max\left\{ \sqrt{2^{k}} \| a \|_{(\infty; \infty)}, \| \Gamma(a) \|_{\infty}\right\}\right)^2 \\
&= 4 \| z \|_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap}\\
&\leq4\ .
\end{align}
By~\eqref{eq:zhat} we have $\hat{z}\geq 0$ and Proposition~\ref{prop:new-old-norm} then implies
\begin{align}\label{eq:zhat_action}
\max \left\{ 2^{k} \| \hat{z} \|_{(\infty;\infty)}, \| \hat{z} \|_{(\infty;1)} \right\}=\| \hat{z} \|_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap}\leq4\ .
\end{align}
Now, we evaluate
\begin{align}\label{eq:Uopt1}
\| \Delta[\mathrm{Ext}](z) \|_{(\infty;1)}= \sup_{\stackrel{c,d \in \mathrm{Mat}_Q}{\| c \|_2 \leq 1, \| d \|_2 \leq 1}}\big\| c \Delta[\mathrm{Ext}](z) d \big\|_{(1;1)}&= \sup_{U, c,d} \Big| {\rm Tr}\big[c \Delta[\mathrm{Ext}](z) d \cdot U\big] \Big| \\
&= \sup_{U,c,d} \left| {\rm Tr}\left[ \Delta[\mathrm{Ext}]\left(
\begin{pmatrix}
c & 0 \\
0 & c
\end{pmatrix}
\hat{z}
\begin{pmatrix}
d & 0 \\
0 & d
\end{pmatrix}\right) \cdot U \otimes \ket{1} \bra{0} \right] \right| \ .\label{eq:Uopt2}
\end{align}
For the positive semidefinite operator
\begin{align}
\rho:=\frac{1}{8}\begin{pmatrix}
c & 0 \\
0 & c
\end{pmatrix}
\hat{z}
\begin{pmatrix}
d & 0 \\
0 & d
\end{pmatrix}
\end{align}
we get by the definition of the $(1;\infty)$-norm in~\eqref{eq:def-one-infty} as well as~\eqref{eq:zhat_action} that
\begin{align}
\| \rho \|_{(1;\infty)} \leq \frac{1}{8} \cdot 2 \| \hat{z} \|_{(\infty;\infty)} \leq2^{-k}\quad\mathrm{and}\quad\| \rho \|_{(1;1)} \leq \frac{1}{8} \cdot 2 \| \hat{z} \|_{(\infty;1)} \leq 1\ .
\end{align}
In order to have a valid state, we define
\begin{align}
\bar{\rho}:=\rho+\big(1-{\rm Tr}[\rho]\big)\frac{\1}{QN}\quad\mathrm{with}\quad\|\bar{\rho}\|_{(1;\infty)} \leq2^{-k} + \frac{1}{N} \leq2^{-(k-1)}\ .
\end{align}
Now, by assumption $\mathrm{Ext}$ is a quantum-proof $(k-1,\varepsilon)$-extractor and with~\eqref{eq:Uopt1}--\eqref{eq:Uopt2} we conclude that
\begin{align}
\| \Delta[\mathrm{Ext}](z) \|_{(\infty;1)}\leq8\cdot\| \Delta[\mathrm{Ext}](\rho) \|_{(1;1)}\leq8\cdot\| \Delta[\mathrm{Ext}](\bar{\rho}) \|_{(1;1)}\leq 8 \varepsilon\ .
\end{align}
\end{proof}
\subsection{Stability bounds}
This way of writing the extractor and the quantum-proof extractor conditions allows us to use tools from operator space theory to compare the two concepts. As a first straightforward application, we can use dimension dependent bounds that are known for the maximal possible ratios between the completely bounded norm and the bounded norm.
\begin{corollary}\label{cor:out-size}
Every $(k,\varepsilon)$-extractor is a quantum-proof $(k+1,6\sqrt{2^{m}}\varepsilon)$-extractor.
\end{corollary}
Observe that this result is only interesting when $m$ is small. In particular, it is only useful for weak extractors, as strong extractors have $2^m \geq 2^d = \Omega(\varepsilon^{-2})$ (by the converse bound~\eqref{eq:extractor_converse}).
\begin{proof}
By Proposition~\ref{prop:ext-to-norm} we have that $\| \Delta[\mathrm{Ext}] : \cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k+1} \ell_{\infty}, \ell_1\} \to \ell_1 \| \leq 3 \varepsilon$. Now, we estimate using~\cite[Theorem 3.8]{Pis03},
\begin{align}
\left\|\Delta[\mathrm{Ext}] : \cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k+1} \ell_{\infty}, \ell_1\} \to \ell_1\right\|_{\mathrm{cb}}&\leq\left\| \Delta[\mathrm{Ext}] : \cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k+1} \ell_{\infty}, \ell_1\} \to \min(\ell_1)\right\|_{\mathrm{cb}} \cdot\left\|\1 : \min(\ell_1) \to \ell_1\right\|_{\mathrm{cb}} \\
&=\left\|\Delta[\mathrm{Ext}] : \cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k+1} \ell_{\infty}, \ell_1\} \to \ell_1\right\|\cdot\left\|\1 : \ell_{\infty} \to \max(\ell_{\infty})\right\|_{\mathrm{cb}} \\
&\leq 3 \varepsilon \sqrt{2^{m}}\ .
\end{align}
The claim then follows by Theorem~\ref{thm:ext-to-norm-quantum}.
\end{proof}
\smallerrorent*
\begin{proof}
By operator space duality for the trace class operator space (Proposition~\ref{prop:s1-os}) and Proposition~\ref{prop:decomposition-positive} we get
\begin{align}
\left\|\Delta[\mathrm{Ext}]_{S} : \cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k+\log(1/\varepsilon)} \ell_{\infty}, \ell_1\} \to \ell_1\right\|_{\mathrm{cb}}\leq4\sup\left\{\Big\|{\mathbb E}_s \sum_y \sum_x \left(\delta_{f_s(x) = y} - 2^{-m'}\right)p(x) \otimes q(s,y)\Big\|_{(\infty;\infty)}\right\}\ ,
\end{align}
where the supremum is over all
\begin{align}
p(x)\in\mathrm{Mat}_Q\quad&\mathrm{with}\quad0\leq p(x)\leq\varepsilon\,2^{-k}\1\,,\,\sum_x p(x) \leq \1\\
q(s,y)\in\mathrm{Mat}_Q\quad&\mathrm{with}\quad\norm{q(s,y)}_{\infty} \leq1
\end{align}
and $Q\in\mathbb{N}$. We now apply the operator version of the Cauchy-Schwarz inequality due to Haagerup (see, e.g., \cite[Chapter 7]{Pis03}),
\begin{align}
&\Big\|{\mathbb E}_s \sum_y \left(\sum_x \left(\delta_{f_s(x) = y} - 2^{-m'}\right)p(x)\right) \otimes q(s,y)\Big\|_{(\infty;\infty)} \notag\\
&\leq\Big\|{\mathbb E}_s \sum_y \left(\sum_x \left(\delta_{f_s(x) = y} - 2^{-m'}\right)p(x)\right) \otimes \overline{\left(\sum_{x'} \left(\delta_{f_s(x') = y} - 2^{-m'}\right)p(x')\right)}\Big\|^{1/2}_{(\infty;\infty)}\Big\|{\mathbb E}_s \sum_y q(s,y) \otimes \overline{q(s,y)}\Big\|^{1/2}_{(\infty;\infty)}\ .
\end{align}
The second term is upper bounded by
\begin{align}
\Big\|{\mathbb E}_s \sum_y q(s,y) \otimes \overline{q(s,y)}\Big\|_{(\infty;\infty)} \leq {\mathbb E}_s \sum_y \norm{q(s,y)}_{\infty}^2 \leq 2^{m'}\ ,
\end{align}
and we are left with the first term. The operator whose $(\infty;\infty)$-norm has to be estimated is hermitian, and hence we arrive via norm duality at
\begin{align}
&\sup_{\stackrel{C\in\mathrm{Mat}_Q}{{\rm Tr}[C^*C]\leq1}}\left|\,{\mathbb E}_s \sum_y \sum_{x,x'} \left(\delta_{f_s(x) = y} - 2^{-m'}\right){\rm Tr}\left[C p(x) C^* p(x')\right]\left(\delta_{f_s(x') = y} - 2^{-m'}\right)\right| \notag\\
&\leq\sup_{\stackrel{C\in\mathrm{Mat}_Q}{{\rm Tr}[C^*C]\leq1}}{\mathbb E}_s \sum_y \sum_{x'}\left| \sum_{x}\left(\delta_{f_s(x) = y} - 2^{-m'}\right) l_{xx'}(C) \right|\ ,
\end{align}
where \(l_{xx'}(C):={\rm Tr}[C p(x) C^* p(x')] \) is a positive function on \(N \times N\) with
\begin{align}
\sum_{x,x'} l_{xx'}(C) = {\rm Tr}\left[C \sum_x p(x) C^* \sum_{x'} p(x')\right] \leq 1\quad\mathrm{and}\quad l_{xx'}(C)\leq\varepsilon\,2^{-k}\,{\rm Tr}[CC^* p(x')]\ .
\end{align}
Hence, the distribution $l_{xx'}(C)$ has conditional min-entropy $H_{\min}(X|X')_{l(C)}\geq k+\log(1/\varepsilon)$, and by Markov's inequality
\begin{align}
\pr{H_{\min}(X|X'=x')_{l(C)}\leq k}\leq\varepsilon\ .
\end{align}
Finally, we get by the assumption that the bounded norm of the extractor is upper bounded by $\varepsilon$ that
\begin{align}
{\mathbb E}_s \sum_y \sum_{x'}\left| \sum_{x}\left(\delta_{f_s(x) = y} - 2^{-m'}\right)l_{xx'}(C)\right|&={\mathbb E}_{x'}{\mathbb E}_s\sum_{y}\Big|\sum_{x}\left(\delta_{f_s(x) = y} - 2^{-m'}\right)l_{x|x'}(C)\Big|\\
&\leq2\varepsilon\ .
\end{align}
By putting everything together we get the upper bound $4\sqrt{2^{m'}}\sqrt{2\varepsilon}$ on the completely bounded norm of the extractor as claimed.
\end{proof}
An interesting application is for very high min-entropy extractors.
\highMinEntExt*
\begin{proof}
We have for any distribution $P$,
\begin{align}
\|P\|_{2^{k}\ell_{\infty}}\leq\|P\|_{\cap(2^{k}\ell_{\infty},\ell_{1})}\leq2^{n-k}\|P\|_{2^{k}\ell_{\infty}}\ ,
\end{align}
and this implies that
\begin{align}
\left\|\Delta[\mathrm{Ext}]:2^{k+1}\ell_{\infty}\to\ell_1\right\|\leq2^{n-k}\varepsilon\ .
\end{align}
By Grothendieck's inequality~\cite[Corollary 14.2]{Pis12}, we conclude
\begin{align}
\left\|\Delta[\mathrm{Ext}]:\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{ 2^{k}\ell_{\infty},\ell_1\}\to\ell_1\right\|_{\mathrm{cb}}\leq\left\|\Delta[\mathrm{Ext}]:2^{k}\ell_{\infty}\to\ell_1\right\|_{\mathrm{cb}}\leq K_G2^{n-k}\varepsilon\ .
\end{align}
\end{proof}
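The two-sided comparison between the $\cap$-norm and the rescaled $\ell_\infty$-norm used above is elementary; the following sketch (a toy numerical check, with assumed parameters $n$ and $k$) illustrates it for a random distribution, using that for nonnegative vectors the $\cap$-norm is simply the maximum of the two constituent norms.
\begin{verbatim}
# Check of ||P||_{2^k l_inf} <= ||P||_{cap} <= 2^{n-k} ||P||_{2^k l_inf}
# for a random distribution P on n bits (toy parameters).
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3
P = rng.random(2**n)
P /= P.sum()                        # a random probability vector
lo = 2**k * P.max()                 # ||P||_{2^k l_inf}
cap = max(lo, P.sum())              # ||P||_{cap(2^k l_inf, l_1)}
assert lo <= cap <= 2**(n - k) * lo
\end{verbatim}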
\section{Condensers}\label{sec:condenser}
\subsection{Condenser property as a bounded norm}
\condToNorm*
\begin{proof}
We first prove~\eqref{eq:condenser_1}. Let $P_{N}$ be a distribution with min-entropy at least $k$. Then, we have $\| P \|_{\cap(2^k\ell_\infty,\ell_1)}\leq1$ and this implies $\|[\mathrm{Con}](P)\|_{\Sigma\{2^{k'}\ell_\infty,\ell_1\}}\leq\varepsilon$. Hence, there is a decomposition $[\mathrm{Con}](P)=Q_1+Q_2$ such that $\|Q_1\|_{\ell_\infty}\leq\varepsilon 2^{-k'}$ and $\|Q_2\|_{\ell_1}=\|[\mathrm{Con}](P)-Q_1\|_{\ell_1}\leq\varepsilon$. In order to conclude that $H_{\min}^{\varepsilon}([\mathrm{Con}](P))\geq k'+\log(1/\varepsilon)$, it only remains to show that $Q_1$ and $Q_2$ are nonnegative. In fact, consider $Q'_1 = \max(Q_1, 0)$ and $Q'_2 = [\mathrm{Con}](P) - Q'_1 = \min(Q_2, [\mathrm{Con}](P))$. Then, we still have $Q'_1 + Q'_2 = [\mathrm{Con}](P)$, and in addition $\| Q'_1 \|_{2^{k'}\ell_\infty} \leq \|Q_1\|_{2^{k'}\ell_\infty}$ as well as $\|Q'_2\|_{\ell_1} \leq \|Q_2\|_{\ell_1}$. This proves that without loss of generality $Q_1$ and $Q_2$ are nonnegative.
To prove~\eqref{eq:condenser_2}, assume that $\mathrm{Con}$ defines a $(k-1)\to_{\varepsilon}k'+\log(1/\varepsilon)$ condenser and consider $P$ such that $\|P\|_{\ell_1}\leq1$ and $\|P\|_{\ell_\infty} \leq 2^{-k}$. Let $P^+(x) = \max\{P(x), 0\}$ and $P^-(x) = \max\{-P(x), 0\}$, and note that $\| P^+ \|_{\ell_\infty}, \| P^{-} \|_{\ell_\infty} \leq 2^{-k}$. We extend $P^+$ into a probability distribution $\bar{P}^+ = P^+ + (1-\|P^+\|_{\ell_1})\upsilon_N$ and similarly $\bar{P}^-(x) = P^-(x) +(1-\|P^-\|_{\ell_1}) \upsilon_N(x)$. Now observe that $\| \bar{P}^+ \|_{\ell_\infty} \leq \| P \|_{\ell_\infty} + \frac{1}{N} \leq 2^{-(k-1)}$ and similarly for $\| \bar{P}^- \|_{\ell_\infty}$. Thus, we have that
\begin{align}
\|[\mathrm{Con}](P) \|_{\Sigma\{2^{k'}\ell_\infty, \ell_1\}}
&= \| [\mathrm{Con}](P^+) - [\mathrm{Con}](P^-) \|_{ \Sigma\{2^{k'}\ell_\infty, \ell_1\}} \\
&\leq \| [\mathrm{Con}](P^+) \|_{ \Sigma\{2^{k'}\ell_\infty, \ell_1\}}+\| [\mathrm{Con}](P^-) \|_{ \Sigma\{2^{k'}\ell_\infty, \ell_1\}} \\
&\leq \| [\mathrm{Con}](\bar{P}^+) \|_{ \Sigma\{2^{k'}\ell_\infty, \ell_1\}} + (1-\|P^+\|_{\ell_1}) \| [\mathrm{Con}](\upsilon_N) \|_{ \Sigma\{2^{k'}\ell_\infty, \ell_1\}}\notag\\
&\quad+\| [\mathrm{Con}](\bar{P}^-) \|_{ \Sigma\{2^{k'}\ell_\infty, \ell_1\}} + (1-\|P^-\|_{\ell_1}) \| [\mathrm{Con}](\upsilon_N) \|_{ \Sigma\{2^{k'}\ell_\infty, \ell_1\}} \\
&\leq \| [\mathrm{Con}](\bar{P}^+) \|_{ \Sigma\{2^{k'}\ell_\infty, \ell_1\}} + \| [\mathrm{Con}](\bar{P}^-) \|_{ \Sigma\{2^{k'}\ell_\infty, \ell_1\}} + \| [\mathrm{Con}](\upsilon_N) \|_{ \Sigma\{2^{k'}\ell_\infty, \ell_1\}}\ .
\end{align}
Now the distributions $\bar{P}^+, \bar{P}^-$ and $\upsilon_N$ have min-entropy at least $k-1$. Hence, for each such distribution $\bar{P}$ there exists $Q$ with $\|[\mathrm{Con}](\bar{P})-Q\|_{\ell_1}\leq\varepsilon/6$ and $\| Q \|_{2^{k'}\ell_\infty}\leq\varepsilon/6$. This then implies $\|[\mathrm{Con}](\bar{P})\|_{\Sigma\{2^{k'}\ell_\infty, \ell_1\}}\leq\varepsilon/3$, which proves the desired result.
\end{proof}
\subsection{Quantum-proof as a completely bounded norm}
As we saw in Proposition~\ref{prop:ext-to-norm}, a condenser maps the \(\cap\)-normed space to its dual space. Since we expect the same to happen in the quantum-proof case, it is useful to have an understanding about the operator space dual of \(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\). By expressing the \(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\)-operator space using the Haagerup tensor product, the operator dual is easily identified. However, we do not want to elaborate further on this, since we will just use a simple estimate (see Lemma~\ref{lem:sigmanormforpos} below), and refer to Appendix~\ref{app:intersection} for the exact characterization. We again use a shorthand notation and denote the operator space dual of \(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\) by $\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma$. In the setting of this paper, we are naturally interested in the case of matrices which are diagonal with respect to the first system, that is, elements of $\mathrm{Mat}_Q(N)$. The norms $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap$ and the $\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma$ are then defined on $\mathrm{Mat}_Q(N)$ via the embedding of $\mathrm{Mat}_Q(N)$ into $\mathrm{Mat}_{QN}$ as block-diagonal matrices. To emphasize this case, we again use the notation
\begin{align}
\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^k\ell_\infty, \ell_1\}\quad\mbox{as well as}\quad\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma\{\ell_\infty,2^{-k}\ell_1\}=2^{-k}\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma\{2^k\ell_\infty, \ell_1\}
\end{align}
for the corresponding dual space. For positive $x \in \mathrm{Mat}_Q(N)$, the operator space dual norm $\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma$ has the following simple estimate.
\begin{lemma}\label{lem:sigmanormforpos}
Let $x = \sum_{j} x(j) \otimes \proj{j} \in \mathrm{Mat}_Q(N)$ be positive. Then, we have
\begin{align}
\frac{1}{2} \, \norm{x}_{M_Q(\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma)}\leq\inf\left\{\Big\|2^{-k}\sum_j A(j) + B\Big\|_\infty \;:\; x(j) \leq A(j) + B, \, A(j) \geq 0, \, B \geq 0 \right\} \leq \norm{x}_{M_Q(\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma)}\ .
\end{align}
\end{lemma}
\begin{proof}
By the definition of the operator space dual, we have
\begin{align}\label{eq:mq-sigsig}
\| x \|_{M_Q(\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma)} = \sup\left\{\Big\|\sum_j x(j) \otimes y(j)\Big\|_{(\infty;\infty)}\;:\; \Big\|\sum_{j} y(j) \otimes \proj{j}\Big\|_{M_Q(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap)} \leq 1 \right\}\ .
\end{align}
Let us now decompose $y$ according to Proposition~\ref{prop:decomposition-positive}, $y = y_1 - y_2 + iy_3 - i y_4$. Then, we observe that in the maximization in \eqref{eq:mq-sigsig}, up to an additional factor of two we can assume that $y$ is also Hermitian. This implies that we can take $y_3 = y_4 = 0$. But then using the fact that $x \geq 0$, we have
\begin{align}
\big\| \sum_{j} x(j) \otimes y(j) \big\|_{(\infty;\infty)} \leq \big\| \sum_j x(j) \otimes y_1(j)\big\|_{(\infty;\infty)}\ ,
\end{align}
and thus we can assume that $y \geq 0$ in the further study of \eqref{eq:mq-sigsig}. Recalling that when $y \geq 0$,
\begin{align}
\| y \|_{M_Q(\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap)} = \| y \|_{M_Q(\cap)} = \max\left\{2^k\| y \|_{\infty} , \big\| \sum_{j} y(j) \big\|_{\infty}\right\}\ ,
\end{align}
we have
\begin{align}
&\sup\left\{ \bra{\psi} \sum_j x(j) \otimes y(j)\ket{\psi} \;:\; 0 \leq y(j) \leq 2^{-k}\1,\, \sum_j y(j) \leq \1, \, \| \ket{\psi} \|_2 \leq 1 \right\}\notag\\
&= \sup\left\{ \sum_{j,l,l'} \lambda^{*}_{l} \lambda_{l'} \bra{u_l} x(j) \ket{u_{l'}} \bra{v_l} y(j) \ket{v_{l'}} \;:\; 0 \leq y(j) \leq2^{-k}\1,\, \sum_j y(j) \leq \1,\, \| \ket{\psi} \|_2 \leq 1 \right\}\ ,
\end{align}
where $\ket{\psi} = \sum_{l} \lambda_l \ket{u_l} \ket{v_l}$ is a Schmidt decomposition. Now using the fact that $\bra{v_l} y(j) \ket{v_{l'}} = \bra{v_{l'}} y(j)^T \ket{v_{l}}$ and writing $C = \sum_{l} \lambda_l \ket{v_l} \bra{u_l}$, we can write
\begin{align}
\| x \|_{M_Q(\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma)}=\sup\left\{{\rm Tr}\left[\sum_j x(j) C^* y(j)^T C \right]\;:\; 0 \leq y(j) \leq2^{-k}\1,\,\sum_j y(j) \leq \1,\, {\rm Tr}[C^* C] \leq 1 \right\}\ .
\end{align}
Moreover, since the transposition leaves the operator inequalities involving the \(y(j)\)'s invariant, we can further simplify to (setting $\sigma = C^* C$)
\begin{align}\label{eq:sigsig-sdp}
\sup\left\{{\rm Tr}\left[\sum_j x(j) \hat{y}(j) \right]\;:\; 0 \leq \hat{y}(j) \leq2^{-k}\sigma,\, \sum_j \hat{y}(j) \leq \sigma,\, {\rm Tr}[\sigma] \leq 1,\,\sigma \geq 0 \right\}\ .
\end{align}
This is an SDP and its dual is given by
\begin{align}\label{eq:sigsig-sdp-dual}
\inf\left\{\Big\|2^{-k}\sum_j A(j) + B\Big\|_\infty \;:\; x(j) \leq A(j) + B, \, A(j) \geq 0, \, B \geq 0 \right\}\ .
\end{align}
The program~\eqref{eq:sigsig-sdp} is clearly feasible and~\eqref{eq:sigsig-sdp-dual} is strictly feasible and thus strong duality is satisfied, i.e., the programs~\eqref{eq:sigsig-sdp} and~\eqref{eq:sigsig-sdp-dual} have the same value.
\end{proof}
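On small random instances the two programs can be solved directly; the following sketch (an illustrative check with toy dimensions, assuming the convex optimization package cvxpy is available) confirms that their optimal values coincide.
\begin{verbatim}
# Numerical check of strong duality between the primal SDP (eq:sigsig-sdp)
# and its dual (eq:sigsig-sdp-dual) on a random positive instance.
import numpy as np
import cvxpy as cp

Q, n, k = 3, 4, 1
rng = np.random.default_rng(2)
xs = []
for _ in range(n):
    m = rng.normal(size=(Q, Q))
    xs.append(m @ m.T)                     # random positive x(j)

sigma = cp.Variable((Q, Q), symmetric=True)  # primal variables
ys = [cp.Variable((Q, Q), symmetric=True) for _ in range(n)]
cons = [sigma >> 0, cp.trace(sigma) <= 1, sigma - sum(ys) >> 0]
cons += [y >> 0 for y in ys] + [2**-k * sigma - y >> 0 for y in ys]
primal = cp.Problem(
    cp.Maximize(sum(cp.trace(x @ y) for x, y in zip(xs, ys))), cons)
primal.solve()

As = [cp.Variable((Q, Q), symmetric=True) for _ in range(n)]  # dual vars
B = cp.Variable((Q, Q), symmetric=True)
cons = [B >> 0] + [A >> 0 for A in As]
cons += [A + B - x >> 0 for A, x in zip(As, xs)]
dual = cp.Problem(cp.Minimize(cp.lambda_max(2**-k * sum(As) + B)), cons)
dual.solve()
print(primal.value, dual.value)            # the two values agree
\end{verbatim}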
Such intersection norms for operator spaces have been extensively employed by Junge and his co-workers in their study of non-commutative $L_p$-spaces and their relation to free probability; see for instance the monograph~\cite{Junge:2010cg}.\footnote{We expect that many of Junge and his co-workers' techniques are applicable to questions regarding the stability of pseudorandom objects and hope that our work serves as a starting point for such investigations.} We first need a lemma relating the dual norm of $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap$ for positive elements to the smooth conditional min-entropy.
\begin{lemma}\label{lem:smoothmin-sigmanorm}
Let $\rho \in \mathrm{Mat}_{QN}$ be positive. Then, we have
\begin{align}
2^{k}\|\rho\|_{M_Q(\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma)}\leq\varepsilon\quad&\Rightarrow\quad H_{\min}^{2\sqrt{\varepsilon}}(N|Q)_\rho \geq k + \log(1/\varepsilon)\label{eq:smooth_min1}\\
H_{\min}^{\varepsilon}(N|Q)_\rho \geq k + \log(1/\varepsilon)\quad&\Rightarrow\quad2^{k}\|\rho\|_{M_Q(\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma)}\leq2\varepsilon\ .\label{eq:smooth_min2}
\end{align}
\end{lemma}
\begin{proof}
We first prove~\eqref{eq:smooth_min1}. Due to the fact that $\rho$ is positive, the optimal decomposition of $x$ with respect to Proposition \ref{prop:decomposition-positive} is one where only the positive term is non-zero. Hence, we have due to Proposition~\ref{prop:new-old-norm},
\begin{align}
\|\rho\|_{M_Q(\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma)}=\sup\big\{ |{\rm Tr}[\rho x]| \,:\, \norm{x}_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^k\ell_\infty,\ell_1\}} \leq 1 \big\} &= \sup\big\{ |{\rm Tr}[\rho x]| \,:\, x \geq 0,\,\norm{x}_{\cap\{2^k\ell_\infty,\ell_1\}} \leq 1\big\}\\
&= 2^{-k}\,\sup\left\{ {\rm Tr}[\rho x] \,:\,x \geq 0,\, {\rm Tr}_N(x) \leq 2^k\1, x \leq \1\right\}\ .
\end{align}
This is a semidefinite program, and by strong duality we get
\begin{align}
\sup\left\{ {\rm Tr}[\rho x] \,:\,x \geq 0,\,{\rm Tr}_N(x) \leq 2^k\1, x \leq \1\right\} =\inf\left\{{\rm Tr}[A_1] + 2^{k}{\rm Tr}[A_2] \,:\, \rho \leq A_1 + \1_N \otimes A_2, \; A_{1,2} \geq 0\right\}\ .
\end{align}
Hence, we find two positive matrices $A_1$ and $A_2$ such that ${\rm Tr}[A_1] \leq \varepsilon$, and ${\rm Tr}[A_2] \leq \varepsilon \, 2^{-k}$. We now consider
\begin{align}
\hat{\rho}:=B^* B\quad\mathrm{with}\quad B:=\rho^{1/2} (A_1 + \1 \otimes A_2)^{-1/2} \1 \otimes A_2^{1/2}\ .
\end{align}
We have that its trace is smaller than one,
\begin{align}
{\rm Tr}[\hat{\rho}] &= {\rm Tr}\left[\1 \otimes A_2^{1/2} (A_1 + \1 \otimes A_2)^{-1/2} \rho (A_1 + \1 \otimes A_2)^{-1/2} \1 \otimes A_2^{1/2}\right]\\
&= {\rm Tr}\left[\rho (A_1 + \1 \otimes A_2)^{-1/2} \1 \otimes A_2 (A_1 + \1 \otimes A_2)^{-1/2}\right]\\
&\leq {\rm Tr}[\rho]\\
&= 1\ ,
\end{align}
since $(A_1 + \1 \otimes A_2)^{-1/2} \1 \otimes A_2 (A_1 + \1 \otimes A_2)^{-1/2} \leq (A_1 + \1 \otimes A_2)^{-1/2} (A_1 + \1 \otimes A_2) (A_1 + \1 \otimes A_2)^{-1/2} = \1$. This implies that $\hat{\rho}$ is a sub-normalized state, and its min-entropy is at least $k + \log(1/\varepsilon)$ since we have $(A_1 + \1 \otimes A_2)^{-1/2} \rho (A_1 + \1 \otimes A_2)^{-1/2} \leq \1$ by construction and ${\rm Tr}[A_2] \leq \varepsilon \, 2^{-k}$. Simple rescaling of $A_2$ makes it into a density matrix, and the corresponding factor is picked up by the inner term. Moreover, let us consider its trace norm distance to $\rho$. First, we have using the bounded trace of $\rho$ and $B^* B$,
\begin{align}
\left\|\rho - B^*B\right\|_1 =\left\|(\rho^{1/2} - B^*)\rho^{1/2} + B^*(\rho^{1/2} - B)\right\|_1 \leq2\left\|\rho^{1/2} - B\right\|_2\ .
\end{align}
We then estimate further, using the Powers-Stoermer inequality in the last step,
\begin{align}
\norm{\rho^{1/2} - B}_2 &=\left\|\rho^{1/2} (A_1 + \1 \otimes A_2)^{-1/2} (A_1 + \1 \otimes A_2)^{1/2} - \rho^{1/2} (A_1 + \1 \otimes A_2)^{-1/2} \1 \otimes A_2^{1/2}\right\|_2\\
&\leq\left\|\rho^{1/2} (A_1 + \1 \otimes A_2)^{-1/2}\right\|_\infty\cdot\left\|(A_1 + \1 \otimes A_2)^{1/2} - \1 \otimes A_2^{1/2}\right\|_2\\
&\leq\sqrt{\norm{A_1}_1}\\
&\leq \sqrt{\varepsilon}\ ,
\end{align}
and the first statement follows.
To prove~\eqref{eq:smooth_min2}, let $\hat{\rho}$ be the state which achieves the smooth min-entropy, i.e., we have $\hat{\rho} \leq \varepsilon\,2^{-k}\1 \otimes \sigma$ and $\norm{\hat{\rho} - \rho}_1 \leq \varepsilon$. Let $x$ now be chosen such that $x \geq 0$, ${\rm Tr}_N(x) \leq 2^k\1$, $x \leq \1$. We then have
\begin{align}
{\rm Tr}[x \rho] = {\rm Tr}[x \hat{\rho}] + {\rm Tr}[x (\rho - \hat{\rho})] \leq \varepsilon \, 2^{-k} {\rm Tr}\big[\sigma {\rm Tr}_N[x]\big] + \varepsilon\,\norm{x}_\infty\leq2\varepsilon\ ,
\end{align}
and the assertion is proven.
\end{proof}
Expressing the smooth conditional min-entropy in terms of the $\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma$-norm (Lemma~\ref{lem:smoothmin-sigmanorm}) allows us to characterize quantum-proof condensers using operator space norms.
\condToNormQuantum*
\begin{proof}
We first prove~\eqref{eq:quantumcondenser1}. Let $\rho_{NQ} \in \mathrm{Mat}_Q(N)$ be a state with conditional min-entropy at least $k$, and let $\omega \in \mathrm{Mat}_Q$ such that $\rho_{NQ} \leq 2^{-k} \omega \otimes \1_N$. We proceed as in the proof of Theorem~\ref{thm:ext-to-norm-quantum} and define $\sigma = \frac{\rho_Q + \omega}{2}$. It follows that $\norm{\sigma^{-1/2} \rho \sigma^{-1/2}}_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}}\leq 2$ and hence we have due to Lemma~\ref{lem:sigmanormforpos} that there are positive matrices $A \in \mathrm{Mat}_Q(N)$, $B \in \mathrm{Mat}_Q$ such that
\begin{align}
\mathrm{Con}(\rho)=\sigma^{1/2} \mathrm{Con}\left(\sigma^{-1/2} \rho \sigma^{-1/2}\right) \sigma^{1/2}\leq \sigma^{1/2}\left(A + \1 \otimes B\right) \sigma^{1/2}\quad\mathrm{and}\quad\left(\sum_{y \in M} A(y) + 2^{k'} \,B\right) \leq \varepsilon \, \1\ .
\end{align}
Now take $x \in \mathrm{Mat}_Q(M)$, fulfilling $x \geq 0$, $x \leq \1$ and $\sum_{y \in M} x(y) \leq 2^{k'} \1$. Then, we have
\begin{align}
{\rm Tr}[\mathrm{Con}(\rho) x]\leq {\rm Tr}\left[\sigma^{1/2} (A + \1_M \otimes B) \sigma^{1/2} x\right]&\leq {\rm Tr}_Q\left[\sigma^{1/2} \sum_{y \in M} A(y) \sigma^{1/2}\right] + {\rm Tr}_Q\left[\sigma^{1/2} B \sigma^{1/2} \sum_{y \in M} x(y)\right]\\
&\leq{\rm Tr}\left[\sigma\left(\sum_{y \in M} A(y) + 2^{k'} \,B\right)\right]\\
&\leq\varepsilon\ .
\end{align}
and~\eqref{eq:quantumcondenser1} follows by Lemma~\ref{lem:smoothmin-sigmanorm}.
For proving~\eqref{eq:quantumcondenser2}, take $x \in \mathrm{Mat}_Q(N)$ with $\norm{x}_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}} \leq 1$. Due to Proposition~\ref{prop:decomposition-positive} we can find a decomposition of $x = x_1 - x_2 + i(x_3 - x_4)$ into positive terms fulfilling the same norm constraint. We will proceed to bound
\begin{align}
\big\|\sum_{y \in M} z(y) \otimes \mathrm{Con}(x_i)(y)\big\|_{(\infty;\infty)}\leq2\varepsilon\ ,
\end{align}
where $\norm{z}_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k'}\ell_\infty,\ell_1\}} \leq 2^{k'}$. This implies the assertion by operator space duality. Due to the fact that $\mathrm{Con}(x_i)(y)$ is positive, the optimal decomposition of $z$ with respect to Proposition~\ref{prop:decomposition-positive} is one where only the positive term is non-zero. Hence, it is enough to bound the expression above for positive $z$:
\begin{align}
\big\|\sum_{y \in M} z(y) \otimes \mathrm{Con}(x_i)(y)\big\|_\infty= \sup_{\ket{\psi} \in \C^{Q^2}} \bra{\psi} \sum_{y \in M} z(y) \otimes \mathrm{Con}(x_i)(y) \ket{\psi}= \sup_{C}{\rm Tr}\left[\sum_{y \in M} C z(y)^T C^*\, \mathrm{Con}(x_i)(y)\right]\ ,
\end{align}
where the last supremum is over all operators $C$ on $Q$ with singular values equal to the Schmidt coefficients of $\psi$. Due to the properties of $x_i$,
\begin{align}
\sum_{k \in N} {\rm Tr}_Q\left[C^* x_i(k) C\right] \leq 1\quad\mathrm{and}\quad C^* x_i(k) C \leq 2^{-k} C^* C\ ,
\end{align}
which implies that $\rho_i(k) = C^* x_i(k) C$ defines a sub-normalized classical-quantum state on $NQ$ having min-entropy at least $k$. In order to have a valid state, we define
\begin{align}
\bar{\rho_i} = \rho_i + (1-{\rm Tr}[\rho_i]) \frac{\1}{Q} \otimes \frac{\1}{N}\ .
\end{align}
As before, we can say that $\| \bar{\rho}_i \|_{(1;\infty)}\leq2^{-k}+\frac{1}{N}\leq2^{-k+1}$ and hence the min-entropy is at least $k-1$. It follows that $\mathrm{Con}(C^* x_i C) \leq \mathrm{Con}(\bar{\rho}_i) = \omega_i$, where $\omega_i$ is a state with $\varepsilon$-smooth min-entropy at least $k'$. Hence, we find by Lemma~\ref{lem:smoothmin-sigmanorm}
\begin{align}
{\rm Tr}\left[\sum_{y \in M} C z(y)^T C^* \, \mathrm{Con}(x_i)(y)\right]= {\rm Tr}\left[\sum_{y \in M} z(y)^T \, \mathrm{Con}(C^* x_i C)(y)\right] \,\leq\, {\rm Tr}\left[\sum_{y \in M} z(y)^T \omega_i(y)\right] \,\leq \, 2 \,\varepsilon\ .
\end{align}
\end{proof}
\subsection{Graph Theory}\label{sec:graph_theory}
\bipartitegraph*
\begin{proof}
Using the fact that $\Sigma\{2^{k'}\ell_{\infty},\ell_1\}$ is the dual norm of the norm $\cap\{2^{-k'}\ell_1,\ell_{\infty}\}$, we write
\begin{align}
&\left\|\mathrm{Con}:\cap\{2^{k}\ell_\infty, \ell_1\} \to \Sigma\{2^{k'}\ell_\infty, \ell_1\}\right\|\notag\\
&= \max\left\{\Big|\sum_{y \in M} \mathrm{Con}(f)_y \cdot g_y\Big| \;:\; \norm{f}_{\cap\{ 2^k \ell_\infty, \ell_1\}} \leq 1 ,\; \norm{g}_{\cap\{2^{-k'} \ell_1, \ell_{\infty}\}} \leq 1\,\right\} \\
&= \max\left\{ \Big| \frac{1}{D} \sum_{x \in N, y \in M, s \in D} \delta_{\Gamma(x,s)=y} f_x \cdot g_y \Big| \;:\; \norm{f}_{\cap\{ 2^k \ell_\infty, \ell_1\}} \leq 1 ,\; \norm{g}_{\cap\{2^{-k'} \ell_1, \ell_{\infty}\}} \leq 1\, \right\} \\
&= \frac{1}{D \cdot 2^k} \max\left\{ \Big| \sum_{x \in N, y \in M, s \in D} \delta_{\Gamma(x,s)=y} f_x \cdot g_y \Big| \;:\; \norm{f}_{\cap\{ \ell_\infty, 2^{-k} \ell_1\}} \leq 1 ,\; \norm{g}_{\cap\{2^{-k'} \ell_1, \ell_{\infty}\}} \leq 1\,\right\}\ ,
\end{align}
where $f_x$, $g_y$ denote the components of the vectors $f \in \Rl^N$, $g \in \Rl^M$. Because the matrix elements of the tensor are all positive, we can restrict the maximization to vectors with positive entries. Then, the norm conditions on the vector $f$ translate into $0\leq f_x \leq 1$, $\sum_x f_x \leq2^{k}$. However, the extreme points of this convex set are just the characteristic vectors of subsets of $N$ of size $2^{k}$. Due to the convex character of the objective function in both variables $f$ and $g$, we hence end up with the quadratic program as claimed.
\end{proof}
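For very small graphs this quadratic program can be evaluated by exhaustive enumeration over the characteristic vectors; the following sketch does so for a toy neighbor function (all sizes and the graph itself are illustrative assumptions).
\begin{verbatim}
# Brute-force evaluation of the quadratic program over subset indicator
# vectors f, g for a toy condenser graph (exponential in N and M).
from itertools import combinations

N, M, D, k, kp = 4, 4, 2, 1, 1
Gamma = {(x, s): (x + s) % M for x in range(N) for s in range(D)}  # toy graph

best = 0.0
for S in combinations(range(N), 2**k):       # subsets of N of size 2^k
    for T in combinations(range(M), 2**kp):  # subsets of M of size 2^k'
        hits = sum(Gamma[x, s] in T for x in S for s in range(D))
        best = max(best, hits / (D * 2**k))
print("bounded norm of Con via the graph program:", best)
\end{verbatim}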
\subsection{Bell inequalities}\label{sec:two_player}
As discussed in the review article~\cite{Brunner14}, Bell inequalities and two-player games are equivalent, and in the following we will take the game perspective. In particular, we are interested in a two-player game such that its classical value is related to the bounded norm of the condenser as in Proposition~\ref{prop:cond-to-norm}, and its entangled value relates to the completely bounded norm of the condenser as in Theorem~\ref{thm:cond-to-norm-quantum}. The game is as follows. There are the two players Alice and Bob, and a referee. The bipartite graph $G=(N,M,V,D)$ as defined by the condenser $\mathrm{Con}=\{f_{s}\}_{s\in D}$ via the neighbor function in~\eqref{eq:neighbor_graph} is known to all parties. First, the referee samples $\gamma=2^{n-k}$ elements $x_1,\dots,x_\gamma$ out of $N$ uniformly at random (with replacement), and likewise $\gamma'=2^{m-k'}$ independent and uniformly chosen random entries $y_1,\dots,y_{\gamma'}$ out of $M$, as well as a value of the seed $s$ according to the uniform distribution. We collect these random indices into vectors $\vec{x} \in N^\gamma$ and $\vec{y} \in M^{\gamma'}$; the questions for Alice and Bob, respectively. Alice and Bob then provide indices $1\leq \alpha \leq \gamma$ and $1\leq \beta \leq \gamma'$ of these vectors as answers. They win if
\begin{align}
\Gamma(x_\alpha,s) = y_\beta\ .
\end{align}
We use the notation $(G;2^{k},2^{k'})$ for this game. A classical strategy amounts to a pair of deterministic mappings $f: N^\gamma \to \{1,\dots,\gamma\}$ and $g:M^{\gamma'} \to \{1,\dots,\gamma'\}$ (independent of the value of $s$), and the classical value of the game is\footnote{Instead of deterministic functions $f$ and $g$, we could also allow for shared randomness, which does not increase the value of the game.}
\begin{align}
\omega\left(G;2^{k},2^{k'}\right):=\sup\Big\{ {\mathbb E}_s {\mathbb E}_{\vec{x},\vec{y}} \, \Gamma\big[\vec{x}(f(\vec{x})),s;\vec{y}(g(\vec{y}))\big]\Big\}\ ,
\end{align}
where the supremum is over all classical strategies, and
\begin{align}
\Gamma[x,s;y]:=
\begin{cases}
1 & \mbox{for}\;\Gamma(x,s) = y\\
0 & \mbox{otherwise}
\end{cases}
\end{align}
If Alice and Bob are allowed to share an entangled state of local dimension $Q$, they are not restricted to classical deterministic strategies. Instead, each player has a set of positive operator valued measures (POVM's), indexed by his questions, acting on his share of the entangled state. That is, Alice and Bob each have a set of positive operators in $\mathrm{Mat}_Q$ labeled by the questions $\vec{x}$ and $\vec{y}$ and possible outcomes $\alpha$ and $\beta$,
\begin{align}
\hat{p}(\alpha;\vec{x})\geq0\quad&\mathrm{with}\quad\sum_\alpha \hat{p}(\alpha;\vec{x})\leq\1\label{eq:quantum_strategy1}\\
\hat{q}(\beta;\vec{y})\geq0\quad&\mathrm{with}\quad\sum_\beta \hat{q}(\beta;\vec{y})\leq\1\label{eq:quantum_strategy2}
\end{align}
for all questions $\vec{x}$ and $\vec{y}$ (again independent of the value of $s$). Note that we allow for incomplete POVM's (they do not necessarily sum to the identity), since we can always include a dummy answer into the game, associated to the missing normalization, thereby not changing the value of the game. For fixed $Q\in\mathbb{N}$ we define
\begin{align}
\omega^*_Q\left(G;2^{k},2^{k'}\right):=\sup \left\{ {\mathbb E}_s {\mathbb E}_{\vec{x},\vec{y}} \, \sum_{\alpha,\beta}\,\Gamma\big[\vec{x}(\alpha),s;\vec{y}(\beta)\big]\cdot\bra{\psi} \hat{p}(\alpha;\vec{x}) \otimes \hat{q}(\beta;\vec{y})\ket{\psi}\,\right\}\ ,
\end{align}
where the supremum is over all bipartite pure state vectors $\ket{\psi}\in\C^Q \otimes\C^Q$, and all corresponding quantum strategies as in~\eqref{eq:quantum_strategy1}--\eqref{eq:quantum_strategy2}. The supremum over all bipartite pure state vectors just gives the operator norm and hence we have
\begin{align}
\omega^*_Q\left(G;2^{k},2^{k'}\right)=\sup\left\{\Big\|{\mathbb E}_s {\mathbb E}_{\vec{x},\vec{y}}\,\sum_{\alpha,\beta}\,\Gamma\big[\vec{x}(\alpha),s;\vec{y}(\beta)\big]\, \hat{p}(\alpha;\vec{x}) \otimes \hat{q}(\beta;\vec{y})\Big\|_{(\infty;\infty)}\right\}\ .
\end{align}
Finally, the entangled value of the game is given by
\begin{align}
\omega^*\left(G;2^{k},2^{k'}\right):=\sup_{Q\in\mathbb{N}}\omega^*_Q\left(G;2^{k},2^{k'}\right)\ .
\end{align}
If we ask condensers only to be quantum-proof against $Q$-dimensional quantum side information the relevant quantity to bound becomes (as in Theorem~\ref{thm:cond-to-norm-quantum})
\begin{align}
\|[\mathrm{Con}]\|_{Q,\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma\{2^{k'}\ell_\infty,\ell_1\}}:=\sup_{\stackrel{P_{N}\in\mathrm{Mat}_{Q}(N)}{\|P_{N}\|_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}}\leq1}}\|[\mathrm{Con}](P_{N})\|_{\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma\{2^{k'}\ell_\infty,\ell_1\}}\ .
\end{align}
For the proof of Theorem~\ref{thm:condensergame} we will show that for any dimension $Q\in\mathbb{N}$,
\begin{align}\label{eq:condenser_basicthm}
\omega^{*}_Q(G;2^{k},2^{k'})\leq2^{-k'}\cdot\|[\mathrm{Con}]\|_{Q,\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}\to\Sigma \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \Sigma\{2^{k'}\ell_\infty,\ell_1\}}\leq c\cdot\omega^{*}_Q(G;2^{k},2^{k'})\ ,
\end{align}
and with this we prove all the claims at once (by letting $Q=1$ for~\eqref{eq:condensergame_b}, $Q\to\infty$ for~\eqref{eq:condensergame_cb}, or leaving $Q$ free). For the second inequality in~\eqref{eq:condenser_basicthm} we start from elements $p\in\mathrm{Mat}_Q(N)$ and $q\in \mathrm{Mat}_Q(M)$, satisfying
\begin{align}
\sum_x p(x) \leq \frac{2^{k}}{4}\cdot\1,\;p(x)\leq\1\quad\text{and}\quad\sum_y q(y)\leq\frac{2^{k'}}{4}\cdot\1,\;q(y)\leq\1\ ,
\end{align}
and construct POVM's $\hat{p}(\alpha;\vec{x})$ and $\hat{q}(\beta;\vec{y})$ associated to the questions and answers of the game such that
\begin{align}
\frac{1}{64}\cdot\frac{4}{2^{k}}\cdot\frac{4}{2^{k'}}\Big\|\sum_{x,y} \,{\mathbb E}_s\,\Gamma[x,s;y]\,p(x) \otimes q(y)\Big\|_{(\infty;\infty)}\leq\Big\|{\mathbb E}_s {\mathbb E}_{\vec{x},\vec{y}} \sum_{\alpha, \beta}\Gamma\big[\vec{x}(\alpha),s;\vec{y}(\beta)\big]\,\hat{p}(\alpha;\vec{x})\otimes \hat{q}(\beta;\vec{y})\Big\|_{(\infty;\infty)}\ .
\end{align}
The supremum of the left hand side over all such $q$ and $p$ is, by operator space duality, just $1/64$ times $\norm{\mathrm{Con}}_\mathrm{cb}$, and the statement follows by taking the expectation over the value of the seed $s$ on both sides. Conversely, the first inequality in~\eqref{eq:condenser_basicthm} is proven by constructing operators satisfying the $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap$ norm estimates out of POVM elements.
The POVM's $\hat{q}(\beta;\vec{y})$ and $\hat{p}(\alpha;\vec{x})$ are built in a similar manner. Let $\vec{x}$ denote again the vector with entries corresponding to the choice of $\gamma$ independent random indices in $N$. For $0\leq \alpha \leq \gamma$, $1 \leq l \leq \alpha$ or \(0 \leq \beta \leq \gamma'\), \(1 \leq l' \leq \beta\) we define for $p\in\mathrm{Mat}_Q(N)$ and $q\in \mathrm{Mat}_Q(M)$,
\begin{align}\label{eq:rt1}
r_{\vec{x},l,\alpha}:=\prod_{l\leq k \leq \alpha} \big(\1 - p[\vec{x}(k)]\big)^{1/2}\ ,\quad t_{\vec{y},l',\beta}:=\prod_{l'\leq k \leq \beta} \big(\1 - q[\vec{y}(k)]\big)^{1/2}
\end{align}
as well as
\begin{align}\label{eq:rt2}
r_{\vec{x},\alpha+1,\alpha} :=\1 \;,\;\;r_{\vec{x},\alpha} := r_{\vec{x},1,\alpha}\quad\mathrm{and}\quad t_{\vec{y},\beta+1,\beta}:= \1 \;,\;\;t_{\vec{y},\beta} := t_{\vec{y},1,\beta}\ .
\end{align}
The POVM's are then constructed as
\begin{align}
\hat{p}(\alpha;\vec{x}):= r_{\vec{x},\alpha-1}^*\,p[\vec{x}(\alpha)]\,r_{\vec{x},\alpha-1}\quad\mathrm{and}\quad\hat{q}(\beta;\vec{y}):= t_{\vec{y},\beta-1}^*\,q[\vec{y}(\beta)]\,t_{\vec{y},\beta-1}\ .
\end{align}
The following lemma asserts that these are valid POVM's and also provides some estimates for remaining terms, to be used later on. It is a special case of~\cite[Lemma 6.7]{Junge:2006wt}, but for the convenience of the reader we restate it, and provide an adapted proof in Appendix~\ref{app:missing}.
\begin{restatable}{lemma}{jungetechnical}\label{lem:Jungelemma}
Let $p\in\mathrm{Mat}_Q(N)$, and $\alpha$, $\gamma$, $r_{\vec{x},\alpha}$ as in~\eqref{eq:rt1}--\eqref{eq:rt2}. Then, we have
\begin{align}
&\sum_{\alpha=1}^{\gamma} r_{\vec{x},\alpha-1}^* p[\vec{x}(\alpha)] r_{\vec{x},\alpha-1} \leq \1\label{eq:junge1}\\
&\sum_{\alpha=1}^{\gamma} {\mathbb E}_{x_1,\dots,x_{\alpha-1}} \left(\1 - r_{\vec{x},\alpha-1}\right) \leq \frac{\gamma}{8}\cdot\1\label{eq:junge2}\\
&\sum_{\alpha=1}^{\gamma} {\mathbb E}_{x_1,\dots,x_{\alpha-1}} [\1 - r_{\vec{x},\alpha-1}^*] [\1 - r_{\vec{x},\alpha-1}] \leq \frac{\gamma}{4}\cdot\1\ .\label{eq:junge3}
\end{align}
Similar estimates hold for $q\in\mathrm{Mat}_Q(M)$ with the replacements $\alpha\mapsto\beta$, $\gamma\mapsto\gamma'$, and $r_{\vec{x},\alpha}\mapsto t_{\vec{y},\beta}$ as defined in~\eqref{eq:rt1}--\eqref{eq:rt2}.
\end{restatable}
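The first estimate, \eqref{eq:junge1}, follows from a telescoping argument and can also be checked numerically; the following sketch (toy dimensions, random operators, requiring scipy for the matrix square root) builds the operators $r_{\vec{x},\alpha}$ and verifies that the constructed POVM elements sum to at most the identity.
\begin{verbatim}
# Check of eq:junge1: sum_alpha r*_{a-1} p[x(a)] r_{a-1} <= 1.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)
Q, nN, gamma = 3, 4, 5
ps = []
for _ in range(nN):
    m = rng.normal(size=(Q, Q))
    w = m @ m.T
    ps.append(w / (2 * np.linalg.eigvalsh(w).max()))  # 0 <= p(x) <= 1

xvec = rng.integers(nN, size=gamma)   # a fixed question vector
r = np.eye(Q)                         # r_{x,0} = identity
total = np.zeros((Q, Q))
for alpha in range(gamma):
    total += r.T @ ps[xvec[alpha]] @ r    # POVM element of answer alpha+1
    r = np.real(sqrtm(np.eye(Q) - ps[xvec[alpha]])) @ r
assert np.linalg.eigvalsh(total).max() <= 1 + 1e-9
\end{verbatim}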
\begin{proof}[Proof of Theorem~\ref{thm:condensergame}]
We start with the direction game $\implies$ condenser. The main idea of the proof is similar to the proof of~\cite[Proposition 6.9]{Junge:2006wt}. According to the rule
\begin{align}
q = t^* q t + (1-t^*) q + q(1-t) - (1-t^*) q (1-t) \leq t^* q t + (1-t^*) q + q(1-t) + (1-t^*) q (1-t)\ ,
\end{align}
we split the following sum into $16$ terms,
\begin{align}\label{eq:condensergameexpandterms}
&\sum_{\alpha,\beta}\Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \, p[\vec{x}(\alpha)] \otimes q[\vec{y}(\beta)] \notag\\
&\leq \sum_{\alpha,\beta} \Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \, r_{\vec{x},\alpha-1}^* p[\vec{x}(\alpha)] r_{\vec{x},\alpha-1} \otimes t_{\vec{y},\beta-1}^* q[\vec{y}(\beta)] t_{\vec{y},\beta-1} \notag\\
&\quad+ \sum_{\alpha,\beta} \Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \, [\1 - r_{\vec{x},\alpha-1}^*] p[\vec{x}(\alpha)] \otimes [\1 - t_{\vec{y},\beta-1}^*] q[\vec{y}(\beta)] \notag\\
&\quad+\sum_{\alpha,\beta} \Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \, p[\vec{x}(\alpha)] [\1 - r_{\vec{x},\alpha-1}] \otimes q[\vec{y}(\beta)] [\1 - t_{\vec{y},\beta-1}] \notag\\
&\quad+ \dots \text{ 12 other terms } \notag\\
&\quad+ \sum_{\alpha,\beta} \Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \,[\1 - r_{\vec{x},\alpha-1}^*] p[\vec{x}(\alpha)] [\1 - r_{\vec{x},\alpha-1}] \otimes [\1 - t_{\vec{y},\beta-1}^*] q[\vec{y}(\beta)] [\1 - t_{\vec{y},\beta-1}]\ .
\end{align}
We now take the expectation over the random choices $\vec{x}$ and $\vec{y}$ and the seed value \(s\). The left hand side reduces to
\begin{align}
{\mathbb E}_s {\mathbb E}_{\vec{x},\vec{y}} \sum_{\alpha,\beta}\Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \, p[\vec{x}(\alpha)]\otimes q[\vec{y}(\beta)]&= {\mathbb E}_s {\mathbb E}_{x_1,\dots,x_\gamma,y_1,\dots,y_{\gamma'}} \sum_{\alpha,\beta} \Gamma[x_\alpha,s;y_\beta]\,p[x_\alpha]\otimes q[y_\beta]\\
&= \gamma \,\gamma' \, {\mathbb E}_s {\mathbb E}_x {\mathbb E}_y \Gamma[x,s;y] p(x)\otimes q(y)\ .
\end{align}
The first term on the right hand side of \eqref{eq:condensergameexpandterms} is the Bell (or game) operator whose norm is equal to the value of the strategy given by the POVM's $\hat{q}(\beta;\vec{y})$ and $\hat{p}(\alpha;\vec{x})$. Let us examine the remaining terms. Expressions such as the second or third one are evaluated to
\begin{align}
&\sum_{\alpha,\beta}{\mathbb E}_s {\mathbb E}_{x_\alpha} {\mathbb E}_{y_\beta}\Big\{\Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \, \left({\mathbb E}_{x_1,\dots,x_{\alpha-1}}[\1 - r_{\vec{x},\alpha-1}^*]\right) p[\vec{x}(\alpha)] \otimes \left({\mathbb E}_{y_1,\dots,y_{\beta-1}}[\1 - t_{\vec{y},\beta-1}^*]\right) q[\vec{y}(\beta)]\Big\} \notag\\
&\leq \frac{\gamma \, \gamma'}{8 \cdot 8} \norm{{\mathbb E}_s {\mathbb E}_x {\mathbb E}_y \Gamma[x,s;y] p(x) \otimes q(y)}_{(\infty;\infty)}\cdot\1\ ,
\end{align}
where we applied Lemma~\ref{lem:Jungelemma}, \eqref{eq:junge2}, to the operators in the brackets. Note that we had to use that the estimates are independent of the seed value. Terms involving $[\1 - r_{\dots}]$ or its starred version on both sides of $p[\dots]$ (and similarly those involving $[\1 - t_{\dots}]$ on both sides of $q[\dots]$) are estimated as follows
\begin{align}
&\sum_{\alpha,\beta}{\mathbb E}_s {\mathbb E}_{x_\alpha} {\mathbb E}_{y_\beta} \Big\{\Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \,{\mathbb E}_{x_1,\dots,x_{\alpha-1}} \left\{[\1 - r_{\vec{x},\alpha-1}^*] p[\vec{x}(\alpha)] [\1 -r_{\vec{x},\alpha-1}]\right\}\otimes{\mathbb E}_{y_1,\dots,y_{\beta-1}} \left\{[\1 - t_{\vec{y},\beta-1}^*] q[\vec{y}(\beta)] [\1 - t_{\vec{y},\beta-1}]\right\}\Big\} \notag\\
&\leq \norm{{\mathbb E}_s {\mathbb E}_x {\mathbb E}_y \Gamma[x,s;y] p(x) \otimes q(y)}_{(\infty;\infty)}\cdot\big\|\sum_\alpha {\mathbb E}_{x_1,\dots,x_{\alpha-1}} [\1 - r_{\vec{x},\alpha-1}^*] [\1 - r_{\vec{x},\alpha-1}]\big\|_{(\infty;\infty)}\notag\\
&\quad\,\,\cdot\max_s \,\norm{\sum_\beta{\mathbb E}_{y_1,\dots,y_{\beta-1}} [\1 - t_{\vec{y},\beta-1}^*] [\1 - t_{\vec{y},\beta-1}]}_{(\infty;\infty)}\cdot\1\\
&\leq \frac{\gamma \, \gamma'}{4 \cdot 4} \norm{{\mathbb E}_s {\mathbb E}_x {\mathbb E}_y \Gamma[x,s;y] p(x) \otimes q(y)}_{(\infty;\infty)}\cdot\1\ .
\end{align}
Here we used that for completely positive maps $\phi$ and positive operators $a$ it holds that $\phi(a) \leq \norm{a} \,\phi(\1)$, which we applied to the cp maps given by the Kraus operators \(\1 - r_{\vec{x},\alpha-1}\) and \(\1 - t_{\vec{y},\beta-1}\). The estimate then followed by applying Lemma~\ref{lem:Jungelemma}, \eqref{eq:junge3}, and again using that the estimates are independent of the seed value. Estimating all cross terms according to the two strategies leads to the final estimate
\begin{align}
\frac{1}{4}\cdot2^{-k-k'}\big\|\sum_{x,y} {\mathbb E}_s \Gamma[x,s;y] p(x) \otimes q(y)\big\|_{(\infty;\infty)}\,&=\, \frac{\gamma \,\gamma'}{4} \, \norm{{\mathbb E}_s {\mathbb E}_x {\mathbb E}_y \Gamma[x,s;y] p(x) \otimes q(y)}_{(\infty;\infty)}\\
&\leq\big\|\sum_{\alpha,\beta} {\mathbb E}_s {\mathbb E}_{\vec{x},\vec{y}} \Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \,\hat{p}(\alpha;\vec{x}) \otimes \hat{q}(\beta;\vec{y})\big\|_{(\infty;\infty)}\ .
\end{align}
For the converse part, let $\hat{q}(\beta;\vec{y})$ and $\hat{p}(\alpha;\vec{x})$ be arbitrary quantum strategies. We perform the following transformation
\begin{align}
{\mathbb E}_s {\mathbb E}_{\vec{x},\vec{y}} \sum_{\alpha, \beta}\Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \hat{p}(\alpha;\vec{x}) \otimes \hat{q}(\beta;\vec{y})= \sum_{x'\in N, y' \in M} \,{\mathbb E}_s\, \Gamma[x',s;y'] \sum_\alpha {\mathbb E}_{\vec{x}} \left[ \delta_{\vec{x}(\alpha) = x'} \hat{p}(\alpha;\vec{x})\right] \otimes \sum_\beta {\mathbb E}_{\vec{y}} \left[ \delta_{\vec{y}(\beta) = y'} \hat{q}(\beta;\vec{y})\right]\ .
\end{align}
We now show that the collection of operators
\begin{align}\label{eq:definecondenseropsthroughbellops}
p(x')= \sum_\alpha {\mathbb E}_{\vec{x}} \left[ \delta_{\vec{x}(\alpha) = x'} \hat{p}(\alpha;\vec{x})\right]\quad\mathrm{and}\quad q(y')= \sum_\beta {\mathbb E}_{\vec{y}} \left[ \delta_{\vec{y}(\beta) = y'} \hat{q}(\beta;\vec{y})\right]
\end{align}
satisfy the $\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap$-norm estimates
\begin{align}\label{eq:gametocondenserestimates}
\sum_{x'} p(x') \leq \1\quad\mathrm{and}\quad p(x') \leq2^{-k}\1\quad\mathrm{plus}\quad\sum_{y'} q(y') \leq \1\quad\mathrm{and}\quad q(y') \leq2^{-k'}\1\ .
\end{align}
Hence, we have
\begin{align}
&\Big\|{\mathbb E}_s {\mathbb E}_{\vec{x},\vec{y}} \sum_{\alpha, \beta} \Gamma[\vec{x}(\alpha),s;\vec{y}(\beta)] \hat{p}(\alpha;\vec{x}) \otimes \hat{q}(\beta;\vec{y})\Big\|_{(\infty;\infty)}\notag\\
&\leq \sup\left\{\Big\| \sum_{x,y} \,{\mathbb E}_s \Gamma[x,s;y] p(x) \otimes q(y)\Big\|_{(\infty;\infty)}\;:\;\norm{p}_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k}\ell_\infty,\ell_1\}} \leq 1\,,\; \norm{q}_{\cap \nonscript\mskip-3mu \cdot \nonscript\mskip-3mu \cap\{2^{k'}\ell_\infty,\ell_1\}} \leq 1 \right\}\ ,
\end{align}
and again operator space duality provides the last argument.
To show the first set of estimates in~\eqref{eq:gametocondenserestimates}, note
\begin{align}
\sum_{x'} \sum_\alpha {\mathbb E}_{\vec{x}} \left[ \delta_{\vec{x}(\alpha) = x'} \hat{p}(\alpha;\vec{x})\right] = \sum_\alpha {\mathbb E}_{\vec{x}} \left[ \sum_{x'} \delta_{\vec{x}(\alpha) = x'} \hat{p}(\alpha;\vec{x})\right] = {\mathbb E}_{\vec{x}} \Big[ \sum_\alpha \, \hat{p}(\alpha;\vec{x}) \Big] \leq \1 \,,
\end{align}
and similarly $\sum_{y',\beta} {\mathbb E}_{\vec{y}} \left[ \delta_{\vec{y}(\beta) = y'} \hat{q}(\beta;\vec{y})\right] \leq \1$. Moreover, we have $\hat{p}(\alpha;\vec{x}) \leq \1$ and hence
\begin{align}
\sum_\alpha {\mathbb E}_{\vec{x}} \left[ \delta_{\vec{x}(\alpha) = x'} \hat{p}(\alpha;\vec{x})\right] \leq \sum_\alpha {\mathbb P}[x_\alpha = x'] \,\1 = \frac{\gamma}{N} \1 =2^{-k}\1\ ,
\end{align}
and again similarly $\sum_\beta{\mathbb E}_{\vec{y}} \left[ \delta_{\vec{y}(\beta) = y'} \hat{q}(\beta;\vec{y})\right]\leq2^{-k'}\1$.
\end{proof}
\section*{Acknowledgments}
We acknowledge discussions with Matthias Christandl, Fabian Furrer, Patrick Hayden, Christopher Portmann, Renato Renner, Oleg Szehr, Marco Tomamichel, Thomas Vidick, Stephanie Wehner, Reinhard Werner, and Andreas Winter. Part of this work was done while OF and VBS were visiting the Institute for Quantum Information and Matter at Caltech and we would like to thank John Preskill and Thomas Vidick for their hospitality. Most of this work was done while OF was at ETH Zurich supported by the European Research Council grant No. 258932. Additional funding was provided by the EU under the project ``Randomness and Quantum Entanglement'' (RAQUEL). VBS was supported by an ETH postdoctoral fellowship.
\section{Introduction}
\begin{figure}[t]
\begin{center}
\centering
\includegraphics[width=.7\textwidth]{./intro}\\
\caption{ Given an image and initial region labels obtained by the classifiers, we aim to improve the labels using constraints learned from the data. (a) shows a query image, (b) shows the human-annotated image (ground truth), (c) shows labels obtained by the classifiers, and (d) shows our results after enforcing constraints. }\label{intro}
\end{center}
\end{figure}
Semantic segmentation is one of the essential tasks in interpreting and analyzing an image; object types and their placement in the scene provide substantial information about the 3D world captured in the image. The constraints implied by the relations between different types of objects and the geometry of the objects in the scene can aid in processing and understanding images. The effectiveness and advantage of using world knowledge in understanding images have been explored and assessed in early vision research \cite{parma1981experiments}, and we want to exploit this concept to label the pixels of an image.
Labeling segments in an image can be considered as assigning labels to a set of variables, which can be pixels or groups of pixels (super-pixels), while considering (or enforcing) the dependency among the labels.
Several approaches use linear models for this type of problem. Given the features of each super-pixel as input, the goal is to use inference to find the best assignment of labels to the super-pixels as output.
Many learning methods have been proposed for structured modeling; they mostly attempt to model the dependency among labels during the learning process by optimizing a global objective function (such as Markov Random Field and Conditional Random Field formulations). However, for efficiency and tractability, these methods are confined to encoding only local relationships.
Non-local dependencies can be encoded in such a model as well, but the model then needs to learn more parameters (e.g., in graphical models more edges and weights need to be included to model long-term dependencies). This can be done by infusing knowledge into the model as constraints, by using a fully connected graph to capture all possible interactions, or by employing a higher-order representation; however, in all cases the complexity of the method during learning and inference increases.
\begin{figure*}[t]
\begin{center}
\centering
\includegraphics[width=.98\textwidth]{./over_view}\\
\caption{ An overview of the proposed approach. For training, we begin by segmenting images into super-pixels and extracting the feature matrix. Then classifiers (extreme gradient boosting trees) are trained. We also compute the scene-label association matrix to capture global context. In the inference part, during testing, the label scores for a given image are obtained via the classifier results. Finally, the label scores are updated by applying constraints learned from knowledge-based rules through the optimization function by Integer Programming. }\label{overview}
\end{center}
\end{figure*}
In this paper, we propose to use the knowledge of relevant interactions between labels {\em directly} in the model, instead of {\em indirectly} via learning. In doing so, we benefit from the prior knowledge in the inference stage by adding constraints that should be satisfied. As a result, we apply inference only at test time, and we do not need to solve any inference problem during the training process. Therefore, we can use any training algorithm and apply a constrained inference model; that way the features and constraints can be distinguished.
We use a data-driven approach to extract rules from the training data, which are later used to generate expressive constraints. Since the constraints are formed as Boolean functions, they can be represented as logical expressions. By analyzing constraints extracted from the data, we discovered that we need to model soft constraints in addition to the hard constraints. For instance, most of the time mountain and building are not seen together, but this is not always the case. Thus, we use slack variables in the objective function to model the soft constraints. To solve the inference problem with expressive constraints, we use an Integer Programming formulation.
By exploiting declarative constraints, we eliminate the need for re-training the model for new incoming data, because we can simply add more constraints in the inference part without changing the prediction problem. Furthermore, we exploit the global context of the image by learning scene-label association weights to increase the probabilities of the most confident labels and limit the number of labels that need to be explored.
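To make the scene-label association concrete, the following sketch shows one way such weights can be estimated from training data (this is an illustrative implementation; the scene categories, counts, and smoothing constant are assumptions, not the exact procedure used in our experiments).
\begin{verbatim}
# Sketch: estimating scene-label association weights from training data.
import numpy as np

def scene_label_association(scene_ids, label_histograms, n_scenes, n_labels):
    """Row-normalized matrix A[s, l] ~ P(label l | scene type s)."""
    A = np.zeros((n_scenes, n_labels))
    for s, h in zip(scene_ids, label_histograms):
        A[s] += h                      # accumulate per-scene label counts
    A += 1e-6                          # smoothing to avoid zero weights
    return A / A.sum(axis=1, keepdims=True)

# At test time, classifier scores of a super-pixel can be re-weighted by
# the association row of the predicted scene type before running inference.
\end{verbatim}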
In order to demonstrate the performance of our method, we report experimental results on three benchmark datasets: SIFTflow~\cite{liu2011sift}, MSRC2~\cite{shotton2009textonboost}, and LMSun~\cite{tighe2013finding}.
In summary, we make the following contributions:
\begin{itemize}
\item We use the extreme gradient boosting algorithm to learn efficient classifiers to label local features, with no need to train detectors or expensive classifiers.
\item We improve the scores of super-pixels by combining local classifier results and global scene information obtained by a learned scene-label association.
\item We incorporate high-level knowledge of relations between labels by imposing rule-based constraints in an Integer Programming problem.
\end{itemize}
The rest of the paper is organized as follows: Section 2 reviews related work on scene labeling, Section 3 describes our proposed method in detail, Section 4 presents the experiments and evaluation of our method, and finally we conclude the paper.
\section {Related Work}
Scene labeling has been the interest of many research works recently. Proposed methods differ in the features and descriptors they use, the primitive elements (pixels, patches or regions), the employed classifiers, and how context techniques are utilized. Conditional Random Fields~\cite{ladicky2010and} have been used in a vast number of methods. These works mainly use appearance (local features) as the unary potential and smoothness between neighboring elements as the pairwise term~\cite{shotton2009textonboost}. Using higher-order CRFs to integrate potentials of features at different levels (pixels and super-pixels) has also been explored~\cite{russell2009associative}, \cite{gould2009decomposing}. In addition to local features, some methods benefit from object detectors and combine the results from detectors and context information \cite{ladicky2010and}, \cite{tighe2013finding}.
Other approaches transfer labels from a dataset of known labels, a retrieval set, to the segments of the query image. To this aim, for a given image, a nearest neighbor algorithm is used to retrieve similar images from sample data, then a Markov random field (MRF) model is applied to pixels (or super-pixels) in the image to label the pixels~\cite{liu2011sift}, \cite{tighe2010superparsing} and \cite{tighe2014scene}.
Many works are extensions of this type of labeling. For example, in \cite{eigen2012nonparametric} the weights of descriptors are learned in an off-line manner to reduce the impact of incorrectly retrieved super-pixels. Also, \cite{singh2013nonparametric} proposed to use a locally adaptive distance metric to find the relevance of features for small patches in the image and to transfer the labels from retrieved candidates to small patches of the image. In \cite{gould2012patchmatchgraph}, instead of using a retrieval set to transfer the labels, a graph of dense overlapping patch correspondences is constructed, and the query image is labeled by using established patch correspondences. \\
In \cite{isola2013scene} an image is represented as a collage of warped, layered objects obtained from reference images. The scene is analyzed through synthesizing this information. For a test image, a dictionary of object segment candidates that match the image is retrieved from samples; then the image is represented by combining these matches. To do so, a dataset of labeled exemplars is required. In \cite{guo2014labeling} the authors proposed to find the bounding boxes of the objects using object detectors; then regions are classified by combining information from detectors and surface occlusions. They also use RGB-depth data to understand the scene.
In some other works, context information is employed in the modeling of the problem by utilizing global features of the image or incorporating the co-occurrence of the labels in training~\cite{vu2013improving}.
Recently, deep learning techniques have been used in scene labeling. For instance, in \cite{farabet2012scene} multi-scale features are computed for each pixel in the image, and the deep network aggregates feature maps and labels the regions with the highest scores.
These models need extensive data for training; in the scene labeling task, pixel-wise annotation is expensive. Also, unlike expressive models, insight into the relations between labels is hard to achieve \cite{farabet2012scene}, \cite{socher2011parsing}. \\
Most of these approaches neglect to use domain knowledge and the structure implied by the images, i.e., the interactions and dependencies between different parts of the image, geometrically (spatially) or conceptually, or they only consider the smoothness of neighboring pixels. In contrast, early research in computer vision (e.g., VISIONS \cite{parma1981experiments}) tried to interpret scenes using knowledge of the structure of the image. In this paper, we aim to incorporate modeling constraints into inference to leverage this type of information.
Previous works mainly use fully connected graphs to model long-distance dependencies, in contrast to local potentials such as MRF models, in which pairwise terms are usually based on the first-order Markov assumption to reduce the complexity of inference. However, increasing the number of edges between nodes in a graph causes the inference to be computationally expensive. Our goal is to use knowledge-based information to discard the unnecessary edges and keep the model simple. We aim to capture the real and common dependencies using rules extracted from the data instead of letting the CRF find the weights for these dependencies. Also, our method is general in the sense that various types of relations can be formulated in the same framework of rules, such as geometrical dependency, non-coexistence, co-occurrence, presence, and adjacency. In other works (e.g., \cite{ladicky2010graph}, \cite{roy2014scene} and \cite{rabinovich2007objects}), only a subset of these constraints is modeled, and as hard constraints, whereas the rules extracted from real data show that some of the constraints are not hard constraints. Therefore, we use soft constraints for those rules that are valid most of the time, but not always.
\section{Approach}
Our system consists of two main phases. The first phase consists of off-the-shelf parts including feature extraction and classifier training based on local features of the sample training images.
The second phase is the inference, in which for a given query image, using scores computed by the classifiers for each possible label, an objective function is maximized such that the constraints are not violated. An overview of our proposed approach is shown in figure~\ref{overview}.
In training, first we segment images using efficient graph-based segmentation \cite{felzenszwalb2004efficient}.
We use this specific method to be consistent with the other methods we compare against. Next, for each super-pixel, local features, including SIFT, color histogram, mean and standard deviation of color, area, and texture, are extracted. Given these local features, classifiers (extreme gradient boosted trees) are trained to label super-pixels using their local features.
In the inference part, we update the classifier scores by reducing the scores of assignments which conflict with the defined constraints. We use a product of experts (\cite{hinton1999products}, \cite{chang2012structured}), combining the probabilities of the label assignments given by the classifiers with the degree of inconsistency provided by the constraints. This can be done by multiplying the probabilities or, equivalently, adding their logarithms. Note that this is different from using various features and weighting those features. We define a new scoring function:\\
\begin{eqnarray}
score(\textup{y}^{i},\textup{x}^{j})= \Phi (\textup{y}^{i}|\textup{x}^{j}) \times {C} (\textup{y}^{i}) ,
\end{eqnarray}
where $\Phi$ denotes the classifier probabilities and $C$ the constraint-violation degrees.\\
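As a concrete illustration, the following sketch (the function and variable names are ours and purely illustrative, not part of any released implementation) shows how this product-of-experts rescoring can be computed when the classifier probabilities and constraint factors are stored as matrices:
\begin{verbatim}
import numpy as np

def combined_scores(probs, violation):
    # probs:     (n_superpixels, n_labels) classifier probabilities Phi
    # violation: (n_superpixels, n_labels) constraint factors C in (0, 1],
    #            where 1 means "no conflict" and smaller values down-weight
    #            assignments that conflict with the extracted rules.
    # Multiplying probabilities is equivalent to adding their logarithms.
    return probs * violation
\end{verbatim}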
We formulate the prediction of a label for each segment in the image as a constrained optimization problem, and employ Integer Linear Programming (ILP) with quadratic constraints as the optimization framework. In doing so, we assign a binary variable $y_{i}^{j} \in \{0,1\}$ to a super-pixel $i$, in order to find the best assignment $\mathcal{Y} = \{y_{1}^{j} , y_{2}^{j} ,..., y_{n}^{j} \}$, where $n$ is the number of super-pixels and $j$ ranges over the labels $\{1,...,l\}$. Hence, given the classifier score for each super-pixel, we formulate the inference function as
\begin{eqnarray}
\arg \max_{\textup{y}} \sum_{i,j}\textup{w}(i,j)\ {y}_{i}^{j},
\label{eq:obj_func}
\end{eqnarray}
where $\textup{w}(i,j)$ is the score of assigning label $j$ to super-pixel $i$, provided by our local-feature-based classifier. This objective function maximizes the number of correct predictions for the super-pixels in the image. Toward exploiting context and prior knowledge, we introduce different types of constraints, such as non-coexistence and presence, which are explained in Section~\ref{consts}. The first constraint, which applies to all instances, enforces that each super-pixel gets exactly one label.
\begin{eqnarray}
\forall i \ \ \sum_{j=1}^{l} {y}_{i}^{j} =1.
\label{eq:oneLabel}
\end{eqnarray}
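For concreteness, here is a minimal sketch of the base model of Eq.~\ref{eq:obj_func} with the one-label constraint of Eq.~\ref{eq:oneLabel}, written against the Gurobi Python API (the helper names are ours; this is an illustration, not our released implementation):
\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def base_model(w):
    # w[i][j]: score of assigning label j to super-pixel i
    n, l = len(w), len(w[0])
    m = gp.Model("scene-labeling")
    y = m.addVars(n, l, vtype=GRB.BINARY, name="y")
    # objective: maximize the total assignment score
    m.setObjective(gp.quicksum(w[i][j] * y[i, j]
                               for i in range(n)
                               for j in range(l)), GRB.MAXIMIZE)
    # every super-pixel receives exactly one label
    m.addConstrs((y.sum(i, "*") == 1 for i in range(n)), name="one_label")
    return m, y
\end{verbatim}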
In the following sections, we explain each part of the approach in detail. \\
\subsection{Local Classifiers}
In this section, we explain the first step of our method. In training, we start by segmenting each sample image into super-pixels using the efficient graph-based segmentation method \cite{felzenszwalb2004efficient}, followed by computing a feature vector (including SIFT and color mean) for each super-pixel in the image.
We use the same features as used in \cite{tighe2010superparsing}.
We use a sigmoid function to rescale the classifier scores, giving classes other than the one with the maximum classifier score a chance to compete during the optimization phase. By doing so, if the classifier mislabels a super-pixel, there is a chance that the label may be changed during the inference phase by applying the constraints.
We adapt the parameters of the function using the validation sample data.
Also, the sizes (areas) of the super-pixels are multiplied by the scores to make larger super-pixels more important during the optimization.
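A minimal sketch of this rescaling step (the names and the sigmoid parameters $a$, $b$ are ours; in practice they are tuned on the validation data as described above):
\begin{verbatim}
import numpy as np

def rescaled_scores(raw, areas, a=1.0, b=0.0):
    # raw:   (n_superpixels, n_labels) raw classifier scores
    # areas: (n_superpixels,) super-pixel areas
    # Sigmoid keeps weaker labels competitive during optimization ...
    probs = 1.0 / (1.0 + np.exp(-(a * raw + b)))
    # ... and larger regions receive proportionally more weight.
    return probs * areas[:, None]
\end{verbatim}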
We use Extreme Gradient Boosting \cite{friedman2001greedy} with a softmax objective function to categorize each super-pixel in an image. Since the training data is inevitably noisy and super-pixels may break the structure of the data, bagging over subsets of training examples and subsets of features is used to reduce the effects of the noisy data.
Unlike some of the other methods, which train object detectors in addition to the region classifiers, we only use region features and simple classifiers to obtain the initial label scores for super-pixels.
In our experiments, boosted trees achieved better results in terms of average accuracy among all the classes, even though we randomly discard some of the samples during training. We discard samples because the number of training samples for some classes is enormous.
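A minimal training sketch with the XGBoost package (the hyper-parameter values shown are illustrative assumptions, not the settings used in our experiments):
\begin{verbatim}
from xgboost import XGBClassifier

def train_superpixel_classifier(X, y):
    # X: (n_superpixels, n_features) local features; y: integer labels
    # Row/feature subsampling plays the role of the bagging described above.
    clf = XGBClassifier(objective="multi:softprob",
                        n_estimators=200,
                        subsample=0.7,
                        colsample_bytree=0.7)
    clf.fit(X, y)
    return clf          # clf.predict_proba(X) yields the scores w(i, j)
\end{verbatim}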
\subsection{Global Context Information }
In scene labeling, incorporating scene information, such as the place, can be helpful. For instance, {\em if the probability of an image being a desert is high, the chance of seeing a chair or a river is presumably low}. Therefore, we use image-level categories to refine the scores of the local classifiers. We use the Places CNN model \cite{zhou2014learning} to find the most probable scene semantics in a given image.
Unlike pixel-level annotation, image-level annotation is more feasible, so training a deep network is doable. This deep network is trained on scene-labeled images and includes 205 place categories. Using our training data, we learn a mapping between these categories and the label set. To do so, we employ a non-negative sparse regression formulation, and extract a weight matrix $W$ for the scene-label association as follows:
\begin{eqnarray}
\min_{W \geq 0} {\left \| Y-WX\right \|_{F}^{2}+\lambda \left \| W\right \|_1},
\label{eq:sparse}
\end{eqnarray}
where $X$ is the confidence score matrix for the scene categories of the training images, and $Y$ is a matrix of the labels present in the corresponding images. In the interest of putting emphasis on learning the mapping between scene categories and smaller super-pixels and rare classes, if label $l_{i}$ is present we use $1-n(l_{i})/\sum_{j}{n(l_{j})}$ in the $i$-th element of $Y$. The solution of this problem can be obtained efficiently by the FISTA algorithm \cite{beck2009fast}, which is implemented in the SPArse Modeling Software (SPAMS) \cite{mairal2014sparse}.
Then, at test time, for a given image, the most confident labels are obtained using the scene categories and the weight matrix, and are used to refine the label assignment.
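We solve Eq.~\ref{eq:sparse} with the SPAMS FISTA solver; the following equivalent sketch uses scikit-learn's \texttt{Lasso} with a non-negativity constraint instead (names are ours, and the scaling of $\lambda$ differs from Eq.~\ref{eq:sparse} by a constant factor):
\begin{verbatim}
from sklearn.linear_model import Lasso

def scene_label_weights(X, Y, lam=0.1):
    # X: (n_scene_categories, n_images) Places-CNN confidences
    # Y: (n_labels, n_images) weighted label-presence matrix
    # Solves min_{W >= 0} ||Y - W X||_F^2 + lam ||W||_1 target by target.
    reg = Lasso(alpha=lam, positive=True, fit_intercept=False,
                max_iter=10000)
    reg.fit(X.T, Y.T)          # scikit-learn expects samples in rows
    return reg.coef_           # W with shape (n_labels, n_scene_categories)
\end{verbatim}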
\subsection{Extracting Rules and Creating Constraints} \label{consts}
In this section, we describe the types of constraints that we add to the aforementioned optimization problem. Note that even though these constraints could be obtained from common-sense knowledge, we explore the training data to discover plausible rules in the context of semantic labeling of images. For instance, the common-sense rule about the labels \emph{sky} and \emph{building} would be: {\em sky is always above the building}; however, since images are 2D projections of a 3D world, this is not always correct. Therefore, we capture these types of constraints as relative constraints with some penalty if they are violated. We call them soft constraints and describe them in Section~\ref{soft}. \\
The first type of constraints are {\bf geometrical constraints}: whether two labels can have a particular geometrical relationship or not. For example, \emph{sky} is always seen above the \emph{sea}. For each super-pixel, we keep the bounding-box information, and then we apply a rule: {\em when the label for a super-pixel is sea, the super-pixels below it cannot get the label sky}. This type of constraint can be formulated as follows:
\begin{eqnarray}
{y}_{i}^{a}\sum_{j=i+d}^{n}{y}_{j}^{b} \leqslant 0,
\label{eq:top_const}
\end{eqnarray}
where $a$ and $b$ are, respectively, the labels of the lower and upper parts of the image, and $d$ is the displacement of super-pixel $j$ from super-pixel $i$. By exploring the data, we can find hard constraints belonging to this group (e.g., (sky, field), (sea, sand), (ceiling, floor), ...).
The second type of constraints are {\bf mutually-exclusive} constraints, which represent cases where {\em labels $a$ and $b$ never occur together in an image}. This can be formulated as follows:
\begin{eqnarray}
\sum_{i}y_{i}^{a}\sum_{j}y_{j}^{b} \leqslant 0.
\label{eq:mute_ex}
\end{eqnarray}
{\bf Presence} is another type of constraint which can be applied.
According to this constraint, {\em if a certain label $a$ appears in the image, then there should be at least one super-pixel having label $b$}. We can express this constraint with one of the following formulations:
\begin{eqnarray}
\sum_{i}y_{i}^{a}\sum_{i}y_{i}^{b}\geqslant 1, \\
\sum_{i} y_{i}^{a}-\sum_{i,j} y_{i}^{a}y_{j}^{b} \leqslant 0.
\label{eq:presence}
\end{eqnarray}
For example, we have found that whenever the label \emph{balcony} appears in the image there exists at least one region labeled as \emph{building}; similarly, whenever the label \emph{pole} appears in the image, at least one region is labeled as \emph{road}. Note that this constraint is different from ``co-occurrence''. In co-occurrence two labels frequently appear together in images; the constraint is not necessarily always valid, and thus it can suitably be formulated as a soft constraint.\\
These rules can be extended; for example, we could add adjacency constraints to enforce that neighboring super-pixels receive the same labels, or we could even apply constraints on other features of specific labels.
We define the rules as described above, and then we use sample data to find the relations between classes which follow these rules. We create a cube of size $n_{\textup{labels}} \times n_{\textup{labels}} \times n_{\textup{relations}}$. Each matrix of this cube shows the frequency of one relation between classes, and each cell indicates the frequency of a particular relationship between two categories (class labels). For example, to find labels that have a specific relation, we count how many times this relation occurs in the sample data for each pair of classes. If the value is higher than a threshold, we consider it a constraint. In addition, we distinguish between soft and hard constraints depending on whether the reverse relation has ever occurred. For instance, for the geometric relation ``above'', the relation is always true for the pair of semantic labels (ceiling, floor); therefore, we consider it a hard constraint. For (sky, building), on the other hand, there are some images in which building super-pixels are above sky, so we consider this relation a soft constraint, meaning that the main formulation can tolerate violating it with a penalty.
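A sketch of this rule-extraction step (the names and the support threshold are ours; the penalty value follows the formula of Section~\ref{soft}):
\begin{verbatim}
import numpy as np

def extract_rules(held, violated, support=50):
    # held[r, a, b]:     times relation r held between labels a and b
    # violated[r, a, b]: times the opposite was observed in the sample data
    hard, soft = [], []
    n_rel, n_lab, _ = held.shape
    for r in range(n_rel):
        for a in range(n_lab):
            for b in range(n_lab):
                ok, bad = held[r, a, b], violated[r, a, b]
                if ok + bad < support:       # too rare to trust
                    continue
                if bad == 0:                 # never violated: hard constraint
                    hard.append((r, a, b))
                else:                        # sometimes violated: soft, with
                    soft.append((r, a, b,    # penalty c_k = -log(P0 / P1)
                                 -np.log(bad / ok)))
    return hard, soft
\end{verbatim}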
\subsection{Integer Linear Programming with soft constraints} \label{soft}
Our proposed approach for extraction of rules not only helps us in deriving the hard constraints, but it also contributes to providing the soft constraints such as \emph{co-occurrence} and \emph{relative geometrical} constraints.
Since some of the rules may not be satisfied by all the data, we use 0-1 soft-constraint modeling \cite{srikumar2013soft} and define soft constraints by introducing a new binary variable, $z_{k}$, for each constraint and an associated penalty, $c_{k}$, which indicates the degree of violation (how much the confidence of the assignment should be reduced when the constraint is not satisfied). The objective function in Eq.~\ref{eq:obj_func} then becomes:
\begin{eqnarray}
\arg \max_{y} \sum_{i,j}\textup{w}(i,j)\ {y}_{i}^{j} - \sum_{k}c_{k}(1-z_{k}),
\label{eq:obj_func2}
\end{eqnarray}
which implies that if constraint $k$ is violated, $z_{k}$ will be zero and consequently a penalty will be imposed on the optimization function. Moreover, we need to connect $z_{k}$ to the constraint $C_{k}$ as $z_{k}\leftrightarrow C_{k}$. We are able to do that by using a logical representation and adding these constraints to the objective function.
For example, \emph{sky is mostly above the building}; however, based on the rules that we have extracted from the database, this is not always true. Therefore, we turn the constraint into a soft constraint as \\
\begin{eqnarray}
\ {y}_{i}^{\textup{building}}\sum_{j=i+d}^{n}{y}_{j}^{\textup{sky}} =0 \leftrightarrow z_{k},
\label{eq:obj_bsky}
\end{eqnarray}
where $\leftrightarrow$ is an equivalence for the conditional constraints, meaning that if constraint $k$ is violated, $z_{k}$ will be zero. Consequently, in Eq.~\ref{eq:obj_func2} the term $c_{k}(1-z_{k})$ will be a positive number, which is subtracted from the score. In order to formulate the if-else condition in integer programming, we use the binary variable and add the constraints as inequalities, which is common practice.
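As an illustration, the sketch below adds such a softened rule to the Gurobi model sketched earlier. Since the rule is a product of binaries, we linearize it pairwise (consistent with the slack-variable linearization mentioned in the solver discussion below); the helper names are ours:
\begin{verbatim}
from gurobipy import GRB

def add_soft_rule(m, y, pairs, a, b, penalty, obj_terms):
    # pairs: offending (i, j) pairs, e.g. j below i for (building, sky)
    # z = 1 when the rule holds; any violating pair forces z = 0, and the
    # objective then loses `penalty`, i.e. the c_k (1 - z_k) term above.
    z = m.addVar(vtype=GRB.BINARY)
    for i, j in pairs:
        # with z = 1 this reads y[i,a] + y[j,b] <= 1 (not both selected);
        # with z = 0 the inequality is vacuous, i.e. the rule is relaxed.
        m.addConstr(y[i, a] + y[j, b] <= 2 - z)
    # collect -penalty * (1 - z) terms to be added to the maximization
    # objective; the solver prefers z = 1 whenever the rule can hold.
    obj_terms.append(-penalty * (1 - z))
    return z
\end{verbatim}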
The penalty values are obtained statistically from the dataset, from the relative frequencies with which each constraint is satisfied or violated. For example, {\em sky} and {\em building} occur together most of the time, but not all the time. We therefore find a penalty using the following formula, which takes into account the frequency of the constraint violations in the data,
\begin{eqnarray}
c_{k}=-\log \frac{P(C_{k}=0)}{P(C_{k}=1)},
\label{eq:penalty}
\end{eqnarray}\\
where $P(C_{k}=0)$ and $P(C_{k}=1)$ are respectively the probability of violating and satisfying the constraint $k$. \\
\begin{table*}[t]
\begin{center}
\centering
\caption{\textbf{Summary of the rules extracted from the sample data.}
\label{tab:rules}}
\begin{tabular}{|l|l|l|}
\hline
Name & Logic representation & examples \\ \hline
non-coexistence & $ \mathbf{y}^{l_{1}} \wedge \mathbf{y}^{l_{2}} =0 , \ \mathbf{y}^{k}=(y_{1}^{k}\vee y_{2}^{k} \vee ... \vee y_{n}^{k})$ &
(desert, sidewalk) \\ \hline
geometric & ${y}_{i}^{l_{1}} \wedge \mathbf{y}_{i+z}^{l_{2}} =0,\ \mathbf{y}^{k}=(y_{i+z}^{k} \vee ... \vee y_{n}^{k}) $ &
(road below window) \\ \hline
presence & $ {y}_{i}^{l_{1}} \Rightarrow \mathbf{y}^{l_{2}} ,\ \mathbf{y}^{k}=(y_{1}^{k}\vee y_{2}^{k} \vee ... \vee y_{n}^{k})$ &
(balcony, building) \\ \hline
co-occurrence & $ \mathbf{y}^{l_{1}} \wedge \mathbf{y}^{l_{2}} =1 , \ \mathbf{y}^{k}=(y_{1}^{k}\vee y_{2}^{k} \vee ... \vee y_{n}^{k})$ &
(car, road) \\ \hline
adjacency & $ {y}_{i}^{l_{1}} \equiv {y}_{j}^{l_{2}} = \neg ( {y}_{i}^{l_{1}} \oplus {y}_{i}^{l_{2}})=1 $ & (sky, sun) \\ \hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}
\begin{center}
\centering
\includegraphics[width=1\textwidth]{./results1}\\
\caption{{ Example results obtained by our method on the SIFTFlow dataset: (a) query images, (b) ground truths, (c) initial classifier outputs and (d)
our final results.} }\label{segres}
\end{center}
\end{figure*}
\subsection{Solving Integer Programming }
Thanks to the many available numerical packages for solving integer programs, we are able to solve problems of our size in a short time. We use the Gurobi toolkit \cite{gurobi}, which can solve 70 IQPs per second on a desktop computer. The solver uses piece-wise linear optimization and a relaxed LP to solve the integer program. It is also feasible to convert the quadratic constraints to linear ones by adding slack variables, due to the Boolean nature of our problem. Moreover, one may use other greedy or heuristic methods to solve the objective function.
\begin{figure*}
\begin{center}
\centering
\includegraphics[width=1 \textwidth]{./msrc_res}\\
\caption{{ Example results obtained by our method on MSRC dataset, (a) query images, (b) ground truths, (c) initial classifier output and (d) our final results.} }\label{msrcfig}
\end{center}
\end{figure*}
\section {Experiments and Results}
We use the SIFTFlow dataset \cite{liu2011sift}, which consists of 2,488 training and 200 test images collected from LabelMe \cite{russell2008labelme}. These images are from broad categories such as natural scenes (coasts, country, etc.) and urban scenes. Ground-truth images are available, in which each pixel of the image is labeled with one of 33 classes, including objects (e.g., cars, windows) and stuff (e.g., sky, sea).
In table \ref{tabres}, the accuracy of our method is compared to similar state-of-the-art methods (i.e., those using similar features). Our method obtains comparable per-pixel accuracy, even though we use neither object detectors nor massive training algorithms. {Also, in table \ref{siftDet2}, we show the accuracy gained in the different steps of our method, indicating the improvement due to the addition of constraints.}
In figure \ref{segres}, we show some qualitative results on the SIFTFlow dataset, including sample images, ground-truth labelings, results from \cite{tighe2013finding}, and the results obtained by our method before optimization and after applying the constraint-based optimization. As is clear, we are able to improve the results without over-smoothing. Our method also achieves promising results in terms of per-class accuracy. For instance, in the third row the super-pixels below the sea are labeled as mountain by our local classifiers, yet constraints such as {\em mountains are not below sea} are able to handle the mislabeling and change it to {\em tree}. Note that plant and tree are very similar in terms of appearance.
\begin{table}[h]
\begin{center}
\centering
\caption{{{ Detailed Results on SIFTFlow dataset } }}
\label{siftDet2}
\begin{tabular}{lll}
\hline \noalign{\smallskip}
{Method } & {Avg Accuracy} \\
\hline
\hline
{Local Features + Classifiers } & {72.8} & \\
{Local Features + Global Context } & {77.3} & \\
{Local + Global + Rules } & {80.9 } &\\
\noalign{\smallskip} \hline
\end{tabular}
\end{center}
\vspace{-5mm}
\end{table}
\begin{table}[h]
\begin{center}
\centering
\caption{{Accuracy on SIFTflow dataset } }
\label{tabres}
\begin{tabular}{llll}
\hline \noalign{\smallskip}
Method & Per-Pixel & Per-Class\\
\hline
\hline
Farabet natural \cite{farabet2012scene} & 78.5 & 29.6 & \\
Farabet balanced \cite{farabet2012scene} & 74.2 & 46.0 & \\
Tighe \cite{tighe2013finding} & 78.6 & 39.2 & \\
Collage Parsing \cite{tung2014collageparsing} & 77.1 & 41.1 & \\
George w/out Fisher Vectors \cite{george2015image} & 77.5 & 47.0 \\
George Full \cite{george2015image} & 81.7 & 50.1\\
\hline
Ours & 80.9 & 50.3 &\\
\noalign{\smallskip} \hline
\end{tabular}
\end{center}
\vspace{-5mm}
\end{table}
The second dataset on which we assess our approach is LMSun~\cite{tighe2013finding}. This dataset contains 45,676
training images and 500 test images, including indoor and outdoor scenes, with different image sizes. However, the numbers of available samples for different labels are not balanced, and some labels are rare. While some labels, for instance sky and building, have a large number of samples, others have far fewer. To make training more feasible, we use all samples from rare classes and only $25\%$ of the samples from common classes for training our extreme gradient boosting classifiers. Also, in learning as well as when using the scene-label associations, we assign more weight to smaller super-pixels and rare classes to reduce the influence of common labels such as sky or building.
\begin{table}[h]
\begin{center}
\centering
\caption{{Accuracy on LMSun dataset } }
\label{tablmSun}
\begin{tabular}{l l l l}
\hline \noalign{\smallskip}
Method & Per-Pixel & Per-Class\\
\hline
\hline
Tighe \cite{tighe2013finding} & 61.4 & 15.2 & \\
George w/out Fisher Vectors \cite{george2015image} \ & 58.2 & 13.6 \\
George Full \cite{george2015image} & 61.2 & 16.0\\
\hline
Ours Local Classifiers & 47.4 & 13.4 \\
Ours Local Classifiers +Global Context & 56.1 & 14.6 \\
Ours & 58.4 & 17.3 &\\
\noalign{\smallskip} \hline
\end{tabular}
\end{center}
\vspace{-5mm}
\end{table}
We also applied our method to the MSRCV2 dataset \cite{shotton2009textonboost}, which has 591 images of 23 classes. We use the provided split: 276 images for training and 255 images for testing. Qualitative results are shown in figure \ref{msrcfig}, and our results using the aforementioned approach are presented in table \ref{tabdetailMSRCV}.
\begin{table}[h]
\centering
\caption{Detailed results for MSRCV2 dataset}
\label{tabdetailMSRCV}
\begin{tabular}{llll}
\hline
Method & Per Pixel Accuracy \\ \hline \hline
Local Features + Classifiers & 76.7 \\ \hline
Local Features + Global Context & 78.3 \\ \hline
Local + Global + Rules & 84.4 \\ \hline
\end{tabular}
\vspace{-4mm}
\end{table}
\section {Discussion}
{The proposed method can achieve results similar to other models such as CRFs while having fewer edges, i.e., without considering a fully connected graph. The two sources of information, visual features and high-level knowledge-based constraints, are combined to obtain better results. Some relevant information about the data, such as geometrical relationships or non-coexistence, which is hard to learn automatically from the features, can easily be captured by our proposed method and used to refine the labels. Our approach can scale and generalize.
In our approach, we add constraints to the trained model, so without retraining we can add constraints which are declarative and hard to model using features alone. Hence, in cases where not enough training data is available for some categories (class labels), the constraints help us obtain better results. Also, when new classes are added to the dataset, the proposed method can model the dependencies between classes without learning pairwise terms, since we keep the features (classifier learning) and the constraints separate. As shown in the experiments on different datasets with various numbers of labels, our method achieves promising results. The constraints are assumed to be trusted, and the penalties are obtained from the data by simply counting frequencies, in contrast to learning from the features.} It should be noted that our primary contribution is improving the labeling on top of classifier scores; therefore, using more powerful classifiers, such as deep networks, could boost the final results.
\section{Conclusions}
In this paper, we proposed a novel scene labeling approach, in which we use an enhanced inference method that enables the model to incorporate general constraint structures. We use an integer programming formulation, which can be solved by linear programming relaxation, to address the problem. We also proposed to use soft constraints in addition to hard constraints to make the model more flexible. Experimental results on three datasets show the effectiveness of our method.
\section*{References}
Charm mixing remains elusive. In the Standard Model the charm mixing rate
is greatly suppressed by small CKM matrix
elements and strong GIM suppression. This very fact, however, provides a unique opportunity
to search for new physics.
The search for charm mixing requires tagging the flavor of a neutral charm meson at production and
again tagging the flavor at decay. The production flavor is determined using $D^{*+}\!\rightarrow\!D^0\pi_s^+$
decays. The decay flavor is determined by reconstructing a Cabibbo favored (CF) decay with a charged kaon such as
$D^0\!\rightarrow\!K^-\pi^+$. A right-sign (RS) decay is defined as one in which the kaon and soft pion charges are
opposite while same kaon and $\pi_s$ charges are wrong-sign (WS) decays. The $D^0$ can also decay directly to
$K^+\pi^-$ via a doubly Cabibbo suppressed decay (DCSD). The time dependent WS decay rate can be
written as:\\[-12pt]
\begin{equation}
\vspace{-1pt}
\label{eq:wsrate}
R_{WS}(t) = e^{-\Gamma t}\left( R_D + \sqrt{R_D} y' \Gamma t + \frac{1}{4}\left({x'}^2+{y'}^2\right) \Gamma^2 t^2\right)
\end{equation}
where the three terms correspond to DCSD, interference between mixing and DCSD, and mixing.
$R_D$ is the DCS branching ratio relative to the Cabibbo favored mode. The parameters
$x' \equiv x \cos{\delta_{K\pi}} + y \sin{\delta_{K\pi}}$ and $y' \equiv y \cos{\delta_{K\pi}} - x \sin{\delta_{K\pi}}$
are rotated versions of the mixing parameters $x \equiv \frac{\Delta M}{\Gamma}$ and
$y \equiv \frac{\Delta \Gamma}{2\Gamma}$ (mass and lifetime
splitting terms) and $\delta_{K\pi}$ is the strong phase between CF and DCS decay.
Discovery of hadronic charm mixing requires separating the three components
of Eq.~\ref{eq:wsrate} using the different lifetime distributions. Note that Eq.~\ref{eq:wsrate} has no information
on the sign of $x'$, and therefore the relevant fit
variable is ${x'}^2$. In this analysis, \textit{CP} violation effects are not considered and charge conjugates are implied.
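For reference, the three contributions to Eq.~\ref{eq:wsrate} are easily evaluated numerically; in the sketch below (variable names are ours) $t$ is measured in units of the $D^0$ lifetime when $\Gamma=1$:
\begin{verbatim}
import numpy as np

def ws_rate(t, R_D, xp2, yp, Gamma=1.0):
    # DCSD, interference and mixing terms of the WS rate; the fit
    # variable is x'^2 (xp2), since the rate is blind to the sign of x'.
    dcsd = R_D
    intf = np.sqrt(R_D) * yp * Gamma * t
    mix  = 0.25 * (xp2 + yp ** 2) * (Gamma * t) ** 2
    return np.exp(-Gamma * t) * (dcsd + intf + mix)
\end{verbatim}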
FOCUS recorded data during the 1996--7 fixed-target run at Fermilab.
A photon beam impinged on BeO targets.
Sixteen silicon strip planes provide vertexing and tracking.
Charged particles are tracked and momentum analyzed as they pass
through up to two dipole magnets and up to five sets of multiwire proportional chambers.
Three \v{C}erenkov counters, two EM calorimeters, and two muon detectors identify particles.
A candidate driven vertexing algorithm is used to reconstruct charm. For
$D^0\!\rightarrow\!K^-\pi^+$, two tracks must verticize with
CL $>$ 2\%. This candidate is used to seed the
production vertex which must have CL $>$ 1\%. The soft pion $\pi_s^+$ from the
$D^{*+}\!\rightarrow\!D^0\pi_s^+$ must
be consistent with originating from the production vertex and the track is refit to the production
vertex.
Separating charm from hadronic background is primarily accomplished by requiring the decay
vertex be distinct from the production vertex. Other cuts are made to
ensure the vertex is isolated and is more charm like than background like.
Strong \v{C}erenkov cuts are applied to the combination of $K^-$ and $\pi^+$ especially if the
invariant mass under a reflection of $K^-\!\rightarrow\!\pi^-$ and $\pi^+\!\rightarrow\!K^+$ is
close to the $D^0$ mass.
If more than one $D^{*+}$ candidate is found for one $D^0$ candidate an additional
cut may be made.
If a RS (WS) candidate is found with $2.5\,\textrm{MeV} < Q(D^*) < 9.5\,\textrm{MeV}$
then no WS (RS) candidates are allowed.
\section{Fit description and results}
A 3D binned maximum likelihood fit is used in this analysis. Two dimensions,
$M(D^0)$ and $Q(D^*)$, separate signal from background, while
$\tau(D^0)$ distinguishes DCSD, mixing, and interference. The RS
and WS data are fit simultaneously.
The fit components generally have a shape determined by a
Monte Carlo simulation and a yield which is free to float, some with weak constraints imposed. The
components making up the fit which are modeled by the Monte Carlo are RS signal,
WS signal (DCSD, interference, and mixing), real
$D^0\!\rightarrow\!K^-\pi^+$ decay with a fake soft pion,
reflections to both RS and WS ($K^-K^+$, $\pi^-\pi^+$, $\pi^-\pi^+\pi^0$, $K^0\pi^-\pi^+$), RS background
($K^-\ell^+\nu$ and $K^-\pi^+\pi^0$), and WS background (double misidentification of $K^-\ell^+\nu$,
$K^-\pi^+\pi^0$, $K^-\pi^+$). The remaining background is some combination of non-charm and poorly
reconstructed charm
events. This is modeled with functional forms in mass
$a \exp{\left(b m\right)}$, energy release $\alpha q^{1/2} + \beta q^{3/2}$, and lifetime
$\exp{\left(-t/\tau_1\right)} + \eta \exp{\left(-t/\tau_2\right)}$.
Penalty terms are added to ensure backgrounds
are consistent with known branching ratios. The affected backgrounds are the $D^0$ decays
to $K^+K^-$, $\pi^+\pi^-$, $\pi^+\pi^-\pi^0$, $K^0\pi^+\pi^-$, $K^-\pi^+\pi^0$, and $K^-\ell^+\nu$ which
all appear in RS and WS\@.
The background due to double misidentification of the RS signal
is fixed based on the RS yield and the relative efficiency.
Mini Monte Carlo tests of the fit were used to verify the accuracy of the reported fit errors and to
check for possible biases in the fit. No significant bias was found and the reported fit errors agree
with the mini Monte Carlo results.
Fit variants with different binning, different constraints, and different accounting of the random
background were tried. No significant differences were observed. Variations of all the selection
criteria were also analyzed with no significant differences in the results. For the branching ratios
reported, systematic errors due to fit variants and cut variants were obtained
from the r.m.s.\ of the variations and then added in quadrature.
\begin{figure}
\centerline{\includegraphics[width=2.48in,height=1.93in]{fig_ws_mass.eps}\hspace{-5pt}
\includegraphics[width=2.48in,height=1.93in]{fig_ws_qval.eps}}
\centerline{\includegraphics[width=2.48in,height=1.75in]{fig_wssig_tau.eps}\hspace{-5pt}
\includegraphics[width=2.48in,height=1.75in]{fig_xycontour.eps}}
\caption{The top plots show projections onto $M(D^0)$ and $Q(D^*)$ of the fit and data for WS events.
The bottom-left plot shows the contributions to the WS signal versus $\tau(D^0)$.
The bottom-right plots shows the 95\% CL ${x'},{y'}$ contour with statistical (inner) and full
errors (outer).}
\label{fig:mqt}
\end{figure}
Fitting without mixing we find
$R_{WS} = (0.430^{\,+\,0.062}_{\,-\,0.061}\pm 0.031)\%$.
From the standard mixing fit we find a DCS branching ratio of
$R_D = (0.382^{\,+\,0.167}_{\,-\,0.163}\pm0.069)\%$.
Figure~\ref{fig:mqt} shows the WS projections onto $M(D^0)$ and $Q(D^*)$.
The RS data is dominated by $54452 \pm 242$ signal events.
Due to significant correlations between
${x'}$ and $y'$, the interesting physics result is
a contour in the ${x'}$, $y'$ plane. Although we fit for ${x'}^2$, we choose to plot ${x'}$.
The 95\% CL contour is defined as the location in the
$x'$, $y'$ plane where $\Delta \log{\mathcal{L}} = 2.996$ relative to the minimum
$-\log{\mathcal{L}}$ in the physical region of the $x'$, $y'$ plane.
While the true minimum occurs at ${x'}^2 = -0.06\%$ and $y' = 1.0\%$,
the minimum with ${x'}^2 \ge 0$ occurs at ${x'}^2 = 0$ and $y' = 0.5\%$
with a change in $-\log{\mathcal{L}}$ of only $0.006$ relative to the true minimum. Systematic checks were
performed with 120 fit and cut variants. The contour variation is consistent with differences in the
returned value of ${x'}^2$ and $y'$. For the systematic error, we first find the change
in $-\log{\mathcal{L}}$ between the global minimum and the ${x'}$, $y'$ location for each of the
variants. We then find the value greater than 95\% of these differences, which is $0.482$. We
find the contour at which the change in $-\log{\mathcal{L}}$ is $2.996+0.482 = 3.478$ and
call this the 95\% CL including systematic error. The $\tau(D^0)$ projection and $x',y'$ contours
are shown in Fig.~\ref{fig:mqt}.
Defining 95\% CL limits on $x'$ and $y'$ based on the projection of the
contour onto the respective axis, we find ${x'}^2 < 0.83\%$ and $-7.2\% < {y'} < 4.1\%$.
\end{document}
Relativistic heavy-ion collision experiments at RHIC have found
evidences for a new state of matter, the quark-gluon plasma
(QGP)~\cite{whitepapers1}\cite{whitepapers2}. The collective flow
phenomena, the transport properties and other theoretical
developments indicate that the new state of matter is not a weakly
coupled gas as expected, but a strongly coupled ideal fluid,
referred to as sQGP, probably in a wide temperature region
$T_c<T\lesssim 2T_c$~\cite{sQGP1}\cite{sQGP2}\cite{sQGP3}. What is
the microscopic dynamics responsible for the small viscosity of the
strongly coupled quark gluon plasma and how is the evolution of the
property of this fluid in the process of crossover from hadronic gas
to sQGP are still open questions.
Many attempts have been done to explain the small viscosity.
Theoretically, people obtain transport coefficient perturbatively by
using Kubo formula in the scope of thermal field
theory~\cite{kubo1}-\cite{kubo5} or solve the transport equation
from kinetic theory~\cite{kinetic1}-\cite{kinetic4}. They found that
it is not enough to account for the small viscosity if only
considering the perturbative interaction. While ref.~\cite{xuzhe}
claimed that they can reproduce the small viscosity for a gluon gas
if including $gg\to ggg$ bremsstrahlung in the pQCD calculation
within relativistic kinetic theory. Another way to reach small
viscosity is to solve the strongly coupled super-symmetric
Yang-Mills gauge theory by AdS/CFT
duality~\cite{adscft1}\cite{adscft2}. In phenomenology,
Ref.~\cite{shuyak} developed a new picture of QGP with multiple
colored bound states and the increased re-scattering among bound
states were expected to reduce the viscosity of QGP.
Quantitative investigation on the liquid property of plasma is
possible by considering two particle correlation function in
coordinate space or its Fourier transform --- the structure function
in momentum space. The characteristic behavior of pair correlation
function reveals the intrinsic feature of plasma in different phase.
For example, in the case of liquid state the pair correlation
function exhibits a pronounced peak and one or two small and broad
additional peaks. The first peak corresponds in coordinate space to
the near-neighbor ``shell'' of atoms because of short-range order.
The next near-neighbor ``shell'' in the liquid is much less
prominent, and the next outer shell may be hardly visible due to the
lack of long-range order. In the case of a solid crystalline phase,
where a long-range order exists, a number of sharp peaks with
comb-like shape are observed. In the ideal gas phase, where no order
is present, the pair correlation function shows no clear structure.
A sketch of the pair correlation function for liquid and gas is
shown in Fig.~\ref{gr1}.
Though it is not easy to obtain the viscosity of a liquid
quantitatively from its pair correlation function, the tendency of
the viscosity could be inferred via the shape evolution of the
latter. As stated above the pair correlation function of liquid
exhibits a number of peaks with diminishing heights. The location of
the last visible peak can be taken as the range of effective
interaction in the liquid. An increase of the latter results in a diminishing mean free path, which in turn causes a reduction of the viscosity. Therefore, the larger the distance out to which peaks of the pair distribution function, with reduced amplitudes, remain visible, the less viscous the liquid is. Ref.~\cite{Thoma} calculated the correlation function and structure function for the QGP perturbatively by using the hard thermal loop approximation; it failed to reproduce the typical character of the liquid state since only a weakly coupled QGP was considered.
\begin{figure}
\includegraphics[width=0.5\linewidth]{gr-sketch.eps}
\caption{\label{gr1} Sketch of the pair correlation function vs.
distance between two atoms in the gas and liquid phase.}
\end{figure}
Recently, a dynamical percolation model based on molecule-like aggregation was proposed to describe the crossover transition between hadronic matter and QGP~\cite{xumm}. In this model, as the temperature increases, partons inside hadrons are delocalized to a certain
extent, tunneling between neighboring hadrons through bonds, to form
a sort of grape-shape QGP (gQGP), which is a special form of sQGP.
Inside the gQGP, partons move around with large cross section and
short mean free path, resulting in a quark-gluon matter possessing
the property of near-perfect fluid. In this article we investigate
the pair correlation function of the delocalized quarks from this
molecule-like aggregation model and discuss the information about
the structure of the grape shape QGP during the process of
crossover.
\section{\label{section2} A brief review on molecule-like aggregation model}
In order to answer the question of how the crossover from the hadronic to the partonic phase proceeds in QCD without contradicting color confinement, a basic assumption was proposed~\cite{xumm}, which states that quarks in neighboring hadrons can be delocalized, i.e. tunnel between the hadrons, to form clusters. After delocalization the clusters, which are molecule-like aggregations, are color singlets, and the original hadrons turn into colored objects, called cells.
To exhibit the physical consequence of the assumption on {\it
molecule-like aggregation}, a toy model based on a percolation procedure is constructed~\cite{xumm}. In the simplified 2-D version
of the model the initial system is set to be a nucleon gas
consisting of $2\times 197$ cells, {\it i.e.}\ } \def\eg{{\it e.g.}\ } \def\cf{{\it cf.}\ hadrons, which are small
circles of hard-core radius $r_e=0.1$ fm distributed randomly in a
big circle of radius $R=7$ fm.
In calculating the probability for bond formation between neighboring
cells, or delocalization of quarks, a non-relativistic model used in
nuclear force theory~\cite{wangfan} is utilized. The model
Hamiltonian for the 6 constituent quarks in the c.m. system of two
nearby cells is assumed to be
\begin{equation}
\mathcal{H}=\sum_{i=1}^{6}\left(m_{i}+\frac{p_{i}^{2}}{2m_{i}}\right)-T_{\rm
cm}+\sum_{i<j}V_{ij}^{C}, \label{eqH}
\end{equation}
where $T_{\rm cm}$ is the center-of-mass kinetic energy. When the
quarks $i,j$ belong to a same cell, a square-confinement potential
$V_{ij}^{C}=-a_{c}\vec{\lambda}_{i}\cdot\vec{\lambda}_{j}r_{ij}^{2}$
is assumed. When they belong to two nearby cells, the infinite
potential between them will drop down, forming a potential barrier,
and a parametrization $V_{ij}^{C}
=-a_{c}\vec{\lambda}_{i}\cdot\vec{\lambda}_{j}\frac{1-e^{-\mu
r_{ij}^{2}}}{\mu}$ is used, where $\mu$ is a model parameter,
proportional to temperature square from dimensional consideration.
In doing variational calculation for the ground state energy the
trial wave function of the two-cell system in adiabatic
approximation is chosen to be an
antisymmetric six-quark product state
\begin{equation}
\left} \def\r{\right} \def\la{\langle} \def\ra{\rangle|\Psi_6(S)\g>=\mathcal {A}\left} \def\r{\right} \def\la{\langle} \def\ra{\rangle[\prod_{i=1}^{3}\psi_{\rm L}(\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{r}_{i})
\prod_{i=4}^{6}\psi_{\rm R}(\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{r}_{i})\eta_{I_1 S_1}^{B_1}\eta_{I_2
S_2}^{B_2}\chi_{c}^{B1}\chi_{c}^{B_2}\g]_{00},
\end{equation}
where $\mathcal {A}$ is the anti-symmetrization operator, which
permutes quarks between the two cells; $[\cdots]_{00}$ means that
the spin, isospin and color of the two cells are coupled to a
particular color singlet state with total spin and isospin equal
zero. For the orbital motion, we have the left (right) single-quark
orbital wave function $ \phi_{L}(\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{r}_{i})=\left} \def\r{\right} \def\la{\langle} \def\ra{\rangle(\frac{1}{\pi
b^2}\g)^{\frac{3}{4}}\e^{-\frac{(\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{r}_{i}+\frac{\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{S}}{2})^2}{2b^2}},\
\phi_{R}(\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{r}_{i})=\left} \def\r{\right} \def\la{\langle} \def\ra{\rangle(\frac{1}{\pi
b^2}\g)^{\frac{3}{4}}\e^{-\frac{(\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{r}_{i}-\frac{\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{S}}{2})^2}{2b^2}},$
where $\pm\frac{\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{S}}{2}$ are the cell-centers, $S$ is the
distance between the two cells. $b$ is a baryon-size parameter.
Delocalized orbit is defined as
$\psi_{L}(\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{r})=\frac{1}{N}[\phi_{L}+\epsilon\phi_{R}],\
\psi_{R}(\boldsymbol} \def\nd{\noindent} \def\nbf{\nd\bf} \def\mn{\vskip0.5cm\nd{r})=\frac{1}{N}[\epsilon\phi_{L}+\phi_{R}],$
where $\epsilon$ is a variational parameter characterizing the
degree of delocalization and $N$ a normalization factor. At each
separation $S$, $\epsilon$ is determined by minimizing the energy
$E(S)=\frac{\left} \def\r{\right} \def\la{\langle} \def\ra{\rangle<\Psi_6(S)\left} \def\r{\right} \def\la{\langle} \def\ra{\rangle|\mathcal
{H}\g|\Psi_6(S)\g>}{\left} \def\r{\right} \def\la{\langle} \def\ra{\rangle<\Psi_6(S)|\Psi_6(S)\g>}.$ The model
parameters are: $m_{\rm u}=m_{\rm d}=313~{\rm MeV}$, $b=0.603~{\rm
fm}$, $a_{c}=101.14~{\rm MeV}/{\rm fm}^2$, while $\mu$ is left as
a free parameter.
It is found from variational calculation that a maximum distance
$S_0$ for delocalization exists for a fixed temperature, i.e. fixed
$\mu$. Using $S_0$ as input, a bond percolation model is
constructed. After the percolation procedure all the cells in the
system are grouped into clusters. In each cluster, any two cells are
connected by bonds along a zigzag path, mimicking the atoms in a molecule,
cf. Fig.~\ref{Fig. 2}(a).
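The percolation step itself is straightforward to simulate; a minimal union-find sketch (names are ours) groups the cells into clusters for a given maximum bond length $S_0$:
\begin{verbatim}
import numpy as np

def bond_clusters(centers, S0):
    # centers: (N, 2) cell positions; bond any pair closer than S0
    n = len(centers)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centers[i] - centers[j]) < S0:
                parent[find(i)] = find(j)
    # cells sharing a root form one cluster; an "infinite" cluster is
    # one whose cells span the big circle from one side to the other
    return [find(i) for i in range(n)]
\end{verbatim}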
\begin{figure}
\includegraphics[width=3.4in]{fig2.eps}
\caption{\label{Fig. 2} (Color online) Continuously distributed
cells connected by bonds form clusters. In (b) the big cluster
extending from the left- to the right-boundary is an {\it infinite
cluster}. In (c) all the cells are connected to an infinite
cluster.}
\end{figure}
When temperature increases, the potential barrier between two cells
decreases, and the maximum distance $S_0$ for bond formation
increases. The crossover from hadronic phase to partonic phase
starts when an infinite cluster {\it i.e.}\ } \def\eg{{\it e.g.}\ } \def\cf{{\it cf.}\ a cluster extending from one end
of the big circle to the other end appears, cf. Fig.~\ref{Fig.
2}(b). The corresponding temperature is $T_c$. The colored partons
are confined in the group of cells connected by bonds, being able to
move around inside the big cluster via quantum tunneling through the
potential barriers, resulting in a quark-gluon matter with the
property of fluid inside the cluster. The crossover process is
completed when all the cells belong to one infinite cluster, cf.
Fig.~\ref{Fig. 2}(c), and the corresponding temperature is denoted
by $T_c'$ . In the following we will investigate the structure of
sQGP and of the intermediate states during the crossover from
hadronic phase to sQGP through the pair correlation function.
\section{\label{section3} Definition of pair correlation function}
In the liquid state theory, the pair correlation function $g(r)$ is
defined as the probability of finding two atoms in the liquid at a
distance $r$ from each other~\cite{gr1}\cite{gr2}. In two
dimensional space the quantity $2\pi\rho g(r)rdr$ is the mean number
of atoms inside a ring of radius $r$ with thickness $dr$, centered
on an ``average'' atom. In this expression, $\rho$ is the number
density of the bulk homogeneous liquid. From this definition we can
calculate $g(r)$ in two dimensional coordinate space in Monte Carlo
simulation by the formula:
\begin{equation}
g(r)=\frac{dN(r)}{2\pi\rho rdr}, \label{eqgr0}
\end{equation}
where $dN(r)$ is the number of atoms inside a ring with radius
$(r,r+dr)$ apart from the selected center atom. $dN(r)$ is
normalized by the uniform distribution, so that $g(r)=1$ when there
is no correlation. It needs to be noted that for a finite system,
the boundary effect has to be taken into account since it will
affect the normalization factor. Assume the selected central atom is
near the boundary, then part of the ring with radius from $r$ to
$r+dr$ may lie out of the finite system. Thus the normalization
factor should be $\theta\rho rdr$ instead of $2\pi\rho rdr$, where
$\theta$ is the angle of the arc inside the system. To eliminate the
boundary effect, we add a weight $w$ to the above formula, and
define
\begin{equation}
g(r)=\frac{dN(r)\cdot w}{2\pi\rho rdr}, \qquad w={2\pi\over\theta}.
\label{eqgr1}
\end{equation}
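For concreteness, a Monte Carlo sketch of Eq.~(\ref{eqgr1}) for points in a disk of radius $R$ (all names are ours; the arc angle $\theta$ follows from the law of cosines):
\begin{verbatim}
import numpy as np

def pair_correlation(pos, R=7.0, dr=0.05):
    # pos: (N, 2) positions inside a disk of radius R centered at origin
    n = len(pos)
    rho = n / (np.pi * R ** 2)
    nbins = int(R / dr)
    g = np.zeros(nbins)
    for i in range(n):
        d = np.linalg.norm(pos[i])          # center atom's distance
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(pos[i] - pos[j])
            k = int(r / dr)
            if r < 1e-9 or k >= nbins:
                continue
            # arc of the ring of radius r around pos[i] inside the disk:
            # theta = 2 arccos((d^2 + r^2 - R^2) / (2 d r))
            c = (d * d + r * r - R * R) / (2.0 * d * r + 1e-12)
            theta = 2.0 * np.arccos(np.clip(c, -1.0, 1.0))
            if theta > 1e-9:                # weight w = 2 pi / theta
                g[k] += (2.0 * np.pi / theta) / (2.0 * np.pi * rho * r * dr)
    return g / n                            # average over center atoms
\end{verbatim}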
Fig.~\ref{hadrongas} is the pair correlation function $g(r)$ for a
randomly distributed hadron gas. Fig.~\ref{hadrongas}(a) shows the
result from Eq.~(\ref{eqgr0}) and Fig.~\ref{hadrongas}(b) is that
after the boundary effect correction by Eq.~(\ref{eqgr1}). Evidently
the correction is effective up to
the radius 7\;fm of the big circle.
All the following calculations are restricted in this $r$ range.
\begin{figure}
\includegraphics[width=0.9\linewidth]{grhadron.eps}
\caption{\label{hadrongas} Pair correlation function for randomly
distributed hadron gas. (a) is without boundary correction and (b)
is with boundary correction. }
\end{figure}
To calculate the pair correlation function for quarks in the
above-mentioned molecule-like aggregation model, the usual
definition of $g(r)$ has to be modified since, due to color confinement, the correlations between quarks do not develop along straight lines in geometrical space but along the zigzag paths of
the connecting bonds. In order to reveal the structure information
of the sQGP inside clusters, the correlation function must be
defined as a function of the zigzag distance $D$ along bonds between
quarks, {\it i.e.}\ } \def\eg{{\it e.g.}\ } \def\cf{{\it cf.}\ $g(D)$. The zigzag distance $D$ is referred to as
``chemical distance'' in the language of percolation
theory~\cite{chemicaldistance}. The formula to calculate $g(D)$ in 2
dimensional space is
\begin{equation}
g(D)=\sum_r\frac{dN(D,r)\cdot w}{2\pi\rho rdr}. \label{eqgd}
\end{equation}
In the calculation, a quark is randomly selected as center and the
distance $D$ and $r$ between this center and any other quarks in the
same cluster are evaluated. $dN(D,r)$ is the number of quarks in the
small region $(D,D+dD;r,r+dr)$. The normalization $2\pi\rho r\,dr$
for normal liquid is still used. The color confinement in the
present case, which causes quarks to be able to communicate only
through tunneling bonds, makes the resulting $g(D)$ tend to a value
smaller than unity.
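In practice the only change with respect to $g(r)$ is the distance being histogrammed: the chemical distance $D$ is the length of the shortest zigzag path along bonds, which can be obtained with Dijkstra's algorithm over the bond graph (a sketch; names are ours):
\begin{verbatim}
import heapq

def chemical_distances(adj, src):
    # adj[i]: list of (j, bond_length); returns the shortest zigzag
    # ("chemical") distance D from cell src to every cell of its cluster
    D = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, i = heapq.heappop(pq)
        if d > D[i]:
            continue
        for j, w in adj[i]:
            if d + w < D.get(j, float("inf")):
                D[j] = d + w
                heapq.heappush(pq, (d + w, j))
    return D
# Each quark pair then enters the g(D) histogram in the bin of its D,
# with the same weight w / (2 pi rho r dr) as in Eq. (\ref{eqgd}).
\end{verbatim}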
\begin{figure}
\includegraphics[width=0.9\linewidth]{gd_all_scale.eps}
\caption{\label{s019} Quark pair correlation function at temperature
$T=0.475T_c$ and $T=T_c$ in molecule-like aggregation model.}
\end{figure}
\section{\label{section4} pair correlation function from molecule-like aggregation model}
In our molecule-like aggregation model, the formation of infinite
clusters is taken as the appearance of a new constituent {\it i.e.}\ } \def\eg{{\it e.g.}\ } \def\cf{{\it cf.}\ sQGP in
the system. The temperature where infinite clusters start to form is
referred to as $T_c$ and all the temperatures are scaled by it while
investigating the quark pair correlation function inside clusters.
\begin{figure*}
\includegraphics[width=0.7\linewidth]{beforeTc.eps}
\caption{\label{beforeTc} Quark pair correlation function $g(D)$ in
the range $D>0.2$ fm before $T_c$ in the molecule-like aggregation
model.}
\end{figure*}
\begin{figure}
\includegraphics[width=0.7\linewidth]{afterTc.eps}
\caption{\label{afterTc} (color online) Quark pair correlation
function $g(D)$ in the range $D>0.2$ fm from the start of crossover
to the end of crossover in the molecule-like aggregation model.}
\end{figure}
At each temperature, one quark is randomly selected from a randomly
chosen cluster in the system as the center particle. The geometrical
distance $r$ and chemical distance $D$ between this center quark and
any other quarks inside this cluster are calculated. Pair
correlation function $g(D)$ is then evaluated by Eq.~(\ref{eqgd}).
Fig.~\ref{s019} shows the pair correlation function for $T=0.475
T_c$ and $T=T_c$. At $T=0.475 T_c$, the maximum bond length
$S_0=0.19$\;fm is less than the smallest distance between any two
cells {\it i.e.}\ } \def\eg{{\it e.g.}\ } \def\cf{{\it cf.}\ $S_0<2 r_e$ and there is no bond formed between different
cells. Thus the peak at $D<0.2$\;fm in Fig.~\ref{s019} shows the
intra-cell structure, which is analogous to the impenetrable distance
in the usual liquid state theory. When $T=T_c$, infinite clusters
appear with small probability, and we see from the right panel of
Fig.~\ref{s019} that small bumps are shown at larger $D$ beside the
peak at $D<0.2$\;fm. To study the quark correlation induced by
delocalization and the formation of bonds, we will not look at the
trivial intra-cell correlation and only care about the region where the
new correlations appear. In the following figures, only $g(D)$ in
the range $D>0.2$ fm is drawn to focus on the structure of the pair
correlation function induced by quark delocalization.
Fig.~\ref{beforeTc} shows the $g(D)$ evolution at $T<T_c$. When $T$ is much less than $T_c$, i.e. long before the crossover starts, there are no clear correlation peaks, while on approaching the crossover temperature a shoulder appears and quickly grows into the first pronounced peak at $T=0.804 T_c$. As $T$ increases further, a new shoulder emerges following this peak, and the second peak forms when $T=T_c$, cf. the full line in Fig.~\ref{afterTc}.
From the evolution of the pair correlation function with
temperature, we can get the following information about the
structure of gQGP formed in the molecule-like aggregation model:
(1) The first and highest peak of the pair correlation function
represents the correlation from the nearest neighboring quarks
between different hadrons. The other small peaks are diminishing as
$D$ increases because of the long range disorder. The shape of
$g(D)$ around $T_c$ shows the typical short-range-order behavior of
liquid state, which indicates that the quark matter in the
molecule-like aggregation model possesses liquid structure.
(2) During the process of crossover, {\it i.e.}\ } \def\eg{{\it e.g.}\ } \def\cf{{\it cf.}\ $T_c<T<1.39T_c$, the
positions of the peaks shift to larger $D$ as the temperature increases, cf. Fig.~\ref{afterTc}, showing that the correlation length increases with temperature. This indicates that the viscosity is getting smaller and smaller as the temperature increases.
\section{Conclusion}
The molecule-like aggregation model provides a clear picture for the
structure of the matter formed in crossover, {\it i.e.}\ } \def\eg{{\it e.g.}\ } \def\cf{{\it cf.}\ gQGP, which is a
special form of sQGP, and the evolution of the structure in the
process of crossover. In this picture, quarks are moving from one
cell, which is initially a hadron, to another through bonds to form grape-shaped quark matter. Based on this model, the pair distribution function related to the chemical distance $D$ for the gQGP and its evolution in the whole process of the crossover are investigated. The typical behavior of the pair distribution function demonstrates that a liquid structure exists for the gQGP. The temperature dependence of the
correlation range qualitatively shows that the viscosity is getting
lower as the temperature increases during the crossover process.
It is worthwhile to notice that the above conclusion on the behavior of the pair correlation function in the process of the crossover is mainly based on the molecule-like aggregation assumption itself, and does not rely much on the toy model used in the calculation. It can be expected that the qualitative features shown in Figs.~2 and 6 will persist in a more realistic case.
\subsection{Definitions}
In this paper, we consider the \textbf{semi-random process} suggested by Peleg Michaeli and studied recently in \cite{beneliezer2019semirandom,beneliezer2020fast,gao2020hamilton} that can be viewed as a ``one player game''. The process starts from $G_0$, the empty graph on the vertex set $[n]:=\{1,\ldots,n\}$ where $n \ge 1$. In each step $t$, a vertex $u_t$ is chosen uniformly at random from $[n]$. Then, the player (who is aware of graph $G_t$ and vertex $u_t$) must select a vertex $v_t$ and add the edge $u_tv_t$ to $G_t$ to form $G_{t+1}$. The goal of the player is to build a (multi)graph satisfying a given property $\mathcal{P}$ as quickly as possible. It is convenient to refer to $u_t$ as a {\bf square}, and $v_t$ as a {\bf circle} so every edge in $G_t$ joins a square with a circle. We say that vertex $j \in [n]$ is \textbf{covered} by the square $u_t$ arriving at round $t$,
provided $u_t = j$. The analogous definition extends to the circle $v_t$. Equivalently, we may view $G_t$ as a directed graph where each arc directs from $u_t$ to $v_t$. For this paper, it is easier to consider squares and circles for counting arguments.
A \textbf{strategy} $\mathcal{S}$ is defined by specifying for each $n \ge 1$, a sequence of functions $(f_{t})_{t=1}^{\infty}$, where for each $t \in \mathbb{N}$, $f_t(u_1,v_1,\ldots, u_{t-1},v_{t-1},u_t)$ is a distribution over $[n]$
which depends on the vertex $u_t$, and the history of the process up until step $t-1$. Then, $v_t$ is chosen according to this distribution. If $f_t$ is an atomic distribution, then $v_t$ is determined by $u_1,v_1, \ldots ,u_{t-1},v_{t-1},u_t$. Observe that this means that the player needs to select her strategy in advance, before the game actually starts. We then denote by $(G_{i}^{\mathcal{S}}(n))_{i=0}^{t}$
the sequence of random (multi)graphs obtained by following the strategy $\mathcal{S}$ for $t$ rounds, where we shorten $G_{t}^{\mathcal{S}}(n)$
to $G_t$ or $G_{t}(n)$ when clear.
Suppose $\mathcal{P}$ is a monotonely increasing property. Given a strategy $\mathcal{S}$ and a constant $0<q<1$, let $\tau_{\mathcal{P}}(\mathcal{S},q,n)$ be the minimum $t \ge 0$ for which $\mathbb{P}[G_{t} \in \mathcal{P}] \ge q$,
where $\tau_{\mathcal{P}}(\mathcal{S},q,n):= \infty$ if no such $t$ exists. Define
\[
\tau_{\mathcal{P}}(q,n) = \inf_{ \mathcal{S}} \tau_{\mathcal{P}}( \mathcal{S},q,n),
\]
where the infimum is over all strategies on $[n]$.
Observe that for each $n \ge 1$, if $0 \le q_{1} \le q_{2} \le 1$, then $\tau_{\mathcal{P}}(q_1,n) \le \tau_{\mathcal{P}}(q_2,n) $ as $\mathcal{P}$ is increasing. Thus, the function $q\rightarrow \limsup_{n\to\infty} \tau_{\mathcal{P}}(q,n)$ is non-decreasing,
and so the limit,
\[
\tau_{\mathcal{P}}:=\lim_{q\to 1^-}\limsup_{n\to\infty} \frac{\tau_{\mathcal{P}}(q,n) }{n},
\]
is guaranteed to exist. The goal is typically to compute upper and lower bounds on $\tau_{\mathcal{P}}$
for various properties $\mathcal{P}$.
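To make the process concrete, the sketch below (names are ours) simulates one run under the naive greedy strategy: the square $u_t$ is matched to an arbitrary unsaturated vertex whenever $u_t$ itself is unsaturated, and the round is otherwise wasted. Note that this strategy is not linear in $n$: when $m$ vertices remain unsaturated, a useful square arrives with probability $m/n$, so about $\tfrac{n}{2}\ln n$ rounds are needed in expectation. This is one reason why efficient strategies, such as the ones studied below, must also exploit rounds whose squares land on already saturated vertices.
\begin{verbatim}
import random

def naive_greedy_rounds(n, seed=0):
    # one run of the semi-random process under the naive greedy strategy
    rng = random.Random(seed)
    unsat = set(range(n))          # vertices not yet saturated
    t = 0
    while len(unsat) > 1:
        t += 1
        u = rng.randrange(n)       # the uniformly random square u_t
        if u in unsat:
            unsat.remove(u)
            unsat.pop()            # circle v_t: any unsaturated vertex
    return t / n
\end{verbatim}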
\subsection{Main Results}
In this work, we focus on the property of having a perfect matching, which we denote by ${\tt PM}$. We remark that, by convention, we say that a graph on an odd number of vertices has a perfect matching if the matching saturates all but one vertex. Hence, if $n$ is odd, then $\tau_{{\tt PM}}(q,n)\le \tau_{{\tt PM}}(q,n+1)$ by the following natural coupling between the two corresponding processes. Indeed, the first strategy on $n$ vertices may steal the second strategy on $n+1$ vertices if the square and the circle are both on $[n]$. Otherwise, the square and the circle are placed uniformly at random on $[n]$. Immediately, the subgraph induced on $[n]$ by the second strategy is a subgraph of the graph constructed by the first strategy, and the inequality follows. Since $\tau_{{\tt PM}}$ is an asymptotic definition in $n$, it suffices to consider only even $n$. Note also that since we focus on creating perfect matchings, we shall hereby restrict our attention to strategies which do not create self-loops.
Our first result is an improvement on the upper bound of $\tau_{\texttt{PM}}$ from $1+2/e < 1.73576$ to $1.20524$. The current upper bound of $1+2/e$ follows from the fact that one may couple the semi-random process with the following process that generates a random bipartite graph which is known to have a perfect matching with probability tending to $1$ as $n \rightarrow \infty$ (\emph{a.a.s.})~\cite{pittel}. The graph has two bipartite parts of size $n/2$ and is generated in two rounds. The first round is the bipartite version of the 1-out process. That is, each vertex chooses an out-neighbour independently and uniformly at random from the other vertex part. A vertex is classified as unpopular if it has been chosen by at most one vertex. In the second round, each unpopular vertex chooses another out-neighbour independently and uniformly at random from the other vertex part. The final bipartite graph is obtained by ignoring the directions of the arcs.
We propose a fully adaptive algorithm to construct a semi-random graph with a perfect matching, which gives the following upper bound on $\tau_{\texttt{PM}}$.
\begin{theorem} \label{thm:main_upper_bound}
$\tau_{\texttt{PM}} \le \beta + 10^{-5} \le 1.20524$, where $\beta$ is derived
from a system of differential equations.
\end{theorem}
Our second result is an improvement on the lower bound of $\tau_{\texttt{PM}}$ from $\ln(2) \ge 0.69314$ to $0.93261$.
First observe that trivially $\tau_{{\tt PM}} \ge 1/2$, as any strategy of the player must wait at least $n/2$ rounds in order to build a perfect matching. In fact, this lower bound can be improved to $\ln(2) \ge 0.69314$, as was first observed in~\cite{beneliezer2019semirandom}. There are two obvious necessary conditions for the existence of a perfect matching, both giving exactly the same lower bound. If there exists a perfect matching in $G_t$, then $G_t$ has minimum degree at least 1 and there are at least $n/2$ vertices in $G_t$ with at least one square. In order to warm up, we re-prove this result in Subsection~\ref{sec:warm-up}.
\begin{proposition}[\cite{beneliezer2019semirandom}] \label{prop:warm_up_lowerbound}
$\tau_{\texttt{PM}} \ge \ln(2) \ge 0.69314$.
\end{proposition}
The precise statement of our lower bound is in terms of a root of a function. Specifically,
define
\[
\alpha = \inf\{b\ge 0: g(b) \ge 1/2 \},
\]
where
\begin{equation*}
g(b) := 1 + \frac {1-2b}{2} \exp(-b) - (b+1) \exp(-2b) - \frac {1}{2} \exp(-3b).
\end{equation*}
We prove the following:
\begin{theorem} \label{thm:main_lower_bound}
$\tau_{\texttt{PM}} \ge \alpha \ge 0.93261$.
\end{theorem}
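The constant in this theorem is easy to reproduce numerically. The following Python sketch (for illustration only, and not part of any proof) locates the crossing of $1/2$ by bisection, assuming the monotonicity of $g$ on $[0,\infty)$ that is invoked later in the proof; the bracketing interval $[0,2]$ and the number of iterations are our own arbitrary choices.
\begin{verbatim}
from math import exp

def g(b):
    # the function g(b) defined above
    return (1 + (1 - 2*b) / 2 * exp(-b)
              - (b + 1) * exp(-2*b)
              - exp(-3*b) / 2)

lo, hi = 0.0, 2.0           # g(0) = 0 < 1/2 and g(2) > 1/2
for _ in range(60):         # bisection for the crossing of 1/2
    mid = (lo + hi) / 2
    if g(mid) < 0.5:
        lo = mid
    else:
        hi = mid
print(lo)                   # approximately 0.93265 >= 0.93261
\end{verbatim}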
\subsection{Previous Results}
Let us briefly describe a few known results on the semi-random process. In the very first paper~\cite{beneliezer2019semirandom}, it was shown that the process is general enough to approximate (using suitable strategies) several well-studied random graph models. In the same paper, the process was studied for various natural properties such as having minimum degree $k \in {\mathbb N}$ or having a fixed graph $H$ as a subgraph.
The property of having a Hamilton cycle, which we denote by ${\tt HAM}$, was also studied for the semi-random process. As observed in~\cite{beneliezer2019semirandom}, if $G_t$ has a Hamilton cycle, then $G_t$ has minimum degree at least 2, yielding $\tau_{\tt HAM} \ge \ln 2+ \ln(1+\ln2) \ge 1.21973$. On the other hand, it is known that the famous $3$-out process is \emph{a.a.s.}\ Hamiltonian~\cite{3out}. As the semi-random process can be coupled with the $3$-out process, we get that $\tau_{\tt HAM} \le 3$. A new upper bound was obtained in~\cite{gao2020hamilton} in terms of an optimal solution of an optimization problem, whose value is believed to be $2.61135$ based on numerical support. In the same paper, the lower bound mentioned above was shown not to be tight.
Finally, let us mention the property of containing a given spanning graph $H$ as a subgraph. It was asked by Noga Alon whether for any bounded-degree $H$, one can construct a copy of $H$ \emph{a.a.s.}\ in $O(n)$ rounds. This question was answered positively in a strong sense in~\cite{beneliezer2020fast}, in which it was shown that any graph with maximum degree $\Delta$ can be constructed \emph{a.a.s.}\ in $(3\Delta/2+o(\Delta))n$ rounds. They also proved that if $\Delta = \omega (\log(n))$, then this upper bound improves to $(\Delta/2 + o(\Delta))n$ rounds. Note that both of these upper bounds are asymptotic in $\Delta$. When $\Delta$ is constant in $n$, such as in both the perfect matching and Hamiltonian cycle setting, determining the optimal dependence on $\Delta$ for the number of rounds needed to construct $H$ remains open.
\section{Upper Bound}
In this section, we describe an adaptive algorithm which builds a perfect matching in around $1.2n$ rounds \emph{a.a.s.}
In order to warm-up, we start with a slightly simpler algorithm to analyze and then discuss the adjustments needed to claim the desired upper bound.
Note that throughout the paper, floor is automatically applied to any number that needs to be an integer (typically a function of $n$).
\subsection{Warming-up}\label{warm-up-upper-bound}
In this section we analyse a simplified algorithm which constructs a semi-random graph with a perfect matching in less than $1.28n$ steps.
The algorithm operates in two stages.
In the first stage, the algorithm keeps building the matching greedily whenever possible, but will keep $u_tv_t$ for future augmentation if $u_t$ lands on a saturated vertex.
In each step of the algorithm we will keep track of a matching $M_t$ that is currently built and the set $U_t$ of unsaturated vertices. We will also maintain a set of red vertices which is a subset of vertices that are saturated by $M_t$.
If a red vertex is hit by a square, then an augmenting path will be created and we can update $M_t$ to a larger matching.
In order to briefly explain the idea, imagine $uv$ is an edge in the matching. If a square lands on $u$ then we will place its corresponding circle on an unsaturated vertex $x$, and then colour $v$ red. If in the future a square is placed on $v$, then we will place its corresponding circle on an unsaturated vertex $y$. Now the path $xuvy$ is an augmenting path. When the red vertex $v$ receives a square, the algorithm needs to know the neighbour of $u$ that is unsaturated, and there might be several of them. In this simple algorithm, we keep track of only the first square that landed on $u$.
Suppose that at step $t$, $u_t$ lands on a saturated vertex $u$ and $v_t$ is the corresponding circle which is necessarily placed on an unsaturated vertex. Colour vertex $u$ as well as the edge $u_tv_t$ green, and colour the mate of $u$ in the matching red.
Below is a formal description of the $t$-th step of the algorithm.
\begin{enumerate}[label=(\subscript{A}{{\arabic*}})]
\item If $u_t\in U_{t-1}$ then let $v_t$ be a uniformly random vertex in $U_{t-1}$. Let $M_t=M_{t-1}\cup \{ u_tv_t \}$ and $U_t=U_{t-1}\setminus \{u_t,v_t\}$. For every green vertex $x$, if it is adjacent to either $u_t$ or $v_t$ by a green edge then uncolour this green edge and uncolour $x$ (from green) and uncolour the mate of $x$ in $M_{t}$ (from red). \label{eqn:randomized_case_1}
\item If $u_t$ lands on a red vertex, then let $v_t$ be a uniformly random vertex in $U_{t-1}$. Let $x$ be the mate of $u_t$ in $M_{t-1}$. Let $y$ be the vertex in $U_{t-1}$ which is adjacent to $x$ by a green edge. Let $M_t$ be the matching obtained by augmenting along the path $yxu_tv_t$, and let $U_t = U_{t-1} \setminus \{y, v_t\}$. Update the green vertices and edges and the red vertices accordingly as in \ref{eqn:randomized_case_1}. \label{eqn:randomized_case_2}
\item If $u_t$ lands on an uncoloured saturated vertex, then choose $v_t$ uniformly at random from $U_{t-1}$. Colour the edge $u_tv_t$ and the vertex where $u_t$ lands green, and colour the mate of $u_t$ in $M_{t-1}$ red. Let $M_t=M_{t-1}$ and $U_t = U_{t-1}$. \label{eqn:randomized_case_3}
\item If $u_t$ lands on a green vertex, then let $v_t$ be an arbitrary vertex in $[n]$. Let $M_t=M_{t-1}$ and $U_t = U_{t-1}$. The edge $u_tv_t$ will not be used in the matching construction in the future.
\end{enumerate}
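For illustration, the algorithm above is straightforward to simulate. The following Python sketch (not used in the analysis) implements the four cases directly; for simplicity it excludes the $O(1/n)$ boundary cases in which the circle would land on $u_t$ or on $y$, and the stopping threshold \texttt{eps} is a parameter of the experiment rather than the value $\eps=10^{-14}$ fixed below.
\begin{verbatim}
import random
from collections import defaultdict

def warm_up_stage_one(n, eps=1e-3, seed=0):
    rng = random.Random(seed)
    mate = [-1] * n               # mate[v] = partner of v in M_t, or -1
    unsat = list(range(n))        # unsaturated vertices
    pos = {v: i for i, v in enumerate(unsat)}
    green_to = {}                 # green vertex -> unsaturated end of its green edge
    greens_at = defaultdict(set)  # unsaturated vertex -> attached green vertices
    red = set()

    def random_unsat(avoid):
        v = avoid
        while v == avoid:
            v = unsat[rng.randrange(len(unsat))]
        return v

    def saturate(a, b):           # a, b leave U_t; uncolour incident green edges
        for v in (a, b):
            i = pos.pop(v)
            last = unsat.pop()
            if last != v:
                unsat[i], pos[last] = last, i
            for g in greens_at.pop(v, ()):
                del green_to[g]
                red.discard(mate[g])

    t = 0
    while len(unsat) > eps * n:
        t += 1
        u = rng.randrange(n)
        if mate[u] == -1:                   # case (A_1)
            v = random_unsat(u)
            mate[u], mate[v] = v, u
            saturate(u, v)
        elif u in red:                      # case (A_2): augment along y-x-u-v
            x = mate[u]
            y = green_to.pop(x)
            greens_at[y].discard(x)
            red.discard(u)
            v = random_unsat(y)
            mate[x], mate[y] = y, x
            mate[u], mate[v] = v, u
            saturate(y, v)
        elif u not in green_to:             # case (A_3): new green/red pair
            v = unsat[rng.randrange(len(unsat))]
            green_to[u] = v
            greens_at[v].add(u)
            red.add(mate[u])
        # case (A_4): wasted round, nothing to do
    return t / n

print(warm_up_stage_one(100000))  # roughly 1.23 here; tends to ~1.277 as eps -> 0
\end{verbatim}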
The first stage terminates at the step when $|U_t|$ becomes at most $\eps n$ where $\eps=10^{-14}$. In the second stage, we design the algorithm which saturates the remaining unsaturated vertices in at most $100\sqrt{\eps}n=10^{-5}n$ steps. We will define and analyse the second stage after analysing the first.
\medskip
Let $X(t)=2|M_t|$ and let $R(t)$ denote the number of red vertices. By the algorithm, $R(t)$ is also equal to the number of green vertices, and thus equal to the number of green edges.
Let $H_t:=(X(i), R(i))_{0\le i\le t}$. Note that $H_t$ does \textit{not} encompass the full history of the random graph process at time $t$ (i.e., $G_0, \ldots ,G_t$ -- the first $t+1$
graphs constructed by the randomized algorithm).
We condition on less information so that the circle placements amongst the unsaturated vertices remain distributed uniformly at random. This allows
us to claim the following expected difference equations:
\begin{eqnarray*}
\mathbb E[X(t+1)-X(t)\mid H_t]&=&\frac{2(n-X(t) + R(t))}{n} + O(1/n),\\
\mathbb E[R(t+1)-R(t)\mid H_t]&=& \frac{n-X(t)}{n} \cdot \frac{-2R(t)}{n-X(t)}+ \frac{R(t)}{n} \left(-1- \frac{2(R(t)-1)}{n-X(t)} \right) \\
&& + \frac{X(t)-2 R(t)}{n} + O(1/n).
\end{eqnarray*}
The first equation above is obvious, as the probability that $u_{t+1}$ lands on a vertex in $U_t$ or a red vertex is $(n-X(t)+R(t))/n$. The probability that $v_{t+1}$ is on the same vertex as $u_{t+1}$ in \ref{eqn:randomized_case_1}, or is on vertex $y$ in \ref{eqn:randomized_case_2}, is $O(1/n)$.
In order to justify the second equation, note that if $u_{t+1}$ lands on a vertex in $U_t$, then two vertices $z_1$ and $z_2$ in $U_t$ become saturated after the augmentation.
Since the unsaturated ends of the green edges are uniformly distributed over $U_t$, the expected number of green edges incident with $z_1$ or $z_2$ is equal to $2R(t)/(n-X(t))$.
The other ends of these green edges become uncoloured from green after the augmentation which, in turn, forces their mates to become uncoloured from red. If $u_{t+1}$ lands on a red vertex, the situation is similar, except that the mate of $u_{t+1}$ is first uncoloured from green. If $u_{t+1}$ lands on a green vertex, then there is no change to $R(t)$. Finally, if $u_{t+1}$ lands on a vertex that is neither unsaturated nor coloured, then a new green vertex is created. This explains the second equation above.
By writing $x(s)=X(sn)/n$ and $r(s)=R(sn)/n$, we have that
\begin{eqnarray}
x'&=&2(1-x+r),\label{x'}\\
r'&=&\frac{-2r}{1-x}(1-x+r)-r+x-2r, \label{y'}
\end{eqnarray}
with the initial conditions $x(0)=r(0)=0$.
The right hand sides of~(\ref{x'}) and~(\ref{y'}) are continuous and Lipschitz in the connected open set $D=\{(s,x,r): -\eps<s<3, |r|<2, -\eps<x<1-\eps/4\}$ which contains the point $(0,x(0),r(0))=(0,0,0)$. Let $T_D$ be the first step $t$ where $(t/n,X(t)/n, R(t)/n)\notin D$. Then, $|X(t+1)-X(t)|\le 1$ for every $t<T_D$, and with probability $1-O(n^{-2})$, $|R(t+1)-R(t)|\le \log^2 n$ for every $t<T_D$ (in order to see the second assertion, note that, in expectation, every unsaturated vertex is adjacent to $R(t)/|U_t|\le 4/\eps$ green vertices, and thus the claim easily follows by standard bounds on the tail probability of a binomial variable). By the differential equation method of Wormald (see, for example,~\cite[Theorem 5.1]{de} or~\cite{warnke2019wormalds}) (applied with $\gamma=O(n^{-2})$, $\beta=\log^2 n$ and $\lambda=n^{-1/4}$),
the differential equations~(\ref{x'}) and~(\ref{y'}) with the given initial conditions have a unique solution that can extend arbitrarily close to the boundary of $D$, and \emph{a.a.s.}\ $X(t)=x(t/n)n+O(\lambda n)$ for every $t$ where $t/n<\sigma$ where $\sigma$ is the supremum of $s$ where $x(s)\le 1-\eps/3$ and $s<3$.
Numerical calculations show that $x$ reaches $1-\eps/3$ before $s=1.2769497$.\footnote{Maple worksheet may be accessed at \texttt{https://math.ryerson.ca/\~{}pralat/}.} Hence, \emph{a.a.s.}\ after $1.2769497n$ steps there are at most $\eps n$ unsaturated vertices.
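The constant $1.2769497$ can also be recovered directly by integrating~(\ref{x'})--(\ref{y'}); the sketch below uses a fixed-step fourth-order Runge--Kutta scheme and stops just short of the singular boundary $x=1$, so it slightly underestimates the terminal time. The step size and the stopping threshold are arbitrary choices of the experiment.
\begin{verbatim}
def rhs(x, r):
    # right-hand sides of the system for x and r
    dx = 2 * (1 - x + r)
    dr = -2 * r * (1 - x + r) / (1 - x) - r + x - 2 * r
    return dx, dr

h, s, x, r = 1e-5, 0.0, 0.0, 0.0
while 1 - x > 1e-8:                  # stop just short of x = 1
    k1 = rhs(x, r)
    k2 = rhs(x + h/2 * k1[0], r + h/2 * k1[1])
    k3 = rhs(x + h/2 * k2[0], r + h/2 * k2[1])
    k4 = rhs(x + h * k3[0], r + h * k3[1])
    x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    r += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    s += h
print(s)                             # approximately 1.2768, just below 1.2769497
\end{verbatim}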
\medskip
Finally, we describe the second stage of the algorithm, which we refer to as the \textbf{clean-up algorithm}. Let $j_0$ be the number of unsaturated vertices after the first stage. Thus, $j_0\le \eps n$. Let $j_k=\lfloor(3/4)^k j_0\rfloor$, for $k=1,2,\ldots$.
For each $k\ge 1$, the algorithm saturates $j_{k-1}-j_k$ unsaturated vertices by doing the following.
\begin{enumerate}
\item[(i)] Uncolour all vertices and squares in the whole graph.
\item[(ii)] Sprinkle $\lfloor\sqrt{3 j_{k-1}n}/4\rfloor$ semi-random edges $u_tv_t$ as follows. If $u_t$ lands on a saturated vertex $x$, then let $v_t$ be an unsaturated vertex selected uniformly at random, and colour the mate of $x$ in $M_t$ red (it is possibly already red). If $u_t$ lands on an unsaturated vertex, then let $v_t$ be an arbitrary vertex, and the algorithm will not use this edge for constructing the matching.
\item[(iii)] Let ${\mathcal R}$ be the set of red vertices after (ii). At each step $t$, if $u_t$ lands on a red vertex or an unsaturated vertex then let $v_t$ be a uniformly random unsaturated vertex, and then augment $M_t$ as in \ref{eqn:randomized_case_1} and \ref{eqn:randomized_case_2} as in the first stage, and update ${\mathcal R}$ accordingly. Stop when $|U_t|=j_k$. Let $T_k$ be the total number of steps in (ii) and (iii).
\end{enumerate}
The total number of steps in the final stage of constructing a perfect matching is then $\sum_{k\ge 1} T_k$. Next we bound $T_k$ for every $k$. Of course, the number of steps in (ii) is at most $\sqrt{3 j_{k-1}n}/4$. It is easy to show by standard concentration arguments that with probability $1-O(n^{-2})$, $|{\mathcal R}| \ge \sqrt{3 j_{k-1}n}/5$ after (ii). (Indeed, since $j_{k-1} \le \eps n$ and the number of rounds is $\sqrt{3 j_{k-1}n}/4 = o(n)$, $\mathbb E[|{\mathcal R}|] \ge (1-\eps+o(1)) \sqrt{3 j_{k-1}n}/4$.) During the execution of (iii), for each augmentation that is performed, the expected number of red vertices that become uncoloured is at most
\[
1+2\cdot \frac{\sqrt{3 j_{k-1}n}/4}{j_k},
\]
since the total number of red vertices is at most $\sqrt{3 j_{k-1}n}/4$, and the total number of unsaturated vertices in each round is at least $j_k$.
There are $\frac{1}{2}(j_{k-1}-j_k)$ total augmentations in (iii). Thus, by standard concentration arguments, with probability $1-O(n^{-2})$, at the end of (iii),
\[
|{\mathcal R}|\ge \frac{\sqrt{3 j_{k-1}n}}{5}-2\cdot\left(1+ 2\cdot \frac{\sqrt{3 j_{k-1}n}/4}{j_k} \right) \cdot \frac{j_{k-1}-j_k}{2} = \frac{\sqrt{3j_{k-1}n}}{30}-\frac{j_{k-1}}{4}\ge \frac{\sqrt{2j_{k-1}n}}{30},
\]
where the last inequality holds since $j_{k-1}\le \eps n$ where $\eps=10^{-14}$ by our choice.
Consequently, the probability that a square lands on a red vertex at each step in (iii) is at least $\sqrt{2j_{k-1}}/(30\sqrt{n})$.
Hence,
\[
{\mathbb E} T_k\le \frac{1}{4}\sqrt{3j_{k-1}n}+\left(\frac{\sqrt{2j_{k-1}}}{30\sqrt{n}}\right)^{-1}\frac{j_{k-1}-j_k}{2}<3.1\sqrt{j_{k-1}n}.
\]
Then, again by standard concentration arguments, \emph{a.a.s.}\
\[
T_k\le \max\{4\sqrt{j_{k-1}n},\log^2 n\} =4\sqrt{j_{k-1}n} \le 4\sqrt{(3/4)^{k-1}\eps}\cdot n,\ \ \mbox{for every $k\ge 1$}.
\]
Thus, \emph{a.a.s.}\ the total number of steps consumed in the second stage is at most
\[
\sum_{k\ge 1} T_k \le \sum_{k\ge 1} 4\sqrt{(3/4)^{k-1}\eps}\cdot n <100\sqrt{\eps} n.
\]
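For the record, the geometric sum evaluates to
\[
\sum_{k\ge 1} 4\sqrt{(3/4)^{k-1}\eps}\cdot n = \frac{4\sqrt{\eps}}{1-\sqrt{3}/2}\cdot n < 30\sqrt{\eps}\, n,
\]
so the constant $100$ above is generous.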
By our choice of $\eps$, $100\sqrt{\eps}=10^{-5}$, so \emph{a.a.s.}\ the total number of steps the algorithm spends on constructing a perfect matching is at most
$(1.2769497+10^{-5})n<1.27696n$.
\subsection{Improving the Upper Bound}
In this section, we describe a modified algorithm which allows us to get the claimed upper bound
of Theorem \ref{thm:main_upper_bound}. As before, the algorithm keeps building the matching greedily whenever possible. However, instead of
making random choices, it will choose its circles both deterministically and in a strategic way. Specifically,
suppose that $u_t$ lands on a saturated vertex. In this case, we choose $v_t$ such that $v_t$ has the minimum number of circles
on it amongst the vertices of $U_{t-1}$. We make the analogous decision when $u_t$ lands on an unsaturated vertex,
as well as when $u_t$ lands on a red vertex and we wish to extend the matching via an augmenting path. We now formally describe our algorithm
for round $t$:
\begin{enumerate}[label=(\subscript{B}{{\arabic*}})]
\item If $u_t\in U_{t-1}$, then let $v_t$ be a vertex of $U_{t-1}$ with a minimum number of circles among those in $U_{t-1}\setminus\{u_t\}$. Let $M_t=M_{t-1}\cup \{ u_tv_t \}$ and $U_t=U_{t-1}\setminus \{u_t,v_t\}$. For every green vertex $x$, if it is adjacent to either $u_t$ or $v_t$ by a green edge then uncolour this green edge and $x$ (from green) and uncolour the mate of $x$ in $M_{t}$ (from red). \label{eqn:deterministic_case_1}
\item If $u_t$ lands on a red vertex, then let $x$ be the mate of $u_t$ in $M_{t-1}$. Let $y$ be the vertex in $U_{t-1}$ which is adjacent to $x$ by a green edge. Let $v_t$ be a vertex of $U_{t-1}$ with a minimum number of circles among those in $U_{t-1}\setminus\{y\}$. Let $M_t$ be the matching obtained by augmenting along the path $yxu_tv_t$, and let $U_t = U_{t-1} \setminus \{y, v_t\}$. Update the green vertices and edges and the red vertices accordingly as in \ref{eqn:deterministic_case_1}. \label{eqn:deterministic_case_2}
\item If $u_t$ lands on an uncoloured saturated vertex, then let $v_t$ be a vertex of $U_{t-1}$ with a minimum number of circles. Colour the vertex where $u_t$ lands green and colour the mate of $u_t$ in $M_{t-1}$ red. Colour the edge $u_tv_t$ green. Let $M_t=M_{t-1}$ and $U_t = U_{t-1}$. \label{eqn:deterministic_case_3}
\item If $u_t$ lands on a green vertex, then let $v_t$ be an arbitrary vertex in $[n] \setminus U_{t-1}$. (We restrict to $[n] \setminus U_{t-1}$ so that we can conveniently keep track of the number of circles on our unsaturated vertices.) Let $M_t=M_{t-1}$ and $U_t = U_{t-1}$. The edge $u_tv_t$ will not be used in the matching construction in the future. \label{eqn:deterministic_case_4}
\end{enumerate}
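This strategy is also easy to simulate. In the Python sketch below (again for illustration only), the circle counts of the unsaturated vertices are kept in a lazily updated min-heap, the $O(1/n)$ boundary cases are ignored, and the stopping threshold is an artefact of the experiment.
\begin{verbatim}
import random, heapq
from collections import defaultdict

def adaptive_algorithm(n, stop_frac=1e-4, seed=0):
    rng = random.Random(seed)
    mate = [-1] * n
    circles = [0] * n                 # circle counts of unsaturated vertices
    unsat = set(range(n))
    heap = [(0, v) for v in range(n)] # (count, vertex), lazily updated
    heapq.heapify(heap)
    green_to, greens_at, red = {}, defaultdict(set), set()

    def place_circle(excluded=None):
        # pop an unsaturated vertex with the fewest circles, skipping
        # stale heap entries and (temporarily) the excluded vertex
        stash = None
        while True:
            c, v = heapq.heappop(heap)
            if v not in unsat or c != circles[v]:
                continue              # stale entry
            if v == excluded:
                stash = (c, v)
                continue
            circles[v] += 1
            heapq.heappush(heap, (circles[v], v))
            if stash is not None:
                heapq.heappush(heap, stash)
            return v

    def saturate(a, b):
        for v in (a, b):
            unsat.discard(v)
            for g in greens_at.pop(v, ()):
                del green_to[g]
                red.discard(mate[g])

    t = 0
    while len(unsat) > stop_frac * n:
        t += 1
        u = rng.randrange(n)
        if mate[u] == -1:                  # case (B_1)
            v = place_circle(excluded=u)
            mate[u], mate[v] = v, u
            saturate(u, v)
        elif u in red:                     # case (B_2): augment y-x-u-v
            x = mate[u]
            y = green_to.pop(x)
            greens_at[y].discard(x)
            red.discard(u)
            v = place_circle(excluded=y)
            mate[x], mate[y] = y, x
            mate[u], mate[v] = v, u
            saturate(y, v)
        elif u not in green_to:            # case (B_3)
            v = place_circle()
            green_to[u] = v
            greens_at[v].add(u)
            red.add(mate[u])
        # case (B_4): wasted round
    return t / n

print(adaptive_algorithm(100000))  # stops early; the full analysis gives ~1.20524
\end{verbatim}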
As before, we define $X(t)$ to be the number of saturated vertices after $t \ge 0$ rounds.
Given $q \ge 0$, we define $\widetilde{R}_{q}(t)$ to be the number of unsaturated vertices with
precisely $q$ circles after $t$ rounds. Note that $\widetilde{R}_{q}(0)=0$ for all $q \ge 1$, and
$\widetilde{R}_{0}(0)=n$. Now, for $q \ge 1$, we say that a red vertex is of \textbf{type}
$q$ at round $t$, provided its mate in $M_{t}$ is
adjacent to an unsaturated vertex via a green edge with precisely $q$ circles on it.
We denote $R_{q}(t)$ as the number of red vertices of type $q$ at time $t \ge 0$.
Observe that $R_{q}(t) = q \widetilde{R}_{q}(t)$ for $q \ge 1$ and $t \ge 0$. Observe also that for every $t\ge 0$, there is some $q$ such that all red vertices are either of type $q$ or of type $q+1$.
For each $q \ge 1$, let us define the stopping time $\tau_{q}$ to
be the smallest $t \ge 0$ such that $\widetilde{R}_{j}(t) = 0$ for all $j<q$. It is obvious that $\tau_q$ is well-defined and is non-decreasing in $q$. Moreover, after step $\tau_q$, all unsaturated vertices in the graph have precisely $q$ circles. This is obvious if step $\tau_q$ falls into case \ref{eqn:deterministic_case_3}. Case \ref{eqn:deterministic_case_4} cannot happen in step $\tau_q$, as the number of circles on the unsaturated vertices does not change in this case. If case \ref{eqn:deterministic_case_1} or \ref{eqn:deterministic_case_2} occurs in step $\tau_q$, it is possible that $v_t$ receives more than $q$ circles in some rare situations, but $v_t$ becomes saturated after the augmentation and so the number of circles on every other unsaturated vertex is equal to $q$.
It follows that $\widetilde{R}_{q}(\tau_q) = |U_{\tau_q}| = n - X(\tau_q)$. By definition, $\tau_{0}=0$. Let us refer to \textbf{phase} $q$ as those $\tau_{q-1} \le t < \tau_{q}$.
We shall analyze our algorithm for the first $k$ phases using the differential equation
method. For the case when $k = 1100$, we show that \emph{a.a.s.}\ $X(\tau_k) \ge (1-10^{-6}) n$ and $\tau_{k} \le 1.20365 n $. We complete the perfect matching
by running the randomized algorithm from Subsection~\ref{warm-up-upper-bound}.\footnote{Maple worksheet may be accessed at \texttt{https://math.ryerson.ca/\~{}pralat/}.} The only difference is that the initial conditions are $x(0)=1-10^{-6}$ (instead of $x(0)=0$) and $r(0)=0$. The conclusion is that \emph{a.a.s.}\ after an additional $0.00158n$ steps there are at most $10^{-14}n$ unsaturated vertices. As before, the clean-up algorithm for the remaining unsaturated vertices takes at most $0.00001n$ rounds, yielding the upper bound of $(1.20365 + 0.00158 + 0.00001)n = 1.20524n$.
\subsubsection{Analyzing the $k$ phases}
Unlike in the analysis of the randomized algorithm, we can safely condition on
the full history of the random graph process in each phase.
If $G_{0},\ldots ,G_{t}$ correspond to the first $t+1$ graphs constructed
by the deterministic algorithm, then let $H_{t}:=(G_0,\ldots, G_t)$.
In phase $1$, we place circles on the unsaturated vertices, thus creating red vertices of type~$1$. This leads to the following
equations regarding the expected changes of $X(t)$ and $R_1(t)$, for $0 \le t < \tau_1$:
\begin{eqnarray*
\mathbb E[X(t+1)-X(t) \mid H_t]&=& 2 \left(\frac{n-X(t) + R_{1}(t)}{n} \right),\\
\mathbb E[R_{1}(t+1)- R_{1}(t) \mid H_t]&=& \left( \frac{X(t) - 2R_{1}(t)}{n}\right) - \frac{2 R_{1}(t)}{n} + O(1/n).
\end{eqnarray*}
Observe that the first equation follows, since $2$ vertices become saturated whenever
a square lands on a red vertex, or an unsaturated vertex. The second equation follows
since we create an additional red vertex whenever a square lands on an uncoloured saturated
vertex (of which there exist $X(t) - 2R_{1}(t)$). We destroy a red vertex whenever a square
lands on a type $1$ red vertex, or an unsaturated vertex with one circle on it. There
are $R_{1}(t)$ of each of these. Note that the $O(1/n)$ term accounts for contributions from the following two cases. In the first case, $u_t$ lands on an unsaturated vertex which is the unique one in $U_t$ which is not covered by a circle. In that case, $R_1(t)$ decreases by one, but the probability that this case occurs is $O(1/n)$. In the other case, $u_t$ lands on a red vertex as in case \ref{eqn:deterministic_case_2} where $y$ happens to be the only unsaturated vertex in $U_t$ that is not covered by any circle. Again, it is easy to see that the expected contribution from this case is $O(1/n)$.
Suppose that the functions $x_1,y_1: [0, \infty) \rightarrow \mathbb{R}$
are the unique solution to the following system of differential equations:
\begin{eqnarray*}
x_1' &=& 2(1 - x_1 + y_1) \\
y_{1}' &=& x_1 - 4y_1,
\end{eqnarray*}
with initial conditions $x_1(0)=y_{1}(0)=0$.
Let $c_1 > 0$ be the unique value such that $y_{1}(c_1)= 1 - x_{1}(c_1)$.
By applying the differential equation method, we can derive the following:
\begin{proposition} \label{prop:phase_one}
There exists a function $\eps_1(n) =o(1)$, such
that \emph{a.a.s.}\ for all $0 \le t \le \tau_1$ it holds that
\begin{equation*}
\max\{|X(t) -x_1(t/n) n|,|R_{1}(t) - y_{1}(t/n) n|\} \le \eps_1 n.
\end{equation*}
Moreover, $|\tau_{1}-c_{1} n| \le \eps_1 n$ \emph{a.a.s.}
\end{proposition}
Let us now consider phase $q \ge 2$.
Observe that there exist red vertices only of types $q-1$ and $q$ during phase $q$.
Thus, we can track the total number of red vertices at time $t$ by just focusing on
the random variables $R_{q-1}(t)$ and $R_{q}(t)$.
Specifically, observe the following for $\tau_{q-1} \le t < \tau_{q}$:
\begin{eqnarray*}\label{eqn:phase_expected_changes}
\mathbb E[X(t+1)-X(t) \mid H_t]&=&2 \left(\frac{n-X(t)}{n}+\frac{R_{q-1}(t)}{n} + \frac{R_{q}(t)}{n} \right),\\
\mathbb E[R_{q}(t+1)- R_{q}(t) \mid H_t]&=& q\left( \frac{(X(t) - 2R_{q}(t) - 2 R_{q-1}(t)) - R_{q}(t)}{n}\right) \\
&&- \frac{R_{q}(t)}{n} + O(1/n),
\end{eqnarray*}
where
\[
R_{q-1}(t)=(q-1)\left(n-X(t)-\frac{R_q(t)}{q}\right).
\]
The first equation follows since $2$ vertices become saturated whenever a square lands on either an unsaturated
vertex, or a red vertex. The second equation describes the expected change in the red vertices of type
$q$. Note that $q$ such vertices are created when a square lands on a saturated and uncoloured
vertex, of which there are precisely $X(t) - 2R_{q}(t) -2R_{q-1}(t)$. Similarly, $q$ red vertices of type $q$ are destroyed
when the square lands on a red vertex of type $q$, as well as when the square lands on an unsaturated vertex with precisely
$q$ circles. Observe that there are $R_{q}(t)$ of the former, and $\widetilde{R}_{q}(t)= R_{q}(t)/q$ of the latter.
Finally, the $O(1/n)$ term accounts for the rare case when $v_t$ is forced to be placed on a vertex with $q$ circles for the same reason as in phase 1.
We define
a system of differential equations for each phase $q \ge 2$, whose initial conditions
are determined inductively. The differential equation method implies that the trajectories of the random variables $X(t)$ and $R_q(t)$ are concentrated around the solutions of these differential equations in every phase. Let us assume that
we have defined a system for phase $q-1$, whose solution has functions $x_{q-1}$ and $y_{q-1}$, and a value $c_{q-1} \ge 0$
such that $y_{q-1}(c_{q-1}) = (q-1)\big(1 - x_{q-1}(c_{q-1})\big)$ (observe that the base case $q-1=1$ holds by Proposition~\ref{prop:phase_one}). Note that we may interpret $c_{q-1} n$ as approximately the point in
the process at which all the unsaturated vertices have $q-1$ circles, thus corresponding to the end of phase $q-1$.
Define $x_{q}, y_{q}, z_{q}:[c_{q-1}, \infty) \rightarrow \mathbb{R}$
to be the unique solution to the following system of differential equations:
\begin{eqnarray*
x_{q}' &=& 2(1 - x_q + y_q + z_{q}), \\
y_{q}' &=& q (x_q - 3 y_{q} - 2z_{q}) - y_{q},
\end{eqnarray*}
with initial conditions $x_{q}(c_{q-1})=x_{q-1}(c_{q-1})$, $z_{q}(c_{q-1}) = y_{q-1}(c_{q-1})$, and $y_{q}(c_{q-1})=0$, where $z_q=(q-1)(1-x_q-y_q/q)$.
Let $c_{q} \ge 0$ be the unique value such that $y_{q}(c_q) = q\big(1- x_{q}(c_q)\big)$ -- observe
that this is precisely the value such that $z_{q}(c_q)=0$,
for all $q \ge 2$.
By applying the differential equation method inductively for each phase $q$, we get the following:
\begin{proposition} \label{prop:phase_q}
For each $q \ge 1$, there exists a function $\eps_q(n) =o(1)$, such
that \emph{a.a.s.}, for all $0 \le t \le \tau_q$ it holds that
\begin{equation*}
\max\{|X(t) -x_q(t/n) n|,|R_{q}(t) - y_{q}(t/n) n|\} \le \eps_q n.
\end{equation*}
Moreover, $|\tau_{q}-c_{q} n| \le \eps_q n$ \emph{a.a.s.}
\end{proposition}
\begin{proposition}
If $k = 1100$, then \emph{a.a.s.}\ $\tau_{k} \le 1.20365\,n$ and $X(\tau_{k}) \ge (1-10^{-6}) n$.
\end{proposition}
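The constants in the proposition come from integrating the above systems phase by phase. A minimal sketch (plain Euler steps; the step size is our arbitrary choice, and a smaller step or a higher-order scheme sharpens the output) is as follows; note that the phase-$q$ system with $q=1$ reduces to the phase-one system, so a single loop suffices, with each phase terminating once $z_q$ reaches $0$, equivalently $y_q = q(1-x_q)$.
\begin{verbatim}
def run_phases(k=1100, h=1e-6):
    s, x = 0.0, 0.0
    for q in range(1, k + 1):
        y = 0.0                        # y_q(c_{q-1}) = 0
        while y < q * (1.0 - x):       # phase q ends when z_q = 0
            z = (q - 1) * (1.0 - x - y / q)
            dx = 2.0 * (1.0 - x + y + z)
            dy = q * (x - 3.0 * y - 2.0 * z) - y
            x, y, s = x + h * dx, y + h * dy, s + h
    return s, x

s, x = run_phases()
print(s, 1 - x)  # expect s close to 1.20365 and 1 - x of order 1e-6
\end{verbatim}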
After $k=1100$ phases, we have constructed a matching $M_{\tau_k}$ which \emph{a.a.s.}\ has at least $(1-10^{-6})n/2$ edges.
To complete the perfect matching, we apply the 2-stage randomized algorithm from Subsection~\ref{warm-up-upper-bound}. First, we uncolour all vertices and edges of the graph. Then, we keep exactly $(1-10^{-6})n/2$ edges in $M_{\tau_k}$, and ``artificially'' delete the other edges in the matching, and put the incident vertices into $U_{\tau_k}$. Starting again from $t=0$, the argument
from Subsection~\ref{warm-up-upper-bound} can be generalized slightly to show that $X(t)$ and $R(t)$ follow the solutions of the differential equations~(\ref{x'}) and~(\ref{y'}) with initial conditions $x(0)=1-10^{-6}$ and $r(0)=0$. Numerical solutions show that $x(0.00158)>1-\eps$, where $\eps=10^{-14}$. The clean-up
algorithm from Subsection~\ref{warm-up-upper-bound} completes the construction in $100\sqrt{\eps} n =10^{-5} n$ rounds. This yields an upper bound of $1.20365+0.00158+10^{-5}=1.20524$ on $\tau_{\texttt{PM}}$, as in Theorem \ref{thm:main_upper_bound}.\qed
\section{Lower Bound}
Throughout the section, we will repeatedly apply the following claim when deriving bounds on $\tau_{\mathcal{P}}$
for a property $\mathcal{P}$:
\begin{proposition} \label{prop:equivalence}
Given $\alpha > 0$, suppose that for any strategy $\mathcal{S}$ and $0 < \delta < \alpha$,
\[
\mathbb{P}[G^{\mathcal{S}}_{T}(n) \in \mathcal{P}] \rightarrow 0
\]
as $n \rightarrow \infty$, where $T=T(n) = (\alpha-\delta) n$.
Then $\tau_{\mathcal{P}} \ge \alpha$.
\end{proposition}
\subsection{Warming-up}\label{sec:warm-up}
We start with the proof of Proposition~\ref{prop:warm_up_lowerbound} as a warm-up for our improved result.
Suppose that we consider some strategy $\mathcal{S}$ of the player
which begins on the empty graph on vertex set $[n]$. Recall
that we say the vertex $j \in [n]$ is \textbf{covered} by the square $u_t$ arriving at round $t$,
provided $u_t = j$. The analogous definition extends to the circle $v_t$.
Let $S_j(t)$ denote the number of squares covering $j$, and let $C_j(t)$ denote the number of circles covering $j$, after $t$ steps of the strategy.
Suppose that $X(t)$ is the number of vertices of $G_t$
which are covered by at least one square. Let us fix
a constant $c < \ln(2)$, and consider $T=T(n) = cn$.
Observe that if $G_t$ has a perfect matching
at time $t \ge 0$, then at least $n/2$ vertices of $[n]$ must have been covered
by at least one square; that is, $X(t) \ge n/2$. On the other hand,
$X(t) = n - \sum_{j \in [n]} \bm{1}_{\{S_j(t)=0\}}$,
and so
\begin{eqnarray*}
\mathbb{E}[X(t)] &=& \sum_{j\in[n]}(1 - \mathbb{P}[S_j(t)=0]) = n \left(1 -\left(1 - \frac{1}{n}\right)^{t} \right) \\
&=& (1+o(1)) n (1 - \exp(-t/n)).
\end{eqnarray*}
Thus, combined with a second moment computation, one can conclude that for any fixed $t \ge 0$, \emph{a.a.s.}
\[
X(t) = (1+o(1)) n (1 - \exp(-t/n)).
\]
Now, $T/n \le c$, and $c < \ln(2)$,
so \emph{a.a.s.}
\[
X(T) \le n (1 + o(1)) (1 - \exp(-c)) < n/2.
\]
Therefore, \emph{a.a.s.}\ $G_T$ does not have a perfect matching. Since this holds for each $c < \ln(2)$,
Proposition \ref{prop:equivalence} implies the desired property and the proof is finished.
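The first-moment computation above is also easy to check empirically; the following one-off Python sketch (with arbitrarily chosen parameters) compares the covered fraction after $cn$ squares with $1-e^{-c}$.
\begin{verbatim}
import random
from math import exp

def covered_fraction(n, c, seed=0):
    rng = random.Random(seed)
    hit = [False] * n
    for _ in range(int(c * n)):       # squares land uniformly at random
        hit[rng.randrange(n)] = True
    return sum(hit) / n

print(covered_fraction(10**6, 0.69), 1 - exp(-0.69))  # both just below 1/2
\end{verbatim}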
\subsection{Reducing to an Approximate Perfect Matching}
Given a constant $c > 0$ and a function $\omega = \omega(n)$, we say that a strategy $\mathcal{S}$ is $\omega$-\textbf{well-behaved}
for $cn$ rounds, if for all $0 \le t \le \lfloor cn \rfloor$, each vertex of $G_{t}(n)$ is covered by at most $\omega$ circles.
In order to prove Theorem \ref{thm:main_lower_bound}, we work with strategies which are well-behaved
for an appropriate choice of $\omega$. We will also define a new property ${\mathcal P}$ which satisfies $\tau_{{\mathcal P}}\le \tau_{\texttt{PM}}$. Moreover, we will show that general strategies do \textit{not} perform significantly better than well-behaved strategies for building a graph in ${\mathcal P}$. This allows us to restrict to analysing well-behaved strategies, and the various random variables yielded by these strategies. Since the strategies are well-behaved, the set of random variables under concern have the nice ``bounded-Lipschitz'' property that is essential for the application of the differential equation method. We start with the definition of the new property ${\mathcal P}$ on the family of directed graphs, and we use $D[S]$ to denote the directed graph induced by set $S \subseteq V(D)$ for a directed graph~$D$.
\begin{definition}\label{def:approximate_pm}
Suppose that $D$ is a directed graph with $m$ arcs on vertex set $[n]$.
We say that $D$ has a $(\mu, \delta)$-\textbf{approximate perfect matching}, denoted by $D\in\texttt{PM}(\mu,\delta)$, if there exists a subset $S \subseteq [n]$
such that:
\begin{enumerate}[label=(\subscript{A}{{\arabic*}})]
\item $D[S]$ has a perfect matching. \label{eqn:perfect_matching_subgraph}
\item All the vertices of $S$ have in-degree at most $\mu$ in $D$. \label{eqn:limited_in_degree}
\item $|S| \ge n - 2 m \delta$. \label{eqn:large_subgraph}
\end{enumerate}
A perfect matching of $D[S]$ is then called a $(\mu,\delta)$-\textbf{approximate perfect matching} of~$D$.
\end{definition}
We first observe the following relation between $\texttt{PM}$ and $\texttt{PM}(\mu,\delta)$.
\begin{proposition}
For any function $\mu = \mu(n)$, it holds that
\[
\tau_{\texttt{PM$(\mu, \mu^{-1})$}} \le \tau_{\texttt{PM}}.
\]
\end{proposition}
\begin{proof}
Suppose that $D$ is a directed graph which has a perfect matching $M$. Let ${\mathcal V}_1$ be the set of vertices whose in-degree is greater than $\mu(n)$. Let ${\mathcal V}_2$ be the set of isolated vertices obtained once vertices in ${\mathcal V}_1$ are removed from $M$. Clearly, $|{\mathcal V}_2|\le |{\mathcal V}_1|\le m/\mu$.
Let $S=[n]\setminus ({\mathcal V}_1\cup {\mathcal V}_2)$. Obviously, $D[S]$ has a perfect matching, and $|S|\ge n-|{\mathcal V}_1|-|{\mathcal V}_2|\ge n-2|{\mathcal V}_1|\ge n-2m\mu^{-1}$. It follows that it is at least as easy to create an approximate perfect matching as to create a perfect one. The proof is completed.
\end{proof}
Next, we confirm that general strategies do not behave significantly better than well-behaved strategies for constructing a directed graph in $\texttt{PM$(\mu, \mu^{-1})$}$, where we view every edge in $G_t$ as an arc from the square to the circle. Moreover, we may safely restrict to strategies that construct only simple graphs, as intuitively relocating $v_t$ when $u_tv_t$ is already an edge would perform no worse, if not better. In the next proposition, we obtain a strategy $\mathcal{S}_2$ from a given strategy $\mathcal{S}_1$ by copying the moves of $\mathcal{S}_1$ whenever the move does not create a multiple edge or a vertex with too many circles. Otherwise, $\mathcal{S}_2$ relocates the circle to a different vertex (for counting purposes we colour relocated circles blue). We show that $\mathcal{S}_2$ performs at least as well as $\mathcal{S}_1$ quantitatively.
\begin{proposition} \label{prop:strategy_reduction}
For any constant $0 < c < 1$ and function $\mu =\mu(n)=\omega(1)$,
if $\mathcal{S}_1$ is a strategy of the player, then there exists another strategy
$\mathcal{S}_2$ which is $2\mu$-well-behaved for $cn$ rounds, and satisfies
\begin{equation*}
\mathbb{P}[\text{$G_{cn}^{\mathcal{S}_1}(n)$ satisfies $\texttt{PM$(\mu, \mu^{-1})$}$}] \le \mathbb{P}[\text{$G_{cn}^{\mathcal{S}_2}(n)$ satisfies $\texttt{PM$(2\mu, \mu^{-1})$}$}] .
\end{equation*}
Moreover, $\mathcal{S}_2$ does not create any multi-edges, and so $G_{cn}^{\mathcal{S}_2}(n)$ is a simple graph.
\end{proposition}
\begin{proof}
In order to prove the claim,
we couple the execution of $\mathcal{S}_1$
with another strategy $\mathcal{S}_2$ which is
$2\mu$-well-behaved, and which constructs a directed graph in
$\texttt{PM$(2\mu, \mu^{-1})$}$ no later than
when $\mathcal{S}_1$
constructs one in $\texttt{PM$(\mu, \mu^{-1})$}$.
We define $\mathcal{S}_2$ by \textit{stealing}
the strategy of $\mathcal{S}_1$, while making a slight
modification to avoid multi-edges and to ensure that no vertex of $[n]$ is covered
by more than $2\mu$ circles.
Suppose that the squares presented to strategy $\mathcal{S}_1$
are $u_{1}, \ldots , u_{cn}$, with corresponding
circles $v_{1}^{1},\ldots ,v^{1}_{cn}$. We denote
the graph formed by $\mathcal{S}_1$ after $0 \le t \le cn$ rounds
by $G_{t}^{1}$. Let us define another strategy $\mathcal{S}_2$ with the same set of squares as presented to $\mathcal{S}_1$, whose
circles we denote by $v_{1}^{2},\ldots ,v_{cn}^{2}$.
For each
$1 \le t \le cn$, suppose that $u_t$ is presented to $\mathcal{S}_2$, and
$G_{t-1}^{2}$ is the current graph constructed by $\mathcal{S}_2$ (where $G_0^2$
is the empty graph):
\begin{enumerate}
\item If adding $u_{t}v_{t}^{1}$ to $G_{t-1}^{1}$
does \textit{not} cause the number of
circles on $v_{t}^{1}$ to exceed
$\mu$, and $u_tv_t^{1}$ is \textit{not} an edge in $G_{t-1}^{2}$ (note here that we consider $G_{t-1}^{2}$ as an undirected graph),
then set $v_{t}^{2}:= v_{t}^{1}$ \label{eqn:easy_round}.
\item Otherwise, choose $v_{t}^{2} \in [n] \setminus N_{G_{t-1}^{2}}(u_t)$
amongst those vertices which have at most $c/(1-c)$ blue circles, and then
colour the circle $v_{t}^2$ blue.\label{eqn:problematic_round}
\item Add $(u_t, v_{t}^{2})$ to $G_{t-1}^{2}$ to get $G_{t}^2$.
\end{enumerate}
We first observe that the strategy $\mathcal{S}_2$ is well-defined in that there always
exists a choice of $v_{t}^{2}$ which satisfies the requirements of $\eqref{eqn:problematic_round}$.
To see this, notice that trivially $|N_{G_{t-1}^{2}}(u_t)| \le c n$, and so there are at least
$(1 - c) n> 0$ vertices which are not adjacent to $u_t$ in $G_{t-1}^{2}$, as $c <1$ by assumption. Thus, since there
are at most $cn$ blue circles, a simple averaging argument ensures that there exists a vertex
of $[n] \setminus N_{G_{t-1}^{2}}(u_t)$ which is covered by at most $c/(1-c)$ blue circles.
We now argue that the strategy $\mathcal{S}_2$ satisfies all the desired properties of Proposition \ref{prop:strategy_reduction}. It is clear
that $\mathcal{S}_2$ does not create any multi-edges by definition, so it suffices to verify that
each vertex of $G_{cn}^{2}$ is covered by at most $2\mu$ circles, and that $G_{cn}^{2}$ is
in \texttt{PM$(2\mu, \mu^{-1})$} if $G_{cn}^{1}$ is in \texttt{PM$(\mu, \mu^{-1})$}. Observe that if we fix $j \in [n]$, then step \eqref{eqn:easy_round} places at
most $\mu$ (uncoloured) circles on $j$, and step \eqref{eqn:problematic_round} places at most
$c/(1-c) +1<\mu$ blue circles on $j$. Thus, in total at most $2\mu$ circles are placed on $j$,
and so the strategy $\mathcal{S}_2$ is $2\mu$-well-behaved. As a result,
if $G_{cn}^{1}$ has a $(\mu,\mu^{-1})$-approximate perfect matching, then $G_{cn}^{2}$
must have a $(2\mu, \mu^{-1})$-approximate perfect matching. Indeed, each edge of such a matching places its circle on a vertex of in-degree at most $\mu$ in $G_{cn}^{1}$, so it is either copied by step \eqref{eqn:easy_round} or already present in $G_{t-1}^{2}$; hence it is also an edge of $G_{cn}^{2}$, in which every in-degree is at most $2\mu$.
Thus,
\[
\mathbb{P}[\text{$G_{cn}^{1}$ satisfies $\texttt{PM$(\mu, \mu^{-1})$}$}] \le \mathbb{P}[\text{$G_{cn}^{2}$ satisfies $\texttt{PM$(2\mu, \mu^{-1})$}$}],
\]
and so the proof is complete.
\end{proof}
By combining the above propositions with Proposition \ref{prop:equivalence}, we get the following
lemma:
\begin{lemma}\label{lem:light_matching}
Given $0 < \alpha \le 1$ and $\mu = \mu(n) = \omega(1)$, suppose that for each $c < \alpha$ and each strategy $\mathcal{S}$
which is $2\mu$-well-behaved for $cn$ rounds, it holds that
\[
\mathbb{P}[\text{$G_{cn}^{\mathcal{S}}(n)$ satisfies $\texttt{PM$(2\mu, \mu^{-1})$}$}] \rightarrow 0, \quad \mbox{as $n \rightarrow \infty$.}
\]
Then, $\tau_{\texttt{PM}} \ge \alpha$.
\end{lemma}
\subsection{Proving Theorem \ref{thm:main_lower_bound}}
In this section, we complete the proof of Theorem \ref{thm:main_lower_bound}.
Recall that $g: [0, \infty) \rightarrow \mathbb{R}$ is defined such that for $b \ge 0$,
\begin{equation} \label{eqn:first_bound}
g(b) := 1 + \frac {1-2b}{2} \exp(-b) - (b+1) \exp(-2b) - \frac {1}{2} \exp(-3b),
\end{equation}
and in viewing $g$ as monotonically increasing on $[0, \infty)$, we have that
\[
\alpha =\min\{b\ge 0: g(b)\ge 1/2 \}.
\]
Let $\mu=\sqrt{n}$ (indeed any function of $n$ that grows with $n$ at a sub-linear rate would work) and let $0<c<\alpha \le 1$ be an arbitrary constant. Define $\omega$ to be $2\mu$. This specifies the parameters for the set of $\omega$-well-behaved strategies for $cn$ rounds that are considered below.
Suppose that $\mathcal{S}$ is an $\omega$-well-behaved strategy
for $cn$ rounds. Since $c < \alpha \le 1$, we may apply
Lemma \ref{lem:light_matching} in order to prove Theorem \ref{thm:main_lower_bound}.
Thus, it suffices to show that
\begin{equation}\label{eqn:well_behaved_strategy}
\mathbb{P}[\text{$G_{cn}^{\mathcal{S}}(n)$ satisfies $\texttt{PM$(\omega, \mu^{-1})$}$}] \rightarrow 0,\quad\mbox{as $n \rightarrow \infty$}.
\end{equation}
For simplicity and without confusion, we remove $\mathcal{S}$ from the superscript.
In order to prove \eqref{eqn:well_behaved_strategy}, we show that the edges generated by a certain set of squares cannot all be contained in any $(\omega, \mu^{-1})$-approximate perfect matching of $G_{cn}$, and that there are many such squares to exclude.
Given a vertex $j \in [n]$, we say that $j$ is \textbf{redundant} at time $t \ge 0$, provided the following conditions hold:
\begin{enumerate}[label=(\subscript{B}{{\arabic*}})]
\item Vertex $j$ is covered by precisely one square, say $u_s$ for $s \le t$.
\item The circle $v_{s}$ connected to $u_s$ by the player is covered by at least one
square, which arrives after round $s$. \label{eqn:strategy_independece}
\end{enumerate}
We denote by $U(t)$ the number of redundant vertices at time $t$. Observe that \ref{eqn:strategy_independece} ensures
that $U(t)$ depends only on the placement of the squares, not the strategy $\mathcal{S}$. Observe also that the edge corresponding to the square on $j$ and any edge corresponding to a square on $v_s$ cannot simultaneously be contained in any $(\omega, \mu^{-1})$-approximate perfect matching. Recall that $X(t)$ denotes the number of vertices covered by at least one square after step $t$.
Suppose that $j$ is redundant at time $t$ thanks to the arrival of the square $u_s$ at time $s \le t$.
We then say that $j$ is \textbf{well-positioned},
provided the vertex $v_s$ is also redundant at time $t$. Denote $W(t)$
as the number of well-positioned redundant vertices at time $t$. Clearly,
$W(t) \le U(t)$ by definition. We observe then the following inequality:
\begin{proposition} \label{prop:first_improvement}
If $G_{t}$ has an $(\omega, \mu^{-1})$-approximate perfect matching at time $t \ge 0$,
then
\begin{equation}
X(t) - U(t) + W(t) \ge \frac{n}{2} - \frac{3 t}{\mu}. \label{lowerbound1}
\end{equation}
\end{proposition}
\begin{proof}
Since $G_t$ has an $(\omega, \mu^{-1})$-approximate perfect matching, there exists some subset
$S \subseteq [n]$, such that $|S| \ge n - 2t /\mu$, for which
$G_{t}[S]$ has a perfect matching. Now, if $\mathcal{M}$ is a perfect matching
of $G_{t}[S]$, then clearly $|\mathcal{M}| = |S|/2 \ge n/2 - t/\mu$.
On the other hand, suppose $\mathcal{X}_{S}$ is the collection of vertices within
$S$ covered by at least one square, $\mathcal{U}_{S}$ is the collection of
vertices of $S$ which are redundant, and $\mathcal{W}_{S}$
is the collection of well-positioned vertices of $S$.
Let us denote $X_{S}:= |\mathcal{X}_{S}|$, $U_{S}:= |\mathcal{U}_{S}|$ and $W_{S}:=|\mathcal{W}_S|$ for convenience.
We claim the following inequality:
\begin{equation} \label{eqn:subgraph_perfect_matching}
X_{S} - U_{S} + W_{S} \ge |\mathcal{M}| \ge n/2 - t/\mu.
\end{equation}
To see the first inequality in \eqref{eqn:subgraph_perfect_matching}, we first partition $S$
into $\mathcal{X}_{S} \setminus \mathcal{U}_{S}$, $\mathcal{U}_{S}$
and $\mathcal{Z}_{S}:= S \setminus \mathcal{X}_{S}$, thus ensuring that
\begin{equation} \label{eqn:perfect_matching_decomposition}
|\mathcal{M}| = e_{\mathcal{M}}(\mathcal{X}_{S} \setminus \mathcal{U}_{S}, S) + e_{\mathcal{M}}(\mathcal{U}_S, \mathcal{U}_S) + e_{\mathcal{M}}(\mathcal{U}_S, \mathcal{Z}_{S}).
\end{equation}
Now, clearly $e_{\mathcal{M}}(\mathcal{X}_{S} \setminus \mathcal{U}_{S}, S) \le |\mathcal{X}_{S} \setminus \mathcal{U}_{S}|$.
On the other hand, we claim that $e_{\mathcal{M}}(\mathcal{U}_S, \mathcal{Z}_{S}) = 0$. This is because each vertex of $\mathcal{U}_S$
is redundant, and so its single out-neighbour is a vertex covered by at least one square, and thus
cannot be a member of $\mathcal{Z}_S$. Similarly, its in-neighbours are covered by squares by definition,
and thus are also not in $\mathcal{Z}_S$.
It remains to upper bound $e_{\mathcal{M}}(\mathcal{U}_S, \mathcal{U}_S)$. In order
to do so, we partition $\mathcal{U}_S$
into $\mathcal{U}_{S} \setminus \mathcal{W}_{S}$ and $\mathcal{W}_S$, which ensures that
\[
e_{\mathcal{M}}(\mathcal{U}_S, \mathcal{U}_S) = e_{\mathcal{M}}(\mathcal{U}_{S} \setminus \mathcal{W}_{S}, \mathcal{U}_{S} \setminus \mathcal{W}_{S}) + e_{\mathcal{M}}(\mathcal{W}_S,\mathcal{U}_S).
\]
Now, by definition each vertex $j \in \mathcal{U}_{S} \setminus \mathcal{W}_S$
has precisely one square on it, and the circle corresponding to this square lies in $\mathcal{X}_{S} \setminus \mathcal{U}_S$.
Thus, $e_{\mathcal{M}}(\mathcal{U}_{S} \setminus \mathcal{W}_S, \mathcal{U}_{S} \setminus \mathcal{W}_S) = 0$. Moreover,
we can upper bound $e_{\mathcal{M}}(\mathcal{W}_S, \mathcal{U}_S)$ by $|\mathcal{W}_S|$.
By applying these bounds to \eqref{eqn:perfect_matching_decomposition},
\eqref{eqn:subgraph_perfect_matching} follows.
We can complete the proof by observing
that $X_{S} \le X(t)$, $W_{S} \le W(t)$, and $U_{S} \ge U(t) - (n -|S|) \ge U(t) - 2t/\mu$.
Thus, after rearranging \eqref{eqn:subgraph_perfect_matching}, we get that
\[
X(t) - U(t) + W(t) \ge \frac{n}{2} - \frac{3 t}{\mu},
\]
as claimed.
\end{proof}
As we saw in Proposition \ref{prop:warm_up_lowerbound}, it is easy to control the behaviour of
$X(t)$ for $t \ge 0$. Controlling the behaviour of $U(t)$ requires more care, and so
it is useful to define $Y(t)$ as the number of vertices of $G_t$ which are covered
by precisely one square.
Let $H_{t}=(G_0,\ldots, G_t)$ be the history of the random graph process
up to step $t$. We then have that for each $t \ge 1$:
\begin{equation} \label{eqn:square_covered_vertices}
\mathbb{E}[X(t+1) - X(t) \, | \, H_{t}] = 1 - \frac{X(t)}{n},
\end{equation}
\begin{equation} \label{eqn:singularly_square_covered_vertices}
\mathbb{E}[Y(t+1) - Y(t) \, | \, H_t] = \frac{n - X(t)}{n} - \frac{Y(t)}{n},
\end{equation}
and
\begin{equation} \label{eqn:redundant_vertices}
\mathbb{E}[U(t+1) - U(t) \, | \, H_{t}] = \frac{Y(t) - U(t)}{n} - \frac{U(t)}{n}.
\end{equation}
Verification of~(\ref{eqn:square_covered_vertices})--(\ref{eqn:redundant_vertices}) is straightforward. We briefly verify~(\ref{eqn:redundant_vertices}). The number of pairs of vertices $(u,v)$ such that $u$ is covered by exactly one square whose corresponding circle is on $v$ and $v$ has no squares arriving after that circle, is exactly $Y(t)-U(t)$ by step $t$. A redundant vertex is created if a square lands on $v$. On the other hand, a redundant vertex can be destroyed if it receives another square. This explains~(\ref{eqn:redundant_vertices}).
In order to control the behaviour of $W(t)$, we introduce another definition.
Let us say that a vertex $j_1$ is \textbf{dangerous} at time $t \ge 0$,
provided there exist times $t_{1} < t_{2} \le t$, such that the following hold:
\begin{enumerate}[label=(\subscript{C}{{\arabic*}})]
\item At time $t_1$, the square $u_{t_1}$ lands on $j_1$, and connects its circle $v_{t_1}$ to some
vertex $j_2 \neq j_1$.
\item At time $t_2$, the vertex $j_2$ is covered by the square $u_{t_2}$, at which point
the circle $v_{t_2}$ is placed on some vertex $j_{3} \notin \{j_1,j_{2}\}$.
\item No squares land on $j_3$ between times $t_2$ and $t$, and
$u_{t_1}$ and $u_{t_2}$ are the only squares on $j_1$ and $j_2$, respectively.
\end{enumerate}
Let us denote $D(t)$ as the number of dangerous vertices at time $t$. Observe then
that
\begin{equation} \label{eqn:dangerous_vertices}
\mathbb{E}[ D(t+1) - D(t) \, | \, H_{t}] = \frac{Y(t) - U(t)}{n} - \frac{3 D(t)}{n},
\end{equation}
where we recall that $Y(t)$ is the number of vertices covered by exactly one square,
and $U(t)$ is the number of redundant vertices. Note that the equality in \eqref{eqn:dangerous_vertices}
follows since the strategy we are analyzing does \textit{not} create multi-edges by assumption.
We can use $D(t)$ to help us track the behaviour of $W(t)$ at time $t$;
that is, the well-positioned vertices. Specifically, observe that
\begin{equation*} \label{eqn:well_positioned_vertices}
\mathbb{E}[ W(t+1) - W(t) \, | \, H_t] =\frac{D(t)}{n} - \frac{2 W(t)}{n}.
\end{equation*}
Now, by assumption $\mathcal{S}$ is $\omega$-well-behaved---that is,
there are at most $\omega = \omega(n)$ circles on any vertex of $G_t$. We can thus bound
the worst case one-step changes of our random
variables by $\omega(n)$. More explicitly, observe that for each $t \ge 0$
\begin{equation*}
|X(t+1) - X(t)| \le \omega(n),
\end{equation*}
and the analogous upper bound is also true for the other random variables.
Suppose now that the functions $x,y,u,d,w: [0, \infty) \rightarrow \mathbb{R}$ are the unique solution to
the following system of differential equations:
\begin{eqnarray*}
x' &=& 1 - x, \\
y' &=& 1 - x - y, \\
u' &=& y - 2 u, \\
d' &=& y - u - 3 d,\\
w' &=& d -2 w,
\end{eqnarray*}
with initial conditions $x(0) = y(0)= u(0)= d(0) =w(0)= 0$.
These functions have the following closed form solution:
\begin{itemize}
\item $x(b) = 1 - \exp(-b)$,
\item $y(b) = b \exp(-b)$,
\item $u(b) = (b-1) \exp(-b) + \exp(-2b)$,
\item $d(b) = \frac{1}{2} \exp(-b) + \frac{1}{2} \exp(-3b) -\exp(-2b)$,
\item $w(b) = \frac{1}{2} \exp(-b) - b \exp(-2b) - \frac{1}{2} \exp(-3b)$.
\end{itemize}
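These closed forms, together with the identity $x - u + w = g$ used below, can be verified mechanically; the following sketch does so with the \texttt{sympy} library (our choice of tool, not part of the argument).
\begin{verbatim}
import sympy as sp

b = sp.symbols('b', nonnegative=True)
x = 1 - sp.exp(-b)
y = b * sp.exp(-b)
u = (b - 1) * sp.exp(-b) + sp.exp(-2*b)
d = sp.exp(-b)/2 + sp.exp(-3*b)/2 - sp.exp(-2*b)
w = sp.exp(-b)/2 - b*sp.exp(-2*b) - sp.exp(-3*b)/2
g = (1 + (1 - 2*b)/2 * sp.exp(-b)
       - (b + 1) * sp.exp(-2*b) - sp.exp(-3*b)/2)

checks = [
    x.diff(b) - (1 - x),          # x' = 1 - x
    y.diff(b) - (1 - x - y),      # y' = 1 - x - y
    u.diff(b) - (y - 2*u),        # u' = y - 2u
    d.diff(b) - (y - u - 3*d),    # d' = y - u - 3d
    w.diff(b) - (d - 2*w),        # w' = d - 2w
    (x - u + w) - g,              # the combination bounded in the proof
]
print([sp.simplify(e) for e in checks])  # expect six zeros
\end{verbatim}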
The following lemma follows immediately by Nick Wormald's differential equation method (see, for example,~\cite[Theorem 5.1]{de} or~\cite{warnke2019wormalds}).
\begin{lemma} \label{lem:differential_equation_application} There exists a function $\eps=\eps(n) = o(1)$
such that \emph{a.a.s.}, for all $0 \le t \le cn$,
\begin{equation*}
\max\{|X(t) -x(t/n) n|,|U(t) - u(t/n) n| , |W(t) - w(t/n) n|\} \le \eps n.
\end{equation*}
\end{lemma}
By applying Lemma \ref{lem:differential_equation_application},
we are guaranteed the existence of a function $\eps = \eps(n) = o(1)$,
such that \emph{a.a.s.}\ for all $0 \le t \le cn$,
\begin{itemize}
\item $X(t)/n \le 1 - \exp(-t/n) + \eps$,
\item $U(t)/n \ge \left( \frac{t}{n} -1\right) \exp(-t/n) + \exp(-2t/n) - \eps$,
\item $W(t)/n \le \left( \frac{1}{2} \exp(-t/n) - \frac{1}{2} \exp(-3t/n) - \frac{t}{n} \exp(-2 t/n) \right) + \eps$.
\end{itemize}
Recall that $\alpha \ge 0$ is the infimum over
those $b \ge 0$ such that
$g(b) \ge 1/2$,
where $g(b)$ is defined in~(\ref{eqn:first_bound}).
Now, $c < \alpha$ and
$T(n):= \lfloor c n \rfloor$ satisfies $T = (1-o(1))c n$. Thus,
\[
\frac{X(T)}{n} + \frac{W(T)}{n} - \frac{U(T)}{n} \le g(c) + 3 \eps.
\]
Moreover, $g(c) <1/2$ as $c < \alpha$, and so since $\eps,\mu^{-1} = o(1)$,
it follows that \emph{a.a.s.}\
\[
X(T) + W(T) - U(T) < n/2 - 3T/\mu.
\]
Thus, $G_T$ does not have an $(\omega,\mu^{-1})$-approximate perfect matching \emph{a.a.s.}\ by Proposition \ref{prop:first_improvement}. As $c < \alpha$ was arbitrary, the proof of Theorem \ref{thm:main_lower_bound} is complete after applying Lemma \ref{lem:light_matching}. \qed
\section{Conclusion}
We have reduced the gap between the previous best upper and lower bounds on $\tau_{{\tt PM}}$ by roughly a factor of four. That being said,
we do not believe that any of our new bounds are tight. For instance, in the case of our upper bound, our strategy does not
make use of subsequent squares which land on a green vertex. One way to improve the algorithm's performance would be to control these
additional squares. The other way is to consider longer augmenting paths. However, there are not many saturated vertices with more than one square and thus the numerical improvement is not so significant. Tracing longer augmenting paths is harder to analyse and we did not attempt that. In the case of our lower bound, we have identified certain subgraphs which preclude the strategy from building a perfect matching too quickly, and which no strategy can avoid creating. To improve our lower bound,
one could track more complicated subgraphs which also prevent a perfect matching from being created too quickly. For instance, one could track
the number of vertices which are covered by two squares, and whose neighbours are themselves \textit{not} redundant. However, such subgraphs
can be avoided by the strategy, and so they are harder to control. More importantly, these subgraphs are much rarer
than redundant vertices, and so their contribution to the improvement of the lower bound is rather small.
We believe it is an interesting open question to resolve the remaining gap between our bounds on
$\tau_{{\tt PM}}$. More generally, we believe it is an interesting open question to better understand the value of $\tau_{{\tt H}}$,
when $H$ is a graph spanning all vertices in $[n]$ with absolutely bounded maximum degree. For instance, it is not even clear that any graph in this family can \emph{a.a.s.} be constructed
in time $O(n)$.
\bibliographystyle{plain}
This article is concerned with a major unsolved problem of QED,
that of an exact formulation of the notion of gauge invariance,
more especially the problem of an exact characterization of gauge
transformations and their uses. Why are such seemingly disparate
formulations as the Gupta-Bleuler (GB) formalism and the Coulomb
gauge (C gauge) description physically equivalent, and how are
they connected? Needless to say, this problem will not be solved
here or even fully described. I will merely put forward a few
possibly useful remarks and suggestions. Attention will be
restricted to the two most widely used `gauges' already mentioned,
the GB and the C gauges. And since there still does not exist a
rigorous formulation of QED in any gauge, these results will be
based on the experience gained in perturbation theory (PT). Any
result which is valid in every order of PT has a good chance of
describing a feature present in a possibly existing exact theory.
That this statement must be taken with a grain of salt will become
apparent later on.
For the structural studies we have in mind, the Wightman functions
are more convenient tools than the Green's functions of the
traditional formulations. The PT of the Wightman functions of QED
has been developed in \cite{1}. Our results are based on the rules
derived there. But the reader's acquaintance with these rules will
not be assumed. The claims made will therefore in general be
substantiated by somewhat heuristic arguments rather than full
proofs. The emphasis is on statements of facts and the discussion
of their significance, rather than on proofs. However, all the
results claimed {\em can} be rigorously proved to all orders of
PT. And they are usually plausible enough as they stand.
\section{Formal Considerations}
The basic fields of QED with charged particles of spin 1/2 are the
electromagnetic potentials $A_{\mu}(x)$ and the Dirac spinors
$\psi(x)$ and $\bar{\psi}(x)=\psi^*(x)\gamma^0$.
The {\em GB formalism} has the advantage of working with local,
covariant, fields $A_{\mu},\,\psi,$ as we like them. But it also
has two grave drawbacks. First, its state space ${\cal V}_{GB}$ is
equipped with an indefinite scalar product, hence it is not a
Hilbert space. This contradicts the basic rules of quantum
mechanics, and it is mathematically inconvenient. Second, the
Maxwell equations
\begin{equation}
\partial_{\nu} F^{\nu\mu}(x) = j^{\mu}(x)\ , \label{1}
\end{equation}
with
\begin{equation}
F^{\nu\mu}(x) =
\partial^{\nu}A^{\mu}(x)-\partial^{\mu}A^{\nu}(x)\,,\qquad
j^{\mu}(x) = e\,\bar{\psi}(x)\gamma^{\mu}\psi(x)\ , \label{2}
\end{equation}
are not satisfied as operator equations on ${\cal V}_{GB}$, even
after renormalization. This means that ${\cal V}_{GB}$ contains
unphysical states, that is vectors which do not describe states
ever encountered in a laboratory. This raises the question of how
to characterize the physical states, and the second question
whether ${\cal V}_{GB}$ can be defined such that it contains
sufficiently many physical states to give a full description of
reality. The standard textbook answer to the first question is
this: the divergence
\begin{equation}
B(x) = \partial_{\mu}A^{\mu}(x) \label{3}
\end{equation}
solves the free field equation $\partial^{\mu}\partial_{\mu}
B(x)=0$, hence can be split into a creation part $B^-(x)$ and an
annihilation part $B^+(x)$. And the physical subspace ${\cal
V}_{ph}\subset{\cal V}_{GB}$ is defined to be the kernel of $B^+$:
\begin{equation}
B^+(x)\,{\cal V}_{ph} = 0\,. \label{4}
\end{equation}
This condition ensures the validity of the Maxwell equations on
${\cal V}_{ph}$. The second question is, as a rule, simply
ignored. A clean definition of ${\cal V}_{GB}$ and thus of
${\cal{V}}_{ph}$ is hardly ever given.\footnote{There exist more
thoughtful treatments of the problem outside of textbooks, for
example \cite{2,3,4}. These approaches make essential use of the
notion of asymptotic fields, which is not unproblematic in QED and
will be avoided in the present work.}
The {\em C gauge} has complementary advantages and drawbacks. Its
state space ${\cal V}_C$ is a Hilbert space and contains only
physical states. The Maxwell equations are satisfied. But the
basic fields $A_{\mu},\:\psi,\:\bar{\psi},$ are neither local
(i.e. they do not commute at spacelike distances) nor covariant.
This is very inconvenient for a detailed {\em ab ovo} elaboration
of the theory, especially for a convincing formulation of
renormalization. And the lack of manifest covariance also raises
the question of how observational Lorentz invariance emerges from
the theory.
The complementary aspects of the two methods suggest a joining of
their forces. This may consist in first working out the theory of
the GB-fields $A_{\mu},\:\psi,$ as fully as possible, and then
identifying in this framework ``physical fields'' ${\cal
A}_{\mu},\:\Psi,$ which generate ${\cal V}_{ph}$ out of the
vacuum $\Omega$, yielding the C gauge formulation. Formally, such
``C-fields'' are obtained from the GB-fields by the Dirac ansatz
\begin{eqnarray}
{\cal A}_{\mu}(x) & = & A_{\mu}(x) - \frac{\partial}{\partial
x^{\mu}} \int dy\,r^{\rho}(x-y)\,A_{\rho}(y)\,, \nonumber \\
\Psi(x) & = & \exp\Big\{ie\int
dy\,r^{\rho}(x-y)\,A_{\rho}(y)\Big\}\,\psi(x)\,, \label{5} \\
\bar{\Psi}(x) & = & \Psi^*(x)\,\gamma^0\,, \nonumber
\end{eqnarray}
with
\begin{equation}
r^0=0\,,\qquad r_j(x)=-r^j(x)=\delta(x^0)\,\partial_j
\frac{1}{4\pi|\vec{x}|}\, . \label{6}
\end{equation}
for $j=1,2,3$. The auxiliary `functions' $r^{\mu}$ satisfy
\begin{equation}
\partial_{\mu} r^{\mu}(x) = \delta^4(x)\,. \label{7}
\end{equation}
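As a quick consistency check, recall the distributional identity $\Delta\,(4\pi|\vec{x}|)^{-1} = -\,\delta^3(\vec{x})$. Since $r^0=0$, the definition (\ref{6}) indeed gives
\[
\partial_{\mu}r^{\mu}(x) = \partial_j\,r^j(x)
= -\,\delta(x^0)\,\Delta\,\frac{1}{4\pi|\vec{x}|}
= \delta^4(x)\,.
\]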
In momentum space this reads
\begin{equation}
p_{\mu}\tilde{r}^{\mu}(p) = i \label{8}
\end{equation}
with $\tilde{r}_{\mu}(p)=\int dx\,\exp\{ipx\}\,r^{\mu}(x)$. Notice
that {\em formally} the definitions (\ref{5}) have the form of a
gauge transformation with the operator valued and field-dependent
gauge function
\begin{equation}
G(x) = - \int dy\,r^{\rho}(x-y)\,A_{\rho}(y)\;. \label{9}
\end{equation}
That the fields (\ref{5}) really are the desired C-fields follows
from the fact that they satisfy
\begin{equation}
\big[B(x),\,\Psi(y)\big] = \big[B(x),\,\bar{\Psi}(y)\big] =
\big[B(x),\,{\cal A}_{\mu}(y)\big] = 0\;, \label{10}
\end{equation}
which equations then also hold for the annihilation part $B^+$.
And this together with $B^+\Omega=0$ shows that states of the form
\begin{equation}
\Phi = {\cal P}(\Psi,\,\bar{\Psi},\,{\cal A}_{\mu})\,\Omega\;,
\label{11}
\end{equation}
with ${\cal P}$ a polynomial -- or a more general function, if
definable -- of fields averaged over test functions, are physical
in the sense of (\ref{4}). The property (\ref{10}) has been called
``strict gauge invariance" by Symanzik \cite{5} and, following
him, by Strocchi and Wightman \cite{6}. A proof of (\ref{10}) is
found in the first of these references.
\section{Problems, and a Solution}
The contents of Sect.~2 were purely formal. In order to give the
definition (\ref{5}) a rigorous meaning we must start from a
rigorous description of the GB formalism, especially of ${\cal
V}_{GB}$. The use of a suitably adapted version of the Wightman
formalism \cite{7} suggests itself. Of course, the naturally
defined scalar product being indefinite, ${\cal V}_{GB}$ cannot be
a Hilbert space. But we assume it to be equipped with a
non-degenerate scalar product. And we wish to retain all the other
Wightman axioms. In particular, the fields
$A_{\mu},\,\psi,\,\bar{\psi},$ are to be operator valued tempered
distributions. And we assume the subspace ${\cal V}_0$ spanned by
tempered field polynomials applied to the vacuum $\Omega$ to be
dense in ${\cal V}_{GB}$ in the weak topology induced by the
scalar product. ${\cal V}_0$ is called the space of ``local
states". The density of ${\cal V}_0$ is important because it
allows us to construct ${\cal V}_{GB}$ from the Wightman functions by
the reconstruction theorem. The Wightman functions are easily
calculated in PT and other schemes of approximation. And their
properties, as properties of tempered distributions, are easier to
investigate than those of unbounded operators in a space with
indefinite metric. On ${\cal V}_0$ the Wightman functions
determine the scalar product.
Right at the start we are confronted with a slightly disturbing
fact. Let the charge operator $Q$ be defined by
\begin{equation}
[Q,\,\psi(x)]=-e\,\psi(x)\,,\quad
[Q,\,\bar{\psi}(x)]=e\,\bar{\psi}(x)\,, \quad [Q,\,{\cal
A}_{\mu}(x)] = 0\,,\quad Q\,\Omega = 0\;, \label{12}
\end{equation}
with $e$ the charge of the positron. Define the state $\Phi$ to be
{\em physical} if the Gau\ss\ law
\begin{equation}
Q\,\Phi = \int_{x^0=t}d^3x\,\nabla\!\cdot\!\vec{E}(x)\,\Phi \label{13}
\end{equation}
holds on it\footnote{As is well known, in this crude version of
the condition the right-hand side does not make sense. Suitable
regularizations in space and time are necessary. But this problem
is immaterial to our present purposes.}, with $\vec{E}$ the
electric field strength. Then it is known \cite{8} that ${\cal
V}_0$ contains no charged physical states. This means that the
construction of charged physical states in ${\cal V}_{GB}$ as
limits of local states is a non-trivial task.\footnote{That it is
not necessarily an unsolvable task has recently been
re-em\-pha\-sized by Morchio and Strocchi \cite{8a}.}
Let us now turn to the problem of giving a rigorous meaning to the
equations (\ref{5}) defining the C-fields in terms of the
GB-fields, supposing a rigorous theory of the latter to be at hand
as just explained. In this attempt we encounter two problems, an
ultraviolet (UV) one and an infrared (IR) one.
The UV problem is this: The factor $\psi(x)$ in $\Psi(x)$ is a
distribution, not a function, and so is the exponential factor.
Notice that the auxiliary `functions' $r^j$ are not functions in
the strict sense of the word, let alone test functions, so that
the exponent does not exist as a function. This problem can be
solved by standard renormalization procedures, most easily by
subtraction at $p=0$ in momentum space (`intermediate
renormalization'). Unfortunately such a subtraction destroys the
product form of $\Psi$, because in $p$-space the product
$a(x)b(x)$ becomes the convolution $\int dk\,\tilde{a}(p-k)\,
\tilde{b}(k)$ (the tilde denotes the Fourier transform). This
reads in its subtracted form
\[ \int dk\,\big[\tilde{a}(p-k)\,\tilde{b}(k)-\tilde{a}(-k)\,
\tilde{b}(k)\big]\, , \]
which is no longer a convolution. Hence the mapping
$(\psi,A)\to(\Psi,{\cal A})$ no longer has the form of a gauge
transformation. It is very unlikely that this problem can be
solved by a more sophisticated method of renormalization. {\em It
is my opinion that we should not bemoan this fact, but accustom
ourselves to the idea that the notion of gauge transformations is
not as useful in QED as it is in classical electrodynamics, but is
only of a heuristic value}.\footnote{This scepticism does not extend
to gauge transformations of the first kind, that is global
transformations with an $x$-independent real gauge function.} The true
problem is to find fields satisfying the condition (\ref{10}) of
strict gauge invariance, no matter whether or not they can be
derived from the GB-fields by a gauge transformation. That the
renormalized Dirac ansatz yields a formulation of QED which
satisfies all the necessary requirements fully justifies its
use.
The IR problem is this: Are $\Psi,\,{\cal A}_{\mu}$, after
renormalization, defined as fields on ${\cal V}_{GB}$? In other
words, is the space ${\cal V}_C$ generated from the vacuum
$\Omega$ by the C-fields a subspace of ${\cal V}_{GB}$? At first,
the answer gleaned from PT is a plain `no'! The scalar product
$(\Phi_0,\,\Phi)$ of the physical state
\begin{equation}
\Phi = \int dx\,f(x)\,\Psi(x)\,\Omega\quad \in{\cal V}_C
\label{14}
\end{equation}
and the local state
\begin{equation}
\Phi_0 = \int dy\,\overline{g(y)}\,{\bar{\psi}}^{\,*}(y)\,\Omega
\quad \in{\cal V}_0\;, \label{15}
\end{equation}
with $f$ and $g$ tempered test functions, can be shown to diverge
already in second order of PT (see end of Appendix). This
divergence is caused by the divergence at large $y$ of the
exponent $\int dy\,r^{\rho}(x-y)\,A_{\rho}(y)$ in (\ref{5}): it is
an IR problem.
But this result is misleading. As is well known, the generic
$S$-matrix element of QED calculated with the LSZ reduction
formula is in general IR divergent in finite orders of PT. But
these divergences can be isolated and summed over all orders,
yielding a {\em vanishing} result. And this result is expected to
be closer to the truth than the finite-order divergences, because
it agrees with information obtained from other sources, especially
the Bloch-Nordsieck model. Something similar might happen for the
mixed 2-point function
\begin{equation}
F(x,y) = (\Omega,\,\bar{\psi}(y)\,\Psi(x)\,\Omega) \label{16}
\end{equation}
occurring in $(\Phi_0,\,\Phi)$. And, indeed, it does happen! The
IR divergences in $F$ can be isolated in all orders of PT and
summed to yield
\begin{equation}
F(x,y) \equiv 0\;. \label{17}
\end{equation}
For a sketch of the proof we refer to the Appendix. Again, we
expect the result (\ref{17}), rather than the finite-order
divergences, to correspond to the true situation. But, as in the
case of the $S$-matrix, this does not solve our problem. In the
same way as (\ref{17}) it can be generally shown that $\Phi$ is
orthogonal to all local states. The state $\Phi$ of charge $-e$ is
orthogonal to ${\cal V}_0$, hence it cannot be the weak limit of a
sequence of local states, hence ${\cal V}_C$
cannot be a subspace of ${\cal V}_{GB}$, in which we assumed ${\cal
V}_0$ to be dense.
As a result we obtain:
\medskip
\begin{it}
The state space ${\cal V}_C$ of the Coulomb gauge cannot be
obtained as a subspace of an extension ${\cal V}_{GB}$ of ${\cal
V}_0$, if the scalar product defined on ${\cal V}_0$ can be
extended to ${\cal V}_{GB}$ in such a way that ${\cal V}_0$ is
weakly dense in ${\cal V}_{GB}$.
\end{it}
\medskip
The problems discussed in this section lead to the following
conclusions. The C-fields $\Psi,\,{\cal A}_{\mu}$, cannot be
defined as fields acting on ${\cal V}_{GB}$. Nor are they related
to the GB-fields $\psi,\,A_{\mu}$, by a gauge transformation. The
traditional explanations of the connection between the two
formulations do not work. A working method of relating the two
formalisms has been derived in Chap.~12 of \cite{1}. It will be
briefly described without giving proofs. The method makes
essential use of the Wightman reconstruction theorem. At first it
is demonstrated (in PT) that the Wightman functions of the
C-fields can be obtained from those of the GB-fields by a limiting
procedure, like this: Replace the auxiliary functions $r^j(x)$ in
(\ref{5}) and (\ref{6}) by the regularized version
\begin{equation}
r^j_{\xi}(x) = \chi(\xi x)\, r^j(x)\;,\qquad 0<\xi<\infty\;,
\label{18}
\end{equation}
with $\chi(u)$ a test function with compact support, satisfying
$\chi(0)=1$. The resulting fields $\Psi_{\xi},\,{\cal
A}^{\mu}_{\xi}$ {\em are} definable as acting on a slight
extension of ${\cal V}_0$. Their Wightman functions can be
computed. And the limits $\xi \to 0$ of the latter exist and
define a field theory via the reconstruction theorem, which has
all the desired properties of QED in the C gauge. It {\em is} QED
in the C gauge! But the states and the fields of this theory have
no discernible direct connection with the states and fields of the
GB formalism.
\medskip
Let us end this section with a few remarks on how observational
Lorentz invariance emerges from the not manifestly covariant C
gauge formalism. The problem has not yet been discussed in depth.
I can therefore only state a program rather than results. This
program is based on the spirit of the local observables approach
\cite{9} to quantum field theory, according to which only
observables localized in bounded regions of space-time are true
observables. The first problem facing us is finding an exact
characterization of the operators representing observables. Since
the C-fields should describe the theory fully, an observable in
the bounded domain ${\cal R}$ must be a function of the fields
with arguments in ${\cal R}$. Observables should be represented by
hermitian operators (self-adjointness is hard to handle in PT),
and they should satisfy the axioms of local quantum physics. In
particular, two observables localized in relatively spacelike
domains should commute. And the algebras of observables in the
domain ${\cal R}$ and its image ${\cal R}_{\Lambda}$ under the
Lorentz transformation $\Lambda$ should be related by an
automorphism $\alpha_{\Lambda}$, which we can, of course, not
expect to be unitarily implemented. The traditional requirement
that observables should be gauge invariant is difficult or
impossible to formulate in C gauge, on account of the doubtful
status of gauge transformations. We propose to solve this
conundrum as we did for the fields, by starting from the GB formalism.
There, observables must satisfy the same requirements as stated
for the C gauge. But the additional requirement of gauge
invariance can now be interpreted to denote strict gauge
invariance. This means that observables must commute with
$B(x)$.\footnote{On ${\cal V}_C$, $B$ vanishes identically.} The C
gauge equivalent of such a GB observable $A$ can then be defined
as a bilinear form by
\begin{equation}
(\Omega,\,\bar{\Psi}(x)\,A\,\Psi(y)\,\Omega) = \lim_{\xi\to0}
(\Omega,\,\bar{\Psi}_{\xi}(x)\,A\,\Psi_{\xi}(y)\,\Omega)
\label{19}
\end{equation}
and obvious generalizations. Under exactly what conditions on $A$
this limit exists has not yet been investigated. It is clear that
the observable fields $F_{\mu\nu}$ and $j_{\mu}$ belong to this
class.
Given the local observables of the theory, the second problem is
that of describing observational Lorentz invariance without using
a non-existent unitary representation of the Lorentz group. The
formulation proposed here is based on the important insights put
forward by Haag and Kastler in \cite{10} concerning the relevance
of Fell's theorem (a purely mathematical statement) to quantum
mechanics. This relevance rests on the observation that in any
given experiment we can only measure a {\em finite number} of {\em
local} observables with a {\em finite accuracy}. In view of this
we can formulate observational invariance as follows.
\medskip
\begin{it}
Let $A_1,\cdots,A_n,$ be a finite set of local observables with
measuring accuracies $\epsilon_i$,
$A_i^{\Lambda}=\alpha_{\Lambda}(A_i)$ their images under the
Lorentz transformation $\Lambda$, and let $\Phi$ be a (physically
preparable) state in ${\cal V}_C$. Then there exists a state
$\Phi_{\Lambda}\in{\cal V}_C$ such that
\begin{equation}
|(\Phi_{\Lambda},\,A_i^{\Lambda}\,\Phi_{\Lambda}) -
(\Phi,\,A_i\,\Phi)|<\epsilon_i \label{21}
\end{equation}
for $i=1,\cdots,n$.
\end{it}
\medskip
If $\Lambda$ is a boost, then $\Phi_{\Lambda}$ represents the
original state as seen by a moving observer, as far as the
experiment in question is concerned. That this form of Lorentz
invariance is satisfied in ${\cal V}_C$ is at the moment still a
conjecture. But it has a good chance of being true. Partial
results in this direction can be found at the end of Chap.~12 in
\cite{1}.
\section*{Appendix: {\large Proving Equation (\ref{17})}}
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
Using the methods of \cite{1} a rigorous proof of the claimed
result (\ref{17}) can be given by summing the IR relevant parts of
$F$ in all orders of PT. In this appendix we will not give the
full proof, but only sketch its essential ideas.
We denote the vacuum expectation value $(\Omega,\cdots \Omega)$ by
$\langle\cdots\rangle$. The expression of interest is
\begin{eqnarray}
A & = & \langle \bar{\psi}(y)\,\Psi(x)\rangle \nonumber \\
& = & \langle\bar{\psi}(y)\,\exp\{ie\int du\,r^j(x-u)\,
A_j(u)\} \,\psi(x)\rangle \label{A1}
\end{eqnarray}
with
\begin{equation}
r^j(v) = \delta(v^0)\,\partial^j \frac{1}{|\vec{v}|}\,. \label{A2}
\end{equation}
We are only interested in the IR problems connected with this
expression, ignoring the UV problems which can, however, easily be
taken into account. The reader is free to get rid of them by a
suitable UV regularization. This means that our problem is the
existence of the $u$-integral in (\ref{A1}) {\em at large} $u$.
Assume $x,y,$ to be restricted to bounded regions by integrating
them over test functions with compact support. Again, this
restriction is not essential. Then, because of the $\delta$-factor
in (\ref{A2}), $u$ can tend to infinity only in spacelike
directions. For large $u$ we can then neglect $\vec{x}$ with
respect to $\vec{u}$ in $r^j(x-u)$. Using the cluster property we
find
\begin{equation}
A \sim \langle\psi(x)\,\bar{\psi}(y)\rangle\langle \exp\{ie\int
du\,r^j(-u)\,A_j(u)\}\rangle\;, \label{A3}
\end{equation}
where the symbol $\sim$ means that only the potentially IR
dangerous contributions to $A$ are considered. The first factor
depending on $x$ and $y$ is IR harmless. We need therefore only
discuss the second factor, which we call $X$. Going over to
momentum space we find
\begin{equation}
X = \Big\langle\exp\Big\{e\int
dk^0\,d^3k\,\frac{k^j}{|\vec{k}|^2}\,\tilde{A}_j(k)
\Big\}\Big\rangle \label{A4}
\end{equation}
with $\tilde{A}_j$ the Fourier transform of $A_j$ (the tilde will
be omitted in the sequel). The existence of the $k^0$-integral is
part of the UV problem which we ignore. The same goes for the
existence of the $\vec{k}$-integral at $|\vec{k}|\to\infty$. Our
only concern is the possible divergence of the $\vec{k}$-integral
at $\vec{k}=0$. In PT the exponential in (\ref{A4}) is defined as
a power series. (Notice the factor $e$ in the exponent.) Only the
even terms in this expansion survive because the Wightman
functions of an odd number of $A$'s and no $\psi$ or $\bar{\psi}$
vanish. Hence we find
\begin{equation}
X = \sum^{\infty}_{\sigma=0}\frac{e^{2\sigma}}{(2\sigma)!}
\bigg\langle\Big(\int dk\,\frac{k^j}{|\vec{k}|^2}\,
A_j(k)\Big)^{2\sigma}\bigg\rangle\;. \label{A5}
\end{equation}
This expression contains terms of the form
\[ \left\langle\prod^{2{\sigma}}_{\omega=1}
A_{j_{\omega}}(k_{\omega})\right\rangle \] which must be evaluated
in PT, and its IR divergences isolated. The graph rules for these
functions are similar to, but somewhat more complicated than, the
familiar Feynman rules for Green's functions. As in this case it
is found that connected components with $\ge4$ external lines of a
graph give convergent contributions to $X$, because they vanish
sufficiently strongly at $k_{\omega}=0$ to overcome the
$1/|\vec{k_{\omega}}|$ singularities in (\ref{A5}). This is a
consequence of the Ward identities, which for these fermion-less
connected graphs take the form $k^{\mu}A_{\mu}(k)=0$ in all orders
except the $0^{th}$. These contributions can be factored out. The
IR problem is concentrated in graphs consisting exclusively of
2-variable components. But such a 2-point function is assumed to
satisfy the renormalization condition
\begin{equation}
\langle A_{\mu}(k_1)\,A_{\nu}(k_2)\rangle = \delta^4(k_1+k_2)\,
\big(g_{\mu\nu} \delta_+(k_1) + {\rm IR\ harmless\ terms}\big)
\label{A6}
\end{equation}
with $\delta_+(k)=\theta(k_0)\,\delta(k^2)$. The IR divergences
are entirely due to the zero-order term $g_{\mu\nu}\delta_+$.
Inserting this into the expansion (\ref{A5}) and doing the
combinatorics right we find
\begin{equation}
X \sim {\rm (finite\ factor)}\:\cdot\:\exp\Big\{-e^2 \int
\frac{d^3k}{2|\vec{k}|^3}\Big\}\;. \label{A7}
\end{equation}
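The combinatorics behind this resummation can be made explicit (a
standard fact, recalled here for convenience). Once the connected
components with $\ge4$ external lines are factored out, what remains
is the Gaussian moment identity for the exponent
$L=\int dk\,(k^j/|\vec{k}|^2)\,A_j(k)$, which is linear in a free
field and has $\langle L\rangle=0$: the Wick pairings of the
$2\sigma$ factors in (\ref{A5}) number $(2\sigma)!/(2^{\sigma}\sigma!)$,
so that
\[ \sum^{\infty}_{\sigma=0}\frac{e^{2\sigma}}{(2\sigma)!}\,
\langle L^{2\sigma}\rangle
= \sum^{\infty}_{\sigma=0}\frac{1}{\sigma!}
\Big(\frac{e^2}{2}\,\langle L^2\rangle\Big)^{\sigma}
= \exp\Big\{\frac{e^2}{2}\,\langle L^2\rangle\Big\}\,. \]
Evaluating $\langle L^2\rangle$ with the zero-order term
$g_{\mu\nu}\delta_+$ of (\ref{A6}) then produces the negative,
IR divergent exponent displayed in (\ref{A7}).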
The $k$-integral diverges positively at $\vec{k}=0$. The UV
divergence at $|\vec{k}|\to\infty$ is in a more detailed treatment
removed by renormalization and does not concern us here. The
result is, then:
\begin{equation}
X \sim 0 \label{A8}
\end{equation}
which proves (\ref{17}). Notice that the $e^2$ term in the power
series expansion of (\ref{A7}) diverges, as was claimed in
Sect.~3.
\section{Observation of squeezed light from one atom excited with two photons : Supplementary Information}
\section{Theory}
In this section we develop the theoretical description of the squeezing process in our atom-cavity system and outline the derivation of the equations used to interpolate the experimental data. To complement previous approaches based on the pure-state formalism\cite{Carmichael08,Carmichael91}, we make a direct connection between the squeezing of the atomic dipole and the squeezing of the cavity field, and point out a distinction between the contributions of single-photon and two-photon excitation processes.
\subsection{Quadrature squeezing of the cavity field}
We consider a single cavity mode defined by the annihilation and creation operators, $a$ and $a^\dag$ respectively, and obeying the canonical commutation rules $[a,a^\dag]=1$. The quadratures of the intracavity field are defined by
\ber
X_\theta=\frac{1}{2}\left( e^{-i\theta}a+e^{i\theta}a^\dag \right),
\eer
where $\theta$ is an adjustable phase. Defining the fluctuation of $a$ with respect to the steady state value $\langle\, a\,\rangle$ as $\Delta a=a-\langle\, a\,\rangle$, the quadrature variance $\langle\Delta X_\theta^2\rangle=\langle(X_\theta-\langle X_\theta\rangle)^2\rangle$ reads:
\begin{multline}
\langle\Delta X_\theta^2\rangle
=\frac{1}{2}(\Re(e^{-2i\theta}\langle\Delta a^{2}\rangle)+\langle\Delta a^\dagger \Delta a\rangle)+\frac{1}{4},
\end{multline}
which, when normally ordered (symbol $::$), reduces to
\begin{multline}
\langle:\Delta X_\theta^2:\rangle=\frac{1}{2}(\Re(e^{-2i\theta}\langle\Delta a^{2}\rangle)+\langle\Delta a^\dagger \Delta a\rangle)\ .
\end{multline}
For a coherent or a vacuum state, one has $\langle\Delta a^{2}\rangle=\langle\Delta a^\dagger \Delta a\rangle=0$, thus the quadrature variance is equal to the shot noise, $\langle\Delta X_\theta^2\rangle=1/4$, i.e. $\langle:\Delta X_\theta^2:\rangle=0$. The state of the field will be quadrature-squeezed if for some $\theta$ the variance of $X_\theta$ drops below the shot noise, $\langle:\Delta X_\theta^2:\rangle <0$.
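As a quick numerical illustration (our own sketch, not part of the
original analysis; it assumes the QuTiP toolbox, and the truncation
dimension and amplitude below are arbitrary choices), the shot-noise
value $\langle\Delta X_\theta^2\rangle=1/4$ of a coherent state is
easily reproduced:
\begin{verbatim}
import numpy as np
from qutip import destroy, coherent, variance

N = 30                    # Fock-space truncation
a = destroy(N)
psi = coherent(N, 0.7)    # coherent state with alpha = 0.7
for theta in (0.0, np.pi/4, np.pi/2):
    X = (a*np.exp(-1j*theta) + a.dag()*np.exp(1j*theta))/2
    print(theta, variance(X, psi))   # -> 0.25 for every theta
\end{verbatim}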
The term $\langle\Delta a^\dagger \Delta a\rangle$ is the incoherent part of the spectrum and can never be negative. In order to observe squeezing in a nearly-resonant dissipative system such as ours, this term must remain as small as possible, which can be achieved at weak enough excitation intensities. In this coherent (phase-sensitive) limit it becomes negligible compared to the coherent term $\langle\Delta a^{2}\rangle$ and the quadrature squeezing reduces to
\beq
\label{EqDX0}
\langle : \Delta X_\theta^2 : \rangle
\approx\frac{1}{2}\Re(e^{-2i\theta}\langle\Delta a^{2}\rangle)\ .
\eeq
\subsection{Squeezing generated by one atom in an optical cavity}
To calculate $\langle\Delta a^{2}\rangle$, we consider a two-level atom, with a ground state $|g\,\rangle$ and an excited state $|e\,\rangle$, interacting with the mode of the cavity with photon number states $|0\,\rangle, |1\,\rangle, |2\,\rangle, \ldots$. The relevant physical parameters are:
\begin{itemize}
\item{The atom-cavity coupling $g$,}
\item{The cavity field decay rate $\kappa$,}
\item{The atomic dipole decay rate $\gamma$,}
\item{The frequency of the driving laser $\omega$,}
\item{The amplitude of the driving laser $\epsilon$,}
\item{The detuning between the driving beam and the cavity resonance frequency $\Delta_c=\omega-\omega_c$,}
\item{The detuning between the driving beam and the atomic resonance frequency $\Delta_a=\omega-\omega_a$.}
\end{itemize}
The evolution of the system's density matrix $\rho$ is given by a master equation
\beq
\frac{d\rho}{dt}=\mathcal{L}\rho=-\frac{i}{\hbar}[H_{JC}+H_P,\rho]
+\kappa\La\rho+\gamma\Lsig\rho\ .
\eeq
In the rotating wave approximation and in the interaction picture with respect to the laser frequency, the Jaynes-Cummings hamiltonian is
\ber
H_{JC}/\hbar=-\Delta_c a^\dag a -\Delta_a \sigma^\dag \sigma +g (a^\dag \sigma + a\sigma^\dag),
\eer
where $\sigma=|g\rangle\langle e|$ and $\sigma^\dag=|e\rangle\langle g|$ are, respectively, the lowering and raising operators. The terms
\ber
\kap\La\rho&=&\kap(2a\rho a^\dag-\rho a^\dag a-a^\dag a\rho)\\
\gam\Lsig\rho&=&\gam(2\sigma\rho\sigma^\dag-\rho \sigma^\dag \sigma-\sigma^\dag \sigma\rho),
\eer
respectively describe the losses through the cavity mirrors and spontaneous emission from the atom into the free space modes.
The laser light incident on the input mirror results in a coherent excitation of the cavity mode:
\ber
H_{P}/\hbar=\epsilon (a^\dag + a)\ .
\eer
The time evolution of the expectation value of a time-independent operator
$\mathcal O$ is computed as $d\langle \mathcal O\rangle/dt={\rm Tr}(\mathcal O\mathcal{L}\rho)$. Defining complex detunings as $\wct=\Delta_c+i\kappa$ and $\wat=\Delta_a+i\gamma$, we have
\ber
&&\frac{d}{dt}\langle a\rangle=i(\wct \langle a\rangle-g\langle\sigma\rangle-\epsilon)\\
&&\frac{d}{dt}\langle \sigma\rangle=i(\wat \langle \sigma \rangle+g\langle a\sigma_z\rangle)\\
&&\frac{d}{dt}\langle a^2\rangle=2i(\wct \langle a^2\rangle -g\langle a \sigma\rangle-\epsilon\langle a\rangle)\\
&&\frac{d}{dt}\langle a\sigma\rangle=i((\wat+\wct) \langle a\sigma \rangle+g\langle a^2\sigma_z\rangle-\epsilon\langle \sigma \rangle)\;\;\;\;\;\;\;
\eer
where $\sigma_z=|e\rangle\langle e|-|g\rangle\langle g|$ is the population inversion of the atom.
We operate in the weak excitation regime, where the population of the excited atomic state $|e\rangle$ is negligible. In this case $\langle \sigma_z\rangle\approx -1$, $\langle a\sigma_z\rangle\approx -\langle a\rangle$, $\langle a^2\sigma_z\rangle\approx -\langle a^2\rangle$, and the equations above provide the steady-state solutions
\ber
\langle a\rangle&=&\frac{\epsilon \wat}{\wat\wct-g^2},\\
\langle \sigma\rangle&=&\frac{\epsilon g}{\wat\wct-g^2},\\
\langle a^2\rangle&=&\frac{\epsilon^2(\wat(\wat+
\wct)+g^2)}{(\wct(\wat+\wct)-g^2)(\wat\wct-g^2)},\\
\langle a\sigma\rangle&=&\frac{\epsilon^2 g(\wat+\wct)}{(\wct(\wat+\wct)-g^2)(\wat\wct-g^2)},
\eer
which yields for the fluctuations:
\ber
\langle \Delta a^2\rangle&=&\frac{-\epsilon^2g^4}{(\wct(\wat+\wct)-g^2)
(\wat\wct-g^2)^2}\;\;\;\;\;\label{EqDeltAA}\\
\langle \Delta a\Delta \sigma\rangle&=&\frac{-\epsilon^2 g^3\wct}{(\wct(\wat+\wct)-g^2)(\wat\wct-g^2)^2}\label{EqDeltAS}
\eer
Notice that, under the assumption of weak excitation, the field amplitude $\langle a\rangle$ and the atomic coherence $\langle\sigma\rangle$ scale as $\epsilon$, whereas $\langle a^2\rangle$ and $\langle a\sigma\rangle$ scale as $\epsilon^2$. More involved calculations\cite{Carmichael08,Carmichael91} would show that $\langle\Delta a^\dagger \Delta a\rangle$ scales as $\epsilon^4$. This simply means that the spectrum is coherent at weak excitation, $\langle a^\dag a\rangle\approx|\langle a \rangle|^2\propto\epsilon^2$, thereby justifying the use of Eq.~\ref{EqDeltAA} to compute Eq.~\ref{EqDX0}.
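The weak-excitation result can also be checked against an exact
numerical solution of the master equation. The following sketch (our
own illustration, assuming the QuTiP toolbox; the parameter values
are arbitrary) compares Eq.~\ref{EqDeltAA} with the steady state of
the full driven Jaynes-Cummings model:
\begin{verbatim}
import numpy as np
from qutip import (destroy, qeye, sigmam, tensor,
                   steadystate, expect)

N = 15                             # photon-number truncation
g, kappa, gamma = 12.0, 1.3, 3.0   # arbitrary units
Dc, Da, eps = 0.0, 0.0, 0.05       # resonant, weak drive

a  = tensor(destroy(N), qeye(2))
sm = tensor(qeye(N), sigmam())     # atomic lowering operator
H = (-Dc*a.dag()*a - Da*sm.dag()*sm
     + g*(a.dag()*sm + a*sm.dag()) + eps*(a + a.dag()))
# kappa*L_a and gamma*L_sigma of the master equation correspond
# to the collapse operators sqrt(2 kappa) a and sqrt(2 gamma) sigma:
rho = steadystate(H, [np.sqrt(2*kappa)*a, np.sqrt(2*gamma)*sm])

da2_num = expect(a*a, rho) - expect(a, rho)**2
wc, wa = Dc + 1j*kappa, Da + 1j*gamma
da2_ana = -eps**2*g**4/((wc*(wa+wc) - g**2)*(wa*wc - g**2)**2)
print(da2_num, da2_ana)            # agree in the weak-drive limit
\end{verbatim}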
We now define the complex detunings of the dressed states $|n\pm\rangle$ as
\ber
\nonumber\widetilde{{\omega}}_{n\pm}&=&(n-1)\wct+\frac{1}{2}(\wct+\wat)\\
&&\mp\frac{1}{2}\sqrt{4ng^2+(\wct-\wat)^2}\ .
\eer
This allows us to simplify
\ber
\label{EqSig}
\langle \sigma\rangle&=&\frac{\epsilon g}{\widetilde{{\omega}}_{1+}
\widetilde{{\omega}}_{1-}},\\
\langle \Delta a^2\rangle&=&K(-\langle\,\sigma\,\rangle^{2}),
\eer
where the constant $K$ is simply
\beq
\label{EqK}
K=\frac{2g^{2}}{\widetilde{{\omega}}_{2+}\widetilde{{\omega}}_{2-}}\ ,
\eeq
with the result in the body of the paper,
\beq
\label{EqDeltXX}
\langle:\Delta X_\theta^2:\rangle=-\frac{1}{2}\Re(e^{-2i\theta}K\langle\,\sigma\,\rangle^{2})\ .
\eeq
\subsection{Physical interpretation of the squeezing mechanism}
Equation \ref{EqDeltXX} shows that the squeezing of the optical field in the cavity is directly related to the squeezing of the atomic coherence. Due to the two-level structure of the atom, the elementary property $\sigma^2=0$ directly gives $\langle(\,\sigma-\langle\,\sigma\,\rangle\,)^2\rangle=
-\langle\,\sigma\,\rangle^{2}$. In the dressed-state picture, $\langle\,\sigma\,\rangle$ depends only on the normal modes $|1\pm\rangle$, see Eq.~\ref{EqSig}, thus highlighting the single-excitation processes. The constant $K$, in contrast, connects the squeezing of the cavity field to the squeezing of the atomic coherence and demonstrates the importance of the two-photon states $|2\pm\rangle$, as is obvious from its definition Eq.~\ref{EqK}. The observed squeezing is therefore fundamentally dependent on the anharmonic structure of the atom-cavity system. The underlying physical mechanism can be summarized as follows: At low atomic excitation, the dynamics of the internal state of the atom are largely coherent, which due to the quantized nature of the atom (two-level structure) directly translates into polarization squeezing, $-\langle\,\sigma\,\rangle^{2}$, and couples back to the cavity mode to create quadrature squeezing, $-K\langle\,\sigma\,\rangle^{2}$.
\subsection{Time-domain evolution and squeezing spectrum}
The quantum regression theorem provides a set of linear differential equations for the two-time correlations:
\ber
\nonumber\frac{d}{d\tau}\langle \Delta a(\tau)\Delta a(0)\rangle
&=&i(\wct \langle\Delta a(\tau)\Delta a(0)\rangle\\
&&-g\langle\Delta \sigma(\tau)\Delta a(0)\rangle),\\
\nonumber\frac{d}{d\tau}\langle\Delta \sigma(\tau)\Delta a(0)\rangle
&=&i(\wat \langle\Delta \sigma(\tau)\Delta a(0)\rangle\\
&&-g\langle\Delta a(\tau)\Delta a(0)\rangle).
\eer
Solved with the initial conditions given by Eqs. \ref{EqDeltAA} and \ref{EqDeltAS}, it yields for $\tau>0$
\beq
\langle:\Delta X_\theta(\tau)\Delta X_\theta(0):\rangle=-\frac{1}{2}\Re (e^{-2i\theta}K\langle\sigma\rangle^2 f(\tau))\ ,\label{EqDeltXtX0}
\eeq
where the regression of the fluctuations is given by
\ber
f(\tau)&=&
\frac{\widetilde{{\omega}}_{1+}}{\widetilde{{\omega}}_{1+}-\widetilde{{\omega}}_{1-}}
\exp(i\widetilde{{\omega}}_{1-}\tau)\\
&&- \frac{\widetilde{{\omega}}_{1-}}{\widetilde{{\omega}}_{1+}-\widetilde{{\omega}}_{1-}} \exp(i\widetilde{{\omega}}_{1+}\tau).\;\;\;
\eer
The squeezing spectrum measured outside the cavity, normalized to the shot noise, can be obtained by a simple Fourier transform of the autocorrelations. Taking into account the overall detection efficiency $\eta$ and the fact that the fluctuations of the field leaking outside the cavity can be related to those inside by $\langle\Delta a_{out}^2\rangle=2\kappa\langle\Delta a^2\rangle$, it reads
\begin{multline}
S_\theta(\Omega)=1+2\eta\kappa\int_{0}^\infty d\tau\cos(\Omega\tau)\langle:\Delta X_\theta(\tau)\Delta X_\theta(0):\rangle\;\;\;\;\;\;\;\;\;\;\\
=1+8\eta\kappa\Re\left\{e^{-2i\theta}\langle\Delta a^2\rangle\frac{i\wpt\wmt}{\wpt-\wmt}\right.\\
\left.\times\left(\frac{1}{\Omega^2-\wmt^2}-\frac{1}{\Omega^2-\wpt^2}\right) \right\},
\end{multline}
which is the sum of two Lorentzians with frequencies and widths defined by the normal modes.
\subsection{Interpretation in the dressed-state picture}
The physical description can now be transposed into the dressed-state picture to determine the transition mechanism shown on Fig. 1 in the main paper body. Two pump photons make a near-resonant excitation of the second doublet of dressed states, $|2\pm\rangle$, which decay via the normal modes $|1\pm\rangle$. The time dependence of the measured signal, determined by $f(\tau)$, corresponds to the beatnote of these photons with the local oscillator which has the same frequency $\omega$ as the probe. It is the sum of two terms oscillating at the detuning frequencies $\omega-\omega_{1\pm}$ of the normal modes and damping according to their linewidths. For each of the two possible transitions, the two emitted photons appear in opposite sidebands with respect to the local oscillator and, due to the coherence of the emission process, this results in quadrature squeezing at the corresponding frequency.
\section{Experimental setup}
The atom-cavity system is based on a $123$\,$\mu$m-long Fabry-Perot optical resonator formed by two identical mirrors (transmission $2.8$\,ppm, absorption losses $4.0$\,ppm, radius of curvature $200$\,mm, diameter $7.75$\,mm), reaching a finesse of $\approx 470\,000$. A $785$\,nm laser beam, resonant with a TEM$_{00}$ cavity mode, is used to stabilize the cavity length and to form a red-detuned dipole trap for single $^{85}$Rb atoms, injected into the cavity by an atomic fountain. A $780.24$\,nm probing beam excites another TEM$_{00}$ cavity mode, nearly resonant with the $(5^2S_{1/2}, \; F=3, \;m_F=3 \rightarrow 5^2P_{3/2}, \; F=4, \; m_F=4)$ atomic transition. The probing power impinging on the cavity is set to $8.5$\,pW, which corresponds to $2.0$ photons per relevant temporal mode defined by the cavity decay time of $60$\,ns. The effective atom-light coupling $g/2\pi=12$\,MHz exceeds the atomic dipole and cavity field decay rates (respectively $\gamma/2\pi=3$\,MHz and $\kappa/2\pi=1.3$\,MHz) and brings the system into the strong-coupling regime. The quality of the coupling, reduced compared to the maximal value $g_{max}/2\pi=16$\,MHz by the motion of the atom inside the trap, is deduced from the cavity transmission measured with a single photon counter: strongly coupled atoms drive the cavity off resonance and decrease the transmitted photon flux by a factor of $50$ \cite{Schuster08}. The light exiting the cavity through the input mirror is picked up by an optical circulator and reflected towards a balanced homodyne detector (Thorlabs PDB120A). The phase of the local oscillator, controlled by two mirrors mounted on piezoelectric actuators, is locked using an auxiliary $772$\,nm frequency-stabilized beam far off-resonant with respect to any cavity mode and hence directly reflected by the input cavity mirror. The homodyne signal is digitized with a 14-bit resolution at a $100$\,MHz rate. Within the overall $40$\,MHz bandwidth of the homodyne detection system, limited by the anti-alias filter of the digitizer, the $800$\,$\mu$W local oscillator power yields a shot noise level $\geq 9$\,dB above the electronic noise.
\section{Experimental sequence}
The experimental sequence is divided into three parts: preparation, measurement, and control. During the preparation time ($\approx 3$\,s), atoms are loaded into a magneto-optical trap located $25$\,cm below the cavity, optically cooled down to $\approx 5$\,$\mu$K, and launched towards the cavity which they reach at nearly zero velocity. A sharp drop in the cavity transmission heralds the arrival of a well-coupled atom, trapped by raising the dipole potential from $0.2$ to $0.9$\,mK. This triggers a $20$\,ms measurement sequence, which depends on the probing frequency. When the probe beam is resonant with the cavity ($\omega=\omega_c$), a cavity cooling mechanism leads to good atomic storage times and the system is probed continuously with a $P_{in}=8.5$\,pW incoming power. When the probe is tuned close to the atomic resonance ($\omega \approx \omega_a$) no cooling occurs: in this case the measurement sequence is divided into alternating $200$\,$\mu$s probing and cooling intervals (the probe power and frequency being set to $P_{in}=8.5$\,pW, $\omega \approx \omega_a$ for probing and $P_{in}=1.7$\,pW, $\omega = \omega_c$ for cooling). After each measurement, the last part of the sequence ($\approx 1$\,s) is used to verify the powers of the trapping and cooling beams, as well as the power and the phase of the local oscillator, and to shift this phase by $\pm\pi/2$ to alternate between the $X$ and $P$ quadrature measurements.
The extremely low signal level combined with short atomic storage times places stringent requirements on the experimental stability and technical noise suppression. This is achieved by actively controlling all experimental parameters and by optimizing the data acquisition procedure. During the $20$\,ms measurement sequence, the atom remains strongly coupled for $1.6$\,ms on average before it leaves the cavity. The remaining data, acquired when the cavity is empty, is used as a reference. This nearly simultaneous signal and reference acquisition leads to an excellent cancelation of technical noise and slow thermal drifts. For each probing frequency we repeat the measurement sequence for $\sim 100$\,h to acquire $\approx 3$\,s of strong-coupling and $\approx 30$\,s of reference data for each quadrature.
\section{Data analysis}
For each trapped atom, we divide the $20$\,ms data sample into $200$\,$\mu$s intervals and, using the measured cavity transmission, we select those where the atom was strongly coupled (transmission $T\leq 0.04\,T_{max}$), and those where the cavity was empty ($T\geq 0.7\,T_{max}$). For each interval, we calculate the autocorrelations with the maximal $10$\,ns time resolution allowed by the digitizer's speed, we average the calculated functions over all qualified intervals with a given parameter set, and we subtract the mean value at large time delays ($t>1$\,$\mu$s) where signals become uncorrelated.
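In schematic form (our own illustration with hypothetical array
names; the actual analysis additionally performs the interval
selection described above), the averaged, offset-subtracted
autocorrelation reads:
\begin{verbatim}
import numpy as np

def autocorr(x, max_lag):
    # unnormalized <x(t+tau) x(t)> for tau = 0 .. max_lag-1
    n = len(x)
    return np.array([np.mean(x[k:]*x[:n-k]) for k in range(max_lag)])

def averaged_autocorr(intervals, max_lag, tail_start):
    # average over all qualified intervals, then subtract the
    # mean at large delays where the signals are uncorrelated
    ac = np.mean([autocorr(x, max_lag) for x in intervals], axis=0)
    return ac - np.mean(ac[tail_start:])
\end{verbatim}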
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{RawAutocorr.eps}
\caption{``Raw'' homodyne autocorrelations, comparing the signal (strongly coupled atom) and reference (uncoupled atom) data for a given parameter set ($X$ quadrature, probe frequency $\omega=\omega_c$, for other parameters the curves are very similar). The visible oscillations correspond to the impulse response of the detector, i.e. to the autocorrelations of the shot noise measured with the finite detection bandwidth.}
\label{RawAutocorr}
\end{center}
\end{figure}
Figure \ref{RawAutocorr} shows that the obtained ``raw'' autocorrelations of the strong-coupling signal are practically superimposed with those of the empty-cavity reference. The oscillations visible at this scale correspond only to the impulse response of the detector, i.e. to the autocorrelations of the shot noise measured with a finite bandwidth. Physically relevant signals, $\approx 10^4$ times smaller, become only accessible by calculating the difference of these curves, where these large oscillations cancel out, leaving a quantity proportional to ${\langle :\Delta X_\theta(\tau) \Delta X_\theta(0):\rangle}$. In order to properly normalize it, we unlock the cavity and measure the offset on the $X$ quadrature: all of the light is then reflected towards the homodyne detector, which corresponds to measuring a coherent state $|\alpha\rangle$ with $\langle X\rangle=\alpha=\sqrt{2.0}$ and $\langle P\rangle=0$. The corresponding normalization factor perfectly matches the value expected from the detector's parameters and includes its overall efficiency $\eta_d=0.55$, without artificially compensating any other experimental imperfection such as the losses in the atom-cavity system itself.
An accurate subtraction between the signal and the reference requires an extremely stable shot noise level, determined by the local oscillator's power. This power is actively stabilized during the preparation and the control phases of the measurement sequence, but the stabilization system is too slow to follow the fast switching between the $200$\,$\mu$s probing and cooling intervals, and during the $20$\,ms measurement time the power is controlled by a sample-and-hold circuit. This circuit presents a small systematic drift, and the shot noise level changes by $\approx10$\,mdB (i.e. $\approx 0.2\%$) during these $20$\,ms. Since the strong coupling data corresponds to the beginning, and the reference data to the end of the acquisition, this variation appears on the differential autocorrelations, although it is too small to be visible on Fig. \ref{RawAutocorr}. To measure and compensate for this drift, for each parameter set we select the sequences where the atom was not trapped at all (triggering the acquisition but leaving the trap within the first $100$\,$\mu$s). In this case the variance of the homodyne signal corresponds only to the shot noise. By averaging over all such sequences, we obtain the evolution of the shot noise during the $20$\,ms acquisition. We then determine the average time interval when the atoms stayed strongly coupled (first $1.6$\,ms) and the one when the cavity was empty (last $16$\,ms of the $20$\,ms acquisition) and, for each case, calculate the average shot noise variance. Rescaling the reference data by their ratio $\zeta$ (typically $0.998$) compensates for the drift and allows us to completely cancel out the contribution of the shot noise on the differential autocorrelations: dominant on Fig. \ref{RawAutocorr}, it is absent from Fig. 2 in the main body of the paper.
Although the agreement between the measured data and the theoretical autocorrelation function $\langle:\Delta X_\theta(\tau)\Delta X_\theta(0):\rangle$ (Eq. \ref{EqDeltXtX0}) is satisfactory, it can be further improved by taking into account the impulse response of the detector $D(\tau)$, obtained by renormalizing to 1 the integrated shot noise autocorrelations on Fig. \ref{RawAutocorr}. The theoretical curves on Fig. 2 in the main paper body correspond to the convolution of the theoretical autocorrelations $\langle:\Delta X_\theta(\tau)\Delta X_\theta(0):\rangle$ with $D(\tau)$.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{RawSpectra.eps}
\caption{``Raw'' signal and reference homodyne noise spectra ($X$ quadrature, $\omega=\omega_c$, for other parameters the curves are very similar). The visible features correspond to the frequency dependence of the detection gain, defined by the anti-alias filter (steeply rolling off beyond $40$\,MHz), and to a small ($200$\,mdB) and narrow ($<10$\,kHz) residual technical noise peak at $6.3$\,MHz with an equal height on both curves. The DC offset peak ($-38.9$\,dBV) is cut off.}
\label{RawSpectra}
\end{center}
\end{figure}
We calculate the noise spectra with a $10$\,kHz resolution by a direct Fourier transform of the time-domain homodyne signal divided into $100$\,$\mu$s intervals. Figure \ref{RawSpectra} shows that the ``raw'' spectra of the signal and the reference are, like the autocorrelations, completely superimposed, the reference spectrum being less noisy due to better statistics. The only visible features are the smooth variations of the shot noise level due to a non-flat response of the digitizer's anti-alias filter, and a small ($200$\,mdB) and narrow ($<10$\,kHz) residual technical noise peak at $6.3$\,MHz with an equal height on both curves. We rescale the reference spectrum by the same factor $\zeta$ as the autocorrelations to compensate for the small shot noise drift, and calculate the difference between the two spectra in logarithmic scale. Like for the autocorrelations, this cancels out the residual technical noise, and brings the noise level for frequencies above $30$\,MHz within $1$\,mdB from $0$: as can be expected from the parameters of the atom-cavity system, no spectral features appear at those frequencies, and the noise level of the signal is equal to the shot noise level of the reference. Since the narrow $10$\,kHz resolution is only necessary for efficient technical noise cancelation and is otherwise excessive for a system where relevant decay times remain below $0.1$\,$\mu$s, we average out narrow-band spectral fluctuations by convolution with a $1$\,MHz Lorentzian filter, which amounts to assuming that signals separated by $\tau>0.16$\,$\mu$s are uncorrelated. Finally, we correct the obtained spectra by the independently measured $55\%$ homodyne detection efficiency, to obtain the curves presented on Fig.\,3 in the paper body. We verified that the same spectra can be obtained by Fourier-transforming the differential autocorrelation functions and rescaling the result by the cavity decay rate and the overall detection efficiency including cavity losses.
\end{document}
\section{Introduction}
Reconstruction of the hadron spectral function from Euclidean correlation
functions is a notoriously difficult problem.
In lattice quantum chromodynamics (LQCD) computations, which have so
far been the only practical method to calculate non-perturbative
quantities with errors under control, physical quantities are extracted
from $n$-point correlation functions obtained on an Euclidean lattice.
It means that all momenta inserted are space-like, so that physical
amplitudes, especially those for on-shell particles or resonances,
have to be read off from this unphysical setup.
The ground state contribution can be obtained relatively easily by
measuring an exponential fall-off of the correlator at long distances,
while excited states are much harder to identify because the
exponential function of the form $\exp(-Et)$ with an energy $E$ and a
time separation $t$ is numerically very similar for different $E$'s
especially when many energy levels are close to each other as in the
experimental situation.
Typically, then, only one or even none of the excited states can be
identified.
Still, because the spectral function is of phenomenological interest
for various applications, several groups developed methods to extract
it with some extra pieces of information.
The maximum entropy method \cite{Nakahara:1999vy,Asakawa:2000tr} is
one such attempt, where one assumes that unknown functions (or their
parameters) are statistically equally distributed within the model
space and tries to determine the most ``likely'' function.
A slightly different statistical approach based on Bayesian statistics
was also proposed \cite{Burnier:2013nla}.
Unfortunately, they are not free from uncertainties since the
{\it statistical distribution} of the spectral function does not
really have a theoretical basis.
A similar problem would also remain in a recent attempt to use machine
learning to reconstruct the spectral function \cite{Kades:2019wtd}.
Another attempt to approach the problem is the use of the
Backus-Gilbert method, which is a deterministic method to obtain a
smeared spectral function \cite{Hansen:2017mnd}.
The smearing kernel is automatically determined by the data, requiring
that the width of the smearing is minimized.
In practice, one has to relax the minimization by adding an extra term
to the function to be minimized in order to avoid numerical instability.
The smearing kernel is therefore unknown until one actually performs
such analysis.
There is also a proposal to arrange the Backus-Gilbert
method such that the smearing kernel obtained becomes close to what
one inputs \cite{Hansen:2019idp}.
Several such methods have been tested with mock data generated from a
known spectral function \cite{Tripolt:2018xeo}, and no clearly best
solution has been found so far.
We propose an alternative method to reconstruct the spectral function
with smearing specified by arbitrary kernels.
The Chebyshev polynomials are introduced to approximate the kernel
function, and a smeared spectral function under this approximation is
obtained from lattice data of temporal correlator.
The procedure is deterministic and the systematic error due to a
truncation of the Chebyshev polynomials can be estimated.
Limitation of the method comes from statistical error of the lattice
data, which prevents one from using high order polynomials.
The smeared spectral function can offer an intermediate quantity that
can be used to compare experimental data with non-perturbative
theoretical calculation provided by LQCD.
For instance, let us consider the $R$-ratio $R(s)$ defined for the
$e^+e^-\to q\bar{q}$ cross section as
$R(s)=\sigma_{e^+e^-\to q\bar{q}}(s)/\sigma_{e^+e^-\to\mu^+\mu^-}(s)$.
It cannot be directly compared with perturbative calculations in the
resonance region where perturbative expansion does not converge.
The LQCD calculation as it stands is not useful either, because it
can only calculate low-lying hadron spectrum and scattering phase
shift for specified final states, such as $\pi^+\pi^-$ or $K\bar{K}$,
but fully inclusive rate is unavailable.
Our method concerns how to extract the information for inclusive
processes, such as $q\bar{q}$, from lattice results of hadron
correlators without specifying any final states.
The $R$-ratio as a function of invariant mass of the final states
is not directly accessible in our method, but the function that is
smeared with some kernel can be related to quantities calculable on
the lattice.
The smeared spectral function was considered in the early days of
perturbative QCD by Poggio, Quinn and Weinberg \cite{Poggio:1975af}.
They considered a smearing of the form
\begin{equation}
\label{eq:smearing}
\bar{R}(s,\Delta_s)=\frac{\Delta_s}{\pi}\int_0^\infty ds'
\frac{R(s')}{(s'-s)^2+\Delta_s^2},
\end{equation}
whose kernel approaches a delta function $\delta(s-s')$ in the limit
of the width $\Delta_s\to 0$.
The $R$-ratio (or the spectral function) can be related to the vacuum
polarization function through the optical theorem
$R(s)=(1/\pi){\rm Im}\Pi(s)$
and the dispersion relation\footnote{In practice, one needs to use
the subtracted version to avoid ultraviolet divergences.}
\begin{equation}
\label{eq:dispersion}
\Pi(q^2)=\frac{1}{\pi}\int_0^\infty ds \frac{{\rm Im}\Pi(s)}{s-q^2}.
\end{equation}
The smeared spectrum can then be written using the vacuum polarization
function at complex values of $q^2$:
\begin{equation}
2\pi i\,\bar{R}(s,\Delta_s)=\Pi(s+i\Delta_s)-\Pi(s-i\Delta_s).
\end{equation}
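Indeed, inserting the dispersion relation (\ref{eq:dispersion}) into
the difference of the two boundary values gives
\begin{eqnarray}
\Pi(s+i\Delta_s)-\Pi(s-i\Delta_s)
&=&\frac{1}{\pi}\int_0^\infty\! ds'\,{\rm Im}\Pi(s')
\left[\frac{1}{s'-s-i\Delta_s}-\frac{1}{s'-s+i\Delta_s}\right]
\nonumber\\
&=&2i\int_0^\infty\! ds'\,R(s')\,
\frac{\Delta_s}{(s'-s)^2+\Delta_s^2}
=2\pi i\,\bar{R}(s,\Delta_s).\nonumber
\end{eqnarray}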
The main observation was that, unlike the imaginary part
${\rm Im}\Pi(s)$ evaluated on the cut,
one can avoid non-perturbative kinematical region and calculate
$\Pi(s+i\Delta_s)$ in perturbation theory as long as the smearing
range $\Delta_s$ is large enough.
This is the argument behind the quark-hadron duality.
The width parameter $\Delta_s$ is typically chosen larger than the QCD
scale $\Lambda_{\rm QCD}$, but there is no {\it a priori} criterion for
how large $\Delta_s$ should be for perturbation theory to work to a
desired accuracy.
One can also consider a Gaussian smearing instead of
(\ref{eq:smearing}), and arrive at the same conclusion
\cite{Bertlmann:1984ih}.
In many applications of perturbative QCD, such as deep inelastic
scattering or inclusive hadron decays, the smearing is not as
transparent as in this example.
Some smearing over kinematical variables is involved depending on the
setup of the problems, and the question of how much smearing is
introduced is more obscure.
Yet, one usually assumes that perturbation theory works; the systematic
error due to this duality assumption is unknown.
Another commonly used form of smearing is the Laplace transform
\begin{equation}
\label{eq:Laplace}
\tilde{\Pi}(M^2)=\frac{1}{M^2}\int_0^\infty\! ds\,
\left[\frac{1}{\pi} {\rm Im}\Pi(s)\right]
e^{-s/M^2},
\end{equation}
which is related to the Borel transform involved in QCD sum rule
calculations \cite{Shifman:1978bx}.
Since $\tilde{\Pi}(M^2)$ can be written using the vacuum polarization
function $\Pi(Q^2)$ in the space-like domain, the non-perturbative
kinematical region is avoided.
The effective range of smearing is controlled by the parameter $M^2$.
Of course, one can view the dispersion relation
(\ref{eq:dispersion}) for a space-like value of $q^2=-Q^2$:
\begin{equation}
\Pi(Q^2)=\frac{1}{\pi}\int_0^\infty ds\,\frac{{\rm Im}\Pi(s)}{s+Q^2},
\end{equation}
or its subtracted version
\begin{equation}
\Pi(Q^2)-\Pi(0)=
-\frac{Q^2}{\pi}\int_0^\infty ds\,\frac{{\rm Im}\Pi(s)}{s(s+Q^2)},
\end{equation}
as a sort of smearing.
The range of smearing is effectively infinity as the weight function
decreases only by a power of $s$.
This is another way of keeping away from the resonance region, and
perturbation theory is expected to be applicable.
One still needs to include power corrections using the operator
product expansion, which involves unknown parameters (or condensates);
LQCD calculation is desirable to eliminate such uncertainties.
We develop a formalism of LQCD calculation that allows us to compute
the quantities mentioned above,
{\it i.e.} those obtained by applying some smearing on the
spectral function.
The smearings are designed to escape from the resonance region to some
extent, so that perturbation theory is applicable, but some
uncertainty still remains as mentioned above.
By using LQCD, on the other hand, a fully non-perturbative calculation
can be achieved, and no residual uncertainty due to the singularities
or the power corrections remains.
In other words, one can entirely avoid the assumption of quark-hadron
duality.
Through this method, the comparison between experimental data and
lattice calculation would provide a theoretically clean test of QCD.
It can also provide a testing ground for the perturbative QCD
analysis including the operator product expansion against the fully
non-perturbative lattice calculation.
This paper is organized as follows.
In Section~\ref{sec:spectral_function} we introduce the method to
reconstruct the smeared spectral function.
It includes an approximation using the Chebyshev polynomials, whose
performance is demonstrated for several cases with toy examples in
Section~\ref{sec:cheb_approx}.
Then, the method is tested with actual lattice data of a charmonium
correlator in Section~\ref{sec:charmonium}.
The discussion section (Section~\ref{sec:discussions})
lists possible applications of the method, including phenomenological
ones as well as those of theoretical interest related to the QCD sum
rule.
Our conclusions are given in Section~\ref{sec:conclusions}.
\section{Reconstruction of smeared spectral function}
\label{sec:spectral_function}
We are interested in the spectral function
\begin{equation}
\label{eq:spectral_func}
\bar\rho(\omega)=\frac{\langle\psi|\delta(\hat{H}-\omega)
|\psi\rangle}{\langle\psi|\psi\rangle}
\end{equation}
for some state $|\psi\rangle$.
It does not have to be an eigenstate of the Hamiltonian $\hat{H}$, but can be
created by applying some operator on the vacuum, {\it e.g.}
$\sum_xJ_\mu(x)|0\rangle$ for the case of $e^+e^-\to q\bar{q}$.
Here $J_\mu(x)$ is the electromagnetic current and the sum over $x$
gives the (spatial) zero-momentum projection.
An extension to the case of different initial and final states should
be possible.
The spectral function $\bar\rho(\omega)$ in (\ref{eq:spectral_func})
is normalized such that it becomes unity when integrated over all
possible energy $\omega$.
On the lattice, we calculate the temporal correlation function
\begin{equation}
\label{eq:correlator}
\bar{C}(t) = \frac{\langle\psi|e^{-\hat{H}t}|\psi\rangle}{
\langle\psi|\psi\rangle},
\end{equation}
which is normalized to one at zero time separation.
It can be rewritten using the spectral function as
\begin{equation}
\label{eq:rhobar}
\bar{C}(t) =
\frac{\langle\psi| \int_0^\infty\! d\omega\,
\delta(\hat{H}-\omega) e^{-\omega t} |\psi\rangle}{
\langle\psi|\psi\rangle} =
\int_0^\infty\! d\omega\,\bar\rho(\omega)e^{-\omega t},
\end{equation}
which is just the conventional spectral decomposition of a
correlator.
In practice, we can take the state $|\psi\rangle$ as
\begin{equation}
|\psi\rangle = e^{-\hat{H}t_0} \sum_x V_\mu|0\rangle
\end{equation}
with some (small, but non-zero) time separation $t_0$
in order to avoid any potential divergence due to a contact term when
evaluating $\langle\psi|\psi\rangle$.
For instance, by taking $t_0=1$ in the lattice unit,
we can identify (\ref{eq:correlator}) as $C(t+2)/C(2)$ for a
vector correlator $C(t)=\langle 0|V_\mu(t)V_\mu(0)|0\rangle$, where
$\mu$ stands for a spatial direction and not summed over.
We note that the Hamiltonian $\hat{H}$ in (\ref{eq:correlator}) is not
explicitly written in lattice QCD simulations,
but we assume that it exists so that the time evolution is given by
a transfer matrix $\hat{z}=e^{-\hat{H}}$.
We also assume that the eigenvalues of $\hat{H}$ are non-negative and
equivalently that the eigenvalues of $\hat{z}$ are constrained to lie
between 0 and 1.
Strictly speaking, many lattice actions currently used in numerical
simulations do not satisfy the {\it reflection positivity}, which is a
necessary condition for a Hermitian Hamiltonian to exist.
Any problem due to the violation of the reflection positivity is
expected to disappear in the continuum limit.
Our assumption is, therefore, that our lattice calculations are
sufficiently close to the continuum limit.
Using the transfer matrix $\hat{z}$,
the correlator in (\ref{eq:correlator}) is simply written as
$\bar{C}(t)=\langle\psi|\hat{z}^t|\psi\rangle/\langle\psi|\psi\rangle$.
For the vacuum polarization function due to electromagnetic currents,
\begin{equation}
\Pi_{\mu\nu}(q)=(q_\mu q_\nu-q^2\delta_{\mu\nu})\Pi(q^2)
= \int\!d^4x\,e^{iq\cdot x}\langle 0|J_\mu(x)J_\nu(0)|0\rangle,
\end{equation}
the spectral function is often defined as
$\rho(s)=(1/\pi){\rm Im}\Pi(s)$.
The Euclidean correlator is then expressed as
\begin{equation}
C(t)=\int_0^\infty d\omega\,\omega^2\rho(\omega^2) e^{-\omega t}.
\end{equation}
(See \cite{Bernecker:2011gh}, for instance.)
This is slightly different from (\ref{eq:correlator}), but can be
related by redefining the spectral function.
Namely, we define $\bar{C}(t)$ as
$\bar{C}(t)\equiv C(t+2t_0)/C(2t_0)$, so that the spectral function
$\bar{\rho}(\omega)$ in (\ref{eq:rhobar}) is
$\bar{\rho}(\omega)=(1/C(2t_0))\omega^2\rho(\omega^2)e^{-2\omega t_0}$.
We note that the normalization factor $C(2t_0)$ is explicitly
calculable on the lattice.
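This identification can be verified in one line:
\begin{eqnarray}
\bar{C}(t)=\frac{C(t+2t_0)}{C(2t_0)}
=\int_0^\infty\! d\omega\,
\frac{\omega^2\rho(\omega^2)\,e^{-2\omega t_0}}{C(2t_0)}\,
e^{-\omega t},\nonumber
\end{eqnarray}
so that the integrand matches the form (\ref{eq:rhobar}) with the
spectral function $\bar{\rho}(\omega)$ given above.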
Now we define a smeared spectral function $\bar\rho_\Delta(\omega)$
for a smearing kernel $S_\Delta(\omega,\omega')$ as
\begin{eqnarray}
\bar\rho_\Delta(\omega) &=& \int_0^\infty d\omega' S_\Delta(\omega,\omega')
\bar\rho(\omega')
\\
&=&
\frac{\langle\psi| \int_0^\infty d\omega'
S_\Delta(\omega,\omega')
\delta(\hat{H}-\omega') |\psi\rangle}{\langle\psi|\psi\rangle}
\\
&=& \frac{\langle\psi| S_\Delta(\omega,\hat{H})|\psi\rangle}{
\langle\psi|\psi\rangle}.
\end{eqnarray}
Then, the matrix element to be evaluated is
$\langle\psi|S_\Delta(\omega,\hat{H})|\psi\rangle$.
The form of the smearing kernel $S_\Delta(\omega,\omega')$ is
arbitrary; we can consider the choices discussed in the previous
section.
In the following, to be explicit, let us assume a specific form for
$S_\Delta(\omega,\omega')$ as
\begin{equation}
S_\Delta(\omega,\omega')=\frac{1}{\pi}
\frac{2\Delta}{(\omega-\omega')^2+\Delta^2},
\label{eq:kernel}
\end{equation}
where $\Delta$ represents a range of smearing.
We consider a polynomial approximation of $S_\Delta(\omega,\hat{H})$ of
the form
\begin{equation}
S_\Delta(\omega,\hat{H}) \simeq \frac{c_0(\omega)}{2} +
\sum_{j=1}^N c_j(\omega) T_j(\hat{z}),
\label{eq:cheb_approx}
\end{equation}
where $T_j(x)$ stands for the Chebyshev polynomial and the sum is up
to its maximal order $N$.
The first few terms of the Chebyshev polynomials are
$T_0(x)=1$, $T_1(x)=x$, $T_2(x)=2x^2-1$, $T_3(x)=4x^3-3x$, $\cdots$,
and one can use the recursion relation
$T_{j+1}(x)=2xT_j(x)-T_{j-1}(x)$ to construct the following ones.
According to the general formula of the Chebyshev approximation,
the coefficients $c_j(\omega)$ in (\ref{eq:cheb_approx}) can be
obtained as
\begin{equation}
c_j(\omega)=\frac{2}{\pi}\int_{-1}^1 \frac{dx}{\sqrt{1-x^2}} f_\omega(x) T_j(x),
\label{eq:cheb_coeff}
\end{equation}
where the function $f_\omega(x)$ is written as
\begin{equation}
f_\omega(x) = \left\{
\begin{array}{ll}
\displaystyle
\frac{1}{\pi}\frac{2\Delta}{(\omega + \ln x)^2+\Delta^2} & (0<x\le 1),
\\
0 & (-1\le x\le 0).
\end{array}
\right.
\end{equation}
Here, $x$ corresponds to eigenvalues of $\hat{z}=e^{-\hat{H}}$,
so that the function $f_\omega(x)$ represents the smearing function
(\ref{eq:kernel}) with $\omega'$ replaced by $\hat{H}$.
This standard formula for the Chebyshev approximation is
written for a function $f_\omega(x)$ defined on $[-1,1]$.
Here we use it only on $[0,1]$ and assume that $f_\omega(x)$
vanishes for $x\le 0$.
Technically, the numerical integral (\ref{eq:cheb_coeff}) becomes
unstable for large $j$'s due to a divergence of the integrand as
$x\to 1$.
Instead, one may use an alternative formula
\begin{equation}
c_j(\omega)=\frac{2}{\pi}\int_0^\pi d\theta\,
f_\omega(\cos\theta) \cos(j\theta),
\end{equation}
whose evaluation is more stable for large $j$'s.
Since $f_\omega(x)$ vanishes for $x\le 0$, i.e. for $\theta\ge\pi/2$, the
integrand is supported only on $[0,\pi/2]$.
To optimize the Chebyshev approximation, one can use a modified form
written in terms of the shifted Chebyshev polynomials
$T^*_n(x)\equiv T_n(2x-1)$, which are defined on
$0\le x\le 1$.
Its first few terms are
$T_0^*(x)=1$, $T_1^*(x)=2x-1$, $T_2^*(x)=8x^2-8x+1$,
$T_3^*(x)=32x^3-48x^2+18x-1$, $\cdots$.
The corresponding formula for the coefficients appearing in the
Chebyshev approximation is
\begin{equation}
c_j^*(\omega)=\frac{2}{\pi}\int_0^\pi d\theta\,
f_\omega\left(\frac{1+\cos\theta}{2}\right) \cos(j\theta).
\end{equation}
The approximation formula (\ref{eq:cheb_approx}) is unchanged
other than replacing $c_j(\omega)T_j(\hat{z})$ by
$c_j^*(\omega)T_j^*(\hat{z})$.
Since the range of $x$ is narrower, this series gives a better
approximation of the original function for a given order $N$.
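To make the construction concrete, here is a minimal numerical sketch in
Python (our own; the function names are not from any established package)
that evaluates the coefficients $c_j^*(\omega)$ for the Lorentzian kernel
by a simple trapezoidal rule in $\theta$:
\begin{verbatim}
import numpy as np

def f_omega(x, omega, Delta):
    # smearing kernel S_Delta(omega, omega') with omega' = -ln x;
    # the cutoff avoids log(0); the kernel vanishes rapidly as x -> 0
    x = np.maximum(np.asarray(x, dtype=float), 1e-300)
    return (1.0 / np.pi) * 2.0 * Delta / ((omega + np.log(x))**2 + Delta**2)

def shifted_cheb_coeffs(f, N, n_theta=2000):
    # c_j^* = (2/pi) int_0^pi dtheta f((1 + cos theta)/2) cos(j theta)
    theta = np.linspace(0.0, np.pi, n_theta)
    fx = f((1.0 + np.cos(theta)) / 2.0)
    return np.array([(2.0 / np.pi) * np.trapz(fx * np.cos(j * theta), theta)
                     for j in range(N + 1)])

# example: coefficients up to N = 20 for omega = 1.0, Delta = 0.3
c = shifted_cheb_coeffs(lambda x: f_omega(x, 1.0, 0.3), 20)
\end{verbatim}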
Finally, remember that the matrix elements of the transfer matrix
$\hat{z}$ and its power $\hat{z}^t$ can be written as
$\bar{C}(t)=
\langle\psi|\hat{z}^t|\psi\rangle/\langle\psi|\psi\rangle$.
Then, we arrive at an expression for $\bar\rho_\Delta(\omega)$:
\begin{equation}
\label{eq:rhoDelta_approx}
\bar\rho_\Delta(\omega) \simeq \frac{c_0^*(\omega)}{2}
+\sum_{j=1}^N c_j^*(\omega)\langle T_j^*(\hat{z})\rangle,
\end{equation}
where the last term $\langle T_j^*(\hat{z})\rangle$
may be constructed from the correlator $\bar{C}(t)$
by replacing the power of the transfer matrix $\hat{z}^t$ appearing in
$T_j^*(\hat{z})$ by $\bar{C}(t)=C(t+2t_0)/C(2t_0)$.
Therefore, the first few terms are obtained as
\begin{eqnarray}
\label{eq:construct_Tj}
\langle T_0^*(\hat{z})\rangle & = & 1,\nonumber\\
\langle T_1^*(\hat{z})\rangle & = & 2\bar{C}(1)-1,\nonumber\\
\langle T_2^*(\hat{z})\rangle & = & 8\bar{C}(2)-8\bar{C}(1)+1,\nonumber\\
\langle T_3^*(\hat{z})\rangle & = & 32\bar{C}(3)-48\bar{C}(2)+18\bar{C}(1)-1,
\\ & \vdots & \nonumber
\end{eqnarray}
which is a deterministic procedure.
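For completeness, the deterministic procedure just described can be
written down in a few lines; the following Python sketch (ours, under the
assumption that $\bar{C}(0)=1$ by construction) builds the coefficient
vectors of $T_j^*$ from the recursion
$T^*_{j+1}(x)=(4x-2)T^*_j(x)-T^*_{j-1}(x)$ and substitutes
$\hat{z}^t\to\bar{C}(t)$:
\begin{verbatim}
import numpy as np

def shifted_cheb_matrix_elements(Cbar, N):
    # <T_j^*(z)>, j = 0..N, from Cbar[t] = C(t + 2 t0) / C(2 t0);
    # polynomial coefficients are stored in ascending powers of x
    polys = [np.array([1.0]), np.array([-1.0, 2.0])]   # T_0^*, T_1^*
    for j in range(1, N):
        p = np.zeros(j + 2)
        p[1:] += 4.0 * polys[j]            # 4x * T_j^*
        p[:-1] -= 2.0 * polys[j]           # -2 * T_j^*
        p[:len(polys[j - 1])] -= polys[j - 1]
        polys.append(p)
    # substitute z^t -> Cbar(t); note the strong cancellations at large j
    return np.array([sum(p[t] * Cbar[t] for t in range(len(p)))
                     for p in polys[:N + 1]])
\end{verbatim}
For small $j$ this reproduces (\ref{eq:construct_Tj}) exactly, e.g.
$\langle T_3^*(\hat{z})\rangle=32\bar{C}(3)-48\bar{C}(2)+18\bar{C}(1)-1$.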
The general expression (\ref{eq:rhoDelta_approx}) for an approximation
of $\bar\rho_\Delta(\omega)$ is valid for any smearing kernel and for
any value of $\omega$, as long as the coefficients $c_j^*(\omega)$ are
calculated appropriately.
As is well known, the Chebyshev approximation provides the
{\it best} polynomial approximation of a function defined on $0\le x\le 1$:
among all polynomials of a given order $N$ it minimises the minimax error,
i.e. the maximum deviation from the true function over that range; in order
to achieve a better approximation, one needs a higher polynomial order $N$.
Since the (shifted) Chebyshev polynomials $T_j^*(x)$ oscillate rapidly
on the interval $0\le x\le 1$, a larger $N$ is necessary in order
to better approximate the detailed shape of the original spectral
function $\bar\rho(\omega)$, i.e. to narrow the width $\Delta$ of the
smearing kernel.
The approximation is demonstrated in the next section by taking a few
examples.
The shifted Chebyshev approximation works only when the argument $x$
is in $0\le x\le 1$.
In our case, it corresponds to the condition that the eigenvalues of
$\hat{z}$ are in $[0,1]$, which should be satisfied because $\hat{z}$
is the transfer matrix $\hat{z}=e^{-\hat{H}}$.
For a given eigenvalue $z_i$ of $\hat{z}$, each polynomial $T_j^*(z_i)$
takes a value between $-1$ and $1$, and if the state $|\psi\rangle$ is
decomposed as $|\psi\rangle=\sum_ia_i|i\rangle$ with a normalization
$\sum_i|a_i|^2=\langle\psi|\psi\rangle$,
the matrix element becomes
$\langle\psi|T_j^*(\hat{z})|\psi\rangle
=\sum_i|a_i|^2 T_j^*(z_i)$,
which is bounded by $\pm\sum_i|a_i|^2=\pm\langle\psi|\psi\rangle$ so
that $\langle T_j^*(\hat{z})\rangle$ is bounded by $\pm 1$.
This provides a non-trivial constraint that must be satisfied by
the correlator $\bar{C}(t)$.
\section{Chebyshev polynomial approximation: examples}
\label{sec:cheb_approx}
First, we demonstrate how well the smearing kernel
$S_\Delta(\omega,\omega')$, Eq.~(\ref{eq:smearing}), is approximated by
the Chebyshev polynomials.
Setting $\omega'=\omega_0$ = 1, in some unit, say the lattice unit,
we draw a curve of $S_\Delta(\omega,\omega_0)$ in
Fig.~\ref{fig:cheb_approx} (left).
The Chebyshev approximation of the form (\ref{eq:cheb_approx}),
replacing $\hat{z}$ by $e^{-\omega_0}$ in this equation, is also
plotted for $N$ = 10 (dotted), 15 (dot-dashed) and 20 (dashed curve).
On the right panels, we plot the error of the approximation,
namely the difference from the true function,
$S_\Delta^{\rm (approx)}(\omega,\omega_0)-
S_\Delta^{\rm (true)}(\omega,\omega_0)$.
From the plots one can confirm that the smearing function with a larger
width $\Delta$ = 0.3 is well approximated by a limited order of the
polynomials.
Namely, the polynomials up to order $N$ = 20, or even 15, give a nearly
perfect approximation; the deviation is at the level of a few per cent.
As expected, the approximation becomes poorer when the function is
sharper, $\Delta$ = 0.2 or 0.1.
One needs higher order polynomials to achieve better approximation.
We limit ourselves to $N$ = 10--20, because these are the orders that
can be practically used for the analysis of lattice data, as we
discuss in the next section.
When the target energy $\omega_0$ is lower, $\omega_0$ = 0.5,
we observe a very similar pattern as shown in
Fig.~\ref{fig:cheb_approx_w0=0.5}.
An important difference is, however, that
the approximation is better than those for $\omega_0$ = 1.0,
as one can see by comparing the size of the error
$S_\Delta^{\rm (approx)}(\omega,\omega_0)-
S_\Delta^{\rm (true)}(\omega,\omega_0)$.
This is probably because the approximation is constructed as a
function of $z=e^{-\omega}$, and the Chebyshev approximation works
uniformly between $z\in[0,1]$.
The range $\omega\in[0.5,1.5]$, which is the central region for
$\omega_0=1$, is mapped onto $z\sim[0.22,0.61]$, while
$\omega\in[0,1]$, the corresponding region for $\omega_0=0.5$, is mapped
onto $z\sim[0.37,1]$; the latter stretches over a wider range, where the
Chebyshev approximation works more efficiently.
\begin{figure}[tbp]
\centering
\includegraphics[width=8cm]{{plots/cheb_approx_d0.1}.pdf}
\includegraphics[width=8cm]{{plots/cheb_err_d0.1}.pdf}\\
\includegraphics[width=8cm]{{plots/cheb_approx_d0.2}.pdf}
\includegraphics[width=8cm]{{plots/cheb_err_d0.2}.pdf}\\
\includegraphics[width=8cm]{{plots/cheb_approx_d0.3}.pdf}
\includegraphics[width=8cm]{{plots/cheb_err_d0.3}.pdf}\\
\caption{Left: Chebyshev approximation of the smearing kernel
$S_\Delta(\omega,\omega_0)$ for $\omega_0=1$ and $\Delta$ = 0.1
(top), 0.2 (middle) and 0.3 (bottom).
Solid line is the true function, while dotted, dot-dashed and
dashed lines are the approximations with $N$ = 10, 15 and 20,
respectively.
Right: Its error compared to the true function.
}
\label{fig:cheb_approx}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=8cm]{{plots/cheb_approx_w0.5_d0.1}.pdf}
\includegraphics[width=8cm]{{plots/cheb_err_w0.5_d0.1}.pdf}\\
\includegraphics[width=8cm]{{plots/cheb_approx_w0.5_d0.2}.pdf}
\includegraphics[width=8cm]{{plots/cheb_err_w0.5_d0.2}.pdf}\\
\includegraphics[width=8cm]{{plots/cheb_approx_w0.5_d0.3}.pdf}
\includegraphics[width=8cm]{{plots/cheb_err_w0.5_d0.3}.pdf}\\
\caption{Same as Fig.~\ref{fig:cheb_approx} but for $\omega_0$ =
0.5.
}
\label{fig:cheb_approx_w0=0.5}
\end{figure}
In order to see the rate of convergence of the Chebyshev
approximation, we plot the coefficients $c_j^*(\omega)$ as a function
of $j$ in Fig.~\ref{fig:cheb_coeff}.
As an example, the point $\omega=\omega_0$ is taken because this is
where the error is largest.
(This is not always the case, especially when the approximation is
already good; see Figs.~\ref{fig:cheb_approx} and
\ref{fig:cheb_approx_w0=0.5}.)
One can see that the magnitude of the coefficient $|c_j^*(\omega)|$
decreases roughly exponentially with $j$.
When the approximation is better (larger $\Delta$), the decrease of
$|c_j^*(\omega)|$ is faster.
Since the Chebyshev polynomial $|\langle T_j^*(\hat{z})\rangle|$ is
bounded from above by 1, this shows (the upper limit of) the rate of
convergence.
The calculation of $c_j^*(\omega)$ is numerically inexpensive, and one can
easily estimate the error of the approximation due to a truncation at
the order $N$ by $\pm|c_{N+1}^*(\omega)|$.
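In practice this error estimate is a one-liner on top of the coefficient
routine sketched earlier (again in our own notation):
\begin{verbatim}
N = 16
c = shifted_cheb_coeffs(lambda x: f_omega(x, 1.0, 0.3), N + 1)
trunc_err = abs(c[N + 1])   # estimated truncation error at order N
\end{verbatim}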
\begin{figure}[tbp]
\centering
\includegraphics[width=8cm]{{plots/cheb_coeff_w1.0}.pdf}
\includegraphics[width=8cm]{{plots/cheb_coeff_w0.5}.pdf}
\caption{Coefficient of the Chebyshev approximation $c_j^*(\omega)$
at $\omega=\omega_0$ plotted against $j$ for $\omega_0$ = 1.0
(left) and 0.5 (right).
Results for $\Delta$ = 0.1 (squares), 0.2 (triangles), and 0.3
(circles).
}
\label{fig:cheb_coeff}
\end{figure}
As another test, we consider the Laplace transform (\ref{eq:Laplace}),
which is achieved by a smearing kernel
\begin{equation}
\label{eq:Laplace_smearing}
S_{\rm Lap}(M^2,\omega') = \frac{2\omega'}{M^2} e^{-\omega'^2/M^2}.
\end{equation}
In Fig.~\ref{fig:cheb_approx_exp} we draw a curve of
$S_{\rm Lap}(M^2,\omega_0)$, which represents a
convolution with a trivial spectrum $\delta(\omega'-\omega_0)$,
together with its approximations (\ref{eq:rhoDelta_approx}).
The plots for $\omega_0$ = 1.0 (left) and 0.5 (right) demonstrate that
the Chebyshev polynomials provide a very precise approximation even
with $N$ = 10.
This is not unreasonable because the Laplace transform is a smooth
function over the entire range of energy.
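Switching to this test only requires swapping the kernel; a sketch in the
same (our own) notation reads:
\begin{verbatim}
import numpy as np

def S_laplace(M2, x):
    # kernel of the Laplace transform with omega' = -ln x
    x = np.maximum(np.asarray(x, dtype=float), 1e-300)
    w = -np.log(x)
    return 2.0 * w / M2 * np.exp(-w * w / M2)

# shifted Chebyshev coefficients, reusing the routine sketched earlier:
c = shifted_cheb_coeffs(lambda x: S_laplace(1.0, x), 10)
\end{verbatim}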
\begin{figure}[tbp]
\centering
\includegraphics[width=8cm]{{plots/cheb_approx_exp}.pdf}
\includegraphics[width=8cm]{{plots/cheb_approx_exp_w0.5}.pdf}
\caption{Chebyshev approximation of the kernel
$S_{\rm Lap}(M^2,\omega_0)$ corresponding to the Laplace transform
as a function of $1/M^2$.
The true function (solid line) as well as its approximations
(dotted, dot-dashed, dashed) are shown for $\omega_0$ = 1.0 (left)
and 0.5 (right).
}
\label{fig:cheb_approx_exp}
\end{figure}
\section{Smeared spectral function from lattice data:
Charmonium correlators}
\label{sec:charmonium}
We test the method with LQCD data for Charmonium correlators.
Our data are obtained on the lattice with 2+1 flavors of M\"obius
domain-wall fermions.
They have been previously used for an extraction of the charm quark
mass from the Charmonium temporal moments \cite{Nakayama:2016atf}.
(The same lattice ensembles are also used for
calculations of the Dirac spectrum \cite{Cossu:2016eqs,Nakayama:2018ubk},
short-distance current correlators \cite{Tomii:2017cbt},
topological susceptibility \cite{Aoki:2017paw},
and the $\eta'$ meson mass \cite{Fukaya:2015ara}.)
Among 15 ensembles generated at various lattice spacings and lattice
sizes, we use the one at a lattice spacing $a$ = 0.080~fm, a lattice
size $32^3\times 64$, and bare
quark masses $am_{ud}$ = 0.007 and $am_s$ = 0.040.
The corresponding pion mass is 309(1)~MeV.
We take 100 gauge configurations and compute charm quark propagators
with a tuned charm quark mass $am_c$ = 0.44037, using $Z_2$ noise sources
distributed over a time slice.
From these we construct the Charmonium correlators, which correspond to
those of local currents in the pseudo-scalar (PP) and vector (VV)
channels.
This calculation has been repeated for 8 source time slices to
improve the statistical signal, so that the total number of measurements
is 800.
Three spatial polarizations are averaged for the VV channel.
\begin{figure}[tbp]
\centering
\includegraphics[width=10cm]{plots/eff_energy_new.pdf}
\caption{
Effective mass of the pseudo-scalar (circles) and vector
(squares) correlators calculated on the lattice with lattice
cutoff $1/a$ = 2.453(4)~GeV.
The charm quark mass is tuned such that the spin-averaged 1S mass
matches the experimental data.
}
\label{fig:effmass}
\end{figure}
Fig.~\ref{fig:effmass} shows the effective mass
$E_{\rm eff}(t)=\ln[C(t)/C(t+1)]$.
We observe that the correlator is nearly saturated by the ground
state at around $t=18$, while the information on the excited states is
encoded in the region of smaller $t$'s.
Energy levels and amplitudes obtained by a multi-exponential fit of
the form $C(t)=\sum_iA_ie^{-E_it}$ are given in
Table~\ref{tab:mass-amp}.
We include four levels in the fit, but show the results only up to
three levels, since the error of the third amplitude is already large.
Statistical correlation among different $t$'s is taken into account in
the fit.
The results reproduce the experimental data reasonably well.
For instance, the lowest-lying vector meson masses are 3.09 and
3.76(19)~GeV, which correspond to the experimentally observed $J/\psi$ and
$\psi(2S)$ states of masses 3.10 and 3.69~GeV, respectively.
\begin{table}[tbp]
\centering
\begin{tabular}{c|cc|cc}
\hline
& \multicolumn{2}{c|}{PP channel} & \multicolumn{2}{c}{VV channel}
\\\hline
$i$ & $E_i $ & $A_i$ & $E_i$ & $A_i$
\\\hline
0 & 1.22576(18) & 0.17638(31) & 1.2589(17) & 0.1308(35) \\
1 & 1.521(19) & 0.205(22) & 1.534(79) & 0.184(62) \\
2 & 1.831(15) & 0.25(17) & 1.834(15) & 0.04(88) \\
\hline
\end{tabular}
\caption{Energy levels and amplitudes for the Charmonium correlators.}
\label{tab:mass-amp}
\end{table}
We construct the Chebyshev matrix elements
$\langle T_j^*(\hat{z})\rangle$ defined in (\ref{eq:rhoDelta_approx})
from the current correlators.
The calculation is straightforward.
Namely, we replace each power $\hat{z}^t$ in
the polynomials by $C(t+2t_0)/C(2t_0)$ and compute the linear
combinations with the coefficients of the shifted Chebyshev
polynomials.
We take $t_0=1$ in the lattice unit.
The results are shown in Fig.~\ref{fig:Chebz};
the error is calculated using the jackknife method.
It turns out that the Chebyshev matrix elements are precisely
determined up to $j=11$, which corresponds to $t=13$.
Beyond that point, the statistical error grows rapidly, and the
results eventually get out of the range of $\pm 1$, which must be
satisfied for the Chebyshev polynomials.
\begin{figure}[tbp]
\centering
\includegraphics[width=10cm]{plots/jackknife_Tj.pdf}
\caption{
Chebyshev matrix elements $\langle T_j^*(\hat{z})\rangle$
for the PP (circles) and VV (squares) Charmonium correlators.
The VV points are slightly shifted horizontally for clarity.
The statistical error is estimated using the jackknife method.
}
\label{fig:Chebz}
\end{figure}
In fact, the statistical error grows exponentially for higher-order
polynomials, as shown in Fig.~\ref{fig:error_Chebz}.
The error grows by roughly a factor of three with each additional order
in $j$, which means that about 10 times larger statistical samples would
be needed to include yet another order and thus improve the Chebyshev
approximation.
This is not unreasonable, because we are trying to construct quantities of
$O(1)$ as a linear combination of terms of exponentially different
magnitudes.
For the Charmonium correlator at the lattice spacing chosen in this
work, the terms of $\hat{z}^t$ are suppressed roughly by $e^{-1.2t}$,
which is however compensated by the Chebyshev coefficients growing
even faster.
In the end, a strong cancellation among different powers of $\hat{z}$
gives a number between $-1$ and $+1$, and the noise is relatively
enhanced.
For instance, at the order $j=12$, a cancellation of four orders of
magnitude takes place, and it becomes even harder at higher orders.
\begin{figure}[tbp]
\centering
\includegraphics[width=10cm]{plots/error_jackknife_Tj.pdf}
\caption{
Statistical error of $\langle T_j^*(\hat{z})\rangle$ as a function
of $j$.
Circles and squares represent the data for PP and VV channels,
respectively.
}
\label{fig:error_Chebz}
\end{figure}
Since the Chebyshev approximation drastically fails outside
the domain $0\le x\le 1$, we are not able to use it beyond the order
where
$|\langle T_j^*(\hat{z})\rangle|$
exceeds 1.
Instead, we introduce a fit to determine these matrix elements
$\bar{T}_j\equiv\langle T_j^*(\hat{z})\rangle$
in such a way that they are consistent with $\bar{C}(t)$ while
satisfying a constraint $|\bar{T}_j|\le 1$.
To do so, we can use the reverse formula of the shifted Chebyshev
polynomials \cite{Cheb_rev}:
\begin{equation}
\label{eq:reverse_Cheb}
x^n = 2^{1-2n}\sideset{}{'}\sum_{r=0}^n \left(
\begin{array}[c]{c}
2n\\ n-r
\end{array}
\right)
T_r^*(x),
\end{equation}
where the prime on the sum indicates that the term of $r=0$ is to be
halved.
The relation to be satisfied is then
\begin{equation}
\label{eq:to_fit}
\bar{C}(t) = 2^{1-2t}
\left[
\frac{1}{2}\left(\begin{array}[c]{c} 2t\\t\end{array}\right)
+ \sum_{r=1}^t\left(
\begin{array}[c]{c} 2t\\t-r\end{array}
\right)\bar{T}_r
\right].
\end{equation}
We take the $\bar{T}_r$'s as free parameters to be determined and fit the
lattice data $\bar{C}(t)$ with the constraint $|\bar{T}_j|\le 1$.
Statistical correlations of $C(t)$ among different $t$'s are taken
into account in the fit using the least-squares fit package
{\sf lsqfit} by Lepage \cite{lsqfit}.
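A simplified version of this constrained fit can be sketched as follows
(our own illustration: it uses uncorrelated residuals and box bounds via
{\sf scipy}, rather than the correlated fit performed with {\sf lsqfit}
in our actual analysis):
\begin{verbatim}
import numpy as np
from math import comb
from scipy.optimize import least_squares

def fit_Tbar(Cbar, tmax):
    # determine Tbar_r, r = 1..tmax, from Eq. (to_fit),
    # with the constraint |Tbar_r| <= 1 imposed as box bounds
    def residuals(T):
        res = []
        for t in range(1, tmax + 1):
            model = 2.0**(1 - 2*t) * (0.5 * comb(2*t, t)
                    + sum(comb(2*t, t - r) * T[r - 1]
                          for r in range(1, t + 1)))
            res.append(model - Cbar[t])
        return res
    sol = least_squares(residuals, x0=np.zeros(tmax), bounds=(-1.0, 1.0))
    return sol.x
\end{verbatim}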
The resulting values of $\bar{T}_j$ are listed in Table~\ref{tab:Tr}.
Since the number of inputs, the $\bar{C}(t)$'s, equals the number of
unknowns, the $\bar{T}_j$'s, the condition (\ref{eq:to_fit}) could be
solved as a system of linear equations in the absence of the constraints.
In fact, the results are unchanged from the direct determination
through (\ref{eq:construct_Tj}) within the statistical error
for small $j$'s up to $j\simeq 10$.
Beyond that, they are affected by the constraints.
The error becomes large, of order 1, for large $j$'s, where the
$\bar{T}_j$'s are essentially undetermined by the fit but are still kept
within $\pm 1$.
\begin{table}[tbp]
\centering
\begin{tabular}[c]{cD{.}{.}{8}D{.}{.}{8}}
\hline
$j$ & \multicolumn{1}{c}{PP channel} & \multicolumn{1}{c}{VV channel}\\
\hline
1 & -0.6157(17) & -0.6447(17)\\
2 & -0.1933(48) & -0.1257(49)\\
3 & 0.7341(61) & 0.6980(63)\\
4 & -0.6262(46) & -0.6892(49)\\
5 & 0.1090(35) & 0.2410(43)\\
6 & 0.2803(72) & 0.1914(78)\\
7 & -0.267(17) & -0.297(17)\\
8 & 0.036(38) & 0.144(37)\\
9 & 0.073(81) & -0.001(78)\\
10 & 0.03(16) & 0.01(15)\\
11 & -0.14(26) & -0.08(25)\\
12 & 0.08(36) & 0.07(35)\\
13 & 0.06(43) & 0.03(42)\\
14 & -0.07(51) & -0.05(51)\\
15 & -0.05(60) & -0.02(59)\\
16 & -0.01(77) & 0.00(76)\\
\hline
\end{tabular}
\caption{Fit results for $\bar{T}_j$.}
\label{tab:Tr}
\end{table}
Once a set of estimates for $\langle T_j^*(\hat{z})\rangle$,
{\it i.e.} $\bar{T}_j$,
is obtained, the remaining task is to use (\ref{eq:rhoDelta_approx})
to estimate $\bar{\rho}_\Delta(\omega)$.
The results are shown in Fig.~\ref{fig:rho_variousN}.
Three bands corresponding to the polynomial order $N$ = 12 (red), 14
(blue), 16 (orange) are overlaid.
We find that the overall shape is unchanged by adding more terms,
{\it i.e.} going from 12 to 14 or 16,
while the size and shape of the statistical error are affected.
When the polynomial order is lower, some wiggle structure is observed;
the necks, i.e. the positions of small statistical error, fatten
as more terms are added, and eventually the error becomes nearly uniform
in $\omega$.
Beyond $N=16$, the results are essentially unchanged, since the higher
order coefficients $c_j^*(\omega)$ are exponentially suppressed.
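Putting the pieces together, the reconstruction
(\ref{eq:rhoDelta_approx}) amounts to the following loop (a sketch in our
own notation, reusing the helper routines above; {\tt Tbar[0]}
corresponds to $j=1$):
\begin{verbatim}
import numpy as np

def rho_smeared(omegas, Tbar, Delta, N):
    # rho_Delta(omega) ~ c_0^*/2 + sum_{j=1}^N c_j^* <T_j^*(z)>
    out = []
    for w in omegas:
        c = shifted_cheb_coeffs(lambda x: f_omega(x, w, Delta), N)
        out.append(0.5 * c[0] + np.dot(c[1:N + 1], Tbar[:N]))
    return np.array(out)
\end{verbatim}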
\begin{figure}[tbp]
\centering
\includegraphics[width=8cm]{plots/PP_d01_Ns.pdf}
\includegraphics[width=8cm]{plots/VV_d01_Ns.pdf}
\includegraphics[width=8cm]{plots/PP_d03_Ns.pdf}
\includegraphics[width=8cm]{plots/VV_d03_Ns.pdf}
\caption{
Smeared spectral function $\bar{\rho}_\Delta(\omega)$
reconstructed using (\ref{eq:rhoDelta_approx}).
The order of approximation is $N$ =
12 (red), 14 (blue), 16 (orange).
The results for $\Delta$ = 0.1 (top panels) and
0.3 (bottom panels),
for the PP channel (left panels) and VV channel (right panels),
are shown.
}
\label{fig:rho_variousN}
\end{figure}
In Fig.~\ref{fig:rhoDelta} we show the results for the smeared
spectral function $\bar{\rho}_\Delta(\omega)$ obtained with $N=16$.
The smearing kernel is $S_\Delta(\omega,\omega')$ with $\Delta$ = 0.1
(top), 0.2 (middle) and 0.3 (bottom).
The results are compared with the expected contributions from the
ground state and first excited state (red and blue curves).
They are drawn assuming $\delta$-function distributions,
$\bar{\rho}(\omega')=\sum_i A_ie^{-\omega't_0}\delta(\omega'-E_i)$,
with the fitted values of energy levels $E_i$ and their amplitudes
$A_i$ given in Table~\ref{tab:mass-amp}.
The factor $e^{-\omega't_0}$ is introduced to take account of the time
evolution from 0 to $t_0$, which is included in the definition of
the state $|\psi\rangle=e^{-Ht_0}J_\mu|0\rangle$.
\begin{figure}[tbp]
\centering
\includegraphics[width=8cm]{plots/PP_d01.pdf}
\includegraphics[width=8cm]{plots/VV_d01.pdf}
\includegraphics[width=8cm]{plots/PP_d02.pdf}
\includegraphics[width=8cm]{plots/VV_d02.pdf}
\includegraphics[width=8cm]{plots/PP_d03.pdf}
\includegraphics[width=8cm]{plots/VV_d03.pdf}
\caption{
Reconstructed smeared spectral function $\bar{\rho}_\Delta(\omega)$ for
$\Delta$ = 0.1 (top), 0.2 (middle) and 0.3 (bottom).
Left and right columns are those of the pseudo-scalar and vector
channels.
Results are shown by the orange band, while the
ground-state contribution (dot-dashed) and
the sum of the ground-state and first-excited-state contributions (dashed)
are plotted assuming $\delta$-function structures.
}
\label{fig:rhoDelta}
\end{figure}
When the smearing width is large, $\Delta=0.3$ (bottom panels of Fig.~\ref{fig:rhoDelta}), we observe that the
reconstructed smeared spectral function follows the expected form from
the low-lying states in the lower region of $\omega$.
As $\omega$ increases, the spectral function indicates more
contributions from higher excited states.
This is exactly what we expected.
The lattice data at short time separations contain the
information on such states, which is properly extracted by our
method.
From perturbation theory, one expects a constant proportional to the
number of color degrees of freedom $N_c=3$ for the (unsmeared)
spectral function $\rho(\omega)$.
This constant is slightly distorted because of the difference between
$\rho(\omega)$ and $\bar{\rho}(\omega)$.
For $t_0=1$ (in the lattice unit), this should give an exponentially
decreasing spectral function $\bar{\rho}_\Delta(\omega)$ as
$\sim \omega^2e^{-2\omega}$ at large $\omega$, which is indeed observed
in the results.
On the other hand, the resonance structure is smeared out and
invisible with $\Delta=0.3$ as one can see from the contributions of
the ground and first-excited states.
For smaller smearing widths, $\Delta$ = 0.2 and 0.1, a larger
systematic error is expected due to the truncation of the
Chebyshev approximation.
For $\Delta=0.2$, a typical size of the error is about 10--20\% as
one can see from Figures~\ref{fig:cheb_approx} and
\ref{fig:cheb_approx_w0=0.5}.
(Dot-dashed lines correspond to $N=15$.)
This error due to the truncation is not included in the band shown in
Fig.~\ref{fig:rhoDelta}, but taking this moderate additional uncertainty
into account, the reconstructed $\bar{\rho}_\Delta(\omega)$ looks
reasonable for $\Delta=0.2$ (middle panels).
Namely, it follows the expected curve of the ground and the first
excited states up to around the peak of the latter and then drops
slowly due to higher excited state contributions.
The truncation error increases to 50--100\% for $\Delta=0.1$
(upper panels of Figures~\ref{fig:cheb_approx} and
\ref{fig:cheb_approx_w0=0.5}), and we should not take the results
(top panels of Fig.~\ref{fig:rhoDelta}) too seriously.
If it were precisely calculated, we would be able to resolve the
resonance structures as the curve of the low-lying state
contributions suggests.
To do so, we need to include higher order terms of the Chebyshev
approximation, which requires much better precision of the simulation
data.
This reflects the fact that the reconstruction of the full spectral
function from Euclidean lattice data is an ill-posed problem.
One needs extraordinarily high precision in order to achieve a full
reconstruction, as emphasized in \cite{Hansen:2019idp}.
\section{Discussions}
\label{sec:discussions}
As already mentioned earlier, our proposal to calculate the smeared
spectral function is not limited to the case just discussed.
Any sort of weighted integral of the spectral function can be
considered.
A well-known example is the contribution of quark vacuum polarization
to the muon anomalous magnetic moment $g-2$.
Phenomenologically, one employs the optical theorem and dispersion
relation to relate the vacuum polarization function in the Euclidean
domain $\Pi(Q^2)$ to a weighted integral of the experimentally
observed $R$-ratio, or the spectral function.
Then, an integral of $\Pi(Q^2)$ with an appropriate weight gives
the contribution to $g-2$.
In this case, however, a direct expression in terms of the time
correlator $C(t)$ is known \cite{Bernecker:2011gh}, and we do not
really need the approximation method developed in this work.
Another phenomenologically interesting example is the hadronic $\tau$
decay.
Using the finite-energy sum rule \cite{Braaten:1991qm}
the hadronic width can be written as
\begin{equation}
\label{eq:hadronic_tau}
R_\tau^{ud}=12\pi^2 S_{\rm EW}|V_{ud}|^2
\int_0^{m_\tau^2}\frac{ds}{m_\tau^2}
\left(1-\frac{s}{m_\tau^2}\right)^2
\left(1+\frac{2s}{m_\tau^2}\right)
\rho_{V+A}(s),
\end{equation}
where $S_{\rm EW}$ is a short-distance electroweak correction and
$|V_{ud}|$ is a CKM matrix element.
(Here, only the $ud$ contribution is considered. An extension to the
$us$ contribution is straightforward.)
The spectral function $\rho_{V+A}(s)$ denotes a sum of $VV$ and $AA$
channels.
The integral (\ref{eq:hadronic_tau}) reflects a particular kinematics
of $\tau$ decay and has a complicated form, but our method can be
applied for such a case in principle.
A practical question would be, however, whether a good enough approximation
can be achieved with a limited number of terms.
The Laplace transform (\ref{eq:Laplace_smearing}) is often
considered in QCD sum rule analyses \cite{Shifman:1978bx}
because the corresponding Borel transform of perturbative series makes
it more convergent.
Our method allows one to calculate the two-point function after the Borel
transform directly in lattice QCD.
It may be useful to test the perturbative expansion and the
operator product expansion involved in the QCD sum rule calculations.
Conversely, it can also be used to validate lattice QCD calculations
especially in the short-distance region.
Such a test has been performed using short-distance correlators in the
coordinate space \cite{Tomii:2017cbt}, and it is interesting to do the
test for the Borel transformed quantities.
The dispersion integral of the form (\ref{eq:dispersion}) is of course
another type of application of our method.
Although the vacuum polarization function $\Pi(Q^2)$ in the space-like
momenta $Q^2$ can be directly obtained by a Fourier transform of the
lattice correlators, the Chebyshev approximation offers a method to
extract it at arbitrary values of $Q^2$ corresponding to the
momenta of non-integer multiples of $2\pi/L$.
The Laplace transform as introduced in \cite{Feng:2013xsa} can do this
too, but requires the information of large time separations in the
integral of the form $\int_0^\infty dt\,e^{\omega t}C(t)$.
The method developed in this work accesses only relatively short
time separations, at the cost of introducing an approximation.
The systematic error in each case has to be carefully examined.
Extension to the cases of more complicated quantities, such as the
nucleon structure function as measured in deep inelastic scattering or
the inclusive hadron decays, can also be considered.
One of the authors has proposed an analysis to use the dispersion
integral to relate the inclusive decay rate to an amplitude in
space-like momenta \cite{Hashimoto:2017wqo}.
This method suffers from the difficulty of requiring information in
unphysical momentum regions, which may be avoided by the more flexible
integral transformation proposed in this work.
For this class of applications, there are two independent kinematical
variables, $q^2$ and $p\cdot q$, with $p$ the momentum of a decaying
particle (or the initial nucleon) and $q$ the momentum transfer.
In order to apply the method outlined in this work, we need to fix one of
these kinematical variables and introduce a smearing on the other
variable.
More complicated integrals might be useful for the $b\to u\ell\nu$ decay
analysis, for which one introduces elaborate kinematical cuts in order
to avoid backgrounds from $b\to c\ell\nu$.
The flexibility of our method would allow such analyses.
Going to high-temperature QCD, our method would not work as it stands,
because the correlator cannot simply be written as
$\langle\psi|\hat{z}^t|\psi\rangle$
due to the contribution from the opposite time direction.
The operator is then no longer a power of the transfer matrix, and a
direct estimate of the Chebyshev matrix elements is not available.
\section{Conclusions}
\label{sec:conclusions}
Precise reconstruction of the spectral function from lattice data
remains a difficult problem.
Instead, we calculate a smeared counterpart, which retains the
information of the spectral function after its detailed structures
are smeared out.
For the Charmonium spectral function we obtain a reasonably precise
result for a smearing width $\Delta$ = 0.3, which is about 700~MeV in
physical units.
Since the mass splittings experimentally observed are narrower than
this, we are not able to resolve the details of the spectrum.
Taking the limit of a small smearing width requires exponentially better
statistical precision, and one is back to the original problem.
Still, our proposal has advantages compared to previously available
methods.
In contrast to the Bayesian approach \cite{Burnier:2013nla} or the
maximum entropy method \cite{Nakahara:1999vy,Asakawa:2000tr},
our method allows a reliable estimate of the systematic errors, since
it does not assume any statistical distribution for the unknown function.
In principle, the method is deterministic once the input lattice data
are given.
The least-squares fit introduced to enforce the constraint
that the eigenvalues of the Hamiltonian are positive plays only a minor
role, which becomes irrelevant as the lattice data become more precise.
Compared to the Backus-Gilbert method \cite{Hansen:2017mnd}, the
method proposed in this paper is more flexible, as it allows any
predefined smearing function, whereas in the Backus-Gilbert method the
smearing function is determined automatically and is therefore not
controllable.
The variant of the Backus-Gilbert method \cite{Hansen:2019idp} also
has this flexibility.
Our method also allows systematic improvements since the
approximation is achieved by a series of exponentially decreasing
coefficients.
As the statistical precision of the input correlator is improved, one
can include higher order terms and thus improve the approximation.
The method can be used in the analysis of inclusive processes to
define intermediate quantities for which fully non-perturbative
lattice calculation is possible.
Its potential application is not limited to the spectral function for
two-point correlators, but other processes such as deep inelastic
scattering and inclusive $B$ meson decays can be considered.
Since the method does not rely on perturbation theory, the processes
with small momentum transfer can be calculated on a solid theoretical
ground, which was not available so far.
More importantly, we do not have to rely on the assumption of
quark-hadron duality.
\section*{Acknowledgments}
We thank the members of the JLQCD collaboration for discussions and
for providing the computational framework and lattice data.
Numerical calculations are performed
on the Oakforest-PACS supercomputer operated by the Joint Center for
Advanced High Performance Computing (JCAHPC), as well as
on the SX-Aurora TSUBASA at KEK, used under its ``Particle, Nuclear, and Astro Physics Simulation Program.''
This work is supported in part by JSPS KAKENHI Grant Number 18H03710
and by the Post-K supercomputer project through the Joint Institute
for Computational Fundamental Science (JICFuS).
\clearpage
\input{ref.tex}
\end{document}
A smooth complex hyperelliptic curve $C$ is a Riemann surface of genus $g>1$ that is a double covering of the Riemann sphere $\mathbb P^1$. Having such a map makes hyperelliptic curves distinguishable and more accessible in many aspects since, for example, they can be described by an equation of the form $y^2=F(x)$ and in this way one can see an hyperelliptic curve as a subvariety of a weighted projective plane.
A covering $f:C'\rightarrow C$ will be called {\it hyperelliptic} if {\it both} curves are hyperelliptic. Such assumption is actually quite strong: if $f$ is cyclic and unbranched then
$\deg(f)=2$ (see \cite{O},\cite{Ries}). If $f$ is a hyperelliptic double covering, then the number of branch points has to be at most $4$ and there are constrains on a
line bundle that defines the covering (see Section \ref{sec:higher} for details). On the other hand, surprisingly, a non-Galois \'etale triple covering of a genus 2 curve is hyperelliptic, see \cite{LOtriple}.
The Prym theory investigates the (connected component of) kernel of the norm map $\Nm_f:JC'\rightarrow JC$ that can also be seen as a complementary abelian subvariety to the image of Jacobian
$f^*(JC)$ inside $JC'$ and is called the Prym variety of the covering. One can then consider the Prym map that assigns to a covering its Prym variety.
From the point of view of Prym theory, hyperelliptic double coverings are distinguishable, because the Prym map restricted to them is never injective (see the bidual construction, \cite{NO}, or Corollaries \ref{never2} and \ref{never}).
Motivated by this, we investigate the Prym map of
hyperelliptic Klein coverings, i.e. $4:1$ Galois coverings with Galois group isomorphic to the Klein group $\mathbb Z_2^2$
and both curves are hyperelliptic.
In \cite{BOklein} we have shown the injectivity of
the Prym map for the special case of \'etale coverings
over a genus 2 curve. Now, we are able to show the injectivity of the hyperelliptic Prym map in full generality (any genus and including ramified coverings). We show
in Theorems \ref{injectiveKlein}, \ref{mixed_Prym}, \ref{brached_inj_Prym} and \ref{mixed4g+3} the following:
\bigskip
\begin{thm}
Let $\mathcal{RH}_{g,b}$ be the moduli space of hyperelliptic Klein coverings over a curve of genus $g>1$ which are simply ramified in $b$ points. We also include the cases $g=1,b=8$ and $g=1, b=12$. Then the corresponding Prym maps on $\mathcal{RH}_{g,b}$
for $b \in \{ 0,4, 8, 12\} $ are (globally) injective.
\end{thm}
\bigskip
The proofs of these theorems are based on geometric characterizations of such coverings and the description of the 2-torsion points of the involved Jacobians in terms of the Weierstrass points. In all the cases we construct an explicit inverse of the Prym map.
It has been shown that the Prym map of double coverings branched in at least 6 points (hence not hyperelliptic) is globally injective (\cite{NO}). Since hyperelliptic coverings make the bound on the number of branch points sharp, one may regard our result as an important step towards showing the global injectivity of Klein Prym maps (both \'etale and branched).
The paper is organised as follows: Section 2 contains the necessary basic facts about involutions on hyperelliptic curves, following a top-down perspective.
In Section 3 we recall the constructions for double hyperelliptic coverings, giving the bottom-up perspective. Having both perspectives makes it possible to identify the data needed to set up a Klein covering construction.
In Section 4, we generalise results from \cite{BOklein}, i.e. we prove the injectivity of the Prym map for \'etale hyperelliptic Klein coverings of any genus and we also prove the so-called mixed case, i.e. coverings ramified in 8 points.
In Section 5, we show the injectivity of the Prym map for $\mathbb Z_2^2$ hyperelliptic coverings branched in 12 points and another mixed case, namely coverings branched in 4 points.
\subsection*{Acknowledgements}
The first author has been supported by the Polish National Science Center project number 2019/35/ST1/02385.
The second author is grateful for the hospitality during her stay at the Jagiellonian University in Krak\'ow, where part of this work was carried out. Her visit to Krak\'ow was covered by the International Cooperation Funding Programme at the Faculty of Mathematics and Computer Science of the Jagiellonian University under the Excellence Initiative at
the Jagiellonian University.
\section{Preliminaries}
In this section we describe the geometry of hyperelliptic curves that carry at least one additional involution, and the corresponding coverings, from a top-down perspective. Some of the results can already be found in \cite{GS}, which is devoted to hyperelliptic curves with extra involutions.
We start by recalling some basic facts about involutions on hyperelliptic curves. Let $H$ be a hyperelliptic curve of genus $g(H)=g$.
For simplicity, the hyperelliptic involution will always be denoted by $\iota$ (or $\iota_H$ if it is important to remember the curve). Let $W=\{w_1,\ldots,w_{2g+2}\}$ denote the set of Weierstrass points, which is the same as the set of ramification points of the hyperelliptic covering.
The following propositions are well-known facts, see for example \cite{Sh}.
\begin{prop}\label{prop:permute_Wpoints}
Let $H$ be a hyperelliptic curve and $\iota$ the hyperelliptic involution. Then $\iota$ commutes with any automorphism of $H$. Every automorphism of $H$ is a lift of an automorphism of $\mathbb P^1$ and it restricts to a permutation of $W$. In particular, if $\mathbb Z_2^n\subset\Aut(H)$ then $n\leq 3$.
\end{prop}
\begin{prop}\label{prop:involutions}
Let $\t\in\Aut(H)$ be a (non-hyperelliptic) involution on $H$. By $H_\t=H/\t$ we denote the quotient curve. If $g(H)=2k$ then both $\t$ and $\iota\tau$ have exactly 2 fixed points and $g(H_\t)=g(H_{\iota\tau})=k$. If $g(H)=2k+1$ then either $\t$ is fixed point free and $\iota\tau$ has 4 fixed points or $\t$ has 4 fixed points and $\iota\tau$ is fixed point free.
\end{prop}
In order to make the statements easier and more compact, we abuse notation and call a genus 1 curve with a chosen double covering of $\mathbb P^1$ hyperelliptic.
\begin{cor}\label{hypquot}
With the notation from Proposition \ref{prop:involutions}, the curves $H_\t$ and $H_{\iota\tau}$ are hyperelliptic, and their hyperelliptic involutions lift to the involution $\iota$ on $H$.
\end{cor}
Assume there are two involutions $\sigma,\t\in Aut(H)$ such that $\sigma\t=\t\sigma$, (i.e., $\langle \sigma,\t \rangle \simeq \mathbb Z_2^2$). In such a case, the covering $H\to H/\langle \sigma,\t\rangle$ will be Galois with the deck group isomorphic to the Klein four-group.
Since we are interested in Prym maps, we make another natural assumption, namely $\iota\notin \langle \sigma,\t\rangle$, hence $g(H/\langle \sigma,\t\rangle)>0$. The groups satisfying both conditions will be called {\it Klein subgroups}.
We start by excluding the case when the genus of the curve is even, using the following fact.
\begin{lem}
Let $H$ be a hyperelliptic curve of genus $g(H)=2k$. Then, there does not exist a Klein subgroup $\langle \sigma,\t \rangle \subset\Aut(H)$.
\end{lem}
\begin{proof}
If $g(H)$ is even, then $|W|=4k+2$, which is not divisible by $4$, hence the action of the subgroup $\langle \sigma,\t\rangle \cong\mathbb Z_2\times\mathbb Z_2$ cannot be free on $W$. On the other hand, Proposition \ref{prop:Angela} and Figure \ref{fig:hyperelliptic_branched+2} show that the ramification points of a double covering cannot be Weierstrass points, so every non-trivial element of the subgroup acts on $W$ without fixed points, see also \cite[Lemma 1]{GS}.
\end{proof}
Now, we are left with two cases, either $g(H)=4k+1$ or $g(H)=4k+3$. The next proposition is essentially rephrasing \cite[Lemma 2.13]{BOklein}.
\begin{prop}\label{prop:cominv}
Let $H$ be a hyperelliptic curve with the group of commuting involutions $\langle \iota,\sigma,\t\rangle \subset\Aut(H)$. Then
\begin{itemize}
\item there exists a unique Klein subgroup of fixed point free involutions if and only if $g(H)=4k+1$.
\item there exists a unique Klein subgroup of involutions with fixed points if and only if $g(H)=4k+3$.
\end{itemize}
\end{prop}
\begin{proof}
Assume $g(H)=2n+1$.
Without loss of generality, by Proposition \ref{prop:involutions}, we can assume that $\sigma,\t$ are fixed point free. Then the existence of a Klein subgroup of fixed point free involutions is equivalent to the involution $\sigma\t$ being fixed point free.
Denote by $g_{\alpha}$ the genus of the quotient curve $H/\alpha$ for $\alpha$ an involution and by $g_0$ the genus of $H/\langle\sigma,\t\rangle$. According to Accola's theorem for the group $\langle \sigma,\t\rangle$ we obtain
\begin{eqnarray*}
2(2n+1) + 4g_0 & = & 2g_{\sigma} + 2g_{\tau} + 2g_{\sigma\tau}\\
2n+1 +2g_0 & = &g_{\sigma} + g_{\tau} + g_{\sigma\tau} \\
&=& 2n + 2 + g_{\sigma\tau}.
\end{eqnarray*}
where in the last line we used that the \'etale double quotients satisfy $g_\sigma=g_\tau=n+1$. Hence $g_{\sigma\tau}=2g_0-1$ is odd; since $g_{\sigma\tau}=n+1$ if $\sigma\tau$ is fixed point free and $g_{\sigma\tau}=n$ if it has four fixed points, $\sigma\t$ is fixed point free if and only if $n=2k$.
Analogously, the group $\langle\iota\sigma,\iota\tau\rangle$ contains $\sigma\t$, so it consists only of involutions with fixed points if and
only if $n=2k+1$.
The uniqueness of the groups follows from the fact that any other subgroup contains $\iota$ or contains both fixed-point free involutions and involutions with fixed points.
\end{proof}
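For illustration (our own worked example, not needed for the proof), take $g(H)=4k+1$, i.e. $n=2k$: the \'etale double quotients have genus $g_\sigma=g_\tau=g_{\sigma\tau}=2k+1$, the Klein quotient has genus $g_0=k+1$, and Accola's relation is indeed satisfied,
$$2(4k+1)+4(k+1)=12k+6=2\big(3(2k+1)\big).$$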
\section{Hyperelliptic double coverings}\label{sec:higher}
In this section, we focus on the bottom-up perspective.
According to Proposition \ref{prop:involutions}, there are three possibilities for a hyperelliptic double covering, namely \'etale coverings and
coverings branched in 2 or 4 points.
\subsection{Coverings branched in 2 points}
Let us assume that $H$ is hyperelliptic and $f: C\rightarrow H$ is a covering branched in 2 points. First, we give a necessary and sufficient condition for $C$ to be hyperelliptic.
\begin{prop}\label{prop:Angela}
Let $f: C\rightarrow H$ be a covering of a hyperelliptic curve $H$ branched in 2 points $P,Q\in H$. Then $C$ is
hyperelliptic if and only if $P=\iota Q$ and the line bundle defining the covering is $\mathcal{O}_H(w)$ for some
Weierstrass point $w$.
\end{prop}
\begin{proof}
Let $\eta \in \Pic^1(H)$ be the element defining the covering $f: C\rightarrow H$, so $\eta^2 = \mathcal{O}_H(P + Q)$.
Suppose $P= \iota Q$ and $\eta = \mathcal{O}_H(w)$ with $w$ a Weierstrass point. By the projection formula
\begin{eqnarray*}
h^0(C, f^*\eta ) &= & h^0(H, \eta \otimes f_*\mathcal{O}_C) \\
& = & h^0(H, \eta ) + h^0(H, \mathcal{O}_H) \\
& =& 2.
\end{eqnarray*}
Since $\deg f^*\eta =2$, this implies that $C$ is hyperelliptic.
Now assume that $C$ is hyperelliptic. Let $h_C$ and $h_H$ be the hyperelliptic divisors on $C$, respectively on $H$. Notice that the hyperelliptic involution on $C$ is a lift of the hyperelliptic
involution on $H$. Since every automorphism of $C$ commutes with the hyperelliptic involution, the ramification
locus of $f$ is invariant under the hyperelliptic involution, so either it consists of two points conjugated to
each other or of two Weierstrass points. In the latter case, $\eta$ is a square root of $\mathcal{O}_H(w_1 + w_2)$,
where $w_1, w_2$ are Weierstrass points, but a necessary condition for the hyperelliptic involution $\iota$ to lift
to an involution on $C$ is $\iota^* \eta \simeq \mathcal{O}_H(h_H) \otimes \eta^{-1} \simeq \eta$, that is, $\eta^2 \simeq
\mathcal{O}_H(h_H)$, a contradiction. Therefore, the branch locus is of the form $\{ P, \iota P \}$. By the projection
formula
$$
2= h^0(C, \mathcal{O}_C(h_C) ) = h^0(H, f_*(f^*\mathcal{O}_H(w))) = h^0(H, \mathcal{O}_H(w)) + h^0(H, \mathcal{O}_H(w) \otimes \eta^{-1})
$$
with $w \in H$ a Weierstrass point. This implies that $\eta$ is of the form $\mathcal{O}_H(w)$.
\end{proof}
One constructs a commutative diagram of hyperelliptic curves
(left Diagram \ref{diag:Const_tildeC2})
starting from $2g+3$ given points in $\mathbb P^1$. Let $[y],[z],[w_1],\ldots,[w_{2g+1}]\in\mathbb P^1$.
Let $H$ be the hyperelliptic curve of genus $g$ branched over $[z],[w_1],\ldots,[w_{2g+1}]$, with ramification points $z,w_1,\ldots,w_{2g+1}$ lying over the corresponding bracketed points, and let $y_1,y_2$ be the
fibre over $[y]$. Let $f:C\rightarrow H$ be a double covering branched in $y_1,y_2$ and defined by $\mathcal{O}(z)$. The hyperelliptic curve $C$ of genus $2g$ can also be
constructed in the following way. Let $p:\mathbb P^1\rightarrow\mathbb P^1$ be the double covering branched in $[y],[z]$. For $i=1,2$
denote by $[y'],[z'],[w_1^{i}],\ldots,[w_{2g+1}^i] \in\mathbb P^1$ the respective preimages. Then $C$ is a double covering of $\mathbb P^1$
branched in $[w_1^{i}],\ldots,[w_{2g+1}^i]$. Clearly, the preimages of $[y'],[z']$ in $C$ coincide with the appropriate preimages of $y_1,y_2,z$ (see right Diagram \eqref{diag:Const_tildeC2}).
\begin{equation} \label{diag:Const_tildeC2}
\xymatrix@R=.7cm@C=.7cm{
& C_{2g} \ar[dr]^{2:1} \ar[dl]_{2:1} & \\
H' _{g}\ar[dr]_{2:1} & & H_g \ar[dl]^{2:1}\\
&\mathbb P^1& } \hspace{2.6cm}
\xymatrix@R=1.1cm@C=1.1cm{
C \ar[d]_{2:1} \ar[r]^{f} & H \ar[d]^{2:1} \\
\mathbb P^1 \ar[r]^p & \mathbb P^1
}
\end{equation}
\begin{figure}[h!]
\includegraphics[width=13cm, trim={0.3cm 9cm 0cm 8cm}, clip]{hyperelliptic_branched+2.pdf}
\caption{Hyperelliptic coverings ramified in two points}
\label{fig:hyperelliptic_branched+2}
\end{figure}
Moreover, the Prym variety of the covering $f:C\rightarrow H$ is the Jacobian of a hyperelliptic curve $H'$ of genus $g$, appearing in the left Diagram \ref{diag:Const_tildeC2}, defined by exchanging the roles of
$y$ and $z$. The distribution of the Weierstrass points is illustrated in Figure \ref{fig:hyperelliptic_branched+2}.
\begin{cor}\label{never2}
The construction shows that the Prym map of a double covering branched in 2 points is never injective, since the Jacobian of $H'$ does not recognise the branch points. In particular, moving the point $[z]$ in $\mathbb P^1$ yields a one-dimensional family of coverings $C\rightarrow H$ with the same Prym variety.
\end{cor}
\subsection{\'Etale coverings and coverings branched in 4 points}
Now, we consider a hyperelliptic curve $Y$ of genus $g\geq 2$ and a double \'etale covering $\pi: X \rightarrow Y$, so $X$ is of genus
$2g-1$. According to
\cite[Proposition 4.2]{bo} $X$ is hyperelliptic if and only if the 2-torsion point $\eta$ defining $\pi$ is of the form $\eta=\mathcal{O}_Y(w_1-w_2)$
where $w_1$ and $w_2$ are Weierstrass points. Assume that $X$ is hyperelliptic, let $\iota$ be the hyperelliptic involution and let $\tau$ be the involution defining the covering $\pi$. Set
$Y':= X / \langle \iota\tau \rangle$, which is of genus $g-1$. The double covering $f: X \rightarrow Y'$ is ramified in four points. The
hyperelliptic involution $\iota$ on $X$ descends to $Y'$ in such a way that $Y'$ is also hyperelliptic (see Diagram \eqref{diag:diag3}).
Conversely, starting from a hyperelliptic curve $Y'$ of genus $g-1$ we can give necessary and sufficient condition
for $X$ to be hyperelliptic.
\begin{prop} \label{hyp-cov}
Let $Y'$ be a hyperelliptic curve of genus $g-1$ and $f: X \rightarrow Y'$ a double covering ramified in
four points defined by a line bundle $\eta \in \Pic^{2}(Y')$, such that $\eta^2 \simeq \mathcal{O}_{Y'}
(B)$, where $B$ is the branch
locus of the covering.
Then $X$ is hyperelliptic if and only if $\eta = \mathcal{O}_{Y'}(h_{Y'}) $, with $h_{Y'}$ the hyperelliptic divisor on $Y'$
and $B \in |2h_{Y'}|$ is reduced.
\end{prop}
\begin{proof}
Since $f_*\mathcal{O}_X \simeq \mathcal{O}_{Y'} \oplus \eta^{-1}$, by using the projection formula one computes
$$
H^0(X, f^* \mathcal{O}_{Y'}(h_{Y'}) ) = H^0(Y', \mathcal{O}_{Y'}(h_{Y'}) ) \oplus H^0(Y', \mathcal{O}_{Y'}).
$$
So $\dim H^0(X, f^* \mathcal{O}_{Y'}(h_{Y'}) ) = 3$. According to Clifford's Theorem \cite[Chapter III]{acgh}, $X$ is
hyperelliptic and
$ f^* h_{Y'}$ is a multiple of the hyperelliptic divisor; this proves the ``if'' direction. Conversely, suppose that $X$ is hyperelliptic and the double covering $f:X \rightarrow Y'$ is given by a line bundle $\eta $ such that $\eta^2 \simeq \mathcal{O}_{Y'}(B)$, with $B$ the
(reduced) branch locus of $f$. From the
commutativity of the Diagram \eqref{diag:diag3} the union of $B$ and the set of Weierstrass points of $Y'$ map to the
branch locus of the map $Y \rightarrow \mathbb P^1$, which has cardinality $2g+2$. This implies that $B \in |2h_{Y'}|$, so $\eta $
is a square root of $\mathcal{O}_{Y'}(2h_{Y'})$. Since
$X= \Spec(\mathcal{O}_{Y'} \oplus \eta^{-1})$, the hyperelliptic involution $j$ on $Y'$ lifts to an involution on $X$ if and only if
$j^* \eta \simeq \eta$. Then, either $\eta = \mathcal{O}_{Y'}(h_{Y'})$ or $h^0(Y', \eta ) =1$. In the latter case, if
$\eta=\mathcal{O}_{Y'}(p_1+q_1)$, then $j(p_1) = p_1 $ and $j(q_1) = q_1$, that is, $\eta$ is defined by the sum of
Weierstrass points, say $\eta= \mathcal{O}_{Y'}(w_1+w_2)$. Since $X$ is hyperelliptic, $f^*\eta \in |2h_X|$ but this contradicts the projection formula.
Therefore, $\eta = \mathcal{O}_{Y'}(h_{Y'}) $.
\begin{equation} \label{diag:diag3}
\xymatrix@R=.7cm@C=.7cm{
& X_{2g-1} \ar[dr]^{\pi} \ar[dl]_{f} & \\
Y' _{g-1}\ar[dr]_{2:1} & & Y_g \ar[dl]^{2:1}\\
&\mathbb P^1& }\hspace{2.6cm}
\xymatrix@R=1.1cm@C=1.1cm{
X_{2g-1} \ar[d]_{2:1} \ar[r]^{f} & Y'_{g-1} \ar[d]^{2:1} \\
\mathbb P^1 \ar[r]^p & \mathbb P^1
}
\end{equation}
One can see this construction from the perspective of points in $\mathbb P^1$. Let $[x],[y],[w_1],\ldots,[w_{2g}]\in\mathbb P^1$ and
let $Y_g$ be the hyperelliptic genus $g$ curve branched in these points. Let $X_{2g-1}\rightarrow Y_g$ be the \'etale double covering defined by $\mathcal{O}(x-y)$,
where $ x, y$ are the preimages of
$[x], [y]$ respectively. On the other hand $X_{2g-1}$ can be also
constructed in the following way. Let $p:\mathbb P^1\rightarrow\mathbb P^1$ be the double covering branched in $[x],[y]$. For $i=1,2$
denote by $[w_1^{i}],\ldots,[w_{2g}^i]\in \mathbb P^1$ the respective preimages under $p$. Then $X_{2g-1}$ is a double covering of $\mathbb P^1$
branched in $[w_1^{i}],\ldots,[w_{2g}^i]$.
The curve $Y'_{g-1}$ is constructed as double cover of $\mathbb P^1$ branched in $[w_1],\ldots,[w_{2g}]$ and one obtains the commutativity of the right Diagram \eqref{diag:diag3}. The covering $X_{2g-1}\rightarrow Y'_{g-1}$ is branched in
$x,\iota x,y,\iota y$ and defined by the hyperelliptic bundle (see Figure
\ref{fig:double_hyperelliptic_covers}).
\begin{figure}[h!]
\includegraphics[width=11.3cm, trim={0.3cm 8.5cm 0cm 8.2cm}, clip]{hyperelliptic_covers.pdf}
\caption{Distribution of Weierstrass points on hyperelliptic covers}
\label{fig:double_hyperelliptic_covers}
\end{figure}
\bigskip
\end{proof}
\subsubsection{The Prym map}
Let $\mathcal{R}_{g-1,4}^H:=\{(Y' , B) \ \mid \ B \in |2h_{Y'}|\}$ be the space parametrising double coverings $f:X \rightarrow Y'$
ramified in four points where both curves are hyperelliptic; according to the previous proposition, such a covering depends only
on the choice of the branch divisor in $|2h_{Y'}|$. We denote by $\mathcal{J}_g^H \subset \mathcal{A}_g$ the locus of the
hyperelliptic Jacobians inside the moduli space of principally polarised abelian varieties of dimension $g$. Then $\mathcal{J}_g^{H, (1,2, \dots, 2)}$ is the moduli space of abelian varieties which are quotients of hyperelliptic Jacobians by 2-torsion points of the form $w_i-w_j$.
\begin{prop}\label{twocases}
The relation given by left Diagram \eqref{diag:diag3} induces an isomorphism
$$
\gamma: [f:X \rightarrow Y'] \mapsto [\pi: X \rightarrow Y]
$$
fitting in the following commutative diagram
\begin{equation} \label{diag4}
\xymatrix@R=1cm@C=1cm{
\mathcal{R}_{g-1,4}^H \ar[rr]^{\gamma} \ar[dd]_{\Pr_{g-1}} \ar@{->}[rdrd]& & \mathcal{R}_{g}^H \ar[dd]^{\Pr_g} \ar@{->} [ldld] \\
& {} & \\
\mathcal{J}_g^{H, (1,2, \dots, 2)} & & \mathcal{J}_{g-1}^H
}
\end{equation}
where the diagonal arrows are the corresponding forgetful maps. In particular $\deg
\Pr_{g-1}\mid_{\mathcal{R}_{g-1,4}^H} = {2g+2 \choose 2}$
and $\Pr_{g}\mid_{\mathcal{R}_{g}^H}$ is a $\mathbb P^2$-bundle.
\end{prop}
\begin{proof}
It is well-known that the Prym variety of an \'etale double covering $X \rightarrow Y$ over a hyperelliptic curve is
isomorphic to the product of the Jacobians of the quotient curves $X/\iota \simeq \mathbb P^1$ and $X/\iota \tau$
(see \cite{M}, for instance), which gives $P(X / Y) \simeq JY'$ as a principally polarized abelian variety.
This proves the commutativity of the top right triangle of the diagram. Similarly, we have
$P(X / Y') \simeq JY/ \langle \beta \rangle$, with $\beta=w_i-w_j \in JY[2]$ the element defining the \'etale
covering $\pi$. This shows the commutativity of the top left triangle of the diagram.
\end{proof}
\begin{cor}\label{never}
Hyperelliptic Prym maps of \'etale double coverings and double coverings branched in 4 points are
never injective.
\end{cor}
\subsection{Useful notation}
We recall the following notation from \cite{bo}, which helps in dealing with abelian subvarieties. Let $X$ be an abelian variety and let $M_i$ be abelian varieties such that there
exist embeddings $M_i \hookrightarrow X$ for $i = 1,\ldots,k$. We write
$$
X = M_1 \boxplus M_2 \ldots \boxplus M_k
$$
if $\epsilon_{M_1} + \epsilon_{M_2} + \ldots +\epsilon_{M_k} = 1$, where $\epsilon_{M_i}$ are the associated symmetric idempotents. In particular,
$X = M \boxplus N $ if and only if $(M,N)$ is a pair of complementary abelian subvarieties of $X$. If the $M_i$'s are general enough, then
the decomposition is unique up to permutation
(\cite[Proposition 5.2]{bo}); see also Lemma \ref{generic}.
We will also use the following notation: if $f:X\to Y$ is a covering and $f^*$ is not an embedding, we denote the image $f^*(JY)$ by $JY^*$.
In the sequel we will denote by the same letter an automorphism of the covering curve and its extension to the Jacobian, except for the hyperelliptic involution, whose extension is $-1$. We will also denote the identity by $1$. By $m_k$ we denote multiplication by $k$ on an abelian variety.
\bigskip
\section{Prym maps of hyperelliptic \'etale Klein coverings}\label{etaleKlein}
In \cite{BOklein}, we have considered \'etale Klein coverings of genus 2 curves and have shown that the Prym map is injective in this case. We now generalise the result to hyperelliptic Klein coverings of higher genera.
\subsection{From the bottom construction}
Let $H$ be a genus $g$ hyperelliptic curve with
Weierstrass points $w_1,\ldots,w_{2g-1},x,y,z\in H$
and $[w_1],\ldots,[w_{2g-1}],[x],[y],[z]\in \mathbb P^1$ be the set of $2g+2$ branched points respectively. Set $\eta=\mathcal{O}_H(x-y),\ \xi=\mathcal{O}_H(y-z)$. According to
\cite[Theorem 4.7]{bo} the covering $\widetilde{C}$ associated to the non-isotropic Klein group $G=\{0,\eta,\xi,\eta+\xi\} \subset JH[2]$ is hyperelliptic and since
the $4:1$ map $\widetilde{C}\rightarrow H$ is \'etale, $\widetilde{C}$ is of genus $4g-3$.
Let $C_x$ be the double covering of $H$ defined by $\xi$, $C_y$ the one defined by $\eta+\xi$ and $C_z$ the one defined by $\eta$, all three of genus $2g-1$. Then the Prym varieties of these coverings are Jacobians of curves, denoted by $H_x,\ H_y,\ H_z$ respectively, of genus $g-1$. Recall that for $j\in\{x,y,z\}$, the curve $H_j$ is given by choosing $[j], [w_1],\ldots,[w_{2g-1}]$ as branch points.
These curves fit into the following commutative diagram (we draw only two curves to make it easier to read).
\begin{equation} \label{diagetale}
\xymatrix@R=.6cm@C=.6cm{
&& \widetilde{C} \ar[dr]^{et} \ar[dl]_{et} && \\
& C_{x}\ar[dr]^{et} \ar[dl]_{+4} & & C_{y}\ar[dr]^{+4} \ar[dl]_{et} & \\
H_{x}\ar[drr] &&H \ar[d] && H_{y}\ar[dll] \\
&& \mathbb P^1 &&
}
\end{equation}
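As a quick consistency check of the genus bookkeeping (our own remark): since $\widetilde{C}\rightarrow H$ is \'etale of degree $4$, the Riemann--Hurwitz formula gives
$$2g(\widetilde{C})-2=4(2g-2), \qquad \mbox{so} \qquad g(\widetilde{C})=4g-3,$$
while each intermediate curve $C_j$, being an \'etale double covering of $H$, has genus $2g-1$.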
\subsection{Decomposition of $J\widetilde{C}$}
In order to decompose the Jacobian of $\widetilde{C}$ and describe the Prym variety $P(\widetilde{C}/H)$
of the covering $\widetilde{C} \rightarrow H$,
we will use a top-down perspective. Let $\widetilde{C}$ be a hyperelliptic curve of genus $4g-3$ with commuting fixed point free involutions $\sigma,\t,\sigma\t\in\Aut(\widetilde{C})$. Without loss of generality, we can assume $C_x=\widetilde{C}/\sigma, C_y=\widetilde{C}/\t, C_z=\widetilde{C}/\sigma\t$ and $H=\widetilde{C}/\langle \sigma,\t\rangle $. With this notation, we have that $H_x=\widetilde{C}/ \langle \sigma,\iota\t \rangle$ and the following diagram commutes
\begin{equation}\label{diag:injectionJHx}
\xymatrix@R=.9cm@C=.6cm{
& \widetilde{C} \ar[dr]^{+4} \ar[dl]_{et} \ar[d]_{+4} & \\
C_{\sigma}\ar[dr]_{+4} & C_{\iota \t} \ar[d]_{+2} & C_{\iota\sigma\t} \ar[dl]^{+2}\\
&H_x&
}
\end{equation}
where $C_{\alpha}=\widetilde{C}/\alpha$ with $\alpha$ an involution. Analogously one checks that $H_y=\widetilde{C}/\langle \t,\iota\sigma\rangle$ and $H_z=\widetilde{C}/\langle \sigma\t,\iota\t\rangle$.
\begin{prop} \label{DecompJC-et}
The Jacobian of $\widetilde{C}$ is decomposed in the following way
$$J\widetilde{C}=JH^*\boxplus JH_x\boxplus JH_y\boxplus JH_z.$$
In particular, $P(\widetilde{C}/H)=JH_x\boxplus JH_y\boxplus JH_z$.
\end{prop}
\begin{proof}
The proof follows from straightforward computation. Firstly, note that Diagram \ref{diag:injectionJHx} shows that $JH_x$ is embedded in $JC_{\iota\sigma\t}$ which is embedded in $J\widetilde{C}$, hence $JH_x$ is embedded in $J\widetilde{C}$ with the restricted polarisation type being four times the principal polarisation on $JH_x$. Analogously, $JH_y$ and $JH_z$ are also embedded in $J\widetilde{C}$.
Since the covering $\widetilde{C}\rightarrow H$ is \'etale, by \cite[Proposition 11.4.3]{bl}, the pullback map is not an embedding, so we denote the image of $JH$ as $JH^*$. Moreover, $JH^*=\im(1+\sigma+\t+\sigma\t)$. Since the hyperelliptic involution extends to $(-1)$ on $J\widetilde{C}$ we have that $JH_x=\im(1+\sigma-\t-\sigma\t)$, $JH_y=\im(1-\sigma+\t-\sigma\t)$,
$JH_z=\im(1-\sigma-\t+\sigma\t)$. Observe that the sum of the endomorphisms defining these Jacobians is 4, which is also the exponent of each subvariety in
$J\widetilde{C}$, so $\epsilon_{JH^*}+\epsilon_{JH_x}+\epsilon_{JH_y}+\epsilon_{JH_z}=1$.
Since, by definition, the Prym variety is complementary to $JH^*$, we get that $P(\widetilde{C}/H)=JH_x\boxplus JH_y\boxplus JH_z$.
\end{proof}
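Let us also record the dimension count behind Proposition \ref{DecompJC-et}: $\dim J\widetilde{C}=4g-3$, while $\dim JH^*+\dim JH_x+\dim JH_y+\dim JH_z=g+3(g-1)=4g-3$; in particular, $\dim P(\widetilde{C}/H)=3g-3$.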
\begin{prop}\label{4.2}
The addition map $\psi:JH_x\times JH_y\times JH_z\longrightarrow P(\widetilde{C}/H)$ is a polarised isogeny of degree $4^{2g-2}$ and its kernel is contained in the set of 2-torsion points.
\end{prop}
\begin{proof}
In order to compute
the kernel of $\psi$ we consider the description of $JH_j$, for $j\in\{x,y,z\}$, as fixed loci inside the
Jacobian of $\widetilde{C}$:
$$
JH_{x} \subset \Fix(\sigma, \iota\tau), \quad JH_{y}\subset \Fix(\tau, \iota\sigma), \quad JH_{z}\subset\Fix(\sigma\tau, \iota\sigma).
$$
Let $(a,b,c) \in \Ker \psi $, then $c=-a-b$. Applying $\iota\sigma$ and $\iota\tau$ to $c=-a-b$ we get
$$
-a-b=\iota\sigma (-a-b) = -\iota a -b \quad \textnormal{ and } \quad -a-b=\iota\tau (-a-b) = -a - \iota b ,
$$
so $\iota a=a$ and $\iota b= b$, that is, $a,b$ and $c$ are 2-torsion points in their respective Jacobians. This implies
\begin{equation}\label{eq:kervarphi}
\Ker \psi = \{ (a,b, -a-b) \ \mid \ a \in JH_{x}[2], \ b \in JH_{y}[2] \}.
\end{equation}
The restricted polarisation on $JH_j$ is of type $(4,\ldots,4)$. Since $P(\widetilde{C}/H)$ is complementary to $JH^*$, it has the complementary type, which is $(1,\ldots,1,4,\ldots,4)$ with $2g-2$ ones and $g-1$ fours.
Moreover, $\psi$ as an addition map is polarised and the degree is exactly $4^{2(g-1)}$.
\end{proof}
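Note that the degree in Proposition \ref{4.2} can also be read off directly from \eqref{eq:kervarphi}: the kernel is parametrised by the pairs $(a,b)\in JH_x[2]\times JH_y[2]$, so $|\Ker\psi|=2^{2(g-1)}\cdot 2^{2(g-1)}=4^{2g-2}$.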
\subsection{The Prym map}
Let $\mathcal{RH}_{g,0}$ denote the moduli space parametrising the pairs $(H, G)$ with $H$ a hyperelliptic curve of genus $g$ and $G$ a Klein subgroup of $JH[2]$ whose generators are differences of Weierstrass points, so the corresponding covering curve $\widetilde{C}$ is hyperelliptic of genus $4g-3$. We call the elements of $\mathcal{RH}_{g,0}$
\'etale hyperelliptic Klein coverings. Set
$$
\delta:= (\underbrace{1,\ldots,1}_{2g-2}, \underbrace{4, \ldots, 4}_{g-1})$$
and let $\mathcal{A}_{3g-3}^{\delta}$ denote the moduli space of
polarised abelian varieties of dimension $3g-3$
and polarisation of type $\delta$.
The main aim of this section is to prove that the Prym map
$$
Pr_{4g-3,g}^H: \mathcal{RH}_{g,0} \rightarrow \mathcal{A}_{3g-3}^{\delta}, \qquad (H,G) \mapsto (P(\widetilde{C}/H), \Xi)
$$
of \'etale hyperelliptic Klein coverings is injective. We will show this by constructing the inverse map explicitly. We start by showing
the following equivalence of data, which generalises
\cite[Theorem 3.1]{BOklein}.
\begin{prop} \label{equivalence1}
The following data are equivalent:
\begin{enumerate}
\item a triple $(H, \eta, \xi)$, with $H$ a hyperelliptic curve of genus $g$ and $\eta$ and $\xi$ differences of
Weierstrass points such that the Klein subgroup $G=\langle \eta, \xi \rangle$ of $JH[2]$ is non-isotropic;
\item a hyperelliptic curve $\widetilde{C}$ of genus $4g-3$ with
$\mathbb Z_2^3 \subset \Aut(\widetilde{C})$;
\item a hyperelliptic curve $H$ of genus $ g$ together with the choice of $3$ Weierstrass points;
\item a set of $2g+2$ points in $\mathbb P^1$ with a chosen triple of them, up to projective equivalence (respecting the triple).
\end{enumerate}
\end{prop}
\begin{proof}
Equivalences $(1) \Leftrightarrow (3) \Leftrightarrow (4)$ are obvious. The equivalence $(3) \Leftrightarrow (2)$ follows from \S 3.1.
\end{proof}
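As a sanity check of the equivalence $(1)\Leftrightarrow(4)$, note that $2g+2$ points in $\mathbb P^1$ modulo the $3$-dimensional group $\mathrm{PGL}_2(\mathbb C)$ give
$$\dim \mathcal{RH}_{g,0}=(2g+2)-3=2g-1,$$
which is precisely the dimension of the hyperelliptic locus in $\mathcal{M}_g$, as it should be, since the choice of the distinguished triple is a discrete datum.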
\begin{cor}
The moduli space $\mathcal{RH}_{g,0}$ is irreducible.
\end{cor}
\begin{proof}
It follows from the equivalence $(1) \Leftrightarrow (4)$ of
Proposition \ref{equivalence1}.
\end{proof}
\begin{lem} \label{multtwo}
Let $JH_x,JH_y,JH_z$ be as before and let $P=P(\widetilde{C}/H)$. Let $Z=(JH_x \times JH_y \times JH_z)[2]$ be the set of 2-torsion points on the product and let $G_P=\psi(Z)$.
By $m_2$ we denote the multiplication by 2.
Consider the following commutative diagram
\begin{equation}
\xymatrix@R=.9cm@C=.9cm{
JH_x \times JH_y \times JH_z \ar[r]^(.7){\psi} \ar[d]_{m_2} & P
\ar[d]^{\pi_{P}} \\
JH_x \times JH_y \times JH_z \ar[r]^-{p}& P/G_P
}
\end{equation}
Then the map $p$ is a polarised isomorphism of principally polarised abelian varieties.
\end{lem}
\begin{proof}
Note that $JH_x \times JH_y \times JH_z$ in the top left has the product polarisation of type four times the principal one. Hence, $Z$ is an isotropic subgroup of the kernel of the polarising map. In particular $m_2$, having $Z$ as its kernel, is a polarised isogeny (see also \cite[Cor. 2.3.6]{bl}). Moreover, $Z$ is also the kernel of $\pi_P\circ\psi$, hence both $\psi$ and $\pi_P$ are polarised isogenies. Then, the isomorphism theorem yields the existence and uniqueness of the isomorphism $p$.
\end{proof}
\begin{cor}\label{3.5}
Let $\Xi$ be the restricted polarisation on $P$ and $\phi_\Xi$ its polarising isogeny. Then $G_P=\ker(\phi_\Xi)\cap P[2]\simeq\mathbb Z_2^{2g-2}$.
\end{cor}
\begin{proof}
Clearly $G_P \subset P[2]$ and since $P/G_P$ is principally polarised, $G_P$
is a maximal isotropic subgroup of $\ker(\phi_\Xi)$.
Hence $G_P\subset \ker(\phi_\Xi)\cap P[2]$ and the
claim follows by computing the cardinalities.
\end{proof}
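For completeness, we spell out the cardinality count. The type of $\Xi$ gives $|\ker(\phi_\Xi)|=(4^{g-1})^2=2^{4g-4}$, so a maximal isotropic subgroup of $\ker(\phi_\Xi)$ has order $2^{2g-2}$. On the other hand, $|Z|=2^{6(g-1)}$ and, by \eqref{eq:kervarphi}, $|\Ker\psi|=2^{4(g-1)}$, hence $|G_P|=|Z|/|\Ker\psi|=2^{2g-2}$, as claimed.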
\begin{lem} \label{Intersection}
We have $G_P = JH_x \cap JH_y \cap JH_z$.
\end{lem}
\begin{proof}
Let $a \in JH_x \cap JH_y \cap JH_z$, then $a$ is fixed by all the involutions in $J\widetilde{C}$, in particular it is a 2-torsion point. Hence $a=a+a+a=\psi(a,a,a)
\in \psi(Z)$. Conversely, if $a+b+c \in G_P$, with $a \in JH_x=\Fix(\sigma,\iota\t)^0$, $b \in JH_y=\Fix(\iota\sigma,\t)^0$ and $c \in JH_z= \Fix(\iota\sigma,\sigma\t)^0$, then $a,b,c \in P[2]$. One checks that
$a,b,c \in JH_x \cap JH_y \cap JH_z$.
\end{proof}
Let $\pi_j : \widetilde{C} \rightarrow H_j$ be the 4:1 branched maps in Diagram \eqref{diagetale}
for $j\in \{x,y,z\}$ and let $k: \Pic^4(\widetilde{C}) \rightarrow\Pic^0(\widetilde{C})$ be defined
by $D \mapsto D-2g^1_2$, where $g^1_2$ is the hyperelliptic divisor. Hence, we have injective maps
$$
\alpha_j := k\circ \pi_j^* : \Pic^1(H_j) \rightarrow\Pic^0(\widetilde{C}) \simeq J\widetilde{C}.
$$
\begin{prop}\label{gluing}
For $j\in \{x,y,z\}$, the maps $\alpha_j$ have a common image in $G_P \subset P[2]$: the images of $2g-1$ of the Weierstrass points on each of the curves $H_j$ coincide.
\end{prop}
\begin{proof}
Observe that by construction
$$
\alpha_x(\Pic^1 H_x) \cap \alpha_y(\Pic^1 H_y) \cap \alpha_z(\Pic^1 H_z)
\subset JH_x \cap JH_y \cap JH_z =G_P.
$$
On the other hand, for any Weierstrass point $w \in H_x$, image of
a Weierstrass point $\tilde w \in \widetilde{C}$ we have
$$\pi_x^* w \sim \tilde w + \sigma\tilde w+ \iota\t\tilde w+ \iota\sigma\t\tilde w
\sim \tilde w + \sigma\tilde w+ \t\tilde w+ \sigma\t\tilde w $$
since $\iota\tilde w = \tilde w$. Similarly for every $w= \pi_y(\tilde w)$ or $w= \pi_z(\tilde w)$, image of a
Weierstrass point in $\widetilde{C}$ one checks that
$$\pi_j^* w \sim \tilde w + \sigma\tilde w+ \t\tilde w+ \sigma\t\tilde w.$$
\end{proof}
\begin{thm}\label{injectiveKlein}
For $g\geq 2$ the hyperelliptic Klein Prym map $Pr_{4g-3,g}^H:
\mathcal{RH}_{g,0} \rightarrow \mathcal{A}_{3g-3}^{\delta} $ is injective.
\end{thm}
\begin{proof}
Let $(P,\Xi)$ be a polarised abelian variety in the image of the Prym map. Let $G=P[2]\cap \ker\phi_\Xi$, which is an isotropic subgroup of $\ker\phi_\Xi$. Let $\pi_P:P\rightarrow P/G$. According to Corollary \ref{3.5}, we have that $P/G$ is principally polarised and by Lemma \ref{multtwo}, $P/G$ is polarised isomorphic to a product of the form $JH_1\times JH_2\times JH_3$,
for uniquely determined Jacobians $JH_1$, $JH_2$ and $JH_3$.
Now, since $P$ is in the image of the Prym map, the curves $H_i$, $i\in\{1,2,3\}$, are actually isomorphic to the curves $H_j$, $j\in\{x,y,z\}$,
corresponding to some points $[x],[y],[z] \in\mathbb P^1$. Let $f_i:H_i\rightarrow \mathbb P^1$ be the hyperelliptic maps and $W^i$ the set of Weierstrass points
on each $H_i$. By Proposition \ref{gluing} there exists an automorphism of $\mathbb P^1$ such that $\bigcap f_i(W^i)$ consists of $2g-1$ points in
such a
way that $\alpha_1(w_n^1)=\alpha_2(w_n^2)=\alpha_3(w_n^3) \in G_P$, for all $n\in \{1, \ldots, 2g-1\}$.
Moreover, there are uniquely determined points $[x],[y],[z]\in \mathbb P^1$ that are images of the remaining Weierstrass points. In this way, we have constructed the set $\{[w_1],\ldots,[w_{2g-1}], [x],[y],[z]\}$ of $2g+2$ points in $\mathbb P^1$ with a distinguished triple.
Hence, the obtained map $(P, \Xi)\mapsto \{[w_1],\ldots, [w_{2g-1}], [x],[y],[z] \}$ provides the inverse to the Prym map via the equivalence in Proposition \ref{equivalence1}.
\end{proof}
\begin{rem}
Note that for $g=2$, the curves $H_i$ are actually elliptic, so one uses the notion of 2-torsion points instead of Weierstrass points and some steps are vacuous, see \cite{BOklein}.
\end{rem}
\begin{figure}[h!]
\includegraphics[width=11.3cm, trim={0.3cm 6cm 0cm 7cm}, clip]{hyp_covers1.pdf}
\caption{Weierstrass points on hyperelliptic Klein coverings}
\label{fig:hyperelliptic_covers}
\end{figure}
\subsection{The mixed case}
Consider a smooth curve $\widetilde{C}$ of genus $4g-3$, with $g\geq 2$,
admitting one fixed point free involution $\sigma$ and two
involutions $\iota\t$, $\iota\sigma\t$ with 4 fixed points each.
The corresponding tower of curves is the same as in the case
of \'etale Klein coverings (see Diagram \eqref{diagetale} or Figure \ref{fig:hyperelliptic_covers}).
The starting data is now the base curve $H_{\sigma, \iota\t}$
(for instance $H_x$ in the diagram) of genus $g-1$, a
choice of two pairs of conjugate points (branch points for the double covering $C_x \rightarrow H_x$) and a Weierstrass point. According to
Proposition \ref{DecompJC-et}, $ J\widetilde{C} = JH^* \boxplus JH_x \boxplus JH_y \boxplus JH_z $, therefore the associated Prym variety
is $P= P(\widetilde{C} / H_x)= JH^*\boxplus JH_y \boxplus JH_z$. Moreover, since the restricted polarisation to $JH_x$ is of type $(4, \ldots, 4)$, the Prym variety $P$ is of type
$$
\delta:= (\underbrace{1,\ldots,1}_{2g-1}, \underbrace{4, \ldots, 4}_{g-1}).
$$
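Note the bookkeeping here: $\dim P=g(\widetilde{C})-g(H_x)=(4g-3)-(g-1)=3g-2$, which matches the $(2g-1)+(g-1)=3g-2$ entries of $\delta$; accordingly, the Prym map below takes values in $\mathcal{A}^{\delta}_{3g-2}$.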
Consider the canonical addition map
$$
\psi: JH^* \times JH_y \times JH_z \rightarrow P.
$$
Since $JH^* \subset \Fix(\sigma, \t)$, $JH_y \subset \Fix(\t, \iota \sigma)$ and $JH_z \subset \Fix(\sigma\t, \iota\sigma)$, we can conclude as before that
\begin{equation}
\Ker \psi = \{ (-(b+c),b, c) \ \mid \ b \in JH_{y}[2], \ c \in JH_{z}[2] \}.
\end{equation}
Analogously to Lemma \ref{multtwo} we have:
\begin{lem} \label{multtwo2}
Let $JH,JH_y,JH_z$ be as before and let $P=P(\widetilde{C}/H_x)$. Let $Z=(JH \times JH_y \times JH_z)[2]$ be the set of 2-torsion points on the product and let $G_P=\psi(Z)$.
By $m_2$ we denote the multiplication by 2.
Then one obtains the following commutative diagram
\begin{equation}
\xymatrix@R=1.2cm@C=.9cm{
JH \times JH_y \times JH_z \ar[rr]^(.5){\psi'} \ar[dd]_{m_2} \ar[dr]_{4:1}
& & P
\ar[dd]^{\pi_{P}} \\
&JH^* \times JH_y \times JH_z\ar[ur]^{\psi} & \\
JH \times JH_y \times JH_z \ar[rr]^-{p}& & P/G_P
}
\end{equation}
where $p$ is an isomorphism of principally polarised abelian varieties and the degrees of $\psi$, respectively
$\pi_P$, are $4^{2g-2}$, respectively $4^{g-1}$.
\end{lem}
In particular, $G_P = \Ker(\phi_{\Xi}) \cap P[2]$. One also computes,
as in Lemma \ref{Intersection}, that $G_P = JH^* \cap JH_y \cap JH_z$.
Let $\mathcal{RH}_{g-1,8}$ denote the moduli space parametrising hyperelliptic Klein coverings over hyperelliptic curves of genus $g-1$ simply ramified in 8 points, so the covering curve is of genus $4g-3$.
\begin{thm} \label{mixed_Prym}
For $g\geq 2$ the Prym map $$Pr^H_{4g-3,g-1}: \mathcal{RH}_{g-1, 8} \rightarrow \mathcal{A}^{\delta}_{3g-2}, \qquad ([\widetilde{C} \rightarrow H']) \mapsto (P(\widetilde{C}/H'),
\Xi)$$ is injective.
\end{thm}
\begin{proof}
The proof is very similar to that of Theorem \ref{injectiveKlein}.
Consider $(P, \Xi)$ a polarised abelian variety in the image of
$Pr^H_{4g-3,g-1}$. The subgroup $G:= P[2] \cap \Ker \phi_{\Xi}$ is isotropic and according to Lemma \ref{multtwo2} the quotient is isomorphic, as
principally polarised abelian varieties, to a product $JH_1\times JH_2 \times JH_3$ of uniquely determined Jacobians $JH_1, JH_2$ of dimension $g-1$ and one Jacobian $JH_3$, of dimension $g$.
Without loss of generality and keeping the notation as above, we set $JH_1=JH_y$ and $JH_2= JH_z$ for some points $[y],[z] \in \mathbb P^1$ and set $H:=H_3$.
Consider the image $JH^*$ of $JH$ in $P$. The map $m_2: JH^* \rightarrow JH^*$, multiplication by 2, factors through the restriction to $JH^*$ of the quotient map $\pi_P: P \rightarrow P/G$, since $\Ker (\pi_{P}|_{JH^*}) \subset \Ker m_2$.
\begin{equation}
\xymatrix@R=1.1cm@C=.9cm{
JH^* \ar[rr]^(.5){m_2} \ar[dr]_{\pi_P{_{|JH^*}}}
& & JH^* \\
&JH \ar[ur]_{4:1}^{\beta} &
}
\end{equation}
Let $V_4:= \Ker \beta$, which is a subgroup of $JH$ generated by two 2-torsion points.
Let $\pi_j : \widetilde{C} \rightarrow H_j$ be the $4:1$ branched maps for $j\in\{y,z\}$ and
$\pi_0: \widetilde{C} \rightarrow H$ the \'etale $4:1$ map (see Diagram \ref{diagetale}). Consider the map $k: \Pic^4(\widetilde{C}) \rightarrow \Pic^0(\widetilde{C})$ defined as before. So the
maps $\alpha_j:=k \circ \pi_j^* $ are injective for $j\in\{y,z\}$ and
$$
\alpha_0:=k \circ \pi_0^*: \Pic^1(H) \rightarrow \Pic^0(\widetilde{C}) \simeq J\widetilde{C}
$$
has in its kernel 3 Weierstrass points (i.e. the pullback of each of them under $\pi_0$ is $\sim 2g^1_2$), whose differences generate $V_4$. The argument in Proposition \ref{gluing} shows that
$$
\alpha_y(\Pic^1 H_y) \cap \alpha_z(\Pic^1 H_z) \cap \alpha_0(\Pic^1 H)
= JH_y \cap JH_z \cap JH = G.
$$
Therefore, one can find a suitable automorphism of $\mathbb P^1$ such that the images of the hyperelliptic maps from the $H_j$ and $H$ have $2g-1$ points of $\mathbb P^1$ in common and
their Weierstrass points have $G$ as common image in $P \subset J\widetilde{C}$.
In order to determine the remaining point $[x]\in \mathbb P^1$, we consider the two Weierstrass points
$y,z\in H$ above $[y],[z]\in \mathbb P^1$, whose images
are not in $G$. Then there is a unique point $x\in H$
(necessarily a Weierstrass point),
such that $V_4=\langle \mathcal{O}_H(x-y), \mathcal{O}_H(y-z) \rangle$.
One recovers the base curve $H':= H_x$ as the only hyperelliptic curve branched in $[x], [w_1], \ldots, [w_{2g-1}]\in \mathbb P^1$.
Applying the equivalence $(2) \Leftrightarrow (3)$ of Proposition \ref{equivalence1} one recovers the Klein covering $\widetilde{C} \rightarrow H'$ (see Figure \ref{fig:hyperelliptic_covers}).
\end{proof}
\begin{cor}
The following data are equivalent:
\begin{itemize}
\item[(1)] a triple $(W,W',B)$ of disjoint sets of points in $\mathbb P^1$ such that $W$ is of cardinality $2g-1$, $W'$ is a pair of points and $B$ is a point (up to a projective equivalence respecting the sets);
\item[(2)] a hyperelliptic genus $4g-3$ curve with a choice of a Klein subgroup of involutions $\langle \sigma,\iota\tau\rangle$, where $\sigma$, $\t$ are fixed point free;
\item[(3)] a hyperelliptic genus $g-1$ curve together with a
Weierstrass point $x$ and two pairs of points $y,\iota y$
and $z,\iota z$.
\end{itemize}
\end{cor}
\begin{rem}
Although, from our point of view, the mixed case is less natural, it can be seen as the starting point of \cite{CMS}.
\end{rem}
\section{Branched $\mathbb Z_2^2$-coverings over hyperelliptic curves}
\subsection{From the top curve.} Let $\widetilde{C}$ be a hyperelliptic curve of genus $4g+3$ admitting a subgroup of automorphisms
generated by three commuting involutions, namely $\sigma, \t $ and the hyperelliptic involution $\iota$. By Proposition \ref{prop:cominv}
we have
$$
|\Fix (\t)| = |\Fix (\sigma)| = |\Fix (\iota\sigma\t)| = 0, \qquad |\Fix (\sigma\t)| = |\Fix (\iota\sigma)| = |\Fix (\iota\t)| = 4.
$$
For the convenience of the reader, we write $\alpha \in \{ \sigma,\t, \iota\sigma\t \}$ and $\beta \in \{ \iota\sigma,\iota\t, \sigma\t \}$.
By $C_{\alpha}=\widetilde{C}/\alpha$ we denote the quotient curves of genus $2g+2$, and by $T_{\beta}$ the quotient curves of genus $2g+1$.
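These genera follow from the Riemann--Hurwitz formula applied to the double covers $\widetilde{C}\rightarrow \widetilde{C}/\gamma$: for $\gamma=\alpha$ \'etale one gets $8g+4=2g(\widetilde{C})-2=2(2g(C_\alpha)-2)$, so $g(C_\alpha)=2g+2$, while for $\gamma=\beta$ with $4$ fixed points one gets $8g+4=2(2g(T_\beta)-2)+4$, so $g(T_\beta)=2g+1$.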
We have the following commutative diagrams
\begin{equation} \label{diag5}
\xymatrix@R=.9cm@C=.6cm{
&& \widetilde{C} \ar[dr]^{et} \ar[dl]_{et} && \\
& C_{\t}\ar[dr]_{+2} \ar[dl]_{+2} & & C_{\sigma}\ar[dr]_{+2} \ar[dl]_{+2} & \\
H_{\t,\iota\sigma}\ar[drr] && H_{\sigma,\sigma\t} \ar[d] && H_{\sigma,\iota\t}\ar[dll] \\
&& \mathbb P^1 &&
}
\qquad
\xymatrix@R=.9cm@C=.6cm{
& \widetilde{C} \ar[dr]^{+4} \ar[dl]_{+4} \ar[d]^{+4} & \\
T_{\sigma \t}\ar[dr]_{+4} & T_{\iota \t} \ar[d]_{+4} & T_{\iota\sigma} \ar[dl]^{+4}\\
&E \ar[d]& \\
&\mathbb P^1&
}
\end{equation}
where $H_{\alpha,\beta}$ is the genus $g+1$ curve quotient of $\widetilde{C}$ by the subgroup $\langle \alpha, \beta \rangle$
and $E$ is the quotient of $\widetilde{C}$ by $\langle \iota \sigma, \iota \t \rangle$ which is the unique quotient curve of genus $g$ in the tower. Here $+4$, respectively
$+2$, denotes a 2:1 map branched in 4, respectively 2 points. By Corollary \ref{hypquot}, all the positive genus curves in both diagrams
are hyperelliptic.
In order to describe the images of the Jacobians of the quotient curves in $J\widetilde{C}$, we analyse the behaviour of the 2-torsion points under the pull-back maps. According to \cite[Prop 11.4.3]{bl}, $JT_\beta$ and $JE$ are embedded in $J\widetilde{C}$ whereas the image of $JC_\alpha$ is not, so its image will be denoted by $JC_\alpha^*$.
Moreover, Diagram \eqref{diag6} shows that $JH_{\alpha,\beta}$ is not embedded in $J\widetilde{C}$, so we will use the notation $JH_{\alpha,\beta}^*$ for its image.
\begin{equation} \label{diag6}
\xymatrix@R=.7cm@C=.7cm{
& \widetilde{C} \ar[dr]^{+4} \ar[dl]_{et} & \\
C_\alpha \ar[dr]_{+2} & & T_{\beta} \ar[dl]^{et}\\
& H_{\alpha,\beta} &
}
\end{equation}
By construction, $JT_\beta=\im(1+\beta)$, $JE=\im(1-\sigma-\t+\sigma\t)$.
Moreover, we have that $JH^*_{\t,\iota\sigma}=\im(1-\sigma+\t-\sigma\t),\ JH^*_{\sigma,\sigma\t}=\im(1+\sigma+\t+\sigma\t),\ JH^*_{\sigma,\iota\t}=\im(1+\sigma-\t-\sigma\t)$.
One can easily compute that
$$(1-\sigma+\t-\sigma\t)+(1+\sigma+\t+\sigma\t)+(1+\sigma-\t-\sigma\t)+(1-\sigma-\t+\sigma\t)=4$$
which shows that $J\widetilde{C}=JE\boxplus JH^*_{\t,\iota\sigma}\boxplus JH^*_{\sigma,\iota\t}\boxplus JH^*_{\sigma,\sigma\t}$.
As a result we get that $$P(\widetilde{C}/E)=JH^*_{\t,\iota\sigma}\boxplus JH^*_{\sigma,\iota\t}\boxplus JH^*_{\sigma,\sigma\t}.$$
\subsubsection{2-torsion points on Jacobians}
We shall describe the images of the 2-torsion points in $J\widetilde{C}$ of the Jacobians of the quotient curves.
We start by recalling well-known results concerning 2-torsion points on hyperelliptic Jacobians.
\begin{lem}
\label{2torsionlemma}
Let $W=\{w_1,\ldots,w_{2g+2} \}$ be the set of Weierstrass points on a hyperelliptic curve $C$ of genus $g$. Let $S=\{s_1,\ldots,s_{2k}\} \subset\{1,\ldots,2g+2\}$ be a set of even cardinality with $2k\leq g+1$. By $S^c$ we denote the complement of $S$ in $\{1,\ldots,2g+2\}$. We consider degree 0 divisors as elements of $JC=\Pic^0(C)$, hence equality means linear equivalence of divisors.
\begin{enumerate}
\item For all $i,j\leq 2g+2$, we have $w_i-w_j=w_j-w_i\in JC[2]$. Consequently, a divisor $D=\sum_{s\in S} \pm w_s$ does not depend on a particular choice of pluses and minuses, as long as its degree equals 0.
\item We have the equality $\sum_{s\in S} \pm w_s=\sum_{t\in S^c} \pm w_t$, as long as the degrees of both sides equal 0.
\end{enumerate}
From now on, we will write $P_S=\sum_{i=1}^k(w_{s_{2i}}-w_{s_{2i-1}})$ to have degree 0 automatically and hence make notation more consistent.
\begin{enumerate}\setcounter{enumi}{2}
\item If $g$ is even then for every $P\in JC[2], P\neq 0$ there exists a unique $S$ of even cardinality with $|S|\leq g+1$ such that $P=P_S$.
\item If $g$ is odd then for every $P\in JC[2], P\neq 0$ there exists a unique $S$ of even cardinality with $|S|\leq g$ such that $P=P_S$, or a unique complementary pair $S,S^c$ of cardinality $g+1$ such that $P_S=P_{S^c}$.
\end{enumerate}
\end{lem}
\begin{proof}
A proof can be found in \cite{D}.
\end{proof}
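To illustrate the statement, take $g=2$: then $C$ has $6$ Weierstrass points and $|JC[2]|=2^4=16$. The admissible sets $S$ of even cardinality $|S|\leq g+1=3$ are $S=\emptyset$, giving $P_\emptyset=0$, and the $\binom{6}{2}=15$ two-element sets, which together account for all $16$ points of $JC[2]$, in accordance with item (3).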
In the case of $\widetilde{C}$, we number the Weierstrass points as follows. Start with any Weierstrass point and call it $w_1$. Then we denote by $\sigma w_1, \t w_1, \sigma\t w_1$ the other 3 Weierstrass points in the fibre of the map $\widetilde{C} \rightarrow H_{\sigma, \sigma\tau}$. Note that the four Weierstrass points are indeed different because $\sigma,\t$ are fixed point free and $\sigma\t$ has its fixed points outside the set of Weierstrass points. Then proceed in the same way with the rest of the Weierstrass points. Since there are $2(4g+3)+2$ of them on $\widetilde{C}$, in the end we will get \textit{the following numbering $W^{\widetilde{C}}=\{w_1, \sigma w_1, \t w_1, \sigma\t w_1, w_2, \sigma w_2,\ldots,\t w_{2g+2},\sigma\t w_{2g+2}\}$ of Weierstrass points of $\widetilde{C}$.} One has to be aware that we made a particular choice; however, in Remark \ref{renumbering} we will show that a different choice will only result in a permutation of indices and will not affect the results obtained.
For convenience, a Weierstrass point of $\widetilde{C}$ when written in brackets will denote its image
in the corresponding quotient curve. We start by writing down 2-torsion points of $JE$. Since $E=\widetilde{C}/\langle \iota\sigma,\iota\t\rangle $, one can easily check that $W^E=\{[w_1],\ldots,[w_{2g+2}]\}$. Setting $e_i:=w_i+\sigma w_i+ \t w_i+ \sigma\t w_i$ and considering $JE$ as a subvariety of $J\widetilde{C}$, one checks that
\begin{equation}\label{generators}
\{ e_i-e_j :1\leq i<j\leq 2g+2\}\subset JE[2].
\end{equation}
Note that $e_i-e_j$ represents $[w_i]-[w_j]\in JE$, so Lemma \ref{2torsionlemma} shows that the sums of up to $\frac{g+1}{2}$ elements with disjoint indices give, on one hand, different points of $J\widetilde{C}$ and on the other, represent all possible 2-torsion points on $JE$. Therefore, we get that $JE[2]$ is generated by \eqref{generators}.
For the description of the 2-torsion points of $JT_\beta$, we will compute explicitly one case, namely $T:= T_{\sigma\t}=\widetilde{C}/\sigma\t$. In this case, one checks that $W^T=\{[w_1],[\t w_1],\ldots,[w_{2g+2}],[\t w_{2g+2}]\}$. Again, denote by $v_i=w_i+\sigma\t w_i$ and $v_i'=\t w_i+\sigma w_i$, so the 2-torsion points of $JT$ are generated by $$\langle v_i-v_j,v_i'-v_j,v_i-v_j', v'_i-v'_j :1\leq i<j\leq 2g+2\rangle$$
considered as embedded in $J\widetilde{C}[2]$.
Indeed, using Lemma \ref{2torsionlemma} one checks that, by adding up to at most $2g+1$ generators, we generate all $2^{4g+2}$ 2-torsion points of the image of $JT$.
For the computation of $JH_{\alpha,\beta}^*[2]$ one has to take into account that half of the 2-torsion points on the quotient come from the 4-torsion points of $JH_{\alpha,\beta}$. We will compute explicitly the points of $JH^*_{\sigma,\sigma\t}[2]$ and we will denote the curve $H_{\sigma,\sigma\t}$ by $H$. Note that $H$ is a quotient of $T$ (see Diagram \eqref{diag6}), hence we can use $v_i,v_i'$ to represent 2-torsion points on $JH$.
Recall that the map $\pi:T\rightarrow H=T/\sigma$ is an \'etale
double covering of hyperelliptic curves of genera $2g+1$ and $g+1$ as illustrated in Figure \ref{fig:branched_hyp_covers}. The set of Weierstrass points of $H$ equals $W^H=\{[w_1],\ldots,[w_{2g+2}],u_{2g+3},u_{2g+4}\}$ and
the covering is defined by a two torsion point $u_{2g+3}-u_{2g+4}\in JH[2]$. In particular,
$\pi^{*}(u_{2g+3}-u_{2g+4})=0$.
Set $u_i:=v_i+v_i'$ and write the following subgroup of $JH^*[2]$:
$$U:=\left< u_i-u_j :1\leq i<j\leq 2g+2\right>$$
The same computation as for $JE$ shows that the subgroup $U$ has $2^{2g}$ elements, so it is $25\%$ of all 2-torsion points of $JH^*$.
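Indeed, $JH^*$ is an abelian variety of dimension $g+1$, so $|JH^*[2]|=2^{2g+2}$, whereas $U$ is generated by the classes $u_i-u_j$ in exactly the same way as $JE[2]$ is generated by the $e_i-e_j$, hence $|U|=2^{2g}$.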
We can write the preimages $\pi^{-1}(u_{2g+3})=x+\iota x \text{ and } \pi^{-1}(u_{2g+4})=y+\iota y$, for some $x,y\in T$.
Now, we are able to represent the pullback $\pi^{*}([w_i]-u_{2g+3})$ as: $$\pi^{*}([w_i]-u_{2g+3})=u_i-2g^1_2=v_i-v'_i\in JT.
$$
Note that, together with $U$, we can represent all these 2-torsion points as $\{P_S=\sum_{i\in S}v_i-v_i': |S|<g+1\}$, (where $S$ can be also of odd cardinality), so we get $2^{2g+1}$ points which represent $50\%$ of the 2-torsion points on $JH^*$, precisely the points that come from the 2-torsion points on $JH$.
The trickiest part is to represent 2-torsion points in $JH^*$ that come from 4-torsion points in $JH$, say $R$, satisfying $2R=u_{2g+3}-u_{2g+4}$. There are precisely $2^{2g+1}$ of them.
We will use the fact from Lemma \ref{2torsionlemma} that
$$
u_{2g+3}-u_{2g+4}=\sum_{i\leq g+1}[w_{2i}]-[w_{2i-1}]\ \textnormal{as divisors in } JH.
$$
Now, $\pi^{-1}([w_{i}])=\{[w_i],[\t w_i]\}\subset T$. Since $v_i\in \Pic^2(\widetilde{C})$ represents $[w_i]$ and $v'_i$ represents $[\t w_i]$, we set $Q_i\in \{v_i,v'_i\}$
and define $Q=\sum_{i\leq g+1}Q_{2i}-Q_{2i-1}$. There are precisely $2^{2g+2}$ of such representations with the relation
$$
Q=Q' \quad \Leftrightarrow \quad \forall_j (Q_j=Q_j')\ \textnormal{ or } \ \forall_j (Q_j\neq Q'_j).$$
Therefore, we have defined $2^{2g+1}$ points in $JT[2]$ (seen as embedded in $J\widetilde{C}[2]$). In order to show that indeed $\pi^{*}(R)=Q$ it is enough to note the following facts.
Firstly, the cardinalities of both sets (of possible Q's and possible $\pi^{*}(R)$'s) are equal to $2^{2g+1}$. Secondly, for every $Q$ we have that $\Nm_\pi(Q)=u_{2g+3}-u_{2g+4}$. Thirdly, by definition, we have that $\Nm_{\pi}\circ\pi^*=m_2$, where $m_2$ is the multiplication by 2, therefore
$$
\Nm_\pi(Q)=u_{2g+3}-u_{2g+4}=2R=\Nm_{\pi}(\pi^*(R))$$
and lastly, $\pi^*(R)$ are 2-torsion points and $Q$ are the only 2-torsion points of $JT$ with the property that $\Nm_\pi(Q)=u_{2g+3}-u_{2g+4}$.
In the following lemma we compile these and the analogous results for all the other quotient curves.
\begin{lem}\label{descrip2tor}
We have the following generators of subgroups of 2-torsion points as embedded in $J\widetilde{C}[2]$:
\begin{align*}
JE[2]&=\left<(w_i+\sigma w_i+\t w_i+\sigma\t w_i)-(w_j+\sigma w_j+\t w_j+\sigma\t w_j):\ 1\leq i<j\leq 2g+2\right>,\\
JH_{\sigma,\sigma\t}^*[2]&=JE[2]+\left<w_i+\sigma\t w_i-\t w_i-\sigma w_i: \ 1\leq i\leq 2g+2\right>+\\&+\left<\sum_{k\leq g+1}(Q_{2k}-Q_{2k-1}): Q_i\in\{w_i+\sigma\t w_i,\sigma w_i+\t w_i\}\right>, \\
JH_{\t,\iota\sigma}^*[2]&=JE[2]+\left<w_i+\t w_i-\sigma w_i-\sigma\t w_i: \ 1\leq i\leq 2g+2\right>+\\&+\left<\sum_{k\leq g+1}(Q_{2k}-Q_{2k-1}): Q_i\in\{w_i+\sigma w_i,\t w_i+\sigma\t w_i\}\right>, \\
JH_{\sigma,\iota\t}^*[2]&=JE[2]+\left<w_i+\sigma w_i-\t w_i-\sigma\t w_i: \ 1\leq i\leq 2g+2\right>+\\&+\left<\sum_{k\leq g+1}(Q_{2k}-Q_{2k-1}): Q_i\in\{w_i+\t w_i,\sigma w_i+\sigma\t w_i\}\right>, \\
JT_{\sigma\t}[2]&=JE[2]+\left<(w_i+\sigma\t w_i)-(\sigma w_j+\t w_j), (w_i+\sigma\t w_i)-(w_j+\sigma\t w_j):\ 1\leq i,j\leq 2g+2\right>, \\
JT_{\iota\sigma}[2]&=JE[2]+\left<(w_i+\sigma w_i)-(\sigma\t w_j+\t w_j), (w_i+\sigma w_i)-(w_j+\sigma w_j):\ 1\leq i,j\leq 2g+2\right>, \\
JT_{\iota\t}[2]&=JE[2]+\left<(w_i+\t w_i)-(\sigma\t w_j+\sigma w_j), (w_i+\t w_i)-(w_j+\t w_j):\ 1\leq i,j\leq 2g+2\right>.
\end{align*}
\end{lem}
\begin{proof}
The proof is completely analogous to what we have done for $JH^*_{\sigma,\sigma\t}$. One only needs to change the subscripts according to the involution by which we take the quotient. For
example, for $H_{\sigma,\iota\t}$, the only involution with fixed points is $\iota\t$, so $Q_i\in\{w_i+\t w_i,\sigma w_i+\sigma\t w_i\}$.
\end{proof}
\begin{rem}\label{renumbering}
Note that a different choice of Weierstrass points $w_1,\ldots,w_{2g+2}\in \widetilde{C}$ will only result in a permutation of indices because all sets of generators are invariant under $\langle \sigma,\tau\rangle$. In particular, the statement of Lemma \ref{descrip2tor} does not depend on the numbering of the Weierstrass points we started with.
\end{rem}
\subsection{From the bottom curve}
In order to construct a $\mathbb Z_2^2$-branched covering of hyperelliptic curves, one only needs to choose $2g+5$ points in $\mathbb P^1$ with a distinguished triple, denoted by $[w_1],\ldots,[w_{2g+2}], [x],[y],[z]$. Then, the curve $E$ is the hyperelliptic curve that is a double cover of $\mathbb P^1$ branched in $[w_1],\ldots,[w_{2g+2}]$. The curve $T_x$ will be the double cover of $E$ branched at $y_E,\iota y_E, z_E,\iota z_E$ (the preimages in $E$ of $[y]$ and $[z]$), with the corresponding line bundle being the hyperelliptic $g^1_2$. According to Proposition \ref{hyp-cov}, $T_x$ is hyperelliptic. Then the preimages of $x_E$ become $x_T,x'_T$ and therefore one chooses $x_T,x'_T,\iota x_T, \iota x'_T$ as the branching for the map $\widetilde{C}\rightarrow T_x$, with the line bundle being $g^1_2$. By construction (and Proposition \ref{hyp-cov}), $\widetilde{C}$ is a hyperelliptic curve of genus $4g+3$. The covering involution of the map $T_x\rightarrow E$ lifts to $\widetilde{C}$, hence $\widetilde{C}$ is a hyperelliptic curve with two (additional) commuting involutions.
Moreover, the construction is uniquely defined up to a projective equivalence. To see this, one can directly construct the curve $\widetilde{C}$ as follows. Consider a $\mathbb Z_2\times\mathbb Z_2$-covering of $\mathbb P^1$ branched in $[x],[y],[z]$, with two simple ramification points in each of these fibres
(the existence is shown in Remark \ref{p1}). By the Hurwitz formula the covering curve is of genus 0. Moreover, there are $8g+8$ points over $[w_1],\ldots,[w_{2g+2}]$ and there is an action of $\mathbb Z_2\times\mathbb Z_2$ on them.
Then $\widetilde{C}$ is constructed as the double cover branched in these $8g+8$ points.
Since the set of branching points is invariant under the action of $\mathbb Z_2\times\mathbb Z_2$ the curve $\widetilde{C}$ possesses two commuting involutions. In this way, we constructed all the maps of the following commutative diagram.
\begin{equation} \label{Const_tildeC}
\xymatrix@R=.9cm@C=1cm{
\widetilde{C} \ar[d]_{2:1} \ar[r]^{4:1} & E \ar[d]^{2:1} \\
\mathbb P^1 \ar[r]^{4:1} & \mathbb P^1
}\end{equation}
\begin{rem}\label{p1}
An example of such a covering of the projective line can be described as $[x:y]\rightarrow[x^4+y^4:2x^2y^2]$ with the branching points $[1:0],[1:-1],[1:1]$. The deck transformations are explicitly described as $[x:y]\rightarrow[-x:y]$ and $[x:y]\rightarrow [y:x]$.
\end{rem}
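One can verify this example in the affine coordinate $t=x/y$, where the map reads $t\mapsto (t^4+1)/(2t^2)$: since $t^4-2t^2+1=(t^2-1)^2$ and $t^4+2t^2+1=(t^2+1)^2$, the fibre over $[1:1]$ is $\{1,-1\}$ and the fibre over $[1:-1]$ is $\{i,-i\}$, each point counted with multiplicity $2$, while the fibre over $[1:0]$ consists of $t=0$ and $t=\infty$, again each with multiplicity $2$; all the remaining fibres consist of $4$ distinct points.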
We finish this section by showing the equivalence of data needed to build a branched $\mathbb Z_2^2$-covering.
\begin{prop} \label{equivalence}
For $g\geq 1$, the following data are equivalent:
\begin{enumerate}
\item $2g+5$ points in $\mathbb P^1$ with a distinguished triple up to a projective transformation;
\item a genus $4g+3$ hyperelliptic curve $\widetilde{C}$ with 2 commuting involutions;
\item a genus $g$ hyperelliptic curve $E$ with three pairs of points $x,\iota x, y, \iota y, z,\iota z$ up to an isomorphism;
\item 3 genus $g+1$ hyperelliptic curves $H_x$, $H_y, H_z$ that have branching points that can be glued to get a set of $2g+5$ points with each of 3 distinguished points shared by precisely 2 curves.
\end{enumerate}
\end{prop}
\begin{proof}
The equivalence $1\iff 3$ is given by the hyperelliptic covering. The equivalence $2\iff 3$ follows from the construction of branched $\mathbb Z^2_2$-coverings seen from the top and from the bottom.
The implication $2\Rightarrow 4$ comes from taking 3 quotient curves by 3 Klein subgroups (see Diagram \eqref{diag5}) and $4\Rightarrow 1$ is obvious.
\end{proof}
\begin{figure}[h!]
\includegraphics[width=11.5cm, trim={0cm 7cm 0cm 5cm}, clip]{branched_hyp_covers.pdf}
\caption{Weierstrass and ramification points on hyperelliptic branched coverings}\label{fig:branched_hyp_covers}
\end{figure}
\subsection{Prym map}
The aim of this section is to show that the Prym map for hyperelliptic Klein coverings branched in 12 points is injective.
In our construction the pullback of $JE$ under the 4:1 map $\widetilde{C} \rightarrow E$ defines a subvariety with polarisation of type $(4, \ldots, 4)$; therefore, as the complementary subvariety of $JE$ in $J\widetilde{C}$, the Prym variety $P$ corresponding to the map $\widetilde{C} \rightarrow E$,
has a polarisation $\Xi$ of type
$$
\delta:= (\underbrace{1,\ldots,1}_{2g+3}, \underbrace{4, \ldots, 4}_{g}).$$
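Again the numerology is consistent: $\dim P=g(\widetilde{C})-g(E)=(4g+3)-g=3g+3$, which equals the number $(2g+3)+g$ of entries of $\delta$.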
Let $\mathcal{RH}_{g,12}$ denote the moduli space of pairs $(E, \{q_1, q_2, q_3\})$ where $E$ is a hyperelliptic curve of genus $g$ and
the points $\{q_1, q_2, q_3\} \subset E $ are pairwise non-conjugate. In view of Proposition \ref{equivalence}, this moduli space also parametrises the hyperelliptic $\mathbb Z_2^2$-
coverings branched in 12 points.
\begin{prop}
The moduli space $\mathcal{RH}_{g,12}$ is irreducible.
\end{prop}
\begin{proof}
It follows from the equivalence $(1) \Leftrightarrow (3)$ of Proposition \ref{equivalence}.
\end{proof}
We consider the Prym map
$$
Pr^H_{4g+3,g}: \mathcal{RH}_{g,12} \rightarrow \mathcal{A}_{3g+3}^{\delta}, \qquad (E, \{q_1, q_2, q_3\}) \mapsto (P(\widetilde{C}/ E) , \Xi).
$$
Observe that the image of this Prym map is contained in an irreducible component of $\mathcal{A}_{3g+3}^{\delta}$ whose
elements admit a $\mathbb Z_2 \times \mathbb Z_2 $ automorphism subgroup acting on them and leaving invariant the
algebraic class of the polarisation $\Xi$.
\begin{lem}\label{finite}
The Prym map $Pr^H_{4g+3,g}$ is finite over its image.
\end{lem}
\begin{proof}
Let $P$ be a Prym variety in the image of $Pr^H_{4g+3,g}$.
Then $P$ contains finitely many abelian subvarieties of dimension $g+1$ and polarisation type $(2,4,\ldots,4)$. Therefore there are only finitely many possible triples of curves $H_{x},H_{y},H_{z}$ (with chosen 2-torsion points).
According to the equivalence of Proposition \ref{equivalence}, this leads to finitely many coverings.
\end{proof}
Recall that the Prym variety of $\widetilde{C}\rightarrow E$ decomposes as $P(\widetilde{C}/E)=JH^*_{\t,\iota\sigma}\boxplus JH^*_{\sigma,\iota\t}\boxplus JH^*_{\sigma,\sigma\t}$.
\begin{lem}\label{lem58}
We have the equality as subgroups of $J\widetilde{C}$
\begin{eqnarray*}
JH^*_{\t,\iota\sigma}\cap JH^*_{\sigma,\iota\t}\cap JH^*_{\sigma,\sigma\t} &=& JE[2]+\left<w_i+\t w_i-\sigma w_i-\sigma\t w_i: \ 1\leq i\leq 2g+2\right> \\ &=&
JE[2]+\mathbb Z_2(w_1+\t w_1-\sigma w_1-\sigma\t w_1).
\end{eqnarray*}
In particular, the intersection is of order $2^{2g+1}$.
\end{lem}
\begin{proof}
According to Lemma \ref{2torsionlemma} (1) and Lemma \ref{descrip2tor}, the right-hand side of the first equality is contained in the intersection. To prove the equality one uses the
description of Lemma \ref{descrip2tor} and checks that the elements
of the form $\sum_{k\leq g+1}(Q_{2k}-Q_{2k-1})$ cannot be contained in the intersection. Indeed, there are precisely $2g+2$ summands, so two such divisors are linearly equivalent if and only if all summands coincide or the summands are complementary. Since the involutions involved are different, for different curves some summands (but not all) will be the same.
The second equality follows from the fact that one can write
\begin{eqnarray*}
(w_i+\t w_i+ \sigma w_i + \sigma\t w_i) - (w_1+\t w_1+ \sigma w_1 + \sigma\t w_1) + && \\
w_1+\t w_1-\sigma w_1-\sigma\t w_1 &=& (w_i+\t w_i+ \sigma w_i + \sigma\t w_i) -2\sigma w_1 -2\sigma\t w_1 \\ &\sim&
(w_i+\t w_i+ \sigma w_i + \sigma\t w_i) -2\sigma w_i -2\sigma\t w_i \\
&=& (w_i+\t w_i - \sigma w_i - \sigma\t w_i).
\end{eqnarray*}
\end{proof}
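Let us spell out the order count. We have $|JE[2]|=2^{2g}$, and the class $w_1+\t w_1-\sigma w_1-\sigma\t w_1$ does not belong to $JE[2]$: by the uniqueness part of Lemma \ref{2torsionlemma}, a class supported on a single quadruple $\{w_1,\sigma w_1,\t w_1,\sigma\t w_1\}$ cannot equal a sum of classes supported on unions of two such quadruples. Hence the intersection has order $2^{2g}\cdot 2=2^{2g+1}$.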
\begin{rem}
Note that for $Q_i\in\{w_i+\sigma w_i,\t w_i+\sigma\t w_i\}, R_i\in\{w_i+\t w_i,\sigma w_i+\sigma\t w_i\}$ we have
$$\sum_{k\leq g+1}(Q_{2k}-Q_{2k-1})+\sum_{k\leq g+1}(R_{2k}-R_{2k-1})\in JH^*_{\sigma,\sigma\t}.$$
This relation gives many of the elements in the kernel of the addition map $JH^*_{\t,\iota\sigma}\times JH^*_{\sigma,\iota\t}\times JH^*_{\sigma,\sigma\t}\to P(\widetilde{C}/E)$.
\end{rem}
\begin{lem}\label{generic}
For a hyperelliptic branched $\mathbb Z_2\times \mathbb Z_2$ covering $\widetilde{C}\rightarrow E$ of a very general curve $E$, the Prym variety $P(\widetilde{C}/E)$ contains $JH^*_{x}, JH^*_{y}, JH^*_{z}$
as the only proper abelian subvarieties (and sums of these).
\end{lem}
\begin{proof}
Note that a general hyperelliptic curve has a simple Jacobian.
This means that one can choose $y,z,w_1,\ldots,w_{2g+2}$ in such a way that the curve $H_x$ has a simple Jacobian. Having a simple Jacobian is an open condition, so by moving $x$ in uncountably many ways, we can assume that $H_y,H_z$ also have simple Jacobians. Moreover, note that there are only countably many curves whose Jacobians are isogenous to $JH_x$. Therefore, by possibly moving $x$, we may assume $\Hom(JH_y,JH_x)=\Hom(JH_z,JH_x)=0$.
Similarly, perturbing $y$ a little, one obtains $\Hom(JH_z,JH_y)=0$.
As usual, let $\alpha\in\{x,y,z\}$ and note that the exponents of $JH^*_{\alpha}$ equal 4, so
\begin{equation}\label{equal4}
\sum_{\alpha} \Nm_{JH^*_{\alpha}}=m_4,
\end{equation}
where $\Nm$ stands for the norm map of an abelian subvariety (see \cite[\S 5.3]{bl}) and $m_4$ is the multiplication by 4. Note also that, by definition, we have $\im(\Nm_{JH^*_{\alpha}})=JH^*_{\alpha}$.
Let $A$ be an abelian subvariety. We can assume $\dim(A)\leq 3$, by possibly taking the complementary one. If $A$ is distinct from every $JH^*_{\alpha}$, then we can take $a\in A$ of infinite order such that $\langle a\rangle \cap JH^*_{\alpha}=\{0\}$ for each $\alpha$.
By evaluating Equation \eqref{equal4} on $a$ one gets that the image of $\Nm_{JH^*_{\alpha}|A}$ is nonzero for at least two subscripts. Without loss of generality, we can assume $\Nm_{JH^*_{x}|A}\neq \{0\}$ and $\Nm_{JH^*_{y}|A}\neq \{0\}$.
If $\dim(A)=1$ then we get a contradiction with the simplicity of $JH^*_{x}$. If $\dim(A)=2$ then both (restricted) maps are isogenies, so one can find a suitable `quasi'-inverse (i.e. $f\circ g=g\circ f=m_k$ for some $k\in\mathbb N$) and hence a composition gives a non-trivial homomorphism in $\Hom(JH_y,JH_x)$, which is a contradiction. If $\dim(A)=3$, then $\ker(\Nm_{JH^*_{x}}|_A)^0$ is a 1-dimensional abelian subvariety with a non-zero image in $JH^*_{y}$ or ${JH^*_{z}}$, which again contradicts the simplicity of these.
\end{proof}
\begin{rem}
The proof of Lemma \ref{generic} can be easily generalised to any number of summands of any exponent (as long as the summands are simple and there are no non-zero homomorphisms between them). To make the reading easier we have stuck to what is needed in the proof of the injectivity of the Prym map.
\end{rem}
\begin{thm} \label{brached_inj_Prym}
For $g>0$, the Prym map $Pr^H_{4g+3,g}$ is injective.
\end{thm}
\begin{proof}
By Lemma \ref{finite}, it is enough to show that $Pr^H_{4g+3,g}$ is generically injective. Let $P$ be a general element in the image of the Prym map. By Lemma \ref{generic}, we can assume that there are exactly $3$ proper abelian subvarieties of $P$ (up to sums), called $A,B,C$. Let $G=A\cap B\cap C$ and note that by Lemma \ref{lem58} the cardinality of $G$ is $2^{2g+1}$.
Set $A/G=:JH_x$, $B/G=:JH_y$, $C/G=:JH_z$. Since $G$ contains only 2-torsion points, we can extend the quotient map to a map $A\rightarrow A/G\rightarrow A$ such that the composition is multiplication by $2$. Moreover, since $G$ is of order $2^{2g+1}$, we get that $A/G=JH_x\rightarrow A$ is of degree 2, hence given by a 2-torsion point of the form $w^{H_x}_{2g+3}-w^{H_x}_{2g+4}\in \ker (JH_x\rightarrow A)$.
We denote the remaining Weierstrass points of $H_x$ by $w^{H_x}_1,\ldots,w^{H_x}_{2g+2}$. By taking the images under the hyperelliptic covering, we get the points $[w_1],\ldots,[w_{2g+2}]\in\mathbb P^1$.
Similarly to the \'etale case, for $j\in\{x,y,z\}$, we can define
$\alpha_j=k\circ\pi^*_j:\Pic^1(H_j)\rightarrow J\widetilde{C}$, (where $k(D)=D-2g^1_2$) although
note that $\alpha_j$ is of degree 2, since $\pi^*(w_{2g+3})=\pi^*(w_{2g+4})=2g^1_2$.
However it is still true that $\alpha_j(w_p)\neq\alpha_j(w_{q})$ for $p\neq q<2g+3$.
Therefore, we can renumber the Weierstrass points of $H_y$ in such a way that $\alpha_x(w^{H_x}_i)=\alpha_y(w^{H_y}_i)$ for $i=1,\ldots,2g+2$.
This compatibility allows us to show that, given the hyperelliptic covering of $H_y$, there exists an automorphism of $\mathbb P^1$ such that the images of the Weierstrass points coincide, i.e. $[w^{H_y}_i]=[w^{H_x}_i]$ for $i=1,\ldots,2g+2$. Since $g\geq 1$, this automorphism is unique.
Moreover, by construction, we get that $[w^{H_x}_{2g+3}]=[w^{H_y}_{2g+3}]$, which we call $[z]$, while the points $[w^{H_x}_{2g+4}]=:[y]$ and $[w^{H_y}_{2g+4}]=:[x]$ are distinct.
We can perform a similar argument for $H_z$ to get the unique automorphism of $\mathbb P^1$ such that images of Weierstrass points of $H_z$ become $\{[w^{H_x}_{1}],\ldots,[w^{H_x}_{2g+2}], [x], [y]\}$.
Note that, although in the construction we have used the maps $\alpha_j$, which are a priori defined for a chosen $\widetilde{C}$, in fact we only need the equality of the images of the Weierstrass points that lie in $P$, so the construction is intrinsic.
In this way, we have constructed a unique set of $2g+5$ points of $\mathbb P^1$ with a distinguished triple (up to projective equivalence), so we get injectivity of $Pr^H_{4g+3,g}$.
\end{proof}
\begin{rem}
Note that, unlike in the \'etale case, a simple computation of degrees shows that $P\rightarrow P/G$ cannot be a polarised isogeny, so we need to take the quotient of each subvariety individually.
\end{rem}
\subsection{The mixed case of $4g+3$}
From the top-down perspective, this case occurs when one starts with a genus $4g+3$ curve with two fixed point free involutions $\sigma,\t$ (with $\sigma\t$ having 4 fixed points) and takes the subgroup $\langle \sigma,\t\rangle$.
The tower of curves can be found in Diagram \ref{diag5} when we treat $H_{\sigma,\sigma\t}$ as the base curve.
From what we have already described, one can easily deduce that in this case the Prym variety $P(\widetilde{C}/H_{\sigma,\sigma\t})=JE\boxplus JH^*_{\t,\iota\sigma}\boxplus JH^*_{\sigma,\iota\t}$. Since most of the computations have already been done, we will focus on stating the results and the main steps.
For $\delta=(\underbrace{1,\ldots,1}_{2g+1},\underbrace{2,4,\ldots,4}_{g+1})$, define the Prym map:
$$Pr ^H_{4g+3,g+1}:\mathcal{RH}_{g+1,4}\longrightarrow\mathcal{A}^\delta_{3g+2}$$
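The numerology matches the decomposition above: $\dim P(\widetilde{C}/H_{\sigma,\sigma\t})=(4g+3)-(g+1)=3g+2=g+2(g+1)$, which is also the number $(2g+1)+1+g$ of entries of $\delta$.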
\begin{thm}\label{mixed4g+3}
For $g>0$, the Prym map $Pr^H_{4g+3,g+1}$ is injective.
\end{thm}
\begin{proof}
Firstly, we can use the proofs of Lemmas \ref{finite} and \ref{generic} to assume that $P\in \im(Pr^H_{4g+3,g+1})$ is general and therefore contains only 3 proper abelian subvarieties (up to sums). One of the subvarieties is of dimension $g$ and we call it $JE$; the other two are called $B,\ C$.
By Lemma \ref{descrip2tor}, we note that $JE\cap B\cap C=JE[2]$ is of order $2^{2g}$ and $B\cap C$ is of order $2^{2g+1}$.
Denote by $w_1,\ldots,w_{2g+2}$ the Weierstrass points of $E$ and by $[w_1],\ldots,[w_{2g+2}]$ their images in $\mathbb P^1$.
Using results from the proof of Theorem \ref{brached_inj_Prym}, by taking $G=B\cap C$, we see that firstly, there exist unique curves $H_{y}, H_{z}$ such that
$B=JH^*_{y},\ C=JH^*_{z}$ with the quotient maps given by the differences $w^{H_j}_{2g+3}-w^{H_j}_{2g+4}$ for $j=y,z$.
As before, consider the pullback map $\pi_E^*:\Pic^1(E)\rightarrow \Pic^4(\widetilde{C})$ and $k:\Pic^4(\widetilde{C})\rightarrow J\widetilde{C}$ given by $k(D)=D-2g^1_2$. Then $k\circ \pi_E^*:\Pic^1(E)\rightarrow J\widetilde{C}$ is a monomorphism and the maps $k\circ \pi_{H_j}^*$ are of degree 2. Note that we can number the Weierstrass points on $H_y,H_z$ using the condition $\alpha_j(w^{H_j}_l)=\alpha_E(w_l)$ for $l=1,\ldots,2g+2$; since we are in the Prym locus, out of the 4 points $w^{H_j}_{2g+3}, w^{H_j}_{2g+4}$ for $j=y,z$, under the projections to $\mathbb P^1$ precisely two will be glued and called $[x]$, and the others will be called $[y],[z]$ respectively.
To summarise, starting from the Prym variety $P$, we have constructed $2g+5$ points in $\mathbb P^1$ with a chosen triple $[x],[y],[z]$ and a distinguished point $[x]$, and these data yield the Prym variety we started with.
In this way, we have proved that the Prym map $Pr^H_{4g+3,g+1}$ has an inverse, hence it is injective.
\end{proof}
\begin{cor}
In the process, we have shown that the following data are equivalent.
\begin{itemize}
\item[(1)] a triple $(W,W',B)$ of disjoint sets of points in $\mathbb P^1$ such that $W$ is of cardinality $2g+2$, $W'$ is a pair of points and $B$ is a point
(up to a projective equivalence respecting the sets),
\item[(2)] a hyperelliptic genus $4g+3$ curve with a choice of a Klein subgroup of involutions $\langle \sigma,\t\rangle$ such that $\sigma$ and $\t$ are fixed point free and $\sigma\t$ has 4 fixed points,
\item[(3)] a hyperelliptic genus $g+1$ curve together with a pair of Weierstrass points $y,z$ and a pair of points $x,\iota x$.
\end{itemize}
\end{cor}
\section{Final remarks}
We assumed $g\geq 2$ in the \'etale case, because $g=1$ gives a trivial Prym and $g=0$ is impossible.
In the branched case, we assumed $g\geq 1$. For $g=0$ one gets that the Prym variety is the whole Jacobian. However, it must be noted that the mixed case is non-trivial and the proof of Theorem \ref{mixed4g+3} does not work for $g=0$. This case will be investigated in a joint paper of the first author with Anatoli Shatsila.
\begin{rem}
We would like to point out that throughout the paper we have used a coordinate-free point of view. However, one can also work with equations. It can be checked that a hyperelliptic curve given by $y^2=(x^4+a_1x^2+1)\cdot\ldots\cdot(x^4+a_nx^2+1)$ has two commuting involutions given by $(x,y)\mapsto (-x,y)$ and $(x,y)\mapsto(\frac{1}{x},\frac{y}{x^{2n}})$ (see \cite{Sh}).
Its genus equals $2n-1$ and, since the family depends on $n$ parameters, one can use Propositions \ref{equivalence1} and \ref{equivalence} to show that a general hyperelliptic Klein covering can be given by such an equation.
\end{rem}
|
1,116,691,499,175 | arxiv | \section{Introduction and some previous results}
A function $ f \colon X \to Y$ between topological spaces has the Baire property iff for each open set $V \subset Y$ the preimage $f^{-1}(V)$ has the Baire property in $X$ (i.e., is open modulo a meager set in $X$). According to a well-known theorem attributed to K. Kuratowski, if a function $ f \colon X \to Y$ from a metrizable space $X$ to a separable metrizable space $Y$ has the Baire property, then for some meager set $M \subset X$ the restriction
of $f$ to $X \setminus M$ is continuous.
In 1935 K. Kuratowski \cite{KK} raised the question of whether the assumption of separability of $Y$ is essential. The question makes sense if we assume that $X$ is a Baire space.
Thus Kuratowski's question proved to be equivalent to the existence of a partition of a space into meager subsets such that the union of each of its subfamilies has the Baire property.
\begin{definition}
Let $(X, \tau)$ be a topological space and let $\mathcal{F}$ be a partition of $X$ into meager sets. We say that $\mathcal{F}$ is \textit{a Kuratowski partition} if $\bigcup \mathcal{F}'$ has the Baire property for any subfamily $\mathcal{F}' \subset \mathcal{F}$.
\end{definition}
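Let us note a trivial example: if $\mathcal{F}$ is countable, then $\bigcup\mathcal{F}'$ is meager for every $\mathcal{F}'\subset\mathcal{F}$, hence has the Baire property, so every countable partition into meager sets is a Kuratowski partition. The notion is therefore of interest only for uncountable partitions.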
In \cite{S} Solovay considered partitions of the interval $[0,1]$ into meager sets and using metamathematical methods showed that the union of suitable sets of the partition is non-measurable in the sense of category.
He showed that one cannot prove in the Zermelo-Fraenkel set theory ZF (which does not contain the axiom of choice) the existence of a set of reals which is not Lebesgue measurable; more precisely, he proved the following theorem.
\begin{theorem}[\cite{S}]
ZF is consistent with the conjunction of the following statements:
\\(1) The principle of dependent choices;
\\(2) every set A of reals is Lebesgue measurable;
\\(3) every set A of reals has the property of Baire; \\(4) every uncountable set A of reals contains a perfect subset;
\\(5) let $A = \{A_x: x \in R\}$ be an indexed family of non-empty sets of reals with the reals as the index set, then there are Borel functions $h_1$ and $h_2$ mapping $R$ into $R$ such that
\\(a) $\{x:h_1(x) \not \in A_x\}$ has Lebesgue measure zero,
\\(b) $\{x:h_2(x) \not \in A_x\}$ is of first category.
\end{theorem}
In 1979 Bukovsk\'y in \cite{B}, using models of set theory, namely a generic ultrapower, proved the following theorem, first proved by Solovay (an unpublished result). The proof of Solovay's result was longer and more complicated than Bukovsk\'y's.
\begin{theorem}[\cite{B}]
Let $\{A_\xi \colon \xi \in S\}$ be a partition of the unit interval $[0,1]$ into pairwise disjoint sets of Lebesgue measure zero. Then there exists a set $S_0 \subseteq S$ such that the union $\bigcup_{\xi\in S_0} A_\xi$ is not Lebesgue measurable.
\end{theorem}
In \cite{EFK} the authors considered partitions of \v Cech complete spaces.
We remind that a space $X$ is \textit{\v Cech complete} if $X$ is a dense $G_\delta$ subset of a compact space.
Among others they proved the following results.
\begin{theorem}[\cite{EFK}]
Let $X$ be a \v Cech complete space with a pseudobase of cardinality $\leq 2^\omega$.
Let $\mathcal{F}$ be a partition of $X$ into meager sets. Then there exists a family $\mathcal{A} \subset \mathcal{F}$ such that $\bigcup \mathcal{A}$ does not have the Baire property.
\end{theorem}
\begin{corollary}[\cite{EFK}]
If $\mathcal{F}$ is a partition of $R$ into sets of measure zero, then there exists a family $\mathcal{A} \subset \mathcal{F}$ such that $\bigcup \mathcal{A}$ does not have the Baire property.
\end{corollary}
Since the question on Kuratowski's theorem and the question on continuity are one and the same, in \cite{EFK1} the authors proved a result equivalent to the previous theorem but formulated for continuous functions. This result is equivalent to one in the theory of selectors.
\begin{theorem}[\cite{EFK1}]
If $X$ is a \v Cech complete space with a pseudobase of cardinality $\leq 2^\omega$, then for each map $f \colon X \to Y$ having the Baire property into a space $Y$ with a $\sigma$-disjoint base there exists a meager set $F \subset X$ such that the restriction of $f$ to $X \setminus F$ is continuous.
\end{theorem}
Recall that a family $\mathcal{A}$ of subsets of a space $X$ is \textit{point finite} iff for each point $x \in X$ the set $\{A \in \mathcal{A} \colon x \in A \}$ is finite. In \cite{FG} the authors, considering point finite families of meager sets, proved the following result.
\begin{theorem}[\cite{FG}]
Let $\mathcal{F}$ be a point finite family of meager sets covering a pseudobasically compact space $X$ with $\pi w (X) \leq 2^\omega$. Then there exists a family $\mathcal{A} \subset \mathcal{F}$ such that $\bigcup \mathcal{A}$ does not have the Baire property.
\end{theorem}
\section{The existence of non-measurable sets}
In the literature there are well-known constructions of non-measurable sets, such as Vitali's, Bernstein's, Sierpi\'nski's and Shelah's (for the last one see \cite{S}). However, the next theorem is an old result published by Lusin in 1912 (see Theorem 8.2, p. 37 in \cite{JO}).
\begin{theorem}[\cite{JO}]
A real-valued function on $R$ is measurable if and only if for every $\varepsilon >0$ there exists a set $E \subset R$ with $\mu(E)<\varepsilon$ such that
the restriction of $f$ to $R\setminus E$ is continuous, where $\mu$ denotes the Lebesgue measure.
\end{theorem}
\noindent
Below we give a combinatorial proof of the existence of one more non-measurable set (see \cite{FJ1}). While the next result is proved for measure, the original Lusin Theorem also has its counterpart for category (see Appendix in \cite{WS}, p. 172 - 173).
In the proof we will use the Fubini Theorem on product measures, which we assume to be well known.
\begin{proposition}[\cite{FJ1}]
There exists a set $X \subset [0,1]$ of positive outer Lebesgue measure which is non-measurable.
\end{proposition}
\begin{proof}
Suppose, towards a contradiction, that all subsets of $[0,1]$ are measurable. Let $\mathcal{F}$ be a partition of $[0,1]$ into sets of measure zero. Order $\mathcal{F} = \{F_\alpha \colon \alpha <\lambda\}$, where $\lambda$ is a cardinal not greater than $\mathfrak{c}$. We can assume that $\mu(\bigcup_{\beta < \alpha}F_\beta) = 0$ for all $\alpha < \lambda$ and $\mu(\bigcup \mathcal{F}) >0$, where $\mu$ denotes the Lebesgue measure.
Consider the following sequences of sets in $[0,1] \times [0,1]$: for $\alpha = 0$ take $F_0 \times F_0$ and for $0 < \alpha < \lambda$ take $F_\alpha \times (\bigcup_{\beta < \alpha} F_\beta)$.
Then all these sets defined above have Lebesgue measure zero.
For each $F \in \mathcal{F}$ consider open sets $G^{F}_{n}$, $n \in \omega$, such that $F \subset \bigcap_{n \in \omega} G^{F}_{n}$, $\mu (\bigcap_{n\in \omega}G^{F}_{n}) = 0$ and $\mu (G^{F}_{n}) < \frac{1}{n}$.
Let $\{U_m \colon m \in \omega\}$ be a base of $[0,1]$.
For $n, m \in \omega$ consider
$$A_{n, m} = \{\alpha < \lambda \colon U_m \subset G^{F_\alpha}_{n}\}.$$
Since $\mu(F_\alpha) = 0$ for all $\alpha < \lambda$ and $\mu (\bigcup_{\alpha < \lambda}F_\alpha) >0$, applying the Fubini Theorem to the sets defined above one gets that the set
$\bigcup \{F_\alpha \colon \alpha \in A_{n, m}\}$ is non-measurable for some $n, m \in \omega$.
\end{proof}
In \cite{FJ1} it is also shown that the existence of Kuratowski partitions is strongly connected with the structure of quotient algebras (see also \cite{GS}). (The result analogous to Theorem 7 can be formulated in category theory.) Let $LM$ denote the family of all Lebesgue measurable subsets of the unit interval and $\triangle$
the ideal of Lebesgue null sets.
\begin{theorem}[\cite{FJ1}]
Let $\kappa $ be a regular cardinal and $I_\kappa$ be a $\kappa$-complete ideal. Then $P(\kappa)/I_\kappa$ is not isomorphic to $LM/\triangle$.
\end{theorem}
Since it is provable that the existence of a Kuratowski partition of a Baire (complete) metric space is equiconsistent in ZFC with the existence of a measurable cardinal, the results below concern the existence of such a cardinal.
Let $(X, \Sigma, \mu)$ be a measure space. A measure $\mu$ is \textit{perfect} iff for every $\Sigma$-measurable function $f \colon X \to R$ and $E \subset R$ such that $f^{-1}(E) \in \Sigma$ there exists a Borel set $B \subset E$ such that $\mu(f^{-1}(E)) = \mu (f^{-1}(B))$.
In \cite{FGPR} the authors proved the following result.
\begin{theorem}[\cite{FGPR}]
Let $(X, \Sigma, \mu)$ be a space with a perfect measure.
Let $\mathcal{F}$ be a point finite cover consisting of sets of measure zero and let $|\mathcal{F}|$ be smaller than the first measurable cardinal. Then there exists $\mathcal{A} \subset \mathcal{F}$ such that $\bigcup \mathcal{A}$ is not measurable and has inner measure zero.
\end{theorem}
In \cite{FK} the authors, using metamathematical methods, proved the following result.
\begin{theorem} [\cite{FK}]
The following theories are equiconsistent:
\\ (1) ZFC $+ \exists$ measurable cardinal;
\\ (2) ZFC $+$ there is a complete metric space $X$, a metric space $Y$, and a function $f \colon X \to Y$ having the Baire property such that there is no meager set $F \subseteq X$ for which the restriction of $f$ to $X \setminus F$ is continuous;
\\ (3) ZFC $+$ there is a Baire metric space $X$, a metric space $Y$, and a function $f \colon X \to Y$ having the Baire property such that there is no meager set $F \subseteq X$ for which the restriction of $f$ to $X \setminus F$ is continuous.
\end{theorem}
\section{Kuratowski partitions in the Ellentuck structure}
The space $[\omega]^\omega$ of infinite subsets of $\omega$, equipped with the topology generated by the base consisting of the sets of the form
$[a, A] = \{B \in [A]^\omega \colon a \subset B \subseteq a \cup A\}$, where $a \in [\omega]^{<\omega}, A \in [\omega]^\omega$ is called \textit{Ellentuck space}. We call sets of the form $[a, A]$ by \textit{Ellentuck sets} (or shortly EL-sets).
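For instance, $[\emptyset,\omega]=[\omega]^\omega$ is the whole space, and for $a=\{0,1\}$ and $A=\{2n\colon n\geq 1\}$ the set $[a,A]$ consists of all infinite sets $B$ with $\{0,1\}\subset B\subseteq\{0,1\}\cup A$. Note that the Ellentuck topology refines the usual (metrizable product) topology on $[\omega]^\omega$.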
In the paper \cite{FS} the following result is proved.
\begin{theorem} [\cite{FS}]
No non-meager subspace of Ellentuck space admits a Kuratowski partition.
\end{theorem}
However, the proof in that paper contains a gap, although the theorem remains true.
In \cite{FJ} the result was corrected.
A set $Q \subset [\omega]^\omega$ is \textit{Ramsey null} (or, for short, RN) if for every $[a, A]$ there exists $[a', A']\subseteq [a, A]$ such that $[a', A'] \cap Q = \emptyset$.
Ramsey null sets are exactly the meager sets in the Ellentuck topology.
\\An Ellentuck set is \textit{large} if it is not RN.
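For instance (a standard observation, recalled here only for illustration), every singleton $\{B\}$ with $B \in [\omega]^\omega$ is RN: given $[a, A]$, if there is $k \in A \setminus B$ with $k > \max a$, then $[a \cup \{k\}, \{m \in A \colon m > k\}]$ omits $B$; otherwise $A \setminus B$ is bounded, and it suffices to remove from $A$ an element of $B$ above $\max a$. Since RN sets form a $\sigma$-ideal, every large EL-set is uncountable.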
\\The next theorem is proved in \cite{FJ}.
\begin{theorem} [\cite{FJ}]
Let $[a, A]$ be a large EL-set. Let $\mathcal{F}$ be a partition of $[a, A]$ into pairwise disjoint RN-sets such that $\bigcup \mathcal{F}$ is large. Then $\mathcal{F}$ is not a Kuratowski partition.
\end{theorem}
\begin{proof}
Let $[a, A]$ be a large EL-set.
Let $\mathcal{F}$ be a partition of $[a, A]$ into RN-sets. Suppose that $\mathcal{F}$ is a Kuratowski partition.
We will show that $\mathcal{F}_P = \{F \cap P \colon F \in \mathcal{F}\}$, where $P \subset [\omega]^\omega$ is a perfect set, has cardinality continuum.
We construct inductively a family $\{[a_f, A_f] \colon f \in {^\omega}\{0,1\} \}$ with the following properties: for each $n < \omega$ and $h \in {^n}\{0, 1\}$ there exist $a_{h^\smallfrown \varepsilon} \supseteq a_h$ and $A_{h^\smallfrown \varepsilon} \subseteq A_h$ such that $a_{h^\smallfrown \varepsilon}\setminus a_{h} \subseteq A$ and $\max a_h < \min A_{h^\smallfrown \varepsilon}$ for any $\varepsilon \in \{0, 1\}$.
Adopt the following notation: $[a_h, A_h]_{P_h} = [a_h, A_h]\cap P_h$ for $h \in {^n}\{0, 1\}$.
For a given $[a_h, A_h]$ enumerate all subsets of $(a_h\setminus a)\cap A$ as $s_0,\ldots, s_k$.
We will construct simultaneously large EL-sets
$[a_{h^\smallfrown 0}, A_{h^\smallfrown 0}]_{P_{h^ \smallfrown 0}}$ and
$[a_{h^\smallfrown 1}, A_{h^\smallfrown 1}]_{P_{h^ \smallfrown 1}}.$
\\
First we will construct the sequences
$$B^{h}_{0} \supseteq B^{h}_{1}\supseteq .. \supseteq B^{h}_{k}
\textrm{ and }
C^{h}_{0} \supseteq C^{h}_{1}\supseteq .. \supseteq C^{h}_{k}$$ as follows: let $B^{h}_{0} = C^{h}_{0} = A_{h}.$
For given sets $B^{h}_{i}, C^{h}_{j}$ if there exist $B\subseteq B^{h}_{i}$ and $C\subseteq~C^{h}_{j}$ such that there are perfect sets
$P_{h^\smallfrown 0}, P_{h^\smallfrown 1} \in [\omega]^\omega$ with
$P_{h^\smallfrown 0} \subseteq [(a_h \cup s_i), B]$, $P_{h^\smallfrown 1} \subseteq [(a_h \cup s_j), C]$ and $P_{h^\smallfrown 0} \cap P_{h^\smallfrown 1} = \emptyset$, we put $B^{h}_{i+1} := B$ and $C^{h}_{j+1} := C$. If not, then $B^{h}_{i+1} = B^{h}_{i}$ and $C^{h}_{j+1} = C^{h}_{j}$.
Let $B_{h^\smallfrown 0} = B^{h}_{k} \setminus \{\min B^{h}_{k}\}$
and
$C_{h^\smallfrown 1} = C^{h}_{k} \setminus \{\min C^{h}_{k}\}$.
Now let
$$a_{h^\smallfrown 0} = (a_h \cup s_i) \cap P_{h^\smallfrown 0}
\textrm{ and }
a_{h^\smallfrown 1} = (a_h \cup s_j) \cap P_{h^\smallfrown 1}$$ and
$A_{h^\smallfrown 0} = B_{h^\smallfrown 0} \cap P_{h^\smallfrown 0}$ and
$A_{h^\smallfrown 1} = C_{h^\smallfrown 1} \cap P_{h^\smallfrown 1}$. Thus the sets $[a_{h^\smallfrown \varepsilon}, A_{h^\smallfrown \varepsilon}]$, $ \varepsilon \in \{0, 1\}$, have been defined.
For each $f \in {^\omega}\{0, 1\}$ consider $\bigcap^{\infty}_{n = 0} [a_{f|n}, A_{f|n}]$. By the Fusion Lemma (see e.g. \cite{TJ}), the set $\bigcup_{f \in {^\omega}\{0, 1\}} \bigcap^{\infty}_{n = 0} [a_{f|n}, A_{f|n}]$ is a perfect set. Denote it by $P$. By the construction, $[a, A]_P$ has cardinality continuum because $[a, A]$ is large.
Let $\mathcal{Q} = \{P_\alpha \in [\omega]^\omega \colon P_\alpha \cap [a, A] \not = \emptyset, \alpha < 2^\omega\}$ be a family of perfect sets which has non-empty intersection with $[a, A]$.
For each $P_\alpha \in \mathcal{Q}$ we choose two distinct sets $D^{0}_{\alpha}, D^{1}_{\alpha} \in [a, A]$ such that
$$D^{0}_{\alpha}, D^{1}_{\alpha} \in \{D \in [a, A] \colon D \cap P_\alpha \not = \emptyset\} \setminus (\{D^{0}_{\beta} \colon \beta < \alpha\} \cup \{D^{1}_{\beta} \colon \beta < \alpha\}).$$
For any $\varepsilon \in \{0, 1\}$ consider $\{D^{\varepsilon}_{\alpha} \colon \alpha < 2^\omega\}$. There exist $[d^\varepsilon, D^\varepsilon]$ such that $[d^\varepsilon, D^\varepsilon] = \{D^{\varepsilon}_{\alpha} \colon \alpha < 2^\omega\}$. These two EL-sets are disjoint and their union does not have the Baire property. Indeed, if $[d^\varepsilon, D^\varepsilon]$ were a meager set, then there would exist $P_\alpha \in \mathcal{Q}$ such that $ [a, A]_{P_\alpha} \subset [\omega]^\omega \setminus [d^\varepsilon, D^\varepsilon]$, $\varepsilon \in \{0, 1\}$. But by the choice of $D^{0}_{\alpha}, D^{1}_{\alpha}$ we have $[d^\varepsilon, D^\varepsilon]_{ P_\alpha} \not = \emptyset$. A contradiction.
If $[d^\varepsilon, D^\varepsilon]$, $\varepsilon \in \{0, 1\}$, is not meager, then there exists $P_\alpha \in \mathcal{Q}$ such that $P_\alpha \subset [d^\varepsilon, D^\varepsilon]$; but by the construction,
$[d^{\varepsilon'}, D^{\varepsilon '}]_{P_\alpha} \not = \emptyset$ for $\varepsilon ' \not = \varepsilon $, a contradiction with $[d^0, D^0]$ and $[d^1, D^1]$ being disjoint.
\end{proof}
\section{Kuratowski partitions and K-ideals}
Let $\kappa$ be a regular cardinal.
With any Kuratowski partition
$$\mathcal{F} = \{F_\alpha \colon \alpha < \kappa\}$$
we may associate an ideal
$$I_\mathcal{F} = \{A \subset \kappa \colon \bigcup_{\alpha \in A} F_\alpha \textrm{ is meager }\}$$
which we call \textit{a $K$-ideal}.
Let $S$ be a set of positive measure. An \textit{$I$-partition} of $S$ is a maximal family $W$ of subsets of $S$ of positive measure such that $A \cap B \in I$ for any distinct $A, B \in W$.
An $I$-partition $W_1$ of $S$ is a \textit{refinement} of an $I$-partition $W_2$ of $S$, $W_1 \leq W_2$, if every $A \in W_1$ is a subset of some $B\in W_2$.
Let $\kappa$ be a regular cardinal and let $I$ be a $\kappa$-complete ideal on $\kappa$ containing singletons.
The ideal $I$ is \textit{precipitous} if whenever $S$ is a set of positive measure and $\{W_n \colon n < \omega\}$ are $I$-partitions of $S$ such that
$$W_0 \geq W_1\geq ... \geq W_n \geq ...$$
then there exists a sequence of sets
$$A_0 \supseteq A_1\supseteq ... \supseteq A_n \supseteq ...$$
such that $A_n \in W_n$ for each $n$, and $\bigcap_{n=0}^{\infty} A_n$ is nonempty (see also \cite{TJ}, pp. 438--439).
Let $I^+ = \mathcal{P}(\kappa) \setminus I$.
Consider a set
$$X(I) = \{x \in {}^{\omega}(I^+) \colon \bigcap\{x(n) \colon n \in \omega\} \not = \emptyset \textrm{ and } \forall_{n\in \omega} \bigcap\{x(m) \colon m < n\} \in I^+\}.$$
As was pointed out in \cite{FK} the set $X(I)$ is considered as a subset of a complete metric space $(I^+)^\omega$, where $I^+$ is equipped with the discrete topology.
In \cite{FK} the following facts were proved (see \cite{FK}, Proposition 3.1. and Theorem 3.2.).
\begin{proposition}[\cite{FK}]
$X(I)$ is a Baire space iff $I$ is a precipitous ideal.
\end{proposition}
\begin{theorem}[\cite{FK}]
Let $I$ be a precipitous ideal on some regular cardinal. Then there is a Kuratowski partition of the metric Baire space $X(I)$.
\end{theorem}
The following proposition is proved in \cite{JW}.
\begin{proposition} [\cite{JW}]
Let $2^\omega = 2^{\omega_1}$.
Then there exists a metric Baire space having a Kuratowski partition for which a completion does not have a Kuratowski partition.
\end{proposition}
\noindent
Moreover, in \cite{JW} it is shown that a $K$-ideal need not be precipitous. Before that we present two related results. (See \cite{RE}, p. 103, for the definition of a direct sum of spaces.)
\begin{proposition}[\cite{JW}]
Let $\kappa$ be a regular cardinal.
Let $X$ be a space having a Kuratowski partition of cardinality $\kappa$ and let $\Pi$ be a family of all permutations of $\kappa$. Then the direct sum $\oplus_{\pi \in \Pi} X_\pi$ has a Kuratowski partition.
\end{proposition}
\noindent
With the Kuratowski partition $\mathcal{F}^*$ of the direct sum $\oplus_{\pi \in \Pi} X_\pi$ of copies of $X$ considered in the proof of Proposition 18 we may associate a $K$-ideal $I_{\mathcal{F}^*}$. In Theorem 19 we show that such an ideal is a Fr\'echet ideal (i.e. $I_{\mathcal{F}^*} = [\kappa]^{<\kappa}$).
\begin{theorem}[\cite{JW}]
Let $\kappa$ be a regular cardinal.
Let $X$ be a space having a Kuratowski partition $\mathcal{F}$ of cardinality $\kappa$. Then the $K$-ideal $I_{\mathcal{F}^*}$ is a Fr\'echet ideal.
\end{theorem}
\noindent
The following lemma is proved in \cite{TJ}, (Lemma 35.9 p. 440).
\begin{lemma} [\cite{TJ}]
Let $\kappa$ be a regular uncountable cardinal. The ideal $I = \{X \subseteq \kappa \colon |X| < \kappa\}$ is not precipitous.
\end{lemma}
\noindent
Using Lemma 20 we immediately obtain the next corollary.
\begin{corollary}[\cite{JW}]
Let $X$ be a space having a Kuratowski partition. Then the associated $K$-ideal $I_{\mathcal{F}^*}$ cannot be precipitous.
\end{corollary}
Let $B(2^\kappa)$ be a Baire space, where $\kappa$ is a cardinal. In \cite{FK} the authors proved the following theorem.
\begin{theorem}[\cite{FK}]
Assume that $J$ is an $\omega_1$-complete ultrafilter on $\kappa$. Then $B(2^\kappa)$ has a Kuratowski partition of cardinality $\kappa$.
\end{theorem}
\noindent
By Theorems 3.3 and 3.4 in \cite{FK}, the existence of a Kuratowski partition of an arbitrary space is equiconsistent with the existence of a measurable cardinal.
The next theorem shows that reducing an ideal down to the Fr\'echet ideal can be achieved by enlarging the space to a direct sum. On the other hand, enlarging an ideal depends on a localization property, that is, on reducing the space; obtaining a measurable cardinal is a result of such localization (see \cite{FK}).
\begin{theorem}[\cite{JW}]
Let $\kappa$ be a measurable cardinal. Then each $\kappa$-complete ideal $I$ on $\kappa$ can be represented by some $K$-ideal, (i.e. for each $\kappa$-complete ideal $I$ on $\kappa$ there exists a space having a Kuratowski partition $\mathcal{F}$ of cardinality $\kappa$ such that $I$ is of the form $I_\mathcal{F}$).
\end{theorem}
\noindent
If $\kappa$ is a non-measurable cardinal but there exists a Kuratowski partition of cardinality $\kappa$ of a space $X$, then one can obtain every $\kappa$-complete ideal $K$ such that $J \subseteq K \subseteq I$, where $J$ is the Fr\'echet ideal and $I$ is the ideal from the previous theorem.
Thus, the existence of a Kuratowski partition leads to the statement that there exists a precipitous ideal (see \cite{FJ}).
\\
In the light of the previous results, the following theorem is shown in \cite{FJ2}.
\begin{theorem}[\cite{FJ2}]
Let $X$ be a metric Baire space with a Kuratowski partition $\mathcal{F}$. Then there exists an open set $U \subset X$ such that a $K$-ideal $I_{\mathcal{F}\cap U}$ is precipitous.
\end{theorem}
\begin{proof}
Let $\kappa = \min \{|\mathcal{F}| \colon \mathcal{F} \textrm{ is a Kuratowski partition of } X\}.$
Let $\mathcal{F} = \{F_\alpha \colon \alpha < \kappa\}$ be a fixed Kuratowski partition of $X$ and let $I_{\mathcal{F}}$ be a $K$-ideal for $\mathcal{F}$. Consider
$FN(\kappa) = \{f \in {^X}\kappa \colon \exists_{\mathcal{U}_f} \textrm{ family of open disjoint sets which is } \textrm{dense in } X \textrm{ and } \\ \forall_{\alpha \in \kappa} \forall_{U \in \mathcal{U}_f} f \textrm{ is constant on } F_\alpha\cap U\}$, (compare \cite[proof of Theorem 3.3]{FK}).
For each $f \in FN(\kappa)$ and each $U \in \mathcal{U}_f$ consider a $K-$ideal $$I_{\mathcal{F}\cap U} = \{A \subset \kappa \colon \bigcup_{\alpha \in A} (F_\alpha \cap U) \textrm{ is meager, } F_\alpha \in \mathcal{F}\}.$$ We claim that there exists $f \in FN(\kappa)$ and $U \in \mathcal{U}_f$ such that $I_{\mathcal{F}\cap U}$ is precipitous.
Suppose not. Then, by Lemma 2.1, for all $f \in FN(\kappa)$ and all $U \in \mathcal{U}_f$ there are sequences of functionals
$F^{f}_{0}(U) > F^{f}_{1}(U)> \ldots $ on $U$ such that $F^{f}_{i}(U) \subseteq FN(\kappa)$, $i = 0, 1, \ldots$.
Fix $f \in FN(\kappa)$ and $U\in \mathcal{U}_f$.
Let $W^{f}_{i}(U)$ be $I_{\mathcal{F}\cap U}$-partitions for $F^{f}_{i}(U)$, $i = 0, 1, ...\ $.
Then $W^{f}_{0}(U) \geq W^{f}_{1}(U)\geq ...\ $.
Let $X^{f}_{i}(U) \in W^{f}_{i}(U)$ such that
$X^{f}_{0}(U) \supseteq X^{f}_{1}(U)\supseteq ...\ $.
Since $I_{\mathcal{F}\cap U}$ is not precipitous, $\bigcap_{i=0}^{\infty} X^{f}_{i}(U) = \emptyset$.
Now for each $i = 0, 1, ...$ consider $F^{f}_{i}: =\bigcup_{U \in \mathcal{U}_f} F^{f}_{i}(U).$
Hence $F^{f}_{i} \subseteq FN(\kappa)$ and $F^{f}_{0} > F^{f}_{1} >...$
is the sequence of functionals on $S$.
Let $W^{f}_{i} = \bigcup_{U \in \mathcal{U}_f} W^{f}_{i}(U)$. Then $W^{f}_{i}$ are $I_{\mathcal{F}}$-partitions for $F^{f}_{i}$, $i = 0, 1, ...$ and $W^{f}_{0} \geq W^{f}_{1} \geq ...\ $.
Let $X^{f}_{i} = \bigcup_{U \in \mathcal{U}_f}X^{f}_{i}(U)$ be elements of $W^{f}_{i}$, $i = 0, 1, \ldots$ with $X^{f}_{0} \supseteq X^{f}_{1} \supseteq \ldots\ $. Let $h^{f}_{i} \in F^{f}_{i}$ be functions with domains $X^{f}_{i}$, $i = 0, 1, \ldots\ $. By the Baire Category Theorem there exists $x \in \bigcap_{i=0}^{\infty} X^f_i \subseteq S$ such that $h^{f}_{0}(x) > h^{f}_{1}(x) >\ldots\ $. A contradiction with the well-foundedness of the sequence.
\end{proof}
In the light of these considerations, the newest result on this topic, given in \cite{FJ3}, is the following theorem.
\begin{theorem}[\cite{FJ3}]
If there is a metric Baire space which admits a Kuratowski partition then there is a measurable cardinal.
\end{theorem}
\section{Introduction}
The comprehensive formulation for loop quantum cosmology (LQC) in the spatially flat-isotropic model has been constructed \cite{Ashtekar:2006uz,Ashtekar:2006rx}. With a massless scalar field serving as the \emph{emergent time}, the result shows that the quantum evolution is \emph{deterministic across the deep Planck regime} and in the backward evolution of the states which are semiclassical at late times, \emph{the big bang is replaced by a big bounce}. Based on the same principles, the construction was further improved by a more direct implementation of the underlying physical ideas of loop quantum gravity (LQG) \cite{Ashtekar:2006wn}. In the improved dynamics, \emph{the big bounce occurs precisely when the matter density enters the Planck regime}, regardless of the value of the momentum $p_\phi$ of the scalar field.
Both the precursor strategy (``$\mu_o$-scheme'') and the improved strategy (``$\mubar$-scheme'') were applied and reconstructed for the Bianchi I model to include anisotropy \cite{Chiou:2006qq}. The analytical investigation shows that the state in the kinematical Hilbert space associated with the classical singularity is \emph{completely decoupled} in the difference evolution equation, indicating that the classical singularity is resolved in the quantum evolution and the big bounce may take place when any of the area scales undergoes the vanishing behavior.
While a thorough numerical investigation remains to be done to draw definite conclusions about the details of the quantum evolution in the Bianchi I model, this paper studies its effective dynamics with LQC discreteness corrections. Not only does the result affirm the anticipations in \cite{Chiou:2006qq}, but more intuitive pictures are also obtained in this semiclassical approach, giving an insight into how and why the big bounces take place.
In accordance with the formulation in \cite{Chiou:2006qq}, this paper focuses specifically on the model with a massless scalar field. In the context of effective dynamics with LQC discreteness corrections, a similar analysis for more generic Bianchi I models, including arbitrary matter with equation of state $w<+1$, is investigated in \cite{Chiou:2007sp}, which gives similar results for the occurrence of big bounces, differing only in detail. With arbitrary matter sources, however, the equations of motion are very complicated and a proper approximation has to be used. By contrast, in the special case of a massless scalar field, the equations of motion can be solved analytically and therefore the underlying physics is more transparent.
This paper is organized as follows. In \secref{sec:classical dynamics}, the classical dynamics of the Bianchi I cosmology with a massless scalar source is solved in terms of Ashtekar variables in Hamiltonian formulation. The effective dynamics with LQC corrections in $\mubar$-scheme is constructed and solved in \secref{sec:mubar dynamics}. Its phenomenological ramifications are discussed in \secref{sec:discussion}. As a comparison to the $\mubar$-scheme, the effective dynamics in $\mu_o$-scheme is also included in \appref{sec:muzero dynamics}.
\section{Classical Dynamics}\label{sec:classical dynamics}
The spacetime metric of Bianchi type I is given as:
\begin{equation}
ds^2=-dt^2+a_1^2(t)dx^2+a_2^2(t)dy^2+a_3^2(t)dz^2.
\end{equation}
In terms of Ashtekar variables, the phase space of Bianchi I models is given by the diagonal triad
variables $p_I$ and diagonal connection variables $c_I$ for
$I=1,2,3$, which satisfy the canonical relation:
\begin{equation} \{c_I,p_J\}=8\pi G\gamma\,\delta_{IJ}.
\end{equation}
The triad variables $p_I$ are
related with the length scale factors $a_I$ via:
\begin{equation}\label{eqn:p and a}
p_1=a_2a_3,\qquad
p_2=a_1a_3,\qquad p_3=a_1a_2.
\end{equation}
In the presence of a massless scalar field $\phi(\vec{x},t)=\phi(t)$,
(which is independent of the spatial coordinates with homogeneity assumed),
the classical dynamics is governed
by the Hamiltonian constraint:
\begin{eqnarray}\label{eqn:cl Hamiltonian}
&&C=C_{\rm grav}+C_\phi\\
&=&-\frac{\big(c_2p_2c_3p_3+c_1p_1c_3p_3+c_1p_1c_2p_2\big)}{8\pi G\gamma^2\sqrt{{p_1p_2p_3}}}
+\frac{p_\phi^2}{2\sqrt{p_1p_2p_3}},\nonumber
\end{eqnarray}
where $p_\phi$ is the conjugate momentum of $\phi$ and has the canonical relation with $\phi$:
\begin{equation}
\{\phi,p_\phi\}=1.
\end{equation}
We can simplify the Hamiltonian by choosing the lapse function $N=\sqrt{p_1p_2p_3}$
and thus introducing the new time variable $dt'=(p_1p_2p_3)^{-1/2}dt$.
The rescaled Hamiltonian constraint is given by
\begin{equation}\label{eqn:cl rescaled Hamiltonian}
H=-\frac{\left(c_2p_2c_3p_3+c_1p_1c_3p_3+c_1p_1c_2p_2\right)}{8\pi G\gamma^2}
+\frac{p_\phi^2}{2}.\\
\end{equation}
The equations of motion are governed by the Hamilton's equations:
\begin{eqnarray}
\label{eqn:cl eom 1}
\frac{dp_\phi}{dt'}&=&\{p_\phi,H\}=0\quad\Rightarrow\quad
p_\phi\ \text{is constant}\\
\label{eqn:cl eom 2}
\frac{d\phi}{dt'}&=&\{\phi,H\}=p_\phi,\\
\label{eqn:cl eom 3}
\frac{dc_1}{dt'}&=&\{c_1,H\}=8\pi G\gamma\,\frac{\partial\, H}{\partial p_1}\nonumber\\
&=&-\gamma^{-1}c_1\left(c_2p_2+c_3p_3\right),\\
\label{eqn:cl eom 4}
\frac{dp_1}{dt'}&=&\{p_1,H\}=-8\pi G\gamma\,\frac{\partial\, H}{\partial c_1}\nonumber\\
&=&\gamma^{-1}p_1\left(c_2p_2+c_3p_3\right),
\end{eqnarray}
and so on for $c_2$, $c_3$, $p_2$, $p_3$ in the cyclic manner.
In addition to the Hamilton's equations, the constraint that the Hamiltonian must vanish yields
\begin{eqnarray}\label{eqn:cl eom 5}
&&H(c_I,p_I)=0\quad
\Rightarrow\\
p_\phi^2&=&\frac{1}{4\pi G\gamma^2}
\big(c_2p_2c_3p_3+c_1p_1c_3p_3+c_1p_1c_2p_2\big).\nonumber
\end{eqnarray}
Combining \eqnref{eqn:cl eom 3} and \eqnref{eqn:cl eom 4} gives
\begin{eqnarray}\label{eqn:const Ki}
\frac{d}{dt'}(p_Ic_I)=0,\quad\Rightarrow\quad
p_Ic_I=8\pi G\gamma\hbar\,{\cal K}_I,
\end{eqnarray}
where ${\cal K}_I$ are dimensionless constants, which will be used to
parameterize the solutions of evolution.
Taking \eqnref{eqn:const Ki} into \eqnref{eqn:cl eom 5}, we have
\begin{equation}\label{eqn:p_ph and K}
p_\phi^2=16\pi G\hbar^2
\left\{{\cal K}_2{\cal K}_3+{\cal K}_1{\cal K}_3+{\cal K}_1{\cal K}_2\right\}
\end{equation}
or equivalently
\begin{equation}\label{eqn:K}
{\cal K}_\phi^2=2
\left({\cal K}_2{\cal K}_3+{\cal K}_1{\cal K}_3+{\cal K}_1{\cal K}_2\right),
\end{equation}
if we define
\begin{equation}\label{eqn:def of p_ph}
p_\phi:=\hbar\sqrt{8\pi G}\,{\cal K}_\phi.
\end{equation}
Putting \eqnref{eqn:const Ki} into \eqnref{eqn:cl eom 4} gives
\begin{equation}
\frac{1}{p_1}\frac{dp_1}{dt'}={8\pi G \hbar}\left({\cal K}_2+{\cal K}_3\right).
\end{equation}
By referring to \eqnref{eqn:cl eom 2}, this leads to
\begin{equation}\label{eqn:cl diff eq 2}
\frac{1}{p_I}\frac{dp_I}{d\phi}=8\pi G\hbar\,\frac{{\cal K}_2+{\cal K}_3}{p_\phi}
=\sqrt{8\pi G}\,\Big(\frac{1-\kappa_I}{\kappa_\phi}\Big),
\end{equation}
where we scale the parameters ${\cal K}_I={\cal K}\kappa_I$, ${\cal K}_\phi={\cal K}\kappa_\phi$ such that
\begin{equation}\label{eqn:para constraint}
\kappa_1+\kappa_2+\kappa_3=1,
\qquad
\kappa_1^2+\kappa_2^2+\kappa_3^2+\kappa_\phi^2=1.
\end{equation}
Regarding $\phi$ as the \emph{emergent time}, the solutions of evolution are given by
\begin{equation}\label{eqn:cl sol 1}
p_I(\phi)=p_I(\phi_0)\,e^{\sqrt{8\pi G}\big(\frac{1-\kappa_I}{\kappa_\phi}\big)(\phi-\phi_0)},
\end{equation}
or equivalently
\begin{equation}\label{eqn:cl sol 2}
a_I(\phi)=a_I(\phi_0)\,e^{\sqrt{8\pi G}\,\frac{\kappa_I}{\kappa_\phi}(\phi-\phi_0)}.
\end{equation}
The classical Bianchi I model with a massless scalar field admits both ``Kasner-like''
(two of $\kappa_I$ positive and the other negative) and ``Kasner-unlike''
(all $\kappa_I$ positive) solutions.
The Kasner-like solution, which has two expanding directions and one contracting direction (taking $\kappa_\phi>0$),
eventually encounters the ``Kasner-like singularity''
(a given regular cubical cell stretches as an infinitely long line) in the far past
and the ``planar collapse'' (a regular cubical cell stretches as an infinitely large plane)
in the far future. On the other hand, the Kasner-unlike solution, with all directions
expanding, encounters the ``Kasner-unlike singularity''
(a regular cubical cell vanishes to a point) in the far past and no
planar collapse.
We will see that with LQC discreteness corrections, both Kasner-like and Kasner-unlike singularities are resolved and replaced by \emph{big bounces},
whereas the planar collapse remains the destiny even though one of the three diagonal directions
approaches an infinitely small length scale.
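As a quick sanity check on this parameterization, the constraints \eqnref{eqn:para constraint} are easy to verify for the exponent sets used in the figures of \secref{sec:results}. The following minimal Python snippet is purely illustrative (it is not part of the numerics of this paper); it merely checks the constraints and classifies each solution:
\begin{verbatim}
import numpy as np

# Exponent sets used in Figs. 1-4; kappa_phi is fixed by the quadratic constraint.
for kappa in ([1/3, 1/3, 1/3],      # Fig. 1 (Kasner-unlike)
              [1/3, 1/5, 7/15],     # Fig. 2 (Kasner-unlike)
              [1/2, 3/4, -1/4]):    # Figs. 3 and 4 (Kasner-like)
    kappa = np.array(kappa)
    k_phi = np.sqrt(1.0 - np.sum(kappa**2))   # quadratic constraint
    assert np.isclose(np.sum(kappa), 1.0)     # linear constraint
    kind = "Kasner-unlike" if np.all(kappa > 0) else "Kasner-like"
    print(kappa, k_phi, kind)
\end{verbatim}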
\section{Effective Dynamics in $\mubar$-Scheme}\label{sec:mubar dynamics}
In LQC, the connection variables $c_I$ do not exist and should be replaced by
holonomies. In the effective theory, to capture the quantum corrections, following the procedures used in the isotropic case \cite{Taveras:IGPG preprint,Singh:2005xg}, we take the
prescription to replace $c_I$ by $\sin(\mubar_Ic_I)/\mubar_I$, introducing discreteness
variables $\mubar_I$. In the improved strategy ($\mubar$-scheme) used in
Bianchi I LQC \cite{Chiou:2006qq}, $\mubar_I$ are not fixed constants but given by
\begin{equation}
\mubar_1=\sqrt{\frac{\Delta}{p_1}}\,,\quad
\mubar_2=\sqrt{\frac{\Delta}{p_2}}\,,\quad
\mubar_3=\sqrt{\frac{\Delta}{p_3}}\,,
\end{equation}
where $\Delta=\frac{\sqrt{3}}{2}(4\pi\gamma\Pl^2)$ is the \emph{area gap} in the full theory of LQG.
Imposing this prescription plus the loop quantum correction to the inverse triad on \eqnref{eqn:cl Hamiltonian}, we have the effective Hamiltonian constraint to the leading order:
\begin{eqnarray}\label{eqn:qm Hamiltonian original}
C_{\rm eff}&=&f(p_1)f(p_2)f(p_3)\frac{p_\phi^2}{2}
-\frac{f(p_1)f(p_2)f(p_3)}{8\pi G \gamma^2}\\
&&\quad\times
\left\{
\frac{\sin(\mubar_2c_2)\sin(\mubar_3c_3)}{\mubar_2\mubar_3}p_2p_3+
\text{cyclic terms}
\right\},\nonumber
\end{eqnarray}
where $f(p_I)$ is the eigenvalue of the inverse triad operator $\widehat{1/\sqrt{p_I}}$. The loop quantization gives the quantum corrections:
\begin{equation}
f(p_I)\sim
\left\{
\begin{array}{cr}
\frac{1}{\sqrt{{p_I}}}\left(1+{\cal O}(\Pl^2/p_I)\right) & \text{for}\ p_I\gg\Pl^2\\
\propto{p_I}^n/\Pl^{2n+1} & \text{for}\ p_I\ll\Pl^2
\end{array}
\right.
\end{equation}
with the Planck length $\Pl:=\sqrt{G\hbar}$ and a positive $n$. The corrections to $f(p_I)$ are significant only in the Planckian region in the vicinity of $p_I=0$. From now on, we will ignore the quantum corrections to $f(p_I)$ by simply taking its classical form $f(p_I)=p_I^{-1/2}$. [We will see that in the backward evolution the big bounce takes place well before the discreteness correction to the inverse triad operator becomes considerable, and it is the ``non-locality'' effect (i.e., the use of holonomies) that accounts for the occurrence of the big bounce.]
With $f(p_I)=p_I^{-1/2}$, by choosing $dt'=(p_1p_2p_3)^{-1/2}dt$, the Hamiltonian constraint \eqnref{eqn:qm Hamiltonian original} can be rescaled as
\begin{eqnarray}\label{eqn:qm Hamiltonian}
&&H_\mubar=\frac{p_\phi^2}{2}\\
&&\quad
-\frac{1}{8\pi G \gamma^2}
\left\{
\frac{\sin(\mubar_2c_2)\sin(\mubar_3c_3)}{\mubar_2\mubar_3}p_2p_3+
\text{cyclic terms}
\right\}.\nonumber
\end{eqnarray}
Again, the equations of motion are given by the Hamilton's equations and
the constraint that the Hamiltonian must vanish:
\begin{eqnarray}
\label{eqn:qm eom 1}
\frac{dp_\phi}{dt'}&=&\{p_\phi,H_\mubar\}=0\quad\Rightarrow\quad
p_\phi\ \text{is constant}\\
\label{eqn:qm eom 2}
\frac{d\phi}{dt'}&=&\{\phi,H_\mubar\}=p_\phi,\\
\label{eqn:qm eom 3}
\frac{dc_1}{dt'}&=&\{c_1,H_\mubar\}=8\pi G\gamma\,\frac{\partial\, H_\mubar}{\partial p_1}\nonumber\\
&=&-\gamma^{-1}
\left(\frac{3\sin(\mubar_1c_1)}{2\mubar_1}-\frac{c_1\cos(\mubar_1c_1)}{2}\right)\nonumber\\
&&\quad\ \times
\left(\frac{\sin(\mubar_2c_2)}{\mubar_2}p_2
+\frac{\sin(\mubar_3c_3)}{\mubar_3}p_3\right),\\
\label{eqn:qm eom 4}
\frac{dp_1}{dt'}&=&\{p_1,H_\mubar\}=-8\pi G\gamma\,\frac{\partial\, H_\mubar}{\partial c_1}\nonumber\\
&=&\gamma^{-1}p_1\cos(\mubar_1c_1)\nonumber\\
&&\quad\ \times
\left(\frac{\sin(\mubar_2c_2)}{\mubar_2}p_2
+\frac{\sin(\mubar_3c_3)}{\mubar_3}p_3\right),
\end{eqnarray}
and
\begin{eqnarray}\label{eqn:qm eom 5}
&&H_\mubar(c_I,p_I)=0\quad
\Rightarrow\qquad p_\phi^2=\\
&&\frac{1}{4\pi G\gamma^2}
\left\{
\frac{\sin(\mubar_2c_2)\sin(\mubar_3c_3)}{\mubar_2\mubar_3}p_2p_3+
\text{cyclic terms}
\right\}.\nonumber
\end{eqnarray}
[Note that in the classical limit $\mubar_Ic_I\rightarrow0$, we have
$\sin(\mubar_Ic_I)/\mubar_I\rightarrow c_I$,
$\cos(\mubar_Ic_I)\rightarrow1$ and therefore
\eqnref{eqn:qm eom 3}--\eqnref{eqn:qm eom 5} reduce to their
classical counterparts \eqnref{eqn:cl eom 3}--\eqnref{eqn:cl eom 5}.]
By \eqnref{eqn:qm eom 3} and \eqnref{eqn:qm eom 4}, we have
\begin{eqnarray}\label{eqn:qm dpc/dt'}
&&\left(\frac{3\sin(\mubar_Ic_I)}{2\mubar_I}-\frac{c_I\cos(\mubar_Ic_I)}{2}\right)\frac{dp_I}{dt'}
+p_I\cos(\mubar_Ic_I)\frac{dc_I}{dt'}\nonumber\\
&&=\frac{d}{dt'}\left[p_I\frac{\sin(\mubar_Ic_I)}{\mubar_I}\right]=0,
\end{eqnarray}
which gives
\begin{equation}\label{eqn:qm pc}
p_I\frac{\sin(\mubar_Ic_I)}{\mubar_I}
=8\pi G\gamma\hbar\,{\cal K}_I.
\end{equation}\\
Taking \eqnref{eqn:qm pc} into \eqnref{eqn:qm eom 5} again
gives the same constraints on the constant parameters as in
\eqnref{eqn:p_ph and K} or \eqnref{eqn:K}.
Substituting \eqnref{eqn:qm pc} into \eqnref{eqn:qm eom 4} yields
\begin{equation}\label{eqn:qm diff eq 1}
\frac{1}{p_1}\frac{dp_1}{dt'}
=8\pi G \hbar\,\cos(\mubar_1c_1)({\cal K}_2+{\cal K}_3).
\end{equation}
By regarding $\phi$ as the emergent time via \eqnref{eqn:qm eom 2} and expressing $\cos x=\pm\sqrt{1-\sin^2 x}$,
\eqnref{eqn:qm diff eq 1} then gives
\begin{equation}\label{eqn:qm diff eq 2}
\frac{1}{p_I}\frac{dp_I}{d\phi}=
\pm\sqrt{8\pi G}\,\Big(\frac{1-\kappa_I}{\kappa_\phi}\Big)
\left[1-\frac{\varrho_I}{\varrho_{I\!,\,{\rm crit}}}\right]^{1/2},
\end{equation}
where we define the \emph{directional density}:
\begin{equation}
\varrho_I:=\frac{p_\phi^2}{p_I^3}
\end{equation}
for the $I$-direction and its critical value is given by the \emph{Planckian matter density}
$\rho_{\rm Pl}$ times a numerical factor:
\begin{equation}
\varrho_{I\!,\,{\rm crit}}:=\left(\frac{\kappa_\phi}{\kappa_I}\right)^2\rho_{\rm Pl},
\qquad
\rho_{\rm Pl}:=(8\pi G \gamma^2\Delta)^{-1}.
\end{equation}
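Although not needed for what follows, it is worth noting that \eqnref{eqn:qm diff eq 2} can be integrated in closed form; this observation is an addition for clarity and can be checked by direct substitution. Rewriting the equation for $p_I^3$ and absorbing the sign ambiguity into an integration constant $\phi_{B,I}$ (the value of the emergent time at the $I$-th bounce), one finds
\begin{equation}
p_I(\phi)^3=\frac{p_\phi^2}{\varrho_{I\!,\,{\rm crit}}}\,
\cosh^2\!\left[\frac{3}{2}\sqrt{8\pi G}\,\Big(\frac{1-\kappa_I}{\kappa_\phi}\Big)(\phi-\phi_{B,I})\right],
\end{equation}
so that $\varrho_I(\phi)\leq\varrho_{I\!,\,{\rm crit}}$ with equality exactly at the bounce $\phi=\phi_{B,I}$, while the classical behavior \eqnref{eqn:cl sol 1} is recovered for $|\phi-\phi_{B,I}|\rightarrow\infty$. For instance, for the isotropic exponents of \figref{fig:fig1}, $\kappa_I=1/3$ and $\kappa_\phi=\sqrt{2/3}$, one gets $\varrho_{I\!,\,{\rm crit}}=(\kappa_\phi/\kappa_I)^2\rho_{\rm Pl}=6\,\rho_{\rm Pl}$ in all three directions.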
\begin{widetext}
\begin{figure}
\begin{picture}(400,150)(0,0)
\put(-60,0){
\begin{picture}(460,150)(0,0)
\put(10,133){(a)}
\put(175,133){(b)}
\put(345,133){(c)}
\resizebox{\textwidth}{!}
{\includegraphics{fig1.eps}}
\end{picture}
}
\end{picture}
\caption{$\kappa_1=\kappa_2=\kappa_3=1/3$, $\kappa_\phi=\sqrt{2/3}$; $p_1(\phi_0)=1\times10^4\Pl^2$,
$p_2(\phi_0)=2\times10^4\Pl^2$, $p_3(\phi_0)=3\times10^4\Pl^2$;
and $p_\phi=2\times10^3\hbar\sqrt{8\pi G}$ (i.e., ${\cal K}\kappa_\phi=2\times10^3$).
The
{red} lines are for $p_1$, $a_1$, $\varrho_1$;
{green} for $p_2$, $a_2$, $\varrho_2$;
and
{blue} for $p_3$, $a_3$, $\varrho_3$. The values of
$\varrho_{1,\,{\rm crit}}$, $\varrho_{2,\,{\rm crit}}$
and $\varrho_{3,\,{\rm crit}}$ are pointed by the arrow(s) in (c). (The Barbero-Immirzi parameter is set to $\gamma=1$.)}\label{fig:fig1}
\end{figure}
\begin{figure}
\begin{picture}(400,150)(0,0)
\put(-60,0){
\begin{picture}(460,150)(0,0)
\put(10,133){(a)}
\put(175,133){(b)}
\put(345,133){(c)}
\resizebox{\textwidth}{!}
{\includegraphics{fig2.eps}}
\end{picture}
}
\end{picture}
\caption{$\kappa_1=1/3$, $\kappa_2=1/5$, $\kappa_3=7/15$, $\kappa_\phi=\sqrt{142}/15$; $p_1(\phi_0)=1\times10^4\Pl^2$,
$p_2(\phi_0)=2\times10^4\Pl^2$, $p_3(\phi_0)=3\times10^4\Pl^2$;
and $p_\phi=2\times10^3\hbar\sqrt{8\pi G}$ (i.e., ${\cal K}\kappa_\phi=2\times10^3$).}\label{fig:fig2}
\end{figure}
\begin{figure}
\begin{picture}(400,150)(0,0)
\put(-60,0){
\begin{picture}(460,150)(0,0)
\put(10,133){(a)}
\put(175,133){(b)}
\put(345,133){(c)}
\resizebox{\textwidth}{!}
{\includegraphics{fig3.eps}}
\end{picture}
}
\end{picture}
\caption{$\kappa_1=1/2$, $\kappa_2=3/4$, $\kappa_3=-1/4$, $\kappa_\phi=1/\sqrt{8}$; $p_1(\phi_0)=p_2(\phi_0)=p_3(\phi_0)
=1\times10^4\Pl^2$;
and $p_\phi=2\times10^3\hbar\sqrt{8\pi G}$ (i.e., ${\cal K}\kappa_\phi=2\times10^3$).}\label{fig:fig3}
\end{figure}
\begin{figure}
\begin{picture}(400,150)(0,0)
\put(-60,0){
\begin{picture}(460,150)(0,0)
\put(10,133){(a)}
\put(175,133){(b)}
\put(345,133){(c)}
\resizebox{\textwidth}{!}
{\includegraphics{fig4.eps}}
\end{picture}
}
\end{picture}
\caption{$\kappa_1=1/2$, $\kappa_2=3/4$, $\kappa_3=-1/4$, $\kappa_\phi=1/\sqrt{8}$; $p_1(\phi_0)=3\times10^4\Pl^2$,
$p_2(\phi_0)=2\times10^4\Pl^2$, $p_3(\phi_0)=1\times10^4\Pl^2$;
and $p_\phi=2\times10^3\hbar\sqrt{8\pi G}$ (i.e., ${\cal K}\kappa_\phi=2\times10^3$).}\label{fig:fig4}
\end{figure}
\end{widetext}
\section{Discussion}\label{sec:discussion}
As opposed to the classical equation \eqnref{eqn:cl diff eq 2}, in which $p_I$ continues to decrease toward the classical singularity in the backward evolution, the effective equation in \eqnref{eqn:qm diff eq 2} flips sign exactly at the moment when $\varrho_I$ approaches its critical value $\varrho_{I\!,\,{\rm crit}}$.
Note that by \eqnref{eqn:p and a} $p_I$ can be regarded as the \emph{area} scale factors.
Therefore, with the LQC discreteness corrections, \eqnref{eqn:qm diff eq 2} shows that the singularities (both Kasner-like and Kasner-unlike) are resolved and replaced by the big bounces in the backward evolution when any of the area scales undergoes the vanishing behavior. Across the bounces, the equation of motion again comes closer and closer to the classical counterpart. Hence, the semiclassicality is retained on both asymptotic sides of the evolution.
Furthermore, the detailed evolutions of $p_I$ are decoupled in different diagonal directions and evolve independently of one another once the initial conditions ($p_I(\phi_0)$, $p_\phi$ and $\kappa_I$) are specified. Thus, the bounces occur up to three times, once in each direction, whenever each of the directional densities $\varrho_I$ approaches its critical value.
As expected, in $\mubar$-scheme, the critical values $\varrho_{I\!,\,{\rm crit}}$ are in the Planck regime of ${\cal O}(\hbar\Pl^{-4})$ and \emph{independent of the value of $p_\phi$} ($\varrho_{I\!,\,{\rm crit}}$ depend on $p_\phi$ only through the ratio $\kappa_\phi/\kappa_I\equiv{\cal K}_\phi/{\cal K}_I$).\footnote{In \appref{sec:muzero dynamics}, the old precursor strategy ($\mu_o$-scheme) is presented and it shows that the critical value of $\varrho_I$ can be made arbitrarily small by increasing $p_\phi$.}
Note that $\varrho_I$ have the same dimension as the matter density $\rho:=p_\phi^2/(2p_1p_2p_3)$ and $\varrho_I$ play the same role as $\rho$ does in the isotropic case, signaling the occurrence of big bounces.
On the other hand, the planar collapse is \emph{not} resolved but one of the length scale factors $a_I$ continues the vanishing behavior in the Kasner-like case. This is
expected since the classical solutions \eqnref{eqn:const Ki} and \eqnref{eqn:cl sol 1} yield $\mubar_Ic_I\rightarrow0$ (and $\muzero_Ic_I\rightarrow0$ in $\mu_o$-scheme) toward the planar collapse and therefore the quantum corrections become more and more negligible (in both schemes).
For given initial conditions, the differential equation \eqnref{eqn:qm diff eq 2} can be solved numerically. The behaviors of $p_I(\phi)$, $a_I(\phi)$ and $\varrho_I(\phi)$ are depicted in parts (a), (b) and (c) respectively in \figref{fig:fig1} and \figref{fig:fig2} for Kasner-unlike solutions and in \figref{fig:fig3} and \figref{fig:fig4} for Kasner-like solutions.
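For orientation, the qualitative shape of these curves can also be reproduced directly from the closed-form solution quoted below \eqnref{eqn:qm diff eq 2}. The sketch below makes the simplifying assumption that all bounce times are set to $\phi_{B,I}=0$; in general they differ from direction to direction and are fixed by the initial data:
\begin{verbatim}
import numpy as np

G, rho_Pl = 1.0, 1.0                 # units: Planckian density scale set to 1
p_phi = 2.0e3
kappas = [1/3, 1/5, 7/15]            # exponents of Fig. 2
k_phi = np.sqrt(1.0 - sum(k**2 for k in kappas))

phi = np.linspace(-1.5, 1.5, 601)
for kI in kappas:
    rho_crit = (k_phi / kI)**2 * rho_Pl
    A = np.sqrt(8.0*np.pi*G) * (1.0 - kI) / k_phi
    pI = (p_phi**2 / rho_crit)**(1/3) * np.cosh(1.5*A*phi)**(2/3)
    rhoI = p_phi**2 / pI**3          # bounded by rho_crit, attained at phi = 0
\end{verbatim}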
The fact that smallness of $p_I$ (not of $a_I$) is an indication of the occurrence of big bounces seems to support the suggestion that ``area is more fundamental than length in LQG'', although whether this is simply a technical artifact or reflects some deep physics is still not clear. (See Section VII.B of \cite{Rovelli:1997yv} for some comments on this aspect and \cite{Rovelli:1993vu} for more details.)
Meanwhile, as the length operator has been shown to have a
discrete spectrum \cite{Thiemann:1996at}, the fact that the
vanishing of the length scale factor in the planar collapse is not stopped seems to contradict the discreteness of the length spectrum. Whether we miss some important ingredients when imposing the fundamental discreteness of LQG in the LQC construction or indeed area is more essential than length remains an open question for further investigation.
It is also noteworthy that \eqnref{eqn:qm diff eq 2} remains invariant if we rescale $p_\phi\rightarrow l^3 p_\phi$ and $p_I\rightarrow l^2p_I$ at the same time. This is reminiscent of the idea suggested in \cite{Rovelli:1990ph,Rovelli:1992vv} that area is measurable only if the surface is \emph{coupled with the material reference}. The scaling invariance, however, breaks down in the full LQC theory since the quantum evolution is governed by a difference equation \cite{Chiou:2006qq}, in which the step size of the difference introduces an additional scale in the deep Planck regime.\footnote{Therefore, the semiclassicality is retained in the full quantum theory only for large $p_\phi$ and $p_I$. Accordingly, we put large values of $p_\phi$ and $p_I(\phi_0)$ in the figures to make sense of the semiclassical approach for the effective dynamics. The figures are trivially rescaled under the scaling.}
Meanwhile, related to the above observation, the directional densities $\varrho_I$ can be interpreted as the (inverse of) area scales, again, \emph{measured by the reference of the matter content}. The big bounces take place whenever one of the area scales becomes very small by the reference of the matter momentum. It is then tempting to regard not only $\phi$ as the ``internal clock'' (emergent time) but also $p_\phi$ as the ``internal rod'' --- namely, the measurement of both temporal and spatial geometries makes sense only in the presence of matter content. This observation may support the ideas of the relational interpretation of quantum mechanics with real rods and clocks such as studied in \cite{Gambini:2006ph} (see also \cite{Rovelli:1990ph,Rovelli:1992vv}), although the link is far from clear. If this concept is taken seriously, in return, we might be able to further improve the $\mubar$-scheme to better reflect the underlying physics of LQG such that the difference equation of evolution in the full LQC theory also respects the scaling invariance mentioned above.
\begin{acknowledgements}
The author would like to thank Abhay Ashtekar, Golam Hossain, Tomasz Pawlowski, Parampreet Singh for useful discussions and especially Kevin Vandersloot for sharing his private notes and important ideas. This work was supported in part by the NSF grant PHY-0456913.
\end{acknowledgements}
\section{Introduction}
The last decade has witnessed a growing interest in the physics of black holes in anti-de Sitter ($AdS$) backgrounds, motivated by the proposed correspondence relating $AdS$ spacetimes to conformal field theory defined on the boundary spacetime \cite{Maldacena:1997re,Witten:1998qj}, known as the $AdS/CFT$ correspondence. In this context asymptotically $AdS$ black hole solutions are related to thermal CFT. For example, the Schwarzschild-AdS$_5$ Hawking-Page phase transition \cite{Hawking:1982dh} is interpreted as a thermal phase transition from a confining to a deconfining phase in the dual $D = 4$, ${\cal N} = 4$ super Yang-Mills theory \cite{Witten:1998zw}, while the phase structure of Reissner-Nordstr\"om-AdS (RNAdS) black holes, which resembles that of a van der Waals-Maxwell liquid-gas system is related to the physics of a class of field theories coupled to a background current \cite{Chamblin:1999tk}.
The construction of new asymptotically (locally) $AdS$ objects is then of particular interest in this context, since it provides new backgrounds for the dual CFTs. Black holes in $d$ dimensions have a horizon topology $S_{d-2}$, matching the topology of the conformal boundary. The simplest different topology (both at the horizon and at infinity) is $S_{d-3}\times S_1$; it was first discussed in the $AdS/CFT$ context in \cite{Copsey:2006br} in $d=5$. Such objects are called black strings and have been constructed in arbitrary dimensions in \cite{rms}. Yet other types of objects have been proposed recently in \cite{adsbrane}, where the horizon and asymptotic topology is $S_p\times \mathbb M_n$, $\mathbb M_n$ denoting an $n$-dimensional Ricci-flat manifold.
In this paper, we generalise the black brane type solutions constructed in \cite{adsbrane} to the charged and rotating case. In section \ref{sec:model}, we present the model, the generic equations and boundary conditions, and quickly review some known solutions. In section \ref{sec:asympt}, we present the asymptotic solution for large values of the radial coordinate and close to the horizon. Section \ref{sec:physics} is devoted to the thermodynamical and geometrical properties of the charged and/or rotating black brane. In particular, we derive a Smarr law for the three cases (charged, rotating, charged-rotating).
We present our numerical results in section \ref{sec:results}. This section is divided into three parts: the charged case, the rotating case and the charged-rotating case. We discuss the charged \emph{or} rotating cases in more detail and give evidence that the charged \emph{and} rotating case admits solutions. We finally draw our conclusions in the last section.
\section{The generic model, equations and boundary conditions}
\label{sec:model}
We consider the Maxwell-Einstein-Hilbert action in $d$ dimensions supplemented by a cosmological constant term:
\begin{equation}
S = \frac{1}{16\pi G}\int_{\mathcal M} \sqrt{-g} \left( R - 2\Lambda - F^2\right) d^d x - \frac{1}{8\pi G}\int_{\partial\mathcal M}\sqrt{-h} K d^{d-1} x,
\label{smodel}
\end{equation}
where $G$ is the $d$-dimensional Newton constant, $\mathcal M$ is the manifold under consideration, $\partial\mathcal{M}$ the boundary manifold, $g$ is the determinant
of the metric, $R$ is the scalar curvature, $\Lambda$ is the cosmological constant, $F=dA$ is the field strength of the Maxwell field $A$, $h$ is the determinant of
the boundary metric and $K$ is the extrinsic curvature of $\partial\mathcal M$ embedded in $\mathcal M$. Note that we use geometric units for the electromagnetic coupling. In the following, we will set the Newton constant to $G=1$.
We further define the AdS radius by $$ \ell^2 = -\frac{(d-1)(d-2)}{2\Lambda}.$$ In order to describe rotating brane-like black objects, i.e. black objects with event horizon topology $S_p\times \mathbb M_n$, we introduce the following metric ansatz \begin{eqnarray}
& ds^2 = -b(r)dt^2 +a(r)d\vec z^2_n+ \frac{ dr^2}{f(r)} + g(r)\sum_{i=1}^{N-1} \left(\prod_{j=0}^{i-1} \cos^2\theta_j \right) d\theta_i^2 \\
&+h(r) \sum_{k=1}^N \left( \prod_{l=0}^{k-1} \cos^2 \theta_l \right) \sin^2\theta_k \left( d\phi_k - w(r) dt\right)^2 \nonumber \\
& +(g(r)-h(r)) \left\{ \sum_{k=1}^N \left( \prod_{l=0}^{k-1} \cos^2 \theta_l \right) \sin^2\theta_k d\phi_k^2 \right. \nonumber\\
&\left.- \left[\sum_{k=1}^N \left( \prod_{l=0}^{k-1} \cos^2 \theta_l \right) \sin^2\theta_k d\phi_k\right]^2 \right\},\nonumber
\label{metric-rot}
\end{eqnarray}
where $\theta_0 \equiv 0$, $\theta_i \in [0,\pi/2]$ for $i=1,\dots , N-1$, $\theta_N \equiv \pi/2$,
$\phi_k \in [0,2\pi]$ for $k=1,\dots , N$, where $N=(p+1)/2$ and $d\vec z^2_n = \delta_{ij}dz^i dz^j,\ i,j=1,2\ldots,n$ denotes the transverse space. Here we take $p$ to be odd and $d=p+n+2$.
Note that the ansatz for the static black branes in arbitrary dimensions discussed in \cite{adsbrane} is recovered for $w(r)=0$, $h(r)=g(r)=r^2$.
The ansatz for the Maxwell field is given by
\begin{equation}
A_a dx^a = V(r) dt+a_\varphi(r)\sum_{k=1}^N \prod_{l=0}^{k-1} \cos^2\theta_l \sin^2\theta_k d\phi_k.
\end{equation}
The variation of the action \eqref{smodel} with respect to the metric components and the Maxwell field components leads to the Einstein-Maxwell equations given by
\begin{equation}
G_a^b = -\Lambda \delta_a^b + T_a^b,\ \nabla_a F^{ab} = 0,
\label{geneqs}
\end{equation}
where $T_{ab} = F_{ac}F^c_{\ b} - \frac{1}{2} F_{cd}F^{cd}g_{ab}$ is the stress tensor and $G_a^b$ is the Einstein tensor. The independent components of $G_a^b$ and $T_a^b$ are given by
\begin{eqnarray}
G_t^t&=&\frac{n f a''}{2 a}+\frac{1}{2} (p-1) \left(\frac{n f a' g'}{2 a g}+\frac{r^2 f w g' w'}{2 b g}+\frac{f' g'}{2 g}+\frac{f g''}{g}+\frac{f g'}{r g}+\frac{r^2}{g^2}\right)\\
&&+\frac{n r^2 f w a' w'}{4 a b}+\frac{n a' f'}{4 a}+\frac{n^2 fa'^2}{8 a^2}-\frac{3 n f a'^2}{8 a^2}+\frac{n f a'}{2 r a}-\frac{r^2 f w b' w'}{4 b^2}+\frac{r^2 w f' w'}{4 b}\nonumber\\
&&+\frac{r^2 f w w''}{2 b}+\frac{r^2 f w'^2}{4 b}+\frac{3 r f w w'}{2 b}+\frac{f'}{2 r}+\frac{(p-4) (p-1) f g'^2}{8 g^2}-\frac{p^2-1}{2g}\nonumber,
\end{eqnarray}
\begin{eqnarray}
G_r^r&=&\frac{1}{2} (p-1) \left(\frac{n f a' g'}{2 a g}+\frac{f b' g'}{2 b g}+\frac{f g'}{r g}\right)+\frac{n f a' b'}{4 a b}+\frac{n^2 f a'^2}{8 a^2}-\frac{n f a'^2}{8 a^2}+\frac{n f a'}{2 r a}\nonumber\\
&&+\frac{f b'}{2 r b}+\frac{r^2 f w'^2}{4 b}+\frac{(p-2) (p-1) f g'^2}{8 g^2}-\frac{p^2-1}{2 g}+\frac{(p-1) r^2}{2 g^2},\nonumber
\end{eqnarray}
\begin{eqnarray}
G_z^z &=&\frac{n f a''}{2 a}-\frac{f a''}{2 a}+\frac{n f a' b'}{4 a b}-\frac{f a' b'}{4 a b}+\frac{n a' f'}{4 a}-\frac{a' f'}{4 a}+\frac{n (p-1) f a' g'}{4 a g}\nonumber\\
&&-\frac{(p-1) f a' g'}{4 a g}+\frac{n^2 f a'^2}{8 a^2}-\frac{5 n f a'^2}{8 a^2}+\frac{n f a'}{2 r a}+\frac{f a'^2}{2 a^2}-\frac{f a'}{2 r a}\nonumber\\
&&+\frac{f b''}{2 b}+\frac{b' f'}{4 b}+\frac{(p-1) f b' g'}{4 b g}-\frac{f b'^2}{4 b^2}+\frac{f b'}{2 r b}-\frac{r^2 f w'^2}{4 b}\nonumber\\
&&+\frac{(p-3) f' g'}{4 g}+\frac{f' g'}{2 g}+\frac{f'}{2 r}+\frac{(p-1) f g''}{2 g}+\frac{(p-4)(p-1) f g'^2}{8 g^2}\nonumber\\
&&+\frac{(p-1) f g'}{2 r g}-\frac{p^2-1}{2 g}+\frac{(p-1) r^2}{2 g^2}\nonumber,
\end{eqnarray}
\begin{eqnarray}
G_\theta^\theta &=&\frac{n f a''}{2 a}+\frac{n f a' b'}{4 a b}+\frac{n a' f'}{4 a}+\frac{n (p-2) f a' g'}{4 a g}+\frac{(n-3) n f a'^2}{8 a^2}+\frac{n f a'}{2 r a}\nonumber\\
&&+\frac{f b''}{2 b}+\frac{b' f'}{4 b}+\frac{(p-2) f b' g'}{4 b g}-\frac{f b'^2}{4 b^2}+\frac{f b'}{2 r b}-\frac{r^2 f w'^2}{4 b}+\frac{(p-2) f' g'}{4 g}\nonumber\\
&&+\frac{f'}{2 r}+\frac{(p-2) f g''}{2 g}+\frac{(p-5) (p-2) f g'^2}{8 g^2}+\frac{(p-2) f g'}{2 r g}+\frac{(p-5) r^2}{2 g^2}\nonumber\\
&&-\frac{(p-3) (p+1)}{2 g}\nonumber,
\end{eqnarray}
\begin{eqnarray}
G_{\varphi_1}^{\varphi_2}&=&-\frac{n r^2 f w a' w'}{4 a b}+\frac{n f a' g'}{4 a g}-\frac{n f a'}{2 r a}+\frac{f b' g'}{4 b g}+\frac{r^2 f w b' w'}{4 b^2}-\frac{f b'}{2 r b}\nonumber\\
&&-\frac{r^2 w f' w'}{4 b}-\frac{(p-1) r^2 f w g' w'}{4 b g}-\frac{r^2 f w w''}{2 b}-\frac{r^2 f w'^2}{2 b}-\frac{3 r f w w'}{2 b}+\frac{f' g'}{4 g}\nonumber\\
&&-\frac{f'}{2 r}+\frac{f g''}{2 g}\nonumber,
\end{eqnarray}
\begin{eqnarray}
G_t^\varphi &=& -\frac{n f w a' b'}{4 a b}+\frac{(n-1) r^2 f w^2 a' w'}{4 a b}+\frac{r^2 f w^2 a' w'}{4 a b}+\frac{n f a' w'}{4 a}+\frac{n f w a'}{2 r a}\nonumber\\
&&-\frac{f w b''}{2 b}-\frac{w b' f'}{4 b}-\frac{(p-1) f w b' g'}{4 b g}-\frac{r^2 f w^2 b' w'}{4 b^2}-\frac{f b' w'}{4 b}+\frac{f w b'^2}{4 b^2}\nonumber\\
&&+\frac{r^2 w^2 f' w'}{4 b}+\frac{(p-1) r^2 f w^2 g' w'}{4 b g}+\frac{r^2 f w^2 w''}{2 b}+\frac{r^2 f w w'^2}{b}+\frac{3 r f w^2 w'}{2 b}+\frac{1}{4} f' w'\nonumber\\
&&+\frac{w f'}{2 r}+\frac{(p-1) f g' w'}{4 g}+\frac{(p-1) f w g'}{2 r g}+\frac{1}{2} f w''+\frac{3 f w'}{2 r}-\frac{(p-1) r^2 w}{g^2},\nonumber
\end{eqnarray}
\begin{eqnarray}
T_t^t&=&\frac{f w^2 a_\varphi'^2}{2 b}-\frac{f a_\varphi'^2}{2 r^2}-\frac{(p-1) a_\varphi^2}{g^2}-\frac{f V'^2}{2 b}\nonumber,\\
T_r^r&=&-\frac{f w a_\varphi' V'}{b}-\frac{f w^2 a_\varphi'^2}{2 b}+\frac{f a_\varphi'^2}{2
r^2}-\frac{(p-1) a_\varphi^2}{g^2}-\frac{f V'^2}{2 b}\nonumber,\\
T_z^z&=&\frac{f w a_\varphi' V'}{b}+\frac{f w^2 a_\varphi'^2}{2 b}-\frac{f a_\varphi'^2}{2
r^2}-\frac{(p-1) a_\varphi^2}{g^2}+\frac{f V'^2}{2 b}\nonumber,\\
T_\theta^\theta&=&\frac{f w a_\varphi' V'}{b}+\frac{f w^2 a_\varphi'^2}{2 b}-\frac{f a_\varphi'^2}{2
r^2}-\frac{(p-5) a_\varphi^2}{g^2}+\frac{f V'^2}{2 b}\nonumber,\\
T_{\varphi_1}^{\varphi_2}&=&\frac{f w a_\varphi' V'}{b}+\frac{f w^2 a_\varphi'^2}{b}-\frac{f a_\varphi'^2}{r^2}+\frac{4a_\varphi^2}{g^2}\nonumber,\\
T_t^\varphi&=&-\frac{f w^2 a_\varphi' V'}{b}+\frac{f a_\varphi' V'}{r^2}-\frac{f w V'^2}{b}\nonumber.
\end{eqnarray}
The two independent Maxwell equations read
\begin{eqnarray}
&&-\frac{n f w a' a_\varphi'}{2 a b}-\frac{n f a' V'}{2 a b}-\frac{f w a_\varphi''}{b}+\frac{f w a_\varphi' b'}{2 b^2}-\frac{w a_\varphi' f'}{2 b}-\frac{(p-1) f w a_\varphi' g'}{2 b g}\\
&&-\frac{f a_\varphi' w'}{b}-\frac{f w a_\varphi'}{rb}+\frac{f b' V'}{2 b^2}-\frac{f' V'}{2 b}-\frac{(p-1) f g' V'}{2 b g}-\frac{fV''}{b}-\frac{f V'}{r b}=0\nonumber,\\
\nonumber\\
&&-\frac{n f w^2 a' a_\varphi'}{2 a b}+\frac{n f a' a_\varphi'}{2 r^2 a}-\frac{n f w a'V'}{2 a b}-\frac{f w^2 a_\varphi''}{b}+\frac{f a_\varphi''}{r^2}+\frac{f a_\varphi'b'}{2 r^2 b}\nonumber\\
&&+\frac{f w^2 a_\varphi' b'}{2 b^2}-\frac{w^2 a_\varphi' f'}{2 b}-\frac{(p-1) f w^2 a_\varphi' g'}{2 b g}-\frac{2 f w a_\varphi' w'}{b}-\frac{f w^2 a_\varphi'}{r b}+\frac{a_\varphi' f'}{2 r^2}\nonumber\\
&&+\frac{(p-1) f a_\varphi' g'}{2 r^2 g}-\frac{f a_\varphi'}{r^3}-\frac{2 (p-1) a_\varphi}{g^2}+\frac{f w b' V'}{2 b^2}-\frac{w f' V'}{2 b}-\frac{(p-1) f w g' V'}{2 b g}\nonumber\\
&&-\frac{f w V''}{b}-\frac{f V' w'}{b}-\frac{f w V'}{r b}=0\nonumber.
\end{eqnarray}
It is possible to express the equations as a system of 7 non-linear coupled ordinary differential equations for the functions $a,b,f,g,w,V,a_\varphi$ by suitably combining the Einstein equations. However, the resulting equations are very long and we refrain from writing them here.
It should be stressed that it is possible to rewrite the equation for $w$ (resp. $V$) as a total derivative in the neutral case, where $V=a_\varphi=0$ (resp. in the non rotating case where $w=a_\varphi=0, g=h=r^2$):
\begin{equation}
\left( w'r^3\sqrt{ g^{p-1}a^n \frac{f}{b} }\right)'=0,
\end{equation}
for the neutral rotating case and
\begin{equation}
\left( V'r^p\sqrt{ a^n \frac{f}{b} }\right)'=0,
\end{equation}
for the charged non rotating case.
This leads to first integrals (say $j$ and $q$) associated respectively with the angular momentum and the charge. Note that the function $a_\varphi$ does not appear in these particular cases since it is associated with the magnetic field, which is excited by the rotation only in the charged case.
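Written out (with a purely conventional normalisation of the integration constants $j$ and $q$), these first integrals read
\begin{equation}
w'(r)=\frac{j}{r^3}\sqrt{\frac{b}{a^n g^{p-1} f}}\,,\qquad
V'(r)=\frac{q}{r^p}\sqrt{\frac{b}{a^n f}}\,,
\end{equation}
so that a single quadrature determines $w$ (resp. $V$) once the remaining metric functions are known.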
In order to have a well posed problem, we need $13$ boundary conditions, given by
\begin{eqnarray}
&&f(r_h)=0,\ b(r_h)=0,\ b'(r_h)=b_1,\ a(r_h)=a_h,\ V(r_h)=V_h,\ w(r_h)=w_h,\nonumber\\
&&\Gamma_1(r_h)=0,\ \Gamma_2(r_h)=0,\ \Gamma_3(r_h)=0\\
&&g\rightarrow r^2,\ w\rightarrow0,\ V\rightarrow0,\ a_\varphi\rightarrow0\mbox{ for }r\rightarrow\infty,\nonumber
\end{eqnarray}
where $a_h,b_1$ are real constants adjusted in order to find asymptotically locally $AdS$ solutions, $V_h,\ w_h$ are constants, and where the $\Gamma_i$ are combinations of the various functions which should vanish at the horizon for the equations to be regular.
The $\Gamma_i$ are given by
\begin{eqnarray}
\Gamma_1(r)&=& b' r \left(g (-f' g'+2 p+2)-8 a_\varphi^2\right)+r^3 \left(f'g^2 w'^2-2 (p+1) b' \right)+2 b' f' g^2\nonumber\\
\Gamma_2(r)&=& \frac{ar^2 \left(f' g^2 w'^2-2 b' (p-1)\right)}{b' f' g^2}+\frac{2 a}{r}-a'\\
\Gamma_3(r)&=& r^2 \left(2 a_\varphi b' (p-1)+f' g^2 w' (a_\varphi'w +V'\right)-a_\varphi' b' f' g^2\nonumber
\end{eqnarray}
\subsection{Known solutions}
Although it seems hopeless to find an analytic solution to the full set of equations \eqref{geneqs}, there are some particular cases where an analytic solution exists.
For instance, for $n=0$ or $p=0$ the metric describes a black hole (resp. a topological black hole), and the vacuum non-rotating case is given by
\begin{equation}
f(r)=\frac{r^2}{\ell^2}+1-\left(\frac{r_0}{r}\right)^{p},\ b(r)=f(r),\ g(r)=h(r)=r^2,\ \omega(r)=0,
\end{equation}
for $n=0$ and
\begin{equation}
f(r)=\frac{r^2}{\ell^2}-\left(\frac{r_0}{r}\right)^{p},\ b(r)=f(r),\ a(r)=r^2,
\end{equation}
for $p=0$.
In the same limits, the electrovacuum solution have been constructed in \cite{Gibbons:2004uw}.
The charged and rotating black hole has been constructed numerically in \cite{kunz}.
The case $n=1$ has been studied in \cite{brs} for the charged or rotating case and in \cite{rms} for the non rotating case.
For a vanishing cosmological constant, the function $a$ can be set to $a(r)=1$ since the extradirections are Ricci-flat and the remaining $p+2$ dimensional space can be taken to be the charged and/or rotating $(p+2)$-dimensional black hole \cite{mp,bhchrot} (see \cite{bhrev} and references therein for a review).
The phase structure of the case $n=1$ has been intensively studied \cite{gl,gubser,wiseman,rms,adsstab} (see \cite{bsrev} and reference therein for a review) with or without a cosmological constant.
\section{Near Horizon and asymptotic expansion}
\label{sec:asympt}
Far from the event horizon, the functions $a,b,f,g,w,V,a_\varphi$ obey the following expansion:
\begin{eqnarray}
\nonumber
a(r)&=&\frac{r^2}{\ell^2}+\sum_{j=0}^{\lfloor(d-4)/2\rfloor}a_j(\frac{\ell}{r})^{2j}
+ \sum_k\delta_{d,2k+1}\zeta\log(\frac {r}{\ell}) (\frac{\ell}{r})^{d-3}
+c_z(\frac{\ell}{r})^{d-3}+\Ord{\frac{1}{r}}{d-1},
\\
\label{odd-inf}
b(r)&=&\frac{r^2}{\ell^2}+\sum_{j=0}^{\lfloor(d-4)/2\rfloor}a_j(\frac{\ell}{r})^{2j}
+ \sum_k\delta_{d,2k+1}\zeta\log (\frac {r}{\ell}) (\frac{\ell}{r})^{d-3}
+c_t(\frac{\ell}{r})^{d-3}+\Ord{\frac{1}{r}}{d-1},\nonumber
\\
f(r)&=&\frac{r^2}{\ell^2}+\sum_{j=0}^{\lfloor(d-4)/2\rfloor}f_j(\frac{\ell}{r})^{2j}
+ \sum_k\delta_{d,2k+1}(n+1)\zeta\log (\frac {r}{\ell}) (\frac{\ell}{r})^{d-3}\nonumber\\
&&+(n c_z+c_t + (p-1)c_g +c_0)(\frac{\ell}{r})^{d-3}+\Ord{\frac{1}{r}}{d-1},\nonumber\\
\nonumber
g(r)&=& r^2\left( 1 + c_g\left(\frac{\ell}{r}\right)^{d-5} \right)+\Ord{\frac{\ell}{r}}{d-1}\\
\nonumber
w(r)&=& c_w\left(\frac{\ell}{r}\right)^{d-3} + \Ord{\frac{\ell}{r}}{d-1}\\
\nonumber
V(r)&=& c_V\left(\frac{\ell}{r}\right)^{d-3} + \Ord{\frac{\ell}{r}}{d-1}\\
\nonumber
a_\varphi(r)&=& c_\varphi\left(\frac{\ell}{r}\right)^{d-3} + \Ord{\frac{\ell}{r}}{d-1}
\end{eqnarray}
where $d=n+p+2$, the constants $a_i,f_i,\zeta$ depend on $p,n,\ell$ and are given by $a_0= \frac{p-1}{n+p-1}$, $a_1=\frac{n (p-1)^2}{(n+p-3) (n+p-1)^2 (n+p)}$,
$a_2=-\frac{n (p-1)^3 \left(n^2+n (4 p-17)+3 (p-3) p\right)}{3 (n+p-5) (n+p-3) (n+p-1)^3 (n+p)^2} $, $f_0=\frac{(p-1) (2 n+p)}{(n+p-1) (n+p)}$,
$f_1=\frac{n (n+1) (p-1)^2}{(n+p-3) (n+p-1)^2 (n+p)}$, $f_2= -\frac{n (n+1) (p-1)^3 (n (p-4)+(p-3) p)}{(n+p-5) (n+p-3) (n+p-1)^3 (n+p)^2}$,
$\zeta = -1/12$ for $p=2,n=1$, $\zeta=-3/400$ for $p=2,n=3$, $-1/100$ for $p=3,n=2$, $\ldots$, $c_0= 0$ for $p=2,n=1$, $c_0=3/800$ for $p=2,n=3$, $1/100$
for $p=3,n=2$, $\ldots$, and where $\lfloor X \rfloor$ denotes the integer part of $X$. The coefficients $c_t,c_z,c_g,c_w,c_\varphi,c_V$ are to be
determined numerically and are related to the mass, tension, angular momentum, charge and magnetic momentum of the $AdS$ brane. Note that the constants
$a_i,f_i,\zeta$ are the same as in the uncharged case \cite{adsbrane}.
Note also that the equations are invariant under arbitrary scalings of the metric functions $a,b$, related to the freedom of redefining the $t$ and $z$ coordinates.
We fix the normalisation of these two functions by adjusting the real constants $a_h,b_1$ described in the previous section such that $a,b\rightarrow r^2/\ell^2$ for large values of $r$.
Close to the horizon, the metric and maxwell functions obey the following expansion
\begin{eqnarray}
a(r)&=&a_h + a_1(r-r_h)+\Ord{r-r_h}{2},\nonumber\\
b(r)&=&b_1(r-r_h)+\Ord{r-r_h}{2},\nonumber\\
f(r)&=& f_1(r-r_h)+\Ord{r-r_h}{2},\nonumber\\
g(r)&=&g_h + g_1(r-r_h)+\Ord{r-r_h}{2},\\
w(r)&=&w_h + w_1(r-r_h)+\Ord{r-r_h}{2},\nonumber\\
V(r)&=&V_h + V_1(r-r_h)+\Ord{r-r_h}{2},\nonumber\\
a_\varphi(r)&=&A_h + A_1(r-r_h)+\Ord{r-r_h}{2},\nonumber\\
\end{eqnarray}
where the various coefficients can be expressed as functions of $a_h,b_1,g_h,w_h,V_h$.
The case of no rotation is much simpler: the functions $w,a_\varphi$ vanish, $g=h=r^2$ and the near horizon series reduces to
\begin{eqnarray}
f(r)&=&\left(-\frac{q^2 r_h a_h^{-n} r_h^{-2p}}{n+p}+\frac{r_h}{(n+p+1)\ell^2}+\frac{p-1}{r_h}\right)(r-r_h)+\Ord{r-r_h}{2},\nonumber\\
a(r)&=&a_h + \frac{2 a_h r_h \left(q^2\ell^2-a_h^n (n+p) (n+p+1) r_h^{2p}\right)}{q^2 r_h^2\ell^2-a_h^n (n+p) r_h^{2p}}(r-r_h) + \Ord{r-r_h}{2}\nonumber\\
b(r)&=& b_1(r-r_h) + \Ord{r-r_h}{2}
\end{eqnarray}
The requirement that $f'(r_h)>0$ leads to the following condition on the charge:
\begin{equation}
q^2\leq\frac{a_h^n (n+p) r_h^{2 p-2} \left(\ell^2 (p-1)+r_h^2 (n+p+1)\right)}{\ell^2},
\label{ext_bound}
\end{equation}
This condition is in agreement with the bound found in ref. \cite{brs} for $n=1$. If the bound is saturated, $r_h$ becomes a double root of $f$ and the temperature vanishes; this is the extremal solution. The lower bound on the horizon radius should lead to a lower bound on the mass for a given value of the charge. We will discuss this further in the next sections.
Note that the maximal value of $q$ is a growing function of $r_h$ i.e. for a fixed value of the charge, we have a lower bound on the horizon radius or equivalently a maximal bound on the $AdS$ length.
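For illustration (the following is only a sketch that transcribes the formulas above verbatim, and is not the integration code used for the results of section \ref{sec:results}; the values of $a_h$ and $\ell$ are placeholders), the near-horizon data and the bound \eqref{ext_bound} are straightforward to evaluate:
\begin{verbatim}
import numpy as np

def horizon_data(r_h, a_h, q, ell, n, p):
    """First-order near-horizon coefficients f'(r_h), a'(r_h) for the
    charged, non-rotating brane, read off from the expansion above."""
    f1 = (-q**2 * a_h**(-n) * r_h**(1 - 2*p) / (n + p)
          + r_h / ((n + p + 1) * ell**2) + (p - 1) / r_h)
    a1 = (2*a_h*r_h*(q**2*ell**2 - a_h**n*(n+p)*(n+p+1)*r_h**(2*p))
          / (q**2*r_h**2*ell**2 - a_h**n*(n+p)*r_h**(2*p)))
    return f1, a1

def q2_max(r_h, a_h, ell, n, p):
    """Extremality bound on q**2 quoted in the text (eq. ext_bound)."""
    return a_h**n*(n+p)*r_h**(2*p-2)*(ell**2*(p-1) + r_h**2*(n+p+1)) / ell**2

# example: p = 3, n = 2 brane with unit AdS radius; the bound grows with r_h
for r_h in (0.5, 1.0, 2.0):
    print(r_h, q2_max(r_h, a_h=1.0, ell=1.0, n=2, p=3))
\end{verbatim}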
\section{Physical properties of the $AdS$ branes}
\label{sec:physics}
\subsection{Thermodynamical properties}
The metric admits $\xi=\partial_t + \Omega_H^{(i)} \partial_{\varphi_i}$ as a Killing vector. The $\Omega_H^{(i)}$s are fixed by the requirement that $\xi$
is null on the event horizon, leading to
\begin{equation}
\Omega_H^{(i)}= \omega_h.
\end{equation}
The asymptotic thermodynamical quantities are computed using the counterterm procedure described in \cite{counter} and applied to $AdS$ branes in \cite{adsbrane}, leading to the mass $M$, tension $\mathcal T$ and angular momentum $J$:
\begin{eqnarray}
M &=& \frac{\ell^{p-1} V_p \mathcal V_{n}}{16\pi}((n+p)c_t - n c_z + (p-1)c_g) +M_c,\\
\mathcal T_i &=& \frac{\ell^{p-1} V_p\mathcal V_{n}}{16\pi n L_i}(c_t - (p+1) c_z-(p-1)c_g) +\frac{1}{n} \mathcal T_c,\nonumber\\
J&=&\frac{\ell^{p-1} V_p \mathcal V_{n}}{16\pi}\frac{2(n+p+1)}{(p+1)}c_w,\nonumber
\end{eqnarray}
where $V_p$ denotes the surface of the unit $p$-sphere, $\mathcal V_n$ is the volume of the $n$ extradirections, $L_i$ is the length of the $i$-th extradirection,
and where $M_c = -\mathcal T_c$ are Casimir-like terms appearing in odd dimensions (see ref. \cite{adsbrane}); they are not needed for our purpose.
Note that we insert a $1/n$ factor by hand in the definition of the tension. First, since there are $n$ extradirections, all playing a spectator role in the equations, all the tensions should be the same. Second, as we will see later, each extradirection then contributes once to the Smarr relation with this definition.
For a charged non-rotating solution, the electric field with respect to a constant $r$ hypersurface is given by $E^{\mu}=g^{\mu\rho}F_{\rho\nu}n^{\nu}$, $n^\mu$ being the unit normal to the constant $r$ surface. The electric charge of the charged solutions is computed using Gauss'
law by evaluating the flux of the electric field at infinity:
\begin{eqnarray}
\label{Qc}
Q=\frac{1}{4\pi G_d}\oint_{\Sigma }d^{d-2}y\sqrt{\sigma}u^{a}n ^{b}F_{ab}.
\end{eqnarray}
If $A_{\mu}$ is the electromagnetic potential, then the electric
potential $\Phi$, measured at infinity with respect to the horizon
is defined as \cite{potential}:
\begin{equation}
\Phi=A_{\mu}\chi^{\mu}|_{r\rightarrow\infty}-A_{\mu}\chi^{\mu}|_{r=r_h}~,
\end{equation}
with $\chi^{\mu}$ a Killing vector orthogonal to and null on the horizon.
In the case of rotating solutions, the charge is defined in the same way, but the magnetic field excited by the rotation leads to a magnetic moment. The electric charge and magnetic moment are given by
\begin{equation}
Q=\frac{\ell^{p-1}V_p\mathcal V_{n}}{16\pi} (n + p-1)c_V,\ \mu =\frac{\ell^{p-1}V_p\mathcal V_{n}}{16\pi} (n + p-1)c_\varphi.
\end{equation}
The temperature and entropy are computed in the standard way, leading to
\begin{equation}
T_H = \frac{\sqrt{b_1 f_1}}{4\pi},\ S = \frac{V_p \mathcal V_{n}}{4}\sqrt{a_0^ng_h^{p-1}}r_h ,
\end{equation}
where $f_1$ denotes the derivative of $f$ at the horizon.
We derived a Smarr law for the rotating $AdS$ brane and for the charged brane using the same approach as in ref. \cite{brs}:
\begin{eqnarray}
M + \mathcal T_i L_i = T_H S + \frac{(p+1)}{2}\omega_h J,\\
M + \mathcal T_i L_i = T_H S + \Phi Q,\nonumber
\end{eqnarray}
where summation over the repeated index is understood.
Although it seems reasonable that the Smarr law in the charged-rotating case should read $M + \mathcal T_i L_i = T_H S + \frac{(p+1)}{2}\omega_h J + \Phi Q$, we could not prove this using the standard technique. This is due to the fact that in the purely charged or purely rotating case, the charge (resp. the angular momentum) appears as a first integral of the equations; in the charged \emph{and} rotating case, they are no longer constant but only asymptotically constant. The Smarr law we derived reads
\begin{equation}
M + \mathcal T_i L_i = T_H S + \frac{(p+1)}{2}\omega_h J_H,
\end{equation}
where $J_H$ is the angular momentum of the event horizon defined by
\begin{equation}
J_H = \frac{V_p}{16\pi}\frac{2}{p-1}\sqrt{\frac{f_1}{b_1}g_h^{p-1}a_h^n}r_h^3 w_1.
\end{equation}
This can be understood by the fact that the electromagnetic field carries a part of the total angular momentum.
\subsection{Geometric properties of the $AdS$ brane}
The equations admit another scaling property:
\begin{eqnarray}
&&r\rightarrow \lambda r,\ \ell\rightarrow\lambda \ell,\ M\rightarrow\lambda^{p-1}M,\ \mathcal T\rightarrow\lambda^{p-1}\mathcal T,\ w_h\rightarrow\lambda^{-1} w_h,\ J\rightarrow\lambda^{p} J,\nonumber\\
&& Q\rightarrow\lambda^{p-1}Q,\ \mu\rightarrow\lambda^{p-1}\mu,\ T_H\rightarrow\lambda^{-1}T_H,\ S\rightarrow\lambda^pS.
\label{scaler}
\end{eqnarray}
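This scaling is what allows all numerical data to be quoted at $\ell=1$, as done below. A minimal sketch (Python) of the corresponding rescaling of the thermodynamical quantities; the input numbers are placeholders, not actual solution data:
\begin{verbatim}
# Sketch: rescaling a raw numerical solution to AdS radius ell = 1
# using the scaling property (scaler).  Input values are placeholders.
def rescale_to_unit_ell(ell, r_h, M, tension, w_h, J, Q, mu, T_H, S, p):
    lam = 1.0 / ell               # lambda chosen such that ell -> 1
    return dict(ell=lam * ell, r_h=lam * r_h,
                M=lam**(p - 1) * M, tension=lam**(p - 1) * tension,
                w_h=w_h / lam, J=lam**p * J,
                Q=lam**(p - 1) * Q, mu=lam**(p - 1) * mu,
                T_H=T_H / lam, S=lam**p * S)

print(rescale_to_unit_ell(ell=2.0, r_h=1.0, M=1.0, tension=0.1,
                          w_h=0.3, J=0.2, Q=0.5, mu=0.05,
                          T_H=0.08, S=1.5, p=3))
\end{verbatim}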
The horizon topology is $\mathbb M_n\times S_p\times\mathbb R$, $\mathbb M_n=\prod_{i=1}^n L_i$ being the $n$-dimensional manifold describing the extra directions. The setup can be interpreted in different ways (such as an $n$-dimensional infinite brane surrounded by a $(p+1)$-dimensional black hole, a $(p+2)$-dimensional black hole with an $n$-dimensional compact internal space, etc. \cite{adsbrane}), depending on the range of the $L_i$ (which can be infinite). The geometry can be a sort of $AdS$ with compact directions, which can be seen as global $AdS$ with identifications; this is however not clear from a more formal point of view and regarding the asymptotic spacetime.
\section{Numerical Results}
\label{sec:results}
In this section, we present our numerical results and discuss the thermodynamics of the charged and of the rotating case. Then, we present a solution of the charged \emph{and} rotating case, but do not study its properties systematically, due to some restrictions in our approach that will be discussed further below.
It should be stressed that in the case where the rotation is absent, the metric ansatz reduces to the one considered in \cite{adsbrane}.
In this case, there are no restrictions on the value of $p$.
\subsection{Charged case}
We numerically integrated the system of equations \eqref{geneqs}, supplemented by the additional requirement that $g=h=r^2,\ w=a_\varphi=0$, using the solver Colsys \cite{colsys} for various values of $p,n,\ell,q,r_h$. We adopted the following numerical approach: we first fix the values of $a,b,V$ at the horizon and then rescale the solution in order to obtain the suitable asymptotic behaviour. We typically vary the cosmological constant and use the scaling properties \eqref{scaler}
in order to fix the $AdS$ radius to $\ell=1$. This approach suffers from a considerable drawback, namely that it is complicated to obtain a fixed value of the charge or of the electric potential at the origin; one has to produce a large amount of data in order to foliate the parameter space and find slices of constant charge (or of other thermodynamical quantities). However, in certain regions of the parameters, some physical quantities such as the electric potential at the horizon do not vary too much, and it is possible to get an idea of the underlying physics at fixed electric potential. This is the reason why we present the physical quantities for $n=2,\ p=3$ only, hoping that this captures the main features of the generic solutions. Note that we constructed solutions for other values of $p,\ n$ but did not study their thermodynamics systematically.
Note also that the case $n=1$ has already been considered in \cite{brs}.
First, we found numerical evidence that for given values of the charge there exists a lower bound on the mass, in line with the discussion following equation \eqref{ext_bound}. This is illustrated in figure \ref{fig:n2p3QM}, where we plot the values of the mass versus the values of the charge for a large number of solutions. The approach was the following: we varied the horizon radius for each gradually increasing value of the electric potential at the horizon. The mass indeed seems to approach a lower bound, as can be seen in the figure.
\begin{figure}
\centering
\includegraphics[scale=.5]{boundsQMn2p3.eps}
\caption{Domain of the solution in the $Q-M$ plane for $p=3,\ n=2$ and $\ell=1$. Here we plot the absolute value of the charge. }
\label{fig:n2p3QM}
\end{figure}
As the mass approaches the lower bound, a critical solution is approached and the numerical solver fails to converge.
The study of this critical solution is technically involved and seems to require a different parametrisation of the metric ansatz; this is beyond the scope of this paper.
The existence of a minimal mass for fixed values of the charge is to be interpreted as the balance between the repulsive
electrostatic and the attractive gravitational interactions.
We also present the various thermodynamical quantities in figure \ref{fig:thermon2p3_ch} for $\ell=1$ as a function of the horizon radius $r_h$. We note that the tension has a small negative region for small $r_h$.
\begin{figure}
\centering
\includegraphics[scale=.5]{thermon2p3_ch.eps}
\caption{Thermodynamical quantities of the charged $AdS$ branes for $n=2,\ p=3$ and $\ell=1$. The tension and angular momentum on the figure represent
the total tension $n\mathcal T$ and the total angular momentum $(p+1)J/2$, respectively.}
\label{fig:thermon2p3_ch}
\end{figure}
Although Figure \ref{fig:thermon2p3_ch} seems to suggest the existence of thermally stable and unstable phases, this is not generally the case: for fixed, large values of the charge there seems to be only one thermally unstable branch, as illustrated in figure \ref{fig:STHn2p3_ch}, where the entropy is plotted as a function of the charge and of the temperature. For small values of the charge, however, thermally stable and thermally unstable branches can be expected to exist. Note that this was already found in \cite{brs} for $n=1$. This is confirmed in figure \ref{fig:STHfQn2p3_ch}, where the entropy as a function of the temperature is plotted for different fixed values of the charge.
\begin{figure}
\centering
\includegraphics[scale=.5]{THSQn2p3.eps}
\caption{The entropy as a function of the charge and the temperature for $p=3,n=2$.}
\label{fig:STHn2p3_ch}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{STHfixedQn2p3.eps}
\caption{Some constant $Q$ slices of the graphic presented in figure \ref{fig:STHn2p3_ch}.}
\label{fig:STHfQn2p3_ch}
\end{figure}
\subsection{Rotating case}
Here only odd values of $p$ are allowed. We present our results for $\ell=1$. Our approach is the same as in the charged case, i.e. an \emph{a posteriori}
rescaling of the functions $a,b$ and consequently of $w$.
A striking feature of the rotating $AdS$ branes is that for small values of the angular velocity at the horizon ($w_h$), the tension is first negative,
then changes sign and becomes positive. Above a critical value of the angular velocity (depending on the number of dimensions), the tension is always negative.
This can be interpreted as the balance between the centrifugal force and the gravitational tension; once the rotation is too high, the black brane wants to spread,
an effect opposite to the tension. This is illustrated in figures \ref{fig:negtensn2p3} and \ref{fig:tensn2p3}.
\begin{figure}
\centering
\includegraphics[scale=.5]{negtensn2p3.eps}
\caption{A generic profile of the total tension, horizon angular velocity and total angular momentum as functions of the horizon radius. The box in the upper right corner is a zoom of the interval $[0,0.6]$. We systematically observed a region of negative tension. This was noticed and interpreted in \cite{adsnubs} in the uncharged and non-rotating case for $n=1$.}
\label{fig:negtensn2p3}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{tensn2p3.eps}
\caption{The total tension decreases for increasing total angular momentum at fixed values of the horizon radius. Once the angular momentum is too high, the tension becomes negative. Note also that the entropy decreases with the angular momentum, as can be expected from $AdS$ black holes. The mass is almost constant in the regime presented in the figure.}
\label{fig:tensn2p3}
\end{figure}
The entropy decreases with the angular momentum, similarly to the case of rotating black holes. Again, this is to be understood as the spreading of the horizon
with the rotation (at fixed mass). This is illustrated in figures \ref{fig:tensn2p3} and \ref{fig:sjn2p3}.
\begin{figure}
\centering
\includegraphics[scale=.5]{SJn2p3.eps}
\caption{The entropy as a function of the angular momentum for a fixed value of $r_h$. The mass is almost constant in the regime presented in the figure.}
\label{fig:sjn2p3}
\end{figure}
The entropy as a function of the temperature for fixed values of the angular momentum behaves similarly to the case of the rotating black strings \cite{brs}:
there are two phases of $AdS$ black branes for vanishing values of the angular momentum, and these phases quickly disappear for non-vanishing values of $J$. We show the entropy as a function of the temperature and angular momentum in figure \ref{fig:STHn2p3_rot} and constant $J$ foliations of the latter in figure \ref{fig:STHfQn2p3_rot}.
\begin{figure}
\centering
\includegraphics[scale=.5]{THSJn2p3.eps}
\caption{The entropy as a function of the total angular momentum and the temperature for $p=3,n=2$.}
\label{fig:STHn2p3_rot}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{STHfixedJn2p3.eps}
\caption{Some constant $J$ slices of the graphic presented in figure \ref{fig:STHn2p3_rot}.}
\label{fig:STHfQn2p3_rot}
\end{figure}
\subsection{Charged \emph{and} rotating case}
Finally we constructed the charged and rotating $AdS$ black brane solution, providing numerical evidence that the solution indeed exists. The generic profile is presented in figure \ref{fig:profchrotn2p3} for $n=2,\ p=3,\ \Lambda=-3,\ r_h=1$.
As discussed before, we do not discuss its thermodynamical properties in detail here, due to the restrictions of our approach. The problem however deserves further investigation.
Let us finally mention that the tension is systematically negative for small values of the horizon radius, as in the case $n=1$ \cite{adsnubs}, with or without charge and rotation.
\begin{figure}
\centering
\includegraphics[scale=.5]{profn2p3_ch_rot.eps}
\caption{A generic profile for the charged and rotating $AdS$ black brane with $n=2,\ p=3,\ \Lambda=-3$.}
\label{fig:profchrotn2p3}
\end{figure}
\section{Conclusion}
In this paper, we presented evidence for the existence of a charged-rotating black brane in asymptotically locally $AdS$ spacetime. These solutions generalise the black branes considered in \cite{adsbrane}. The author of reference \cite{adsbrane} further considered the limit where the horizon radius shrinks to zero, namely the soliton limit, and interpreted the resulting configuration in a braneworld context. Here we focused on the black brane solution, but we believe that at least in the rotating case the limit $r_h\rightarrow 0$ should exist. In the charged case we have serious doubts about this, since there exists a lower bound on the horizon radius (this was already discussed in \cite{brs} for $n=1$). Note that the $n$-dimensional manifold can be replaced by any $n$-dimensional Ricci-flat manifold.
The lower bound on the horizon radius translates into a lower bound on the mass for fixed values of the charge. This is interpreted as the competition between the electric repulsion and the gravitational attraction. The tension has the same behaviour as in the uncharged static case. In that sense, the charge influences the mass.
The tension is negative for small values of the horizon radius; this is to be understood in the same way as in ref. \cite{adsnubs}: the spatial pressure of the cosmological constant dominates the gravitational tension due to the mass in the small-$r_h$ regime, while the situation is reversed for larger values of the horizon radius.
The rotation has the surprising effect of lowering the tension, even up to the point where the tension goes from positive to negative values. Here the rotation seems to play a role opposite to the tension, even though the tension is associated with the non-rotating extra directions.
We did not study the thermodynamics of the charged and rotating case systematically, but we expect its features to combine the properties of the charged case and of the rotating case.
Finally, let us mention that the charged case can be interpreted in the $AdS/CFT$ context, the value of $V$ at the origin being the source of a spin-$1$ operator defined in a conformal field theory living on $\mathbb M^n\times S_p\times \mathbb R_t$ (recall that we can shift $V$ such that $V(r_h)=0$); the charge (next-to-leading order on the boundary) is then to be interpreted as the correlator of the spin-$1$ field. In this context, the $S_p$ can be interpreted as an internal space and the $\mathbb M^n$ can be thought of as an infinite $n$-dimensional space. Such considerations are speculative, but we believe they would deserve further investigation.
\section{Acknowledgement}
We gratefully acknowledge Y. Brihaye and E. Radu for useful discussions.
\section{Introduction}
The phenomenological (non-perturbative) pomeron has long been established
as an effective Regge pole whose exchange governs high-energy diffractive
hadron-hadron scattering \cite{dola}.
The pomeron carries vacuum quantum numbers
$C = P = +1$, and there is no {\it a priori} reason why a $C = P = -1$
partner, the phenomenological odderon \cite{nico1},
should not exist. If present,
the exchange of an odderon Regge pole would produce a difference between
${\rm p p}$ and ${\rm p}\bar {\rm p}$
scattering at high energies and at small momentum
transfer. A particularly sensitive test is provided by measurements of
the forward real part of the ${\rm p p}$ and ${\rm p}\bar {\rm p}$
scattering amplitudes,
and they are consistent with the absence of odderon exchange \cite{auge}.
There are two possible explanations for
the apparant absence of the odderon. One is that the non-perturbative
odderon really does not exist. This seems implausible as QCD-based models
of the phenomenological pomeron can easily be extended to describe a
phenomenological odderon. Further a perturbative ``odderon'', namely
three-gluon exchange\footnote{A lot of theoretical work has been
devoted to calculating perturbative QCD (pQCD) corrections to this type
of odderon \cite{pod1,pod2,pod3,pod4}. The result seems to be that such
corrections have a small effect, for instance changing the (effective)
intercept of the odderon trajectory $\alpha_{\mathbb{O}}(0)$
by less than 10$\%$
\cite{pod2,pod4,pod5} from the value of 1, which is the result for the
lowest order three gluon exchange.}
is believed to dominate large-angle ${\rm p p}$ and ${\rm p}\bar {\rm p}$
scattering. One of the most compelling arguments for a $C = -1$ exchange in
high energy scattering is provided by the ${\rm p p}$ and
${\rm p}\bar {\rm p}$ differential
cross sections at the ISR where a deep dip in the former process is
transformed into a shoulder in the latter. The second possibility is that
the phenomenological odderon does exist, but that its coupling to the nucleon
in elastic scattering at small $t$ is extremely small. That is the view we
take here.
One successful approach to high-energy diffractive scattering is based on
functional integral techniques \cite{lana,na91}
and the use of the model of the stochastic
vacuum (MSV) \cite{msv}
to evaluate the correlation functions of the Wegner-Wilson
loops which occur in the formalism when applied to
hadron-hadron scattering \cite{dfkpre,dfk}.
This gives a remarkably good description
of many different processes involving the exchange of vacuum quantum
numbers \cite{dfk,all,dogupi,doku,naber}.
This model can easily be extended to the exchange of a $C = P = -1$ object,
and it has been shown that the clustering of two quarks to form a diquark in
a nucleon leads to a drastic reduction in the odderon-$N$-$N$
coupling \cite{doruod}.
The general result is
rather model independent. It relies on the fact that the quark-diquark
density in a nucleon is nearly symmetric under a parity transformation
(if the diquark is sufficiently small) whereas the odderon coupling changes
sign. Therefore there is a cancellation when the nucleon wave function is
integrated over all angles. Specifically, in the MSV it has been shown that
the formation of a diquark with a radius $r_D < 0.3$ fm yields sufficient
suppression of the odderon coupling to the proton in order to be in agreement
with the upper limit allowed by the measurements of the forward real part of
the ${\rm p p}$ and ${\rm p}\bar {\rm p}$ scattering amplitudes.
It should be stressed that this argument is only valid if the odderon is
treated as a simple Regge pole near $J = 1$. If a more complicated situation
is permitted, as in \cite{nico2}, then the phenomenology is very different
and the suppression of odderon exchange in ${\rm p p}$ and
${\rm p}\bar {\rm p}$ scattering
at high energies is much less marked. However the data do not {\it require}
the additional complexity of double and triple poles, so until demonstrated
explicitly otherwise we prefer to stay with the simplest model which agrees
with data and which allows essentially parameter-free predictions.
It is clearly advantageous if one can find high-energy reactions which permit
odderon exchange but exclude pomeron exchange. Exclusive neutral pseudoscalar
meson production in ${\rm e p}$
scattering at high energies is one such, providing a
direct probe for odderon exchange \cite{schmna,engel}.
Using a simple ansatz for the odderon, this
cross section has been calculated \cite{kina}.
Because of the suppression of the
odderon-p-p coupling, the cross section due to photon-photon fusion is
comparable to that expected at best from odderon exchange. Nonetheless there
are kinematical distributions which could provide promising signals for the
odderon.
The suppression of the odderon coupling due to diquark formation does not hold
if the nucleon is transformed diffractively into an excited negative parity
state. In this case, even for a pointlike diquark the odderon couples to
the nucleon without any restriction \cite{donaru}
giving promise of a significantly higher
cross section for odderon exchange in, for example,
${\rm e p} \rightarrow {\rm e} \pi^0 {\rm X}$
(where X stands for diffractively excited proton states)
compared to ${\rm e p} \rightarrow {\rm e} \pi^0 {\rm p}$.
In fact this is much more
representative of the real experimental situation, as it is often not
possible to say whether the proton has recoiled quasi-elastically or has
been transformed into an excited state within some experimentally defined
mass range. At HERA the upper limit for the recoil mass $M_X$ is typically
$\sim 2$ GeV. Accordingly we calculate the contribution from odderon
exchange to pseudoscalar meson
production
with nucleon fragmentation in ep interactions at high energies, Fig. 1(a).
To complement this we also calculate the contribution to the same process
from photon exchange, Fig.1(b). This latter calculation is exact, limited
only by the errors on the ep total cross section for the $(Q^2,M_X)$
range of relevance. When combined with the previous calculation
\cite{kina} of photon
exchange for the quasi-elastic process this provides an absolute
prediction for
the cross section from the electromagnetic
process alone. Any measurement deviating significantly from this would
be strong evidence for the odderon.
\begin{figure}[htb]
\unitlength1.0cm
\begin{center}
\begin{picture}(15.,8.8)
\put(-0.6,1.0){
\epsfysize=5.0cm
\epsffile{pseudofeyn.eps}}
\put(0.4,0){(a)}
\put(7.7,1.0){
\epsfysize=5.0cm
\epsffile{pseudofeyn.eps}}
\put(8.7,0){(b)}
\put(3.9,2.8){$\mathbb{O}$}
\put(12.2,2.8){$\gamma$}
\end{picture}
\end{center}
\vspace*{-0.0cm}
\caption{Feynman diagrams for pseudoscalar meson production in $ep$
scattering at high energies with odderon (a) and photon (b) exchange.}
\end{figure}
Thus we are considering electroproduction of a pseudoscalar meson
PS = $\pi^0,\eta,\eta',\eta_c$ with nucleon break-up:
\beqa
{\rm e}^{\pm}(p_1) + {\rm p}(p)
\rightarrow {\rm e}^{\pm}(p_1') + {\rm PS}(k) + {\rm X}(p_X).
\label{reaction}
\eeqa
We define $q_1 = p_1 - p_1'$, $q_2 = q_1-k = p_X - p$,
$s=(p_1+p)^2$, $W=s_2=(q_1+p)^2$,
$t_1 =q_1^2=-Q^2$
and $t_2 = q_2^2$. Here we treat the very small $Q^2$ range.
In the H1 experiment at HERA the kinematical cuts for this
so-called photoproduction region \cite{hera} are
\beqa
&& y_{{\rm min}}=0.3 \le y \le 0.7=y_{{\rm max}},
\nonumber\\
&& 0 < Q^2 < 0.01 \, {\rm GeV^2},
\label{cuts}
\eeqa
where, in the proton rest frame, $y = (p q_1)/(p p_1)$ is the fractional
energy loss of the incoming lepton.
Due to the cuts (\ref{cuts}) the photons emitted by the ${\rm e}^{\pm}$
are always nearly on shell, and the equivalent photon approximation
(EPA) \cite{budnev} is applicable.
The total electroproduction cross section for producing a PS
in terms of the EPA is given by
\begin{eqnarray}
&&\sigma = \int_{y_{min}}^{y_{max}} \, \frac{dy}{y} \, n(y) \;
\sigma_{\gamma {\rm p}}
(s_2)
\; \; \; \; \; {\rm with} \; \; \; \; \; s_2=y s + (1-y) m_{{\rm p}}^2,
\nonumber\\[0.2cm]
&&n(y) = \frac{\alpha}{\pi} \,
\bigg\{
\Big(1-y+\frac{y^2}{2}\Big) \, {\rm ln} \Big(
\frac{|t|_u}{|t|_l}\Big) -
\frac{m_e^2 y^2}{|t|_l} \,
\Big( 1 - \frac{|t|_l}{|t|_u} \Big) \ -
\nonumber\\
&& \hphantom{N(y) = \frac{\alpha}{\pi} \, b }
\Big( 1-\frac{y}{2} \Big)^2 \,
{\rm ln} \Big( \frac{ |t|_u/E^2 + y^2} { |t|_l/E^2 + y^2} \Big) \bigg\},
\label{epa}
\end{eqnarray}
where $m_{{\rm p}}$
is the nucleon mass and $\sigma_{\gamma {\rm p}}$ is the total photoproduction
cross section for the reaction. $n(y)$ is the equivalent photon number
\cite{budnev} for a given
energy fraction of the incoming ${\rm e}^{\pm}$ transferred
to the $\gamma {\rm p}$ subsystem. In the proton rest frame
$E=p p_1/m_{{\rm p}}$
is the energy of the incoming $e^{\pm}$.
The upper limit $|t|_u$ is given by experiment (\ref{cuts}),
$|t|_u$=0.01 ${\rm GeV}^2$, and $|t|_l$=$m_e^2 y^2/(1-y)$.
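For orientation, the $y$-integration of (\ref{epa}) with the cuts (\ref{cuts}) is easily done numerically. The sketch below (Python) assumes HERA beam energies $E_e=27.5$ GeV, $E_p=820$ GeV to fix $E$; it reproduces the constant factor $\approx 0.0136$ quoted in Sect. 2:
\begin{verbatim}
# Sketch: numerical y-integration of the equivalent photon number n(y)
# of eq. (epa) with the photoproduction cuts.  HERA beam energies are
# assumed (E_e = 27.5 GeV, E_p = 820 GeV) to fix the lepton energy E
# in the proton rest frame.
import numpy as np
from scipy.integrate import quad

alpha = 1.0 / 137.036
m_e, m_p = 0.511e-3, 0.938           # GeV
s = 4 * 27.5 * 820.0                 # GeV^2, neglecting masses
E = (s - m_p**2) / (2 * m_p)         # lepton energy, proton rest frame
t_u = 0.01                           # |t|_u from the cuts, GeV^2

def n_photon(y):
    t_l = m_e**2 * y**2 / (1 - y)
    return alpha / np.pi * (
        (1 - y + y**2 / 2) * np.log(t_u / t_l)
        - m_e**2 * y**2 / t_l * (1 - t_l / t_u)
        - (1 - y / 2)**2 * np.log((t_u / E**2 + y**2)
                                  / (t_l / E**2 + y**2)))

c_epa, _ = quad(lambda y: n_photon(y) / y, 0.3, 0.7)
print("c_EPA = %.4f" % c_epa)        # ~ 0.0136
\end{verbatim}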
\section{The odderon contribution}
For convenience we discuss the odderon exchange contribution to the
reaction $\gamma {\rm p} \rightarrow \pi^0 {\rm X}$.
The conversion to an incident
electron via the EPA is in principle straightforward, as is the replacement
of the $\pi^0$ by one of the other pseudoscalars.
There are two apparently extreme approaches to this calculation. One
is to assume that the system X is dominated by a small number of
resonances, which for the odderon should be a good approximation. The
second is to ignore any possible structure in the final system X and
represent it simply by a free quark-diquark pair. Both calculations are
performed and give similar results.
We discuss first resonance production.
The break-up of the nucleon by odderon exchange leads naturally to negative
parity final states. The lowest lying nucleonic isoscalar states with
negative parity are the $N(1520)$: $J^{P}={\frac{3}{2}}^- $
and the $N(1535)$:
$J^{P}={\frac{1}{2}}^-$.
These are both compatible with the diquark picture.
For dynamical reasons a scalar diquark is favoured over a vector one
\cite{shure}.
The P-wave excitation of a quark and a scalar diquark
gives degenerate nucleon resonances with quantum numbers $J^P =
{\frac{3}{2}}^-, {\frac{1}{2}}^-$ which can readily be identified with
these two observed states.
It is easy to see that for unpolarised cross sections,
summed over both resonances, the quark spin degree of freedom becomes
irrelevant and the calculation reduces to one where a spinless state is
excited to a 2P resonance.
The scattering amplitude $T(s_2,t_2)$, calculated in the
$\gamma {\rm p}$ c.m. system, can be expressed through a profile
function $J(\vec b)$:
\be \label{fund}
T(s_2,t_2) = 2is_2 \int \,
d^2b\,e^{i\vec{q_2}_T \vec{b}}\,J(\vec b).
\label{msvampl}
\ee
The profile function is expressed in the model \cite{dfk}
as an overlap of a
dipole-dipole scattering amplitude $\tilde J(\vec b, \vec r_1, z_1, \vec
r_2, z_2)$ where $\vec b$ is the impact parameter of two
lightlike dipole trajectories with (transverse) size $\vec r_1$ and
$\vec r_2$ respectively. The quantities $z_1$, $z_2$ are the
longitudinal momentum fractions of the quarks in the dipoles.
It is of course necessary to take a profile function corresponding
to the exchange of a $C$=$P$=$-$1 object \cite{doruod}. Note that the
model contains only the kinematical $s_2$-dependence shown in
(\ref{fund}) which leads to an energy independent cross section. The
parameters of the model\footnote{The parameters of the pion-photon
overlap can be found in \cite{donaru}. The MSV parameters are: $a$=0.31 fm,
$\langle g^2 \, FF \rangle$=3.0 ${\rm GeV}^4$, $S_{\rm p}$=0.85 fm as
determined in \cite{mdoc}.}
are the same as those used in \cite{donaru} which were
fixed at an energy of $\sqrt{s_2}$=$W$=20 GeV. We return to the question of
energy dependence at the end of this section.
Hadronic amplitudes for production of hadronic resonance
states are obtained by smearing the dipole extensions
$\vec r_i$ with the respective transition densities:
\beqa
J(\vec b)_{\lambda,\lambda_{\gamma}} &=&
\int \frac{d^2 r_1}{4\pi} dz_1
\int \frac{d^2 r_2}{4\pi}
\nn\\
&& \sum_{f,h_1, h_2}
\Psi^{*\, \pi^0}_{f h_1 h_2}(\vec{r}_1,z_1)
\Psi^{\ga}_{\lambda_{\gamma},\,f h_1 h_2} (\vec{r}_1,z_1)
\Psi^{*\, {\rm 2P}}_{\lambda} (\vec{r}_2)
\Psi^{{\rm p}}(\vec{r}_2)
\tilde{J}(\vec b, \vec{r}_1,z_1, \vec{r}_2).
\label{profile}
\eeqa
Here $\lambda$ and $\lambda_{\gamma}$ stand for the orbital helicity of the
resonance and the helicity of the photon, respectively.
In agreement with the other applications of the model we use for the
quark-diquark wave function of the proton a Gaussian with an extension
parameter $S_{\rm p}$ adjusted to $pp$ scattering and neglect the
dependence on the longitudinal momentum fraction $z_2$ in the purely
hadronic overlap. Thus we take for the proton wave function
\be
\label{proton}
\Psi^{\rm p}(r_2)=
\frac{\sqrt{2}e^{-{r_2^2}/{4S_{\rm p}^2}} }{S_{\rm p}}.
\label{pwave}
\ee
As the
orbital wave function for the low lying degenerate excited states we choose
an ansatz analogous to the proton-wave function but in a P-state:
\beqa
&&\Psi^{{\rm 2P}}_{\lambda} (\vec{r}_2)=
\Psi^{{\rm 2P}} (r_2) \; e^{i \lambda \theta_2},
\nonumber\\
&& \Psi^{{\rm 2P}} (r_2) =
\frac{r_2 \; e^{-{r_2^2}/{4 S_{\rm p}^2}} }{ S_{\rm p}^2 }
\label{2pwave}
\eeqa
with the same extension parameter $S_{\rm p}$ as for the
proton.
An analogous strategy has been applied successfully for the excited
$\rho$ states \cite{doku}.
In (\ref{msvampl}) we use $\vec{q_2}_T$ as the x-axis
for the transverse vectors $\vec{r}_1$, $\vec{r}_2$ and $\vec{b}$, so
$\theta_2$ is the angle of $\vec{r}_2$ in planar coordinates.
The orbital helicity $\lambda$ of the 2P state can
take the values $0,\pm 1$, but the orbital helicity
$\lambda=0$ cannot be excited in our model.
The photon-pion overlap is taken from \cite{donaru}:
\beqa
&& \sum_{f,h_1, h_2}
\Psi^{*\, \pi^0}_{f h_1 h_2} (\vec{r}_1,z_1)
\Psi^{\ga}_{\lambda_{\gamma},f h_1 h_2} (\vec{r}_1,z_1) =
\nn\\
&&i\frac{e}{\sqrt{2}}
f_\pi e^{-{\omega^2 r_1^2}/{2}} e^{i\lambda_{\gamma} \theta_2}
z_1(1-z_1) f(z_1) \left(
m_q \; {\rm K}_1 (m_q r_1) +
r_1 \omega^2{\rm K}_0 (m_q r_1) \right)
\label{piwave}
\eeqa
Here $m_q$ is the quark mass, $m_q=0.22$ GeV.
For a justification of such a simple ansatz see \cite{dogupi}.
It should be noted that the
$\gamma\gamma$ decay width of the $\pi^0$ comes out correctly with this
overlap. The function $f(z_1)$ is given by \cite{bsw}:
\be
f(z_1)= {\cal N} \sqrt{z_1(1-z_1)}
\exp\left(
-\frac{M_\pi^2(z_1-1/2)^2}{2\omega^2}\right)
\ee
and $\omega$ and $\cal{N}$ are fixed by normalization.
To take advantage of global azimuthal invariance
it is useful to choose as new integration variables
the relative angles between $\vec{b}$ and
$\vec{r}_{1(2)}$,
$\theta_{1(2)}^{\prime} = \theta_{1(2)}-\theta_b$. Then
with this choice of coordinates $\tilde{J}$
in (\ref{profile}) becomes independent
of $\theta_b$
and inserting (\ref{pwave}), (\ref{2pwave}) and (\ref{piwave}) into the
scattering amplitude (\ref{msvampl}) we get
\beqa
T_{\lambda , \lambda_{\gamma}}&=&2is_2 \int b\, db\,d\theta_b \,
e^{i\sqrt{-t_2} b \cos (\theta_b)}
\int \frac{d^2 r_1}{4\pi} d z_1
\int \frac{d^2 r_2}{4\pi}
\nn\\
&&\times
\Big[\sum_{f,h_1, h_2}\Psi^{*\, \pi^0}_{f h_1 h_2}(r_1,z_1)
\Psi^{\ga}_{\lambda_{\gamma}, f h_1 h_2}(r_1,z_1)\Big]_{\theta_1=0}
\Psi^{*\, 2P}(r_2)
\Psi^{{\rm P}}(r_2)
\nn\\
&&\times
e^{i \lambda_{\gamma} (\theta_1^{\prime}+\theta_b)}
e^{i \lambda (\theta_2^{\prime}+\theta_b)}
\tilde{J}(\vec{b}, \vec{r}_1,z_1, \vec{r}_2),
\eeqa
where the wavefunctions now only depend on $r_{1(2)}$.
Using the relation
\be
\int_0^{2\pi}d\theta_b \,e^{i\sqrt{-t_2}
b \cos (\theta_b)} e^{in\theta_b}=(i)^n 2\pi
J_n(\sqrt{-t_2}b)\nn
\ee
we can now perform the $\theta_b$ integration and
obtain finally for the scattering amplitude
\beqa
&&T_{\lambda,\lambda_{\gamma}} = 2 i s_2 \int b\, db\,
\int \frac{d^2 r_1}{4\pi} dz_1
\int \frac{d^2 r_2}{4\pi}
\nn\\
&&\hphantom{T_{\lambda_{\gamma},\lambda}}
\times
\Big[ \sum_{f,h_1, h_2}
\Psi^{*\, \pi^0}_{f h_1 h_2}(r_1,z_1)
\Psi^{\ga}_{\lambda_{\gamma},f h_1 h_2}(r_1,z_1) \Big]_{\theta_1=0}
\Psi^{*\, 2{\rm P}}(r_2)\Psi^{{\rm p}}(r_2)
\nn\\
&&\hphantom{T_{\lambda_{\gamma},\lambda_{{\rm 2P}}}}
\times
e^{i (\lambda_{\ga} \theta_1^{\prime}+
\lambda\theta_2^{\prime})}
(-i)^{(\lambda +\lambda_{\gamma})}
2\pi
J_{(\lambda +\lambda_{\gamma})}(\sqrt{-t_2}b) \;
\tilde{J}(\vec{b}, \vec{r}_1,z_1, \vec{r}_2).
\label{msvfinalampl}
\eeqa
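As a side remark, the angular integral used above is easily checked numerically (Python):
\begin{verbatim}
# Sketch: numerical check of
#   int_0^{2pi} dtheta exp(i*q*b*cos(theta)) exp(i*n*theta)
#     = i^n * 2*pi * J_n(q*b)
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

qb, n = 1.7, 2                     # arbitrary test values
f = lambda th: np.exp(1j * qb * np.cos(th) + 1j * n * th)
re, _ = quad(lambda th: f(th).real, 0.0, 2 * np.pi)
im, _ = quad(lambda th: f(th).imag, 0.0, 2 * np.pi)
print(re + 1j * im, 1j**n * 2 * np.pi * jv(n, qb))   # agree
\end{verbatim}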
The resulting differential cross section is:
\beqa
\frac{d\sigma_{\gamma \, {\rm p}}^{\mathbb{O}}}{dt_2}=
\frac{1}{16 \pi s_2^2}\frac{1}{2}\sum_{\lambda}
\sum_{\lambda_{\gamma}} | T_{\lambda,\lambda_\gamma} |^2.
\label{diffcrossmsv}
\eeqa
One of the features of the result (\ref{diffcrossmsv})
is that the differential cross
section in the forward direction ($t_2=0$) does not vanish, due to the
appearance of the Bessel function $J_0$
in (\ref{msvfinalampl}) for $\lambda+\lambda_{\gamma}=0$.
The differential cross section (\ref{diffcrossmsv}) for
photoproduction is displayed in Fig. \ref{msvbild}.
\begin{figure}[htb]
\unitlength1.0cm
\begin{center}
\begin{picture}(15.,9.8)
\put(3.0,1.0){
\epsfysize=9.0cm
\epsffile{dsigdt2msv.eps}}
\end{picture}
\end{center}
\vspace*{-1.5cm}
\caption{The differential cross section
$d\sigma_{\gamma {\rm p}}^{\mathbb{O}} / dt_2$
of the process
$\gamma {\rm p} \to \pi^0 \{{\rm 2P}\}$ as a function of $t_2$.
(A fit to this curve is:
$d\sigma_{\gamma {\rm p}}^{\mathbb{O}} / dt_2$
=$a \, \exp( -b t_2 -c t_2^2)$ where
$a = 1523 \; {\rm nb/GeV^2}, \;b = 5.44 \,{\rm GeV}^{-2},
\; c = - 0.80 \,{\rm GeV}^{-4}$)}
\label{msvbild}
\end{figure}
The integrated cross section is:
\be
\sigma_{\gamma \, {\rm p}}^{\mathbb{O}}
(\gamma {\rm p} \to \pi^0 \{2{\rm P}\}) = 294 \;{\rm nb}.\nn
\ee
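This number can be recovered to within a few per cent by integrating the fit quoted in the caption of Fig. \ref{msvbild}; a quick sketch (Python), where the upper integration limit $|t_2|\le 1.2$ GeV$^2$ is our choice, since the fit, having $c<0$, cannot be extended to arbitrarily large $|t_2|$:
\begin{verbatim}
# Sketch: integrated odderon cross section from the fit of Fig. 2,
# with t = |t_2|.  The cut-off at 1.2 GeV^2 is our choice; the fit
# cannot be integrated to infinity because of the quadratic term.
import math
from scipy.integrate import quad

a, b, c = 1523.0, 5.44, -0.80               # nb/GeV^2, GeV^-2, GeV^-4
dsig = lambda t: a * math.exp(-b * t - c * t * t)
sigma, _ = quad(dsig, 0.0, 1.2)
print(sigma)             # ~ 300 nb, close to the 294 nb quoted above
print(0.0136 * sigma)    # EPA conversion: ~ 4 nb, cf. below
\end{verbatim}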
As this photoproduction cross section is constant, i.e. independent of $s_2$,
the EPA conversion to electroproduction can be achieved by simply multiplying
it by a constant $c_{{\rm EPA}} = 0.0136$ corresponding to the $y$
integration
in (\ref{epa}). This gives
\be
\sigma^{\mathbb{O}}
({\rm e p} \to {\rm e} \pi^0 \{{\rm 2P}\}) = 4.01 \;{\rm nb}.\nn
\ee
The other possibility, namely treating the final state $X$ in
(\ref{profile}) as a free
quark-diquark pair, is achieved by approximating the final state by a plane
wave. This is effectively invoking quark-hadron duality, and should be a
reasonable approximation if we are not interested in the local form of the
mass spectrum of $X$ but only in the integral over the full mass spectrum.
In this case the integrated cross section was found to be \cite{donaru}
\be
\sigma_{\gamma {\rm p}}^{\mathbb{O}}
(\gamma {\rm p} \to \pi^0 {\rm X}) = 341 \; {\rm nb}\nn
\ee
consistent with the resonance result given above.
A good check for the use of quark-hadron duality in diffractive
processes is provided by the application of a similar method to the
reaction $\gamma {\rm p} \to \rho {\rm X}$
and its comparison with elastic $\rho$
photoproduction. One obtains in the model \cite{doku}:
\be
{\sigma( \gamma {\rm p} \to \rho {\rm p})} = 7.9 \;\mu{\rm b}
\ee
in good agreement with the pomeron contribution \cite{dl95}
at $W=20$ GeV to the total $\rho$ photoproduction cross section
\cite{ast82}.
For the ratio one obtains\footnote{We thank S. Weinstock for
communicating this result prior to publication.}
\be
\frac{\sigma( \gamma {\rm p} \to \rho {\rm X})}
{\sigma(\gamma {\rm p} \to \rho {\rm p})} \approx 1.5,
\ee
in agreement with the data at HERA energies \cite{weinexp}.
It should be noted that the results for diffractive dissociation depend
much more on the choice of the wave functions than for elastic processes.
In the latter the overlap becomes essentially the density and is constrained
by normalization. The photon-pion overlap has been tested to some
extent by the pion radiative decay, but there is no such test for
the proton-2P overlap. The odderon contribution is also much more
sensitive to the parameters of the MSV than pomeron exchange.
We have also applied the matrix cumulant expansion technique
\cite{naber}, and with the approximations made there we
found results differing from the above ones by at most 50\%.
Taking all these uncertainties into account we
estimate the uncertainty in the cross section in this
model calculation to be at least a factor 2 at $W = 20$ GeV.
Finally we return to the question of energy-dependence.
In hadron hadron scattering the increase of the cross
sections together with the shrinking of the diffraction peak can
be well reproduced in this model by scaling the hadronic radii by
$(W^2/400 \; {\rm GeV}^2)^{0.08/3}$ \cite{dfk,naber}. Assuming that the same
radial scaling reproduces the energy dependence of the odderon
contributions we find that the integrated cross section scales like
$(W^2/400 \; {\rm GeV}^2)^{0.15}$,
leading to an enhancement of ca. 1.8 for HERA energies
as compared to $W=20$ GeV.
\section{The electromagnetic cross section}
In this section we consider PS production mediated by photon
rather than odderon exchange (Fig. 1b).
Again the proton, now hit by the photon, is allowed to go
into some hadronic final state.
The coupling of a pseudoscalar meson PS to two photons is fixed as
described in \cite{kina}. The Lorentz structure is that of the triangle anomaly
where the strength of the coupling, parametrised by a constant $u_{PS}$,
can be extracted from the partial decay width of the PS decaying into
two photons. The scattering amplitude in leading order perturbation theory
of the electroweak interaction is
\begin{eqnarray}
&&S_{fi} = i \, (2 \pi)^4 \, \delta^4(p+p_1-p_1'-k-p_X) \, T_{fi},
\nonumber\\
&&T_{fi} = -e \, \bar{u}(p_1') \gamma^{\mu} u(p_1) \,
\frac{1}{t_1} \; u_{PS} \,
\epsilon_{\mu \nu \rho \sigma} \, q_1^{\rho} q_2^{\sigma} \, T(t_1,t_2)
\, \frac{1}{t_2}
\nonumber\\
&&\hphantom{T_{fi} = }
\times \langle X(p_X) | e \, J^{\nu}(0) | P(p) \rangle.
\label{amplitude}
\end{eqnarray}
Here $J^{\nu}$ is the hadronic part of the electromagnetic
current\footnote{Throughout this Section all notations are as in
\cite{nabuch}.} and the form factor $T(t_1,t_2)$ is given in terms of
a vector meson dominance ansatz (cf. \cite{kina}).
We are not interested in properties due to
polarisation of the initial and final state particles and so average
over the initial particle polarisations and sum over the final ones.
In addition we are also not interested in momentum distributions of
the outgoing hadrons except, clearly, for the PS. For this reason we
perform the integrations over their momenta.
Summing over all kinematically accessible states X the differential
cross section can be written as
\begin{eqnarray}
&&d^6 \sigma^{\gamma} = \frac{1}{2\, w(s,m^2,m_e^2)} \,
\frac{d^3p_1'}{(2 \pi)^3 \, 2 \, p_1^{' 0}}
\frac{d^3k}{(2 \pi)^3 \, 2 \, k^{0} } \;\;
\rho^{\mu \nu} \, P_{\mu \nu},
\nonumber\\
&&\rho^{\mu \nu} := \frac{e^2}{t_1^2} \,
\Big\{ (g^{\mu \nu} - \frac{q_1^{\mu}q_1^{\nu}}{q_1^2}) +
\frac{ (2 p_1 - q_1)^{\mu}(2 p_1 - q_1)^{\nu}}{q_1^2} \Big\},
\nonumber\\
&&P_{\mu \nu} := \frac{e^2}{t_2^2} \,
u_{PS}^2 \, T^2(t_1,t_2)
\epsilon_{\mu \omega \alpha \beta}
q_1^{\alpha} q_2^{\beta}
\epsilon_{\nu \rho \gamma \delta}
q_1^{\gamma} q_2^{\delta} \; (2 \pi) (2\, m_{{\rm p}})
\, W^{\rho \omega},
\nonumber\\
&&w(x,y,z):= (x-(\sqrt{y}+\sqrt{z})^2)^{\frac{1}{2}}
(x-(\sqrt{y}-\sqrt{z})^2)^{\frac{1}{2}}.
\label{diffcross}
\end{eqnarray}
Here $W_{\rho \omega}$
is the usual hadron tensor as defined for example in chapter
18 of \cite{nabuch} involving the two invariant functions
$W_1$ and $W_2$.
If the sum over all kinematically accessible states is
restricted to some subset invariant
under Lorentz and parity transformations
the two invariant functions $W_{1,2}$ which appear in
$W_{\rho \omega}$ change,
whereas the tensor structures, multiplied by $W_{1,2}$, do not.
The photoproduction cross section
of (\ref{epa}) for photon exchange $\sigma_{\gamma {\rm p}}(s_2)$ is
given by
\begin{eqnarray}
&&\sigma_{\gamma {\rm p}}^{\gamma}(s_2) =
\int \frac{d^3k}{2 k^0 (2 \pi )^3} \,
\frac{1}{2 \, w(s_2,m^2,t_1)} \,
\frac{1}{2} \, ( -g^{\mu \nu}) \,
\nonumber\\
&& \hphantom{d\sigma_T^{\ast}=}
(u_{PS}^2 \, T^2 \, e^2 \,
\epsilon_{\mu \omega \alpha \beta}
q_1^{\alpha} q_2^{\beta}
\epsilon_{\nu \rho \gamma \delta}
q_1^{\gamma} q_2^{\delta}) \, \frac{1}{t_2^2}
\nonumber\\[0.05cm]
&&\hphantom{d\sigma_T^{\ast}=}
(2 \, \pi) \,(2 \, m_{{\rm p}}) \,
W^{\rho \omega}.
\label{sigtotallg}
\end{eqnarray}
We now evaluate (\ref{sigtotallg})
using the data on the total $\gamma^{\ast} {\rm p}$
cross section collected in \cite{brasse}, where
the $\gamma^{\ast} {\rm p}$ c.m. energy is below
2.0 GeV and $0 \le |t_2| \le 6.0$ GeV$^2$.
As a check on the procedure we compare the result for the $\Delta(1232)$
resonance region with an explicit calculation of $\Delta$ electroproduction.
\subsection{The resonance region}
The total $\gamma^{\ast} {\rm p}$
absorption cross section $\Sigma = \sigma_T + \epsilon\sigma_L$
in the region of the main nucleon resonances has been determined in inelastic
electron proton scattering experiments. With the kinematical
definitions of \cite{hand} the relation of $\sigma_T$ and $\sigma_L$
to $W_{1,2}$ is:
\begin{eqnarray}
&& W_1 (M_X^2,q_2^2) = \frac{1}{4 \pi^2 \alpha} \,
\frac{M_X^2-m_{{\rm p}}^2}{2 \, m_{{\rm p}}} \,
\sigma_T (M_X^2,q_2^2),
\nonumber\\
&& W_2 (M_X^2,q_2^2) = \frac{1}{4 \pi^2 \alpha} \,
\frac{M_X^2-m_{{\rm p}}^2}{2 \, m_{{\rm p}}} \,
(1-\nu^2/q_2^2)^{-1} \,
\big[ \sigma_T(M_X^2,q_2^2) + \sigma_L(M_X^2,q_2^2) \big],
\nonumber\\
&& \nu = \frac{1}{2 \, m_{{\rm p}}} \, (M_X^2-m_{{\rm p}}^2 -q_2^2).
\label{kindef}
\end{eqnarray}
Here $\alpha$ is the fine structure constant.
For very small photon virtualities
there are photoproduction data in addition. In \cite{brasse} all these data
have been combined to extract the following parametrisation of $\Sigma$
as a function of the photon virtuality $q_2^2$ and the invariant
mass squared of the final state $M_X^2$
\begin{eqnarray}
&&\ln\Big( \frac{\Sigma}{G_D^2} \Big) = a(M_X^2)+
b(M_X^2) \ln \Big( \frac{ |\vec{q}_2|}{|\vec{q}_2|_0} \Big) +
c(M_X^2) | \ln \Big(\frac{ |\vec{q_2}|}{|\vec{q}_2|_0} \Big) |^{d(M_X^2)},
\nonumber\\
&&G_D(q_2^2) = \frac{1}{(1 - q_2^2/0.71 {\rm GeV^2})^2 }.
\label{fit}
\end{eqnarray}
$G_D$ is the dipole form factor, $|\vec{q}_2|$ ($|\vec{q}_2|_0$) is
the absolute value of the photon momentum
in the proton rest frame for photon virtuality $q^2_2$ ($q^2_2=0$). The fit is
restricted to the range $1.11 \le M_X \le 1.99$ GeV. The functions $a,b,c$
are given in \cite{brasse} only at discrete points\footnote{In the range $1.11
\le M_X \le 1.755$ in steps of $\Delta M_X$=0.015 GeV and in the range $1.755
\le M_X \le 1.990$ in steps of $\Delta M_X$=0.02 GeV.}. The function $d$ is
taken to be a constant, $d = 3.0$. To obtain from the fit a function defined
for all $M_X^2$ and $q_2^2$ we represent $\Sigma$ as a histogram.
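A sketch of this procedure (Python) is given below; the grid values of $a,b,c$ are placeholders, \emph{not} the tabulated numbers of \cite{brasse}:
\begin{verbatim}
# Sketch: evaluating the parametrisation (fit) on a discrete (a,b,c)
# grid, treating Sigma as a histogram (piecewise constant in M_X).
# The grid values below are placeholders, NOT those of Brasse et al.
import numpy as np

m_p, d = 0.938, 3.0
M_grid = np.arange(1.110, 1.755, 0.015)     # GeV, first mass range
a_grid = np.full_like(M_grid, 5.0)          # placeholder
b_grid = np.full_like(M_grid, 3.0)          # placeholder
c_grid = np.full_like(M_grid, 0.1)          # placeholder

def q2_abs(MX2, q22):
    """|vec q_2| in the proton rest frame for photon virtuality q22."""
    nu = (MX2 - m_p**2 - q22) / (2 * m_p)
    return np.sqrt(nu**2 - q22)

def sigma_fit(MX, q22):
    i = max(np.searchsorted(M_grid, MX) - 1, 0)   # histogram bin
    GD = 1.0 / (1.0 - q22 / 0.71)**2
    L = np.log(q2_abs(MX**2, q22) / q2_abs(MX**2, 0.0))
    return GD**2 * np.exp(a_grid[i] + b_grid[i] * L
                          + c_grid[i] * abs(L)**d)

print(sigma_fit(1.215, -0.1))
\end{verbatim}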
Fig. \ref{histobild} shows
as an example $\Sigma$ for a fixed photon virtuality of $q_2^2=-0.1$ GeV$^2$.
\begin{figure}[htb]
\unitlength1.0cm
\begin{center}
\begin{picture}(15.,8.8)
\put(3.0,1.0){
\epsfysize=9.0cm
\epsffile{histobild.eps}}
\end{picture}
\end{center}
\vspace*{-1.5cm}
\caption{The $\gamma^{\ast} {\rm p}$ absorption cross section
$\Sigma$ for $q_2^2=-0.1$ GeV$^2$ as a histogram, using the
parametrisation (\ref{fit}).}
\label{histobild}
\end{figure}
It was shown in \cite{brasse} that the total virtual photon-proton cross
section for longitudinal photons $\sigma_L$ is small in the resonance region,
so we neglect it in our discussion. Thus, setting $\sigma_T=\Sigma,\,
\sigma_L=0$ in (\ref{kindef})
we get an analytic expression for the hadron tensor $W_{\mu \nu}$
and are in a position to calculate the inelastic PS production cross
section (\ref{sigtotallg}), summing over all hadron final states $|X \rangle $
with invariant mass $1.11\le M_X \le 1.99$ GeV.
As a specific check on our procedure we shall want to focus on a particular
mass range, namely that of the $\Delta(1232)$ resonance. This can be achieved
as follows. We multiply $\sigma_{\gamma {\rm p}}(s_2)$ in (\ref{sigtotallg})
by
\begin{eqnarray}
1 = \int d \, M_X^2 \; \delta(p_X^2 - M_X^2)
\label{trick}
\end{eqnarray}
and define
\begin{eqnarray}
&&\tilde{\sigma}_{\gamma {\rm p}}^{\gamma} (s_2) :=
\int \, \frac{d^3k}{2 k^0 (2 \pi )^3} \,
\frac{1}{2 \, w(s_2,t_1=0,m_{{\rm p}}^2)} \,
\frac{1}{2} \, ( -g^{\mu \nu}) \,
\nonumber\\
&& \hphantom{d\sigma(s_2,t_2)=}
( 4 \pi \alpha \, u_{PS}^2 \, T^2 \,
\epsilon_{\mu \omega \alpha \beta}
q_1^{\alpha} q_2^{\beta}
\epsilon_{\nu \rho \gamma \delta}
q_1^{\gamma} q_2^{\delta} )\, \frac{1}{t_2^2}
\nonumber\\[0.05cm]
&&\hphantom{d\sigma(s_2,t_2)=}
(2 \, \pi) \, (2 \, m_{{\rm p}}) \, W^{\rho \omega}(p_X^2,t_2) \,
m_{{\rm p}}^2 \, \delta(p_X^2-m_X^2)
\label{newsigma}
\end{eqnarray}
Contracting the Lorentz indices in (\ref{newsigma}), inserting the explicit
form of the hadron tensor and expressing the invariant functions through
(\ref{kindef})
we get finally for the cross section of inelastic
PS production
\begin{eqnarray}
&&\sigma^{\gamma} =
\int_{M_{min}}^{M_{max}} \, d M_X \; \frac{2 \, M_X}{m^2} \,
\int_{y_{min}}^{y_{max}} \, \frac{d y}{y} \; n(y) \;
\int_{t_{2_{min}}}^{t_{2_{max}}} \, d t_2 \;
\frac{d \tilde{\sigma}_{\gamma {\rm p}}^{\gamma}}{d t_2} (s_2),
\nonumber\\[0.1cm]
&&
\frac{d \tilde{\sigma}_{\gamma {\rm p}}^{\gamma}}{d t_2}=
\frac{m^2 T^2 u^2 \; \Sigma(M_X^2,t_2)}
{64 \pi^2 ( s_2 - m_{{\rm p}}^2 )^2 t_2^2}\,
\frac{M_X^2 - m_{{\rm p}}^2}
{m_{{\rm p}}^4 + (M_X^2-t_2)^2 -
2 m_{{\rm p}}^2 ( M_X^2 + t_2)}
\nonumber\\
&&\hphantom{\frac{d \tilde{\sigma}_T}{d t_2}}
\Big\{ m_{PS}^4( m_{{\rm p}}^2 - M_X^2 )^2
- 2 t_2 m_{PS}^2 \big(
M_X^2 ( m_{PS}^2 + M_X^2 - s_2) +
m_{{\rm p}}^2 ( s_2 - M_X^2 ) \big) +
\nonumber\\
&&\hphantom{m_{PS}^4( \{ \frac{d \tilde{\sigma}_T}{d t_2}}
t_2^2 \big( m_{{\rm p}}^4 + m_{PS}^4 + M_X^4 +
2 m_{PS}^2 ( 2 M_X^2 - s_2 ) + 2 m_{{\rm p}}^2 ( m_{PS}^2 - s_2) -
\nonumber\\
&&\hphantom{ff m_{PS}^4( \{ t_2^2 \big( \frac{d \tilde{\sigma}_T}{d t_2}}
2 M_X^2 s_2 + 2 s_2^2 \big) +
2 t_2^3 \big( s_2 - m_{{\rm p}}^2 - m_{PS}^2 - M_X^2 \big)
+ t_2^4 \; \; \Big\}
\label{epax}
\end{eqnarray}
The integration limits with respect to $t_2$ are just the
kinematical limits of the $2 \rightarrow 2$ process
$\gamma + {\rm p} \rightarrow {\rm PS + X}$
as given,
for example, in \cite{particle}.
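For completeness, a sketch (Python) of these limits for $\gamma + {\rm p} \rightarrow {\rm PS} + {\rm X}$ at fixed $s_2$ and $M_X$, written in terms of the function $w$ of (\ref{diffcross}) and assuming a real photon ($t_1=0$):
\begin{verbatim}
# Sketch: kinematical limits of t_2 for gamma + p -> PS + X at fixed
# s_2 and M_X (real photon, t_1 = 0), from standard 2 -> 2 kinematics.
import math

def w(x, y, z):
    return math.sqrt(x - (math.sqrt(y) + math.sqrt(z))**2) * \
           math.sqrt(x - (math.sqrt(y) - math.sqrt(z))**2)

def t2_limits(s2, m_p, m_ps, MX):
    rs = math.sqrt(s2)
    E1 = (s2 - m_p**2) / (2 * rs)              # photon energy (= p1)
    E3 = (s2 + m_ps**2 - MX**2) / (2 * rs)     # PS energy
    p3 = w(s2, m_ps**2, MX**2) / (2 * rs)      # PS momentum
    t0 = m_ps**2 - 2 * E1 * E3
    return t0 + 2 * E1 * p3, t0 - 2 * E1 * p3  # (t2_max, t2_min)

print(t2_limits(s2=400.0, m_p=0.938, m_ps=0.135, MX=1.232))
\end{verbatim}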
In the second column of Tab. \ref{tot} we list our results
for the total cross section of
inelastic PS electroproduction in the photoproduction region
(\ref{cuts}), calculated from (\ref{epax}).
\begin{table}[hbt]
\begin{center}
$
\begin{array}{|l||c|c|}
\hline
& \sigma^{\gamma}({\rm e \, p}\rightarrow {\rm e\, PS\, X})
& \sigma^{\gamma}({\rm e \, p}\rightarrow {\rm e\, PS\, p})\\
\hline
\pi^0 & 2.0 \; {\rm pb} & 78.1 \; {\rm pb} \\
\hline
\eta & 1.9 \; {\rm pb} & 56.4 \; {\rm pb} \\
\hline
\eta^{\prime} & 3.1 \; {\rm pb} & 83.6 \; {\rm pb} \\
\hline
\eta_c & 0.3 \; {\rm pb} & 3.83 \; {\rm pb}\\
\hline
\end{array}
$
\caption{Total cross section for inelastic and elastic PS
electroproduction by photon exchange.}
\label{tot}
\end{center}
\end{table}
These values are compared in the third column of Tab. \ref{tot}
to the results of
elastic PS production calculated in terms of the
EPA in \cite{kina}.
As we can read off from
Tab. \ref{tot}, the integrated cross sections for inelastic PS
production
are very small compared to the elastic ones. However
in experimental analyses one typically has to make additional cuts
on the data and, of course, this will change the ratio of inelastic to
elastic contributions. We explore this first for the $t_2$-distribution.
To compare with the results of
\cite{kina} it is convenient to plot the differential cross section with
respect to the logarithm of $-t_2$ rather than
$d\sigma^{\gamma}/dt_2$. Our results
for $\pi^0$ production are shown in Fig. \ref{t2bild}, together with the
results of \cite{kina} for the elastic case.
The cross sections for the other pseudoscalar mesons
in essence scale as the coupling constants $u_{PS}^2$.
It is immediately clear
that the large difference in integrated cross section
between the elastic and inelastic case is coming from the
region of very small $|t_2|$. In the limit $t_2 \rightarrow 0$,
$d\sigma^{\gamma}/dt_2$
goes to a constant for the break-up calculation and
simultaneously the available phase space region in
$t_2$ becomes smaller and
smaller. This results in the decrease of the logarithmic distributions
in Fig. \ref{t2bild} for $|t_2|\le 10^{-2}$.
In the case of the elastic production $d\sigma^{\gamma}/dt_2$ increases
as $1/|t_2|$ as $t_2 \rightarrow 0$.
The corresponding logarithmic distribution
then shows a plateau which is cut off by the upper phase space limit of
$t_2$ \cite{kina}.
\begin{figure}[htb]
\unitlength1.0cm
\begin{center}
\begin{picture}(15.,6.8)
\put(3.5,0.0){
\epsfysize=8.0cm
\epsffile{picompare.eps}}
\end{picture}
\end{center}
\vspace*{-0.4cm}
\caption{The ${\rm Log}_{10}(-t_2)$ distributions for elastic
$\pi^0$
production (solid line)
taken from \cite{kina}
compared to case of inelastic
production (dashed line), integrated over the
whole resonance region ($1.11 \le M_X \le 1.99$ GeV). }
\label{t2bild}
\end{figure}
Another and experimentally preferred variable to measure is
$|\vec{k}_T|\equiv k_T$,
the transverse momentum of the pseudoscalar mesons, defined relative to the
beam axis. At HERA $k_T$ distributions can be measured for values of $k_T$
greater than $O$(0.1 GeV).
The photoproduction cuts of (\ref{cuts}) restrict the
transverse
momentum of the incident photon to be smaller than $O$(0.1 GeV), so in
our case there is
practically no distinction between the beam axis and the photon
axis. Then within the accuracy of the
EPA calculations we have $t_2 = -k_T^2$.
The complete electromagnetic result
$d\sigma^{\gamma}/dk_T$
for all four pseudoscalars is shown in Fig. \ref{ktbild}.
We see that applying a cut $k_T > 0.1$ GeV
the break-up cross section represents a much larger
fraction of the total
than for the integrated cross sections (Table 1).
\begin{figure}[htb]
\unitlength1.0cm
\begin{center}
\begin{picture}(15.,7.8)
\put(-0.5,0.0){
\epsfysize=8.50cm
\epsffile{k_telplusinel.eps}}
\put(7.5,0.0){
\epsfysize=8.50cm
\epsffile{piktelinelcomp.eps}}
\end{picture}
\end{center}
\vspace*{-0.0cm}
\caption{(a) The complete electromagnetic result for the
$k_T$ distributions of $\pi^0$
(solid line), $\eta$ (dashed line), $\eta'$
(dashed dotted line) and $\eta_c$ (dotted line) production.
(b) The elastic (dashed dotted line) and the inelastic
contribution
(dashed line) together with the full electromagnetic
result for pion production.}
\label{ktbild}
\end{figure}
\subsection{The $\Delta$(1232) resonance}
In this section we consider the hadron final state
$|{\rm X} \rangle$ to be the resonance $\Delta$(1232), and evaluate the
cross section in two ways. Firstly we use the procedure of Section 3.1
but restrict ourselves to the $\Delta$ region
which we define as $1.11 \le M_X \le 1.40$ GeV (see Fig. \ref{histobild}).
Secondly the $\Delta$ is treated in the isobar model \cite{isobar}
as a stable particle (zero width approximation) of spin 3/2.
A spin 3/2 particle can be described \cite {rarita} by a vector-spinor
$R^{\mu}$, which is contained in the direct product of the vector and the
Dirac representation of the Lorentz group:
\begin{eqnarray}
R^{\mu} \in (\frac{1}{2},\frac{1}{2})
\otimes \big[ (\frac{1}{2},0) \oplus (0,\frac{1}{2}) \big].
\nonumber
\label{rari}
\end{eqnarray}
To project out the spin 3/2 part of this reducible representation
corresponding to the $\Delta$-particle
one has to require $\gamma_{\mu}R^{\mu}$=0 and
$p_{\Delta}^{\mu}R_{\mu}$=0 \cite{rarita,weinberg}
and that every component of $R^{\mu}$ fulfills
the Dirac equation $(p_{\Delta}.\gamma-M_{\Delta})R_{\mu}$=0. Let
$R_{\mu}(p_{\Delta},i)$ (i=1,...,4) be a set of normalised basis
vector-spinors:
\begin{eqnarray}
\bar{R}_{\mu}(p_{\Delta},i) R^{\mu}(p_{\Delta},j) = \delta_{ij} \; \; \; \;
i,j=1,...,4.
\label{norm}
\end{eqnarray}
To calculate the partial width or the total and differential cross sections
we only need the polarisation sum $S_{\mu \nu}$. Using the above conditions
we get for the polarisation sum
($v_{\Delta}=p_{\Delta}/M_{\Delta}$)
\begin{eqnarray}
S^{\mu \nu}(p_{\Delta}) &=& \sum_i R^{\mu}(p_{\Delta},i)
\bar{R}^{\nu}(p_{\Delta},i)
\nonumber\\
&=& \frac{1}{3} \Big( \frac{\gamma.v_{\Delta} + 1}{2} \Big)
\big(3 g^{\mu \nu} - 2 v_{\Delta}^{\mu} v_{\Delta}^{\nu} -
\gamma^{\mu} \gamma^{\nu} \big)
\Big( \frac{\gamma.v_{\Delta} + 1}{2} \Big).
\label{spinsum}
\end{eqnarray}
The coupling of the $\Delta$ resonance to the proton and the
photon is parametrised through
\begin{eqnarray}
\frac{2 e f_{\gamma R}}{M_{\Delta}} \epsilon_{\mu \nu \rho \sigma}
p_{\Delta}^{\rho} p^{\sigma} \, G_D(q_2^2),
\label{deltac}
\end{eqnarray}
which corresponds to the covariant formulation of the
isobar model \cite{isobar}. Here $G_D$ is defined in (\ref{fit}) and
$f_{\gamma R}$ is the coupling constant. From (\ref{deltac})
together with (\ref{spinsum}) we can calculate the $\Delta$
decay into a real photon and a proton. The result is
\begin{eqnarray}
\Gamma (\Delta^+ \rightarrow {\rm p} \gamma) =
\frac{e^2 f_{\gamma R}^2}{3 \, \pi} \;
\frac{(M_{\Delta}^2-m_{{\rm p}}^2)^3}{8\, M_{\Delta}^5}.
\label{ddecay}
\end{eqnarray}
This partial decay width is known experimentally \cite{dalitz,berends},
$\Gamma (\Delta^+ \rightarrow {\rm p} \gamma) = 0.65 \pm 0.02$ MeV.
Taking the central value, it follows that $f_{\gamma R}=2.4$.
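Inverting (\ref{ddecay}) is a one-line numerical check (Python):
\begin{verbatim}
# Sketch: coupling f_{gamma R} from the measured radiative width
# Gamma(Delta+ -> p gamma) = 0.65 MeV, inverting eq. (ddecay).
import math

alpha = 1.0 / 137.036
e2 = 4 * math.pi * alpha
m_p, M_D = 0.938, 1.232        # GeV
Gamma = 0.65e-3                # GeV

f2 = Gamma * 3 * math.pi / e2 * 8 * M_D**5 / (M_D**2 - m_p**2)**3
print(math.sqrt(f2))           # ~ 2.4
\end{verbatim}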
Now everything is fixed and we
can easily derive the hadronic invariant functions
$W_1^{\Delta},W_2^{\Delta}$:
\begin{eqnarray}
W_1^{\Delta} &=& \frac{f_{\gamma R}^2 G_D^2}{3 M_{\Delta}^4 m_{{\rm p}}^2}
((p q_2)^2 - p^2 q_2^2 ) \, f(pq_2,t_2) \,
\delta ( (p+q_2)^2 - M_{\Delta}^2),
\nonumber\\
W_2^{\Delta} &=& \frac{f_{\gamma R}^2 G_D^2}{3 M_{\Delta}^4 m_{{\rm p}}^2}
(-p^2 q_2^2) \, f(pq_2,t_2) \,
\delta ( (p+q_2)^2 - M_{\Delta}^2),
\nonumber\\
f(p q_2, t_2) &=& \{ \,
(m_{{\rm p}} + 2 M_{\Delta}) m_{{\rm p}}^2 +
2(m_{{\rm p}} + M_{\Delta}) p q_2
+ m_{{\rm p}} ( M_{\Delta}^2 + t_2) \; \}.
\label{wd}
\end{eqnarray}
Having derived the hadron tensor, it is then simply a matter of
contracting indices to get the cross section for photon-proton
scattering into a PS and the $\Delta^+$ and, by applying the EPA, the
total electroproduction cross section.
\begin{eqnarray}
&&\sigma^{\gamma} = \int_{y_{min}}^{y_{max}} \,
\frac{dy}{y} \, n(y)
\int_{t_{2_{min}}}^{t_{2_{max}}} \,
dt_2 \, \frac{d \sigma_{\gamma {\rm p}}^{\gamma}}{dt_2} (s_2,t_2),
\nonumber\\
&&\frac{d \sigma_{\gamma {\rm p}}^{\gamma} }{d t_2}=
\frac{e^2 f_{\gamma R}^2 G_D^2 u_{PS}^2 T^2 }
{384 m_{{\rm p}} M_{\Delta}^3 \pi
(s_2 - m_{{\rm p}}^2 )^2 t_2^2 }
\big( (m_{{\rm p}} + M_{\Delta})^2 - t_2 \big)
\nonumber\\
&&\hphantom{\frac{d \tilde{\sigma}_T}{d t_2}}
\Big\{ m_{PS}^4( m_{{\rm p}}^2 - M_{\Delta}^2 )^2
- 2 t_2 m_{PS}^2 \big(
M_{\Delta}^2 ( m_{PS}^2 + M_{\Delta}^2 - s_2) +
m_{{\rm p}}^2 ( s_2 - M_{\Delta}^2 ) \big) +
\nonumber\\
&&\hphantom{m_{PS}^4( \{ \frac{d \tilde{\sigma}_T}{d t_2}}
t_2^2 \big( m_{{\rm p}}^4 + m_{PS}^4 + M_{\Delta}^4 +
2 m_{PS}^2 ( 2 M_{\Delta}^2 - s_2 ) + 2 m_{{\rm p}}^2 ( m_{PS}^2 - s_2) -
\nonumber\\
&&\hphantom{ff m_{PS}^4( \{ t_2^2 \big( \frac{d \tilde{\sigma}_T}{d t_2}}
2 M_{\Delta}^2 s_2 + 2 s_2^2 \big) +
2 t_2^3 \big( s_2 - m_{{\rm p}}^2 - m_{PS}^2 - M_{\Delta}^2 \big)
+ t_2^4 \; \; \Big\}
\label{dsigdelta}
\end{eqnarray}
This calculation is compared with the result
for $d\sigma/d {\rm Log}_{10}(-t_2)$
of Sect. 3.1,
restricted to the $\Delta$ region in Fig. \ref{compare}a.
%
\begin{figure}[htb]
\unitlength1.0cm
\begin{center}
\begin{picture}(15.,6.8)
\put(-0.7,0.0){
\epsfysize=8.0cm
\epsffile{piexactmodel.eps}}
\put(8,0.0){
\epsfysize=8.0cm
\epsffile{piexactmodelkt.eps}}
\end{picture}
\end{center}
\vspace*{-0.3cm}
\caption{(a) ${\rm Log}_{10}(-t_2)$ distributions for the $\Delta$
resonance region,
calculated from the
$\gamma^{\ast} {\rm p}$
cross section data as explained in
Sect. 3.1, integrated over the
invariant mass range $1.11 \le M_X \le 1.4$ GeV (dashed line),
and from the isobar model (solid line).
(b) $\sigma_T=\Sigma$ at $M_X=1.215$ GeV
from (\ref{fit}) as a function of $t_2$
(dashed line)
compared with $\sigma_T$ calculated from $W_1^{\Delta}$.}
\label{compare}
\end{figure}
The comparison is quite satisfactory. The deviations can be
understood qualitatively by plotting $\sigma_T=\Sigma$
from (\ref{fit}) at
$M_X=1.215$ GeV, where the curve
in Fig. \ref{histobild} has its maximum,
as a function of $t_2$, together with $\sigma_T$
calculated from $W_1^{\Delta}$ in (\ref{wd}) using the definition
(\ref{kindef}) and replacing
the $\delta$-function in (\ref{wd}) by $1/(\pi M_{\Delta}\Gamma)$,
where $\Gamma$ is the total width of the $\Delta$,
$\Gamma \approx 120$ MeV \cite{particle} (Fig. \ref{compare}b).
For $|t_2| \le 0.1 \; {\rm GeV}^2$ the fit (\ref{fit}) to the measured
cross section $\Sigma$ is somewhat larger, due to a
significant S-wave component which is absent in the calculation of the
$\Delta$ alone, but the shapes are nearly the same. For
$|t_2|\ge 0.1 \; {\rm GeV}^2$ the model calculation of $\sigma_T$
decreases too slowly compared to experiment. In essence this translates
directly into the shapes of Fig. \ref{compare}a, where for
$|t_2| \le 0.1 \; {\rm GeV}^2$ the
differential cross section with respect to ${\rm Log}_{10}(-t_2)$
of the exact result is
a little larger than that of the model calculation, whereas for
$|t_2| \ge 0.1 \; {\rm GeV}^2$ the model calculation
overshoots the exact one.
\section{Conclusions}
We have calculated pseudoscalar electroproduction by odderon exchange
with nucleon dissociation in a specific model which has proven successful
in processes dominated by pomeron exchange. The principal conclusion
is that the cross section is significantly larger for nucleon breakup
than when the nucleon
remains intact.
\begin{figure}[htb]
\unitlength1.0cm
\begin{center}
\begin{picture}(15.,7.4)
\put(2.5,0.0){
\epsfysize=9.0cm
\epsffile{k_tmsv.eps}}
\end{picture}
\end{center}
\vspace*{-0.0cm}
\caption{The $k_T$ distribution in pion production
for odderon exchange (solid line)
calculated from the amplitude
(\ref{msvfinalampl}) of Sect. 2 compared to the
complete electromagnetic result (dashed line). Interference
contributions are not taken into account.}
\label{k_Tmsv}
\end{figure}
We have also presented the results of an ``exact'' calculation of the same
process by photon exchange. When combined with the previous
calculation for the elastic case
\cite{kina} we have a precise prediction for the purely
electromagnetic cross section.
In Fig. \ref{k_Tmsv} we show for $\pi^0$ production the
distribution of the pion's $k_T$ summed over the elastic and
inelastic channels for the electromagnetic and the odderon
exchange. Interference terms are not taken into account here.
Even taking the most pessimistic view of the uncertainties
in the model the process should be observable at HERA.
This would establish the soft odderon as an exchange-object
in high energy scattering on equal footing with the soft pomeron.
On the other hand
failure to observe any significant
deviation from the electromagnetic result
would be clear evidence for the
complete failure of our model.
One possible conclusion would then be that the soft
odderon really does not exist and
that our understanding of diffractive processes is much
poorer than believed.
The competing hadronic process, Reggeised $\omega$ exchange, can be estimated
from what is known about $\pi^0$ photoproduction at low energies,
$\sqrt s \le 5.5$ GeV
and the falloff with energy given
by Regge theory:
$d \sigma / d t_2|_{\omega\,{\rm exch}} \sim s^{2 \,
\alpha_{\omega}(t_2)-2}$
with $\alpha_{\omega}(0) \approx 0.18$ to $0.5$.
The result is very much smaller than the calculated
odderon contribution and is not a serious background to it. However it
could conceivably be sufficiently large to interfere with the photon exchange
amplitude and so simulate a very weakly-coupled odderon.
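For orientation, the size of this Regge suppression can be evaluated numerically; the sketch below is illustrative only, since the trajectory slope ($\alpha'_{\omega}\approx 0.9\;{\rm GeV}^{-2}$) and the HERA photoproduction energy ($\sqrt{s}\approx 200$ GeV) are assumed values not fixed by the text:
\begin{verbatim}
# Rough Regge estimate of the omega-exchange falloff,
# d(sigma)/dt_2 ~ s^(2*alpha_omega(t_2) - 2),
# normalized at sqrt(s) = 5.5 GeV where low-energy data exist.
def suppression(sqrt_s, t2, alpha0, alpha_slope=0.9):
    # Linear trajectory alpha(t) = alpha0 + alpha_slope * t;
    # alpha_slope (GeV^-2) is an illustrative assumption.
    alpha = alpha0 + alpha_slope * t2      # t2 < 0
    return ((sqrt_s / 5.5) ** 2) ** (2.0 * alpha - 2.0)

for alpha0 in (0.18, 0.5):
    # e.g. sqrt(s) ~ 200 GeV at HERA, t2 = -0.1 GeV^2:
    print(alpha0, suppression(200.0, -0.1, alpha0))
# -> suppression by roughly four to six orders of magnitude
\end{verbatim}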
\section{Acknowledgements}
The authors would like to thank P. V. Landshoff, K. Meier, T. Berndt,
S. Tapprogge, A. Likhoded, W. Buchm\"uller, E. Meggiolaro, M. Strikman,
B. Nicolescu, A. Hebecker, H. J. Pirner, G. Kulzinger and S. Weinstock
for useful discussions and correspondence.
\newpage
\section{Conclusion}\label{sec:conclusion}
In this paper, we have proposed a framework that integrates low-light enhancement and fuses the RGB and event modalities for effective monocular depth estimation under adverse night conditions. A synthetic night-time driving dataset that contains paired RGB, event and depth images has also been constructed, which includes scenes encountering adverse weather, light and road conditions. The experiment results have shown that our proposed framework is able to achieve satisfactory depth estimation results in various adverse night scenarios.
\section{Dataset}\label{sec:dataset}
To the best of our knowledge, there is currently no dataset proposed for monocular depth estimation at adverse night conditions that contains paired RGB, event and depth images. In order to validate the effectiveness of our proposed framework and to advance future research in this direction, we construct the first adverse night-time driving dataset that includes the aforementioned data modalities and the ground truth depth maps. The dataset was constructed using CARLA \cite{dosovitskiy2017carla}, a popular simulator in the field of autonomous driving, and the event camera plugin \cite{hidalgo2020learning}.
\subsection{Data Collection and Statistics}
We collect the data through a sensor suite that contains an RGB camera, an event-based camera, and a depth camera. The positive and negative thresholds for triggering an event of the event camera were set to 0.4. All the sensors were set with a FOV of 90 degrees, a resolution of $640 \times 480$ pixels, and a data generation rate of 8 Hz. All sensors were mounted on a vehicle with an egocentric view while it was driving around. The start and end points of the vehicle were manually selected and the routes of the vehicle were accurately planned. The speed and following distance of the vehicles as well as the lights at night were based on road traffic guidelines to achieve the maximum realism of night-time driving. The driving scenes are diverse, including typical city and rural roads, as well as tunnels and highways. The statistics of the driving scenarios are shown in Fig.~\ref{fig:scene_distribution}. We also apply adverse weather conditions such as rain, fog, and the combination of rain and fog to the scene, and Fig.~\ref{fig:weather_distribution} shows the distribution of the adverse weather in our dataset. The dataset contains 11,191 samples. Each sample has paired RGB, event and ground truth depth images. We split the entire dataset into 70\% for training, 15\% for validation and the remaining 15\% for testing. We name our dataset \textbf{MonoANC} (\textbf{Mono}cular depth estimation at \textbf{A}dverse \textbf{N}ight \textbf{C}onditions).
\begin{figure}
\centering
\begin{subfigure}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figures/pie-city4.0.pdf}
\caption{}
\label{fig:scene_distribution}
\end{subfigure}%
\begin{subfigure}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figures/pie-scenraio6.0.pdf}
\caption{}
\label{fig:weather_distribution}
\end{subfigure}%
\caption{Distribution of different night-time driving environments (a) and different adverse weather conditions (b).}
\label{fig:dataset_statistics}
\end{figure}
\section{Discussion}\label{sec:discussion}
\subsection{Performance on MonoANC}
We designed sufficient experiments to demonstrate that our framework effectively fuses information from the event camera into depth estimation. From Table~\ref{tab:overall_results}, it can be seen that pure event input cannot predict satisfactory depth values due to its limited information, while pure RGB input performs reasonably. On the basis of RGB input, low-light enhancement, adding the Sobel operator, or fusing events all improve the accuracy of depth estimation, and the degree of improvement varies with the enhancement method. After adding the edge information extracted by the Sobel operator directly to the image after low-light enhancement, there is a significant decrease in Abs. Rel.: the edge information effectively compensates for the possible errors caused by overexposure or wrong enhancement in the low-light enhancement process. A similar conclusion can be drawn for both instantiations of the depth estimation network in Table~\ref{tab:overall_results}. Meanwhile, the final predictions of the two networks regress to a similar interval.
Fig.~\ref{fig:qualitative_results} shows the qualitative results of the initial RGB input, the prediction based on the fusion map, and the comparison with the depth ground truth. The depth map of the EVEN fusion network shows a very noticeable improvement in detail at the edges as well as for objects in the distance. This also visually evidences the effective fusion of the edge information and HDR features of events into our network.
However, the edge information extracted by the Sobel operator is computed from the rendered image rather than captured by a real sensor. Finally, fusing events with low-light enhancement achieves targeted enhancement of edge features and adds more high dynamic range information, yielding the best predictions.
\iffalse
\subsection{Performance on MVSEC}
\begin{table}
\centering
\begin{tabular}{l|l|r|r|r}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Input Sequences \\(Trained on day2)\end{tabular}} & Error Metric ↓~ & \multicolumn{3}{l}{~ ~ ~Accuracy Metric ↑} \\
\cline{2-5}
& ~ ~ ~ ~Abs & \multicolumn{1}{l|}{~ ~α1} & \multicolumn{1}{l|}{~ ~α2} & \multicolumn{1}{l}{~ ~α3} \\
\hline
RGB\_Night1 & ~ ~ 4.7747 & 0.6724 & 0.8819 & 0.9530 \\
\hline
EVEN\_Night1 & \textbf{~ ~ 4.3375} & \textbf{0.6729} & \textbf{0.8858} & \textbf{0.9606} \\
\hline
RGB\_Night2 & ~ ~ 5.1952 & 0.6643 & 0.8734 & 0.9414 \\
\hline
EVEN\_Night2 & \textbf{~ ~ 4.4671} & \textbf{0.6788} & \textbf{0.8898} & \textbf{0.9600} \\
\hline
RGB\_Night3 & ~ ~ 4.7022 & 0.6719 & 0.8988 & 0.9633 \\
\hline
EVEN\_Night3 & \textbf{~ ~ 4.1098} & \textbf{0.6883} & 0.8939 & \textbf{0.9716} \\
\hline
\end{tabular}
\end{table}
\fi
\section{Experiment}\label{sec:experiment}
In this section, we first describe the implementation details of our framework - EVEN, and then the evaluation metrics, followed by the baseline methods that are used to compare against our framework. We then show overall results of all methods on MonoANC, and present the results of cross validation of the performance of EVEN on different adverse weather combinations at the end.
\subsection{Implementation Details}
We implement our EVEN framework using PyTorch. The learning rate for training the multi-modal fusion network was 1e-3. AdamW \cite{loshchilov2017decoupled} was used as the optimizer. Weight decay was set to 1e-3. The step size of the scheduler was 5 during the training of the fusion network, and we trained it for 100 epochs. We set $\beta$ to 0.8 in Equation~\ref{eq:joint_loss}. After the fusion network was properly trained, we pre-generated the fusion images and trained the depth estimation networks (i.e., Depthformer and SimIPU) using their default settings \cite{lidepthtoolbox2022}.
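For concreteness, these settings correspond to the following minimal PyTorch sketch; the module is only a stand-in for the fusion network, and the StepLR decay factor is not stated in the text (0.1 is PyTorch's default):
\begin{verbatim}
import torch

fusion_net = torch.nn.Linear(8, 8)  # stand-in for the phase-2 fusion network

optimizer = torch.optim.AdamW(fusion_net.parameters(),
                              lr=1e-3, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

for epoch in range(100):
    # ... one training epoch minimizing the joint fusion loss ...
    scheduler.step()
\end{verbatim}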
\subsection{Evaluation Metrics}
We use standard evaluation protocols following~\cite{eigen2014depth} to evaluate our framework and all baseline methods. Specifically, we measure the mean
absolute relative error (Abs. Rel.), mean squared relative error (Sq. Rel.), root mean squared error (RMSE), and mean log10 error (Log10). Apart from the error metrics, we also adopt three different thresholds as the accuracy metrics, which is common practice in the literature, i.e., $\alpha = 1.25^i, i = 1, 2, 3$.
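These metrics follow standard definitions; a minimal NumPy sketch is given below for reference:
\begin{verbatim}
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular-depth metrics; pred and gt are
    positive depth arrays of the same shape."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    sq_rel = np.mean((pred - gt) ** 2 / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))
    ratio = np.maximum(pred / gt, gt / pred)
    acc = {f"alpha{i}": np.mean(ratio < 1.25 ** i) for i in (1, 2, 3)}
    return dict(abs_rel=abs_rel, sq_rel=sq_rel,
                rmse=rmse, log10=log10, **acc)
\end{verbatim}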
\subsection{Baseline Methods}
We implement six baselines to compare against and to examine the effectiveness of our framework in boosting depth estimation, i.e., the use of low-light enhancement and fusion of the event and RGB modalities. As mentioned earlier, salient edges and texture are core features for depth estimation. We therefore adopt the Sobel operator \cite{kanopoulos1988design}, which is an edge detector, to process the RGB modality, and use the resulting image as an alternative to the event image in our framework to justify the use of event data, since it is also able to retain salient edge information.
\begin{enumerate}
\item RGB: the raw RGB image is fed directly into the depth estimation network as the only input for depth estimation.
\item Event: the event image is fed directly into the depth estimation network as the only input.
\item RGB + Sobel: the paired raw RGB and Sobel operator processed images are used as the inputs to the phase-2 of EVEN, followed by depth estimation of phase-3.
\item RGB + Event: the paired raw RGB and event images are used as the inputs to the phase-2 of EVEN, followed by depth estimation of phase-3.
\item RGB$_{Enhanced}$: the enhanced RGB image after phase-1 is fed directly into the depth estimation network as the only input for depth estimation.
\item RGB$_{Enhanced}$ + Sobel: the paired enhanced RGB image after phase-1 and Sobel operator processed image are used as the inputs to the phase-2 of EVEN, followed by depth estimation of phase-3.
\end{enumerate}
\begin{table*}[]
\caption{Results on MonoANC Dataset When the Depth Estimation Network in EVEN is Instantiated as Depthformer and SimIPU Respectively}
\label{tab:overall_results}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}ccccccccccccccc@{}}
\toprule
\multicolumn{1}{l}{} & \multicolumn{7}{c}{Depthformer} & \multicolumn{7}{c}{SimIPU} \\ \cmidrule(l){2-15}
Input Sequence &
\multicolumn{4}{c}{Error Metric ↓} &
\multicolumn{3}{c}{Accuracy Metric ↑} &
\multicolumn{4}{c}{Error Metric ↓} &
\multicolumn{3}{c}{Accuracy Metric ↑} \\ \cmidrule(l){2-15}
& Abs. Rel. & Sq. Rel. & RMSE & Log10 & $\alpha 1$ & $\alpha 2$ & $\alpha 3$ & Abs. Rel. & Sq. Rel. & RMSE & Log10 & $\alpha 1$ & $\alpha 2$ & $\alpha 3$ \\ \midrule
RGB & 0.192 & 0.310 & 4.973 & 0.069 & 0.810 & 0.911 & 0.985 & 0.293 & 0.370 & 5.177 & 0.079 & 0.710 & 0.921 & 0.972 \\ \midrule
Event & 0.452 & 0.220 & 7.775 & 0.172 & 0.390 & 0.622 & 0.795 & 0.594 & 1.240 & 9.180 & 0.116 & 0.552 & 0.828 & 0.932 \\ \midrule
RGB + Sobel & 0.180 & 0.340 &5.304 & 0.064 & 0.808 & 0.908 & 0.956 & 0.266 & 0.310 & 4.947 & 0.067 & 0.773 & 0.930 & 0.976 \\ \midrule
RGB + Event & 0.179 & 0.340 & 5.992 & 0.067 & 0.795 & 0.920 & 0.956 & 0.229 & 0.280 & 5.151 & 0.057 & 0.837 & 0.953 & 0.984 \\ \midrule
RGB$_{Enhanced}$ & 0.181 & 0.390 & 5.737 & 0.074 & 0.765 & 0.924 & 0.971 & 0.263 & 0.300 & 4.998 & 0.058 & 0.824 & 0.948 & 0.984 \\ \midrule
RGB$_{Enhanced}$ + Sobel & 0.139 & \textbf{0.280} & 5.023 & 0.063 & 0.806 & 0.970 & 0.988 & 0.216 & \textbf{0.240} & \textbf{4.080} & 0.063 & 0.846 & 0.954 & 0.986 \\ \midrule
EVEN (Ours) &
\textbf{0.112} &
\textbf{0.280} &
\textbf{4.335} &
\textbf{0.049} &
\textbf{0.903} &
\textbf{0.976} &
\textbf{0.993} &
\textbf{0.125} &
0.280 &
4.845 &
\textbf{0.049} &
\textbf{0.857} &
\textbf{0.959} &
\textbf{0.988} \\ \bottomrule
\end{tabular}
}
\end{table*}
\begin{table*}[]
\caption{Cross Validation Results of EVEN on Different Adverse Weather Conditions}
\label{tab:cross_validation}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}cccccccccccccccc@{}}
\toprule
\multicolumn{2}{c}{\multirow{2}{*}{Input Sequence}} & \multicolumn{7}{c}{Depthformer} & \multicolumn{7}{c}{SimIPU} \\ \cmidrule(l){3-16}
\multicolumn{2}{c}{} & \multicolumn{4}{c}{Error Metric ↓} & \multicolumn{3}{c}{Accuracy Metric ↑} & \multicolumn{4}{c}{Error Metric ↓} & \multicolumn{3}{c}{Accuracy Metric ↑} \\ \midrule
Train Set & Test Set & Abs. Rel. & Sq. Rel. & RMSE & Log10 & $\alpha 1$ & $\alpha 2$ & $\alpha 3$ & Abs. Rel. & Sq. Rel. & RMSE & Log10 & $\alpha 1$ & $\alpha 2$ & $\alpha 3$ \\ \midrule
rain and fog at the same time & rain only and fog only & 0.325 & 1.987 & 8.475 & 0.187 & 0.471 & 0.645 & 0.797 & 0.330 & 1.865 & 8.710 & 0.187 & 0.420 & 0.655 & 0.786 \\ \midrule
rain only and fog only & rain and fog at the same time & \textbf{0.267} & \textbf{0.315} & \textbf{4.934} & \textbf{0.031} & \textbf{0.646} & \textbf{0.833} & \textbf{0.937} & \textbf{0.260} & \textbf{0.307} & \textbf{4.933} & \textbf{0.031} & \textbf{0.680} & \textbf{0.844} & \textbf{0.939} \\ \bottomrule
\end{tabular}}
\end{table*}
\subsection{Overall Results}
As we instantiate the depth estimation network separately as either a Depthformer or a SimIPU, we run six baselines accordingly based on the instantiated depth estimation network. Table~\ref{tab:overall_results} summarizes the overall results. Our complete EVEN framework outperforms the baseline methods, and its performance improvement is consistent across Depthformer and SimIPU. The absolute relative error (Abs. Rel.) is reduced by 41.7\% and 57.3\% respectively compared to a single RGB input to the Depthformer and SimIPU. An 11.5\% relative improvement on $\alpha 1$ accuracy metric can also be observed for EVEN using Depthformer, and a 20.7\% increase for EVEN using SimIPU, compared to a single RGB image as the input to these two depth estimation networks.
Fig.~\ref{fig:qualitative_results} shows four qualitative results of depth estimation on MonoANC. It can be observed that the depth maps estimated by EVEN show a noticeable improvement in detail at the edges as well as for objects in the far distance compared to those of the baselines. As indicated by the red boxes in Fig.~\ref{fig:qualitative_results}, our complete EVEN framework can produce depth maps without many artifacts that are closer to the ground truth. These results visually confirm that the fusion of the edge information and HDR features of event data in EVEN is effective. When we replace the event image with the Sobel operator processed image, i.e., RGB$_{Enhanced}$ + Sobel, the quality of the estimated depth map slightly degrades, but is still better than those of the remaining baseline methods.
\begin{figure*}[]
\centering
\centerline{\includegraphics[width=\textwidth,scale=1.0]{figures/typical4-7-9.0.pdf}}
\caption{Qualitative results of depth estimation on MonoANC dataset. Top two examples are the results when Depthformer is adopted as the depth estimation network, and the bottom two examples are the results when SimIPU is adopted as the depth estimation network. Areas indicated by the red boxes show that our EVEN framework can better estimate monocular depth than other baseline methods.}
\label{fig:qualitative_results}
\end{figure*}
\subsection{Cross Validation on Adverse Weather}
We further split MonoANC based on different weather conditions. Specifically, there are three adverse weather conditions as shown in Fig.~\ref{fig:dataset_statistics}(b): 1) rain only; 2) fog only; 3) rain and fog occurring together. We split the dataset into two sets. One set contains samples of rain only and fog only, and the other set contains samples of the simultaneous occurrence of rain and fog in the scene. A two-fold cross-validation is then conducted to evaluate the performance of EVEN. Table~\ref{tab:cross_validation} shows the results. When the framework has seen each individual weather condition during training, it can well estimate the depth of scenes with mixed adverse weather conditions, i.e., rain and fog occurring at the same time in the scene. Conversely, it becomes difficult for the framework to estimate depth for scenes with only a single adverse weather condition if the training data contains only scenes of mixed adverse weather. Hence, a cost function that decomposes mixed adverse weather combinations is worth investigating for better depth estimation in future work.
\section{Introduction}\label{sec:intro}
\begin{figure}[h]
\centerline{\includegraphics[width=\linewidth,scale=1.0]{figures/typical33-2.0.pdf}}
\caption{Data samples from our MonoANC dataset. We show paired RGB and event images, and the ground truth depth map for each sample. The adverse night scenarios from top to bottom are: 1) driving in the heavy rain on a city road; 2) driving under a bridge at a foggy night; 3) driving at the countryside at a rainy and foggy night.}
\label{fig:framework}
\end{figure}
Depth estimation with monocular cameras has been actively studied over the past decades~\cite{Godard_2017_CVPR, godard2019digging, casser2019depth}, as it offers an efficient and economic way of obtaining depth. Compared to LiDAR, a monocular camera can be deployed pervasively, and due to its small scale, it can also be installed on an agent, e.g., an autonomous car, unobtrusively.
Albeit convenient and flexible, accurately estimating depth from a monocular camera is non-trivial, especially at night time, when the visual perception of conventional RGB cameras degrades. The low dynamic range and sensitivity to motion blur of conventional cameras can lead to defective imaging at night, and the captured images/videos often exhibit underexposure due to low-lighting or back-lighting~\cite{wang2019underexposed}. For an autonomous car driving at night accompanied by adverse weather (e.g., rain and fog), the dual occurrence of adverse light and weather poses a challenge to its RGB-based vision system.
Recently, the event camera has gained popularity in visual perception and robotics. The event camera is a bio-inspired vision sensor that works in a different way from conventional cameras \cite{gallego2020event, lichtsteiner2008c}. Rather than capturing intensity images at a fixed rate, event cameras measure intensity changes asynchronously in the form of an event stream. Event cameras have distinct advantages over conventional RGB cameras, including very high dynamic range (HDR), high temporal resolution, less motion blur, and low power consumption. These features of the event camera can complement its RGB counterpart, providing extra visibility and leading to an enhanced visual perception system.
On the other hand, in depth estimation, texture and salient edges play more important roles than color as recognized by research in the computer vision community~\cite{hu2019visualization}. Texture can be well retained in RGB data whereas salient edges can be better captured by the event camera. Therefore, using both data modalities is a straightforward attempt to boost the overall depth estimation accuracy.
Although a few studies~\cite{gehrig2021combining},~\cite{Zhu_2018_ECCV},~\cite{tulyakov2019learning} have been proposed to jointly utilize RGB and event data for monocular depth estimation, they mainly focus on day time or normal weather conditions. Thus far, no research has been carried out on event-based monocular depth estimation under adverse night conditions, which is challenging as the RGB source does not contain as much effective visual information as it does at day time, and how to effectively fuse RGB data with the event stream at night time has yet to be addressed.
Despite practical applications such as more intelligent and lightweight night-time autonomous driving and rescue robots, there is currently no dataset that contains paired RGB, event and ground truth depth data captured at adverse night conditions to validate and benchmark research in this direction. Hence, in this work, we make the following two contributions:
\begin{enumerate}
\item We propose the first adverse night-time driving dataset that contains paired RGB images, event streams, and ground truth depth maps. The adverse night conditions in our dataset are diverse in a variety of aspects including adverse weather such as rain and fog, and different scenes such as driving on dim countryside roads.
\item We propose a novel three-phase framework, which employs low-light enhancement and multi-modal fusion to tackle the problem of monocular depth estimation at adverse night conditions with event-based vision. The entire framework has been thoroughly evaluated, with the results showing that it outperforms six baselines.
\end{enumerate}
\section{Method}\label{sec:method}
Our framework decomposes monocular depth estimation at adverse night conditions into three phases as shown in Fig.~\ref{fig:framework_overview}. In phase one, the raw RGB image is first enlightened using low-light image enhancement; in phase two, the enhanced RGB image and the event image are fused to generate a fusion image; in phase three, depth estimation is carried out based on the fusion image. We denote our framework as \textbf{EVEN} as it is based on \textbf{EV}ent vision and low-light \textbf{EN}hancement. We elaborate our framework in the following.
\begin{figure*}
\centerline{\includegraphics[width=\textwidth,scale=1.0]{figures/Pipeline_11.0.pdf}}
\caption{An overview of the proposed framework for monocular depth estimation at adverse night conditions (e.g., at foggy night). Our framework, named EVEN, leverages a three-phase process to estimate depth: 1) phase-1: enlightening the low-light RGB image; 2) phase-2: fusing visual information from enhanced RGB and event images; 3) phase-3: estimating depth based on reconstructed fusion image.}
\label{fig:framework_overview}
\end{figure*}
\begin{figure}
\centerline{\includegraphics[width=\columnwidth,scale=1.0]{figures/FusionNet12.0.pdf}}
\caption{The multi-modal fusion network of EVEN. }
\label{fig:fusion_network}
\end{figure}
\subsection{Event Stream}\label{subsec: event_stream}
Asynchronous event streams reflect changes in light intensity. In order to efficiently make full use of the information from the event-based data, we convert event streams from the voxel grid format to the image format. Specifically, spatial points (indexed by $x$ and $y$ positions in image coordinates with the value being the polarity $p$) are stacked along the time axis $t$ using a fixed time period $\Delta t$ = 0.125 s. This produces a compact event image.
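A minimal sketch of this conversion is given below; the $(x, y, t, p)$ field layout of the event array is an assumption for illustration:
\begin{verbatim}
import numpy as np

def events_to_image(events, height, width, dt=0.125):
    """Stack event polarities over a fixed window dt = 0.125 s
    into a single-channel event image. `events` is an (N, 4)
    array of (x, y, t, p) rows with polarity p in {-1, +1}."""
    img = np.zeros((height, width), dtype=np.float32)
    t0 = events[0, 2]
    for x, y, t, p in events:
        if t - t0 > dt:
            break
        img[int(y), int(x)] += p  # accumulate polarity per pixel
    return img
\end{verbatim}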
\subsection{Phase-1: Low-light Enhancement}
The visual perception of conventional RGB cameras degrades at night due to the limited amount of light. To recover the necessary scene color and texture information captured by the RGB camera, we utilize EnlightenGAN \cite{jiang2021enlightengan} to enhance the raw night-time RGB image. EnlightenGAN is of an attention-based U-Net structure. The input RGB image is normalized by using the illumination channel as the attention map for the ultimate low-light enhancement.
\subsection{Phase-2: Multi-modal Fusion}
Event data can capture much more HDR and temporal details of night-time scenes, whereas RGB data can provide necessary texture and color information. As these two modalities complement each other, and in order to leverage the merits of both, a novel fusion network (refer to Fig.~\ref{fig:fusion_network}), which is built on top of selective kernel network~\cite{li2019selective}, is designed to integrate event data with RGB modality.
\subsubsection{Fusion Network}
Given an event image $\mathbf{X}_{Event}$ and an enhanced RGB image $\mathbf{X}_{Enhanced}$, we use two convolutional kernels with different kernel sizes to transform the input images into feature maps. After transformation, two feature maps $\mathbf{F}_{Event}$ and $\mathbf{F}_{Enhanced}$ are obtained:
\begin{equation}
\mathbf{F}_{Event} = g(\mathbf{X}_{Event}), \mathbf{F}_{Event} \in \mathbb{R}^{H \times W \times C}
\end{equation}
\begin{equation}
\mathbf{F}_{Enhanced} = h(\mathbf{X}_{Enhanced}), \mathbf{F}_{Enhanced} \in \mathbb{R}^{H \times W \times C}
\end{equation}
where $g(\cdot)$ and $h(\cdot)$ are separate convolutional neural network layers that conduct transformation. For the event image, we use a kernel size of $5 \times 5$ as the information carried in event modality is relatively sparse. Therefore, a large kernel size is used. For the enhanced RGB image, we use a kernel size of $3 \times 3$. Following convolutional transformation, the feature maps of the two modalities are merged using an element-wise summation:
\begin{equation}
\mathbf{F}_{sum} = \mathbf{F}_{Event} + \mathbf{F}_{Enhanced}, \mathbf{F}_{sum} \in \mathbb{R}^{H \times W \times C}
\end{equation}
We then apply global average pooling to conduct dimension reduction (along the $H$ and $W$ dimensions) for the merged feature map $\mathbf{F}_{sum}$, which produces a vector $\mathbf{V} \in \mathbb{R}^{1 \times C}$. Similar to~\cite{li2019selective}, we then use a simple fully connected layer $f(\cdot)$ to create a compact vector $\mathbf{k}$ on the basis of $\mathbf{V}$:
\begin{equation}
\mathbf{k} = f(\mathbf{V}), \mathbf{k} \in \mathbb{R}^{d \times 1}
\end{equation}
$\mathbf{k}$ is then used to guide adaptive fusion of the two modalities. Specifically, we create soft attention across channel $C$. For $c$-th element along the channel $C$, the soft attention for fusing event and enhanced RGB feature maps can be formulated as follows:
\begin{equation}
a_c=\frac{e^{\mathbf{A}_c \mathbf{k}}}{e^{\mathbf{A}_c \mathbf{k}}+e^{\mathbf{B}_c \mathbf{k}}}, b_c=\frac{e^{\mathbf{B}_c \mathbf{k}}}{e^{\mathbf{A}_c \mathbf{k}}+e^{\mathbf{B}_c \mathbf{k}}}
\end{equation}
\begin{equation}
\mathbf{F}_{{fused}_c} = a_c \cdot \mathbf{F}_{{Event}_c}+b_c \cdot \mathbf{F}_{{Enhanced}_c}, a_c + b_c = 1
\end{equation}
where $\mathbf{A}_c \in \mathbb{R}^{1 \times d}$ and $\mathbf{B}_c \in \mathbb{R}^{1 \times d}$ are learnable vectors.
The fused feature map $\mathbf{F}_{fused}$ is then fed into an U-Net~\cite{ronneberger2015u} followed by a group of convolution and ReLU operations to 1) further fuse features of the event and RGB modalities, and 2) reconstruct a fusion image $\mathbf{Y}$ of the same resolution to the input event and enhanced RGB images:
\begin{equation}
\mathbf{Y} = \text{Conv}(\text{ReLU}(\text{Conv}(\text{U-Net}(\mathbf{F}_{fused}))))
\end{equation}
The resulting fusion image, which has HDR property and better edge salience, also suppresses areas of overexposure caused by low-light enhancement as shown in Fig.~\ref{fig:fusion_network}.
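A minimal PyTorch sketch of this fusion module is given below; the channel width $C$, the compact dimension $d$, and the input channel counts are illustrative assumptions, and the subsequent U-Net reconstruction head is omitted:
\begin{verbatim}
import torch
import torch.nn as nn

class SKFusion(nn.Module):
    """Sketch of the phase-2 fusion: a 5x5 branch for the sparse
    event image, a 3x3 branch for the enhanced RGB image, and
    channel-wise soft attention with a_c + b_c = 1."""
    def __init__(self, c=32, d=16):
        super().__init__()
        self.g = nn.Conv2d(1, c, 5, padding=2)  # event branch
        self.h = nn.Conv2d(3, c, 3, padding=1)  # enhanced RGB branch
        self.fc = nn.Linear(c, d)               # V -> k
        self.A = nn.Linear(d, c, bias=False)    # per-channel logits
        self.B = nn.Linear(d, c, bias=False)

    def forward(self, x_event, x_enh):
        f_ev, f_en = self.g(x_event), self.h(x_enh)
        v = (f_ev + f_en).mean(dim=(2, 3))      # global average pooling
        k = self.fc(v)
        w = torch.softmax(torch.stack([self.A(k), self.B(k)]), dim=0)
        a = w[0].unsqueeze(-1).unsqueeze(-1)    # attention on events
        b = w[1].unsqueeze(-1).unsqueeze(-1)    # attention on RGB
        return a * f_ev + b * f_en              # fed to the U-Net head
\end{verbatim}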
\subsubsection{Fusion Loss}\label{subsec:fusion_loss}
In order to allow the entire fusion network to effectively merge visual information from the two modalities, a joint loss $\mathcal{L}_{joint}$ is designed as shown in Equation~\ref{eq:joint_loss}. We use the reconstruction loss between the fusion image and the enhanced RGB image as the primary loss (i.e., $\mathcal{L}_{Enhanced}$), and that between the fusion image and the event image as the auxiliary loss (i.e., $\mathcal{L}_{Event}$). Both reconstruction losses are implemented as an $\mathcal{L}_2$ loss that measures the mean squared error between the fusion image and the respective event or enhanced RGB image. During training, the fusion network is trained to decrease $\mathcal{L}_{joint}$.
\begin{equation}\label{eq:joint_loss}
\mathcal{L}_{joint} = \beta \times \mathcal{L}_{Enhanced} + (1 - \beta) \times \mathcal{L}_{Event}
\end{equation}
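Assuming the fusion image and both inputs share a common channel layout, the joint loss amounts to the following short sketch (with $\beta=0.8$ as stated in Section~\ref{sec:experiment}):
\begin{verbatim}
import torch.nn.functional as F

def joint_loss(fusion, x_enhanced, x_event, beta=0.8):
    # L_joint = beta * L2(fusion, enhanced RGB)
    #         + (1 - beta) * L2(fusion, event image)
    return (beta * F.mse_loss(fusion, x_enhanced)
            + (1 - beta) * F.mse_loss(fusion, x_event))
\end{verbatim}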
\subsection{Phase-3: Depth Estimation}\label{subsec:depth_estimation}
The fusion image, which contains visual information from both event and RGB modalities, is then used as the source for depth estimation. We separately adopt two state-of-the-art depth estimation networks, i.e., Depthformer~\cite{li2022depthformer} and SimIPU~\cite{li2021simipu} in our EVEN framework to carry out the depth estimation with the fusion image as their input.
\section{Related Work}\label{sec:related_work}
\subsection{Monocular Depth Estimation with Multi-Modal Fusion}
Monocular depth estimation can be achieved using RGB modality alone~\cite{Godard_2017_CVPR, godard2019digging, casser2019depth}. Recent advances in multiple data modalities have further improved the depth estimation accuracy. For instance, some research works proposed to use RGB and optical flow~\cite{ranftl2016dense,tateno2017cnn,chen2020denao,shimada2022pix2pix}, RGB combined with segmentation maps~\cite{zama2018geometry, zhu2020edge, he2021sosd}, or RGB with extra saliency features~\cite{zhang2021deep, abdulwahab2022monocular} as the inputs, and use multi-modal fusion to enhance depth estimation.
LiDAR has been explored for enhancing monocular depth estimation recently. \cite{jaritz2018sparse} and \cite{fu2019lidar} proposed using late fusion methods to fuse depth data from LiDAR and monocular RGB inputs. Apart from pure visual signals, radar has also been used with RGB modality for monocular depth estimation \cite{lin2020depth, lo2021depth}. Recently, an attention-based method has been proposed for fusing radar signals with monocular RGB images \cite{long2022radar}.
\subsection{Event-Based Depth Estimation}
Daniel et al.~\cite{gehrig2021combining} combined event-based data and monocular RGB frames with a recurrent asynchronous network for depth estimation, which is also the first work to fuse the event and monocular RGB frames. Zhou et al.~\cite{zhou2018semi} investigated the use of stereo event cameras for semi-dense depth estimation by
maximizing a temporal consistency between the corresponding event streams. Another event vision-based method was proposed by Zhu et al.~\cite{Zhu_2018_ECCV} which eliminates disparity for depth estimation. The method proposed by~\cite{tulyakov2019learning} shows the first learning-based stereo depth estimation for event cameras which is also the first one that produces dense results. \cite{zhu2019unsupervised} is an unsupervised framework that learns motion information only from event streams, achieving multi-task objectives including optical flow, egomotion and depth estimation. Cui et al. \cite{cui2022dense} proposed a dense depth estimation method based on the fusion of dense event stream and sparse point cloud.
Despite the efforts being made in event-based depth estimation, existing works are not engineered to specifically tackle monocular depth estimation at adverse night conditions, but instead mainly target at day time and normal weather conditions. In this work, we target monocular depth estimation at adverse night conditions. In order to improve the illumination in the field of view (FOV) and to take advantage of the HDR property of the event-based camera, we propose to combine low-light enhancement and multi-modal fusion of event and RGB data for better depth estimation. To the best of our knowledge, we are the first work that uses the event-based vision along with low-light image enhancement to estimate monocular depth at adverse night conditions.
\section{Introduction}
Multipion Bose-Einstein (BE) correlations have now aroused
great interest among
physicists~\cite{Re,LL,Za,Pr,CGZ,ZCG,ZC,CZ,Z1,Z2,Z3,SL,M1,M2,BK,Wo,FWW,BZ,Ur,conf,ZSH,AL}.
Among others, Lam and Lo~\cite{LL} were
the first to suggest the BASER concept in high energy physics.
Zajc~\cite{Za} was the first to use Monte-Carlo methods to study multipion BE correlations
effects on two-pion interferometry for fixed pion multiplicity event.
Pratt~\cite{Pr} first suggested that multipion BE symmetrization effects
may lead to BE condensate in high energy heavy-ion collisions.
Detail derivation of multipion BE correlation effects on single
particle spectrum and two-pion interferometry was given by
Chao, Gao and I in Ref.~\cite{CGZ}. Recently Zim\'anyi and Cs\"org\H o suggested
a new class of density matrices and studied multipion BE correlations
and wavepacket effects on two-pion interferometry~\cite{ZC,CZ}.
In Ref.~\cite{Z2}, I have
derived any-order pion interferometry formula for a special class
density matrix. In Ref.~\cite{Z3}, generalized any-order pion interferometry
formulas are given which depend not only on the correlator as
assumed previously but also on the pion multiplicity
distribution which was neglected in the previous
studies. Multipion BE correlation effects on the source
distribution was studied in Ref.~\cite{ZSH}.
This paper is arranged as follows:
in Section 2, any-order pion interferometry
formulas for fixed pion multiplicity events are given; in Section 3,
any-order pion interferometry formulas for mixed events are derived;
in Section 4, methods for simulating multipion BE correlations
are discussed.
Conclusions are given in Section 5.
\section{Any-order pion interferometry formulas for fixed pion
multiplicity events}
Assume the source is totally chaotic. It is easily checked
that the $n$-pion inclusive distribution in an $n$-pion event can be
expressed as~\cite{CGZ}
\begin{equation}
P_n^{n}({\bf p_1,\cdot \cdot \cdot,p_n})=\sum_{\sigma}\prod_{j=1}^{n}\rho_{j,\sigma(j)}
\label{eq1}
\end{equation}
with
\begin{equation}
\rho_{ij}=\int d^4x g(x,\frac{p_i+p_j}{2})exp(i(p_i-p_j)\cdot x).
\end{equation}
Here $g(x,k)$ is a Wigner function which can be interpreted as the probability of finding a pion
at point $x$ with momentum $k$.
$\sigma(j)$ denotes the $j$-th element of a permutations of the
sequence $\{ 1,2,\cdot\cdot\cdot,n\}$, and the sum over $\sigma$ denotes
the sum over all $n!$ permutations of this sequence.
Then the normalized $k$-pion inclusive distribution in
$n$-pion events can be expressed as
\begin{equation}
P_n^{k}({\bf p_1,\cdot\cdot\cdot,p_k})=
\frac{\int P_n^n({\bf p_1,\cdot\cdot\cdot,p_n})\prod_{j=k+1}^{n}d{\bf p_j}}
{\int P_n^n({\bf p_1,\cdot\cdot\cdot,p_n})\prod_{j=1}^{n}d{\bf p_j}}.
\label{eq3}
\end{equation}
It is easily checked that $P_n^{k}({\bf p_1,\cdot\cdot\cdot,p_k})$ can be re-written as
\begin{eqnarray}
&&P_n^k({\bf p_1,\cdot \cdot \cdot p_k})=
\frac{1}{n(n-1)\cdot\cdot\cdot (n-k+1)\omega(n)}
\sum_{i=k}^{n}\sum_{m_1=1}^{i-(k-1)}
\sum_{m_2=1}^{i-m_1-(k-2)}\cdot\cdot\cdot
\nonumber\\
&&\sum_{m_{k-1}=1}^{i-m_1-m_2\cdot\cdot\cdot-m_{k-2}-1}
\sum_{\sigma}
G_{m_1}({\bf p}_1,{\bf p}_{\sigma(1)})
G_{m_2}({\bf p}_2,{\bf p}_{\sigma(2)})
\nonumber\\
&&
\cdot \cdot\cdot
G_{m_{k-1}}({\bf p}_{k-1},{\bf p}_{\sigma(k-1)})
G_{i-m_1\cdot\cdot\cdot-m_{k-1}}({\bf p}_{k},{\bf p}_{\sigma(k)})\omega(n-i)
\end{eqnarray}
with
\begin{equation}
\omega(n)=
\frac{1}{n}\sum_{i=1}^{n}\omega(n-i)\int d{\bf p}G_i({\bf p},{\bf p}),~~~\omega(0)=1
\end{equation}
and
\begin{equation}
G_i({\bf p,q})=\int \rho({\bf p},{\bf p_1})
d{\bf p_1}\rho({\bf p_1},{\bf p_2})\cdot\cdot\cdot d{\bf p_{i-1}}\rho({\bf p_{i-1}},{\bf q}).
\end{equation}
Here $\sigma(i)$ denotes the $i$-th element of a permutations of the
sequence $\{ 1,2,\cdot\cdot\cdot,k\}$, and the sum over $\sigma$ denotes
the sum over all $k!$ permutations of this sequence.
Then the $k$-pion ($k\le n$) interferometry formula for $n$-pion events
can be expressed as
\begin{equation}
C_n^{k}({\bf p_1,\cdot\cdot\cdot,p_k})
=\frac{P_n^{k}({\bf p_1,\cdot\cdot\cdot,p_k})}{\prod_{j=1}^{k}P_n^{1}({\bf p_j})}.
\end{equation}
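The recursion for $\omega(n)$ and the kernels $G_i$ introduced above are straightforward to evaluate numerically; a minimal Python sketch follows, where the supplied function \texttt{trace\_G(i)} stands for the integral $\int d{\bf p}\, G_i({\bf p},{\bf p})$ and must be computed from the chosen source:
\begin{verbatim}
from functools import lru_cache

def make_omega(trace_G):
    """omega(n) = (1/n) sum_{i=1..n} omega(n-i) * trace_G(i),
    with omega(0) = 1; trace_G(i) = integral of G_i(p, p) over p."""
    @lru_cache(maxsize=None)
    def omega(n):
        if n == 0:
            return 1.0
        return sum(omega(n - i) * trace_G(i)
                   for i in range(1, n + 1)) / n
    return omega
\end{verbatim}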
We assume the source distribution as
\begin{eqnarray}
g({\bf r},t,{\bf p})&=&
n_0\cdot (\frac{1}{2\pi R^2})^{3/2}exp(-\frac{{\bf r}^2}{2R^2})\delta(t)
(\frac{1}{2\pi \Delta^2})^{3/2} exp(-\frac{{\bf p}^2}{2 \Delta^2}).
\end{eqnarray}
Here $n_0$ is a parameter. The multipion BE correlation effects on
two-pion interferometry for fixed multiplicity events are shown in Fig.~1. It is
easy to see that as the pion multiplicity increases, the multipion BE correlation
effects on two-pion interferometry become stronger. In the
limit $n\rightarrow \infty$ we have $C_n^2=1$.
\begin{figure}[t]\epsfxsize=12cm \epsfysize=6cm
\centerline{\epsfbox{ha11.eps}}
\vskip -0.5 cm
\end{figure}
\section{Any-order pion interferometry formulas for mixed events}
In the experimental analyses, one normally mixes all events to study
pion interferometry. The $k$-pion inclusive distribution can be expressed as
\begin{equation}
N_k({\bf p_1,\cdot\cdot\cdot,p_k})=\sum_{n=k}^{\infty}
P_n \cdot n\cdot(n-1)\cdot\cdot\cdot (n-k+1)P_n^k({\bf p_1,\cdot\cdot\cdot,p_k}).
\end{equation}
Here $P_n$ is the normalized pion multiplicity distribution. It is easily checked that
\begin{equation}
\int N_k({\bf p_1,\cdot\cdot\cdot,p_k})\prod_{j=1}^{k}d{\bf p_j}=
\langle n(n-1)\cdot\cdot\cdot (n-k+1)\rangle~~~ ,
\end{equation}
then the $k$-pion interferometry formulas can be expressed as
\begin{equation}
C_k({\bf p_1,\cdot\cdot\cdot,p_k})=
\frac{N_k({\bf p_1,\cdot\cdot\cdot,p_k})}{\prod_{j=1}^{k}N_1({\bf p_j})}.
\end{equation}
A discussion of the definition of two-pion interferometry formulas can be found in
the paper by Miskowiec and Voloshin~\cite{MV} and also in Ref.~\cite{ZSH}. Using
Eq.~(4),
the $k$-pion inclusive distribution can be expressed as~\cite{Z3}
\begin{eqnarray}
&&N_k({\bf p_1,\cdot \cdot \cdot p_k})=
\sum_{n=k}^{\infty}P_n\frac{\omega(n-i)}{\omega(n)} \sum_{i=k}^{n}\sum_{m_1=1}^{i-(k-1)}
\cdot\cdot\cdot
\sum_{m_{k-1}=1}^{i-m_1-m_2\cdot\cdot\cdot-m_{k-2}-1}
\sum_{\sigma}
\nonumber\\
&&
G_{m_1}({\bf p}_1,{\bf p}_{\sigma(1)})
\cdot \cdot\cdot
G_{m_{k-1}}({\bf p}_{k-1},{\bf p}_{\sigma(k-1)})
G_{i-m_1\cdot\cdot\cdot-m_{k-1}}({\bf p}_{k},{\bf p}_{\sigma(k)})
\nonumber\\
&&=
\sum_{m_1=1}^{\infty}\cdot\cdot\cdot
\sum_{m_k=1}^{\infty}
h_{m_1+\cdot\cdot\cdot+m_k}
\sum_{\sigma}
G_{m_1}({\bf p}_1,{\bf p}_{\sigma(1)})
\cdot \cdot\cdot
G_{m_{k}}({\bf p}_{k},{\bf p}_{\sigma(k)})
\end{eqnarray}
with
\begin{equation}
h_{m_1+\cdot\cdot\cdot+m_k}=\sum_{n=m_1+\cdot\cdot\cdot+m_k}^{\infty}P_n
\frac{\omega(n-m_1-\cdot\cdot\cdot-m_k)}{\omega(n)}.
\end{equation}
It is interesting to notice that if
$P_n=\omega(n)/\sum_n \omega(n)$ as assumed in Ref.~\cite{Z2,ZC,Pr}, we have
the following modified $k$ pion inclusive distribution~\cite{Z2}:
\begin{equation}
N_k({\bf p_1,\cdot \cdot \cdot p_k})=
\sum_{\sigma}H_{1\sigma(1)}\cdot\cdot\cdot H_{k\sigma(k)}
\end{equation}
with
\begin{equation}
H_{ij}=H({\bf p_i,p_j})=\sum_{n=1}^{\infty}G_n({\bf p_i,p_j}).
\end{equation}
One interesting property of Eq.~(15) is that it is
very similar to the pure $n$-pion inclusive distribution (Eq.~(1)) but with a modified source
distribution $S(x,K)$ which satisfies
\begin{equation}
H({\bf p_i,p_j})=\int S(x,\frac{p_i+p_j}{2})\exp(i(p_i-p_j)\cdot x)d^4 x.
\end{equation}
Based on Eq.~(16), we can study multipion BE correlation effects
on the source distribution~\cite{ZSH}. From Eq.~(16), the single
particle spectrum distribution, $P^{\langle n\rangle}_1({\bf p})$,
can be expressed as:
\begin{equation}
P^{\langle n\rangle}_1({\bf p})=\frac{N_1({\bf p})}{\langle n\rangle}
=\frac{\int S(x,p)d^4 x}{\langle n\rangle},~~
\langle n \rangle =\int N_1({\bf p})d{\bf p}=\int S(x,{\bf p})d^4x d{\bf p}.
\end{equation}
Similarly the source distribution in coordinate space,
$P^{\langle n\rangle}_1({\bf r})$,
can be expressed as
\begin{equation}
P^{\langle n\rangle}_1({\bf r})
=\frac{\int S(x,{\bf p})d{\bf p} dt}{\langle n\rangle}.
\end{equation}
Assuming the input $g(x,p)$ of Eq.~(8), the multipion BE correlation effects on
the source distribution are shown
in Fig.~2 and Fig.~3. It is clear that as $\langle n\rangle$ increases, the multipion BE
correlation effects on the source distribution become stronger. Multipion
BE correlations make pions concentrate in momentum and coordinate space.
It is easily checked that the half widths $\Delta_{eff}$ and $R_{eff}$ of the source distribution
in momentum space and coordinate
space satisfy the following relationships:
\begin{equation}
\sqrt{\frac{\Delta}{2R}}\le \Delta_{eff}\le \Delta,~~~
\sqrt{\frac{R}{2\Delta}}\le R_{eff}\le R.
\end{equation}
If we take $\frac{\langle n \rangle}{(R\Delta)^3}\rightarrow \infty$,
we have $R_{eff}=\sqrt{\frac{R}{2\Delta}}$ and
$\Delta_{eff}=\sqrt{\frac{\Delta}{2R}}$. The source size then satisfies the minimal
Heisenberg relationship; that is, all pions are concentrated in a single
phase space cell and a pion condensate occurs,
but the pion multiplicity distribution remains of Bose-Einstein form: $P_n=\frac{\langle n\rangle^n}
{(\langle n\rangle +1)^{n+1}}$.
In general, pion interferometry formulas depend not only on the correlator
$\rho_{i,j}$ but also on $P_n$, as shown in Eq.~(12). The latter
property was neglected in previous studies.
Recently, the results of Egger, Lipa and Buschbeck~\cite{ELB}
seem to support the conclusion presented here. Previous studies based
on pure multipion interferometry formulas
have shown that the difference between three-pion interferometry and two-pion
interferometry is not so large~\cite{HZ97,conf1}. Certainly, there is also
the possibility that there are correlations among the
emitted pions which were not included in the above derivation.
So more studies of two-pion and higher-order pion
interferometry are needed!
If we assume the pion state
to be
\begin{equation}
|\phi\rangle =\sum_n a_n |n\rangle.
\end{equation}
Here $a_n$ is a parameter connected with the pion multiplicity distribution,
and $|n\rangle$ is the non-normalized $n$-pion state, which can be expressed as
\begin{equation}
|n\rangle =\frac{(\int d{\bf p} j({\bf p}) a^+({\bf p}))^n}{n!}|0\rangle.
\end{equation}
Here $a^+({\bf p})$ is the pion creation operator and $j({\bf p})$ is the pion
probability amplitude.
Then the pion multiplicity distribution $P_n$ can be expressed as
\begin{equation}
P_n=\frac{|a_n|^2\omega(n)}{\sum_n |a_n|^2\omega(n)}.
\end{equation}
The two-pion interferometry results for different $P_n$ (different $a_n$) are
shown in Fig.~4. One can see clearly that there are big differences
among the two-pion interferometry results for different $P_n$.
\section{Monte-Carlo simulation of multipion interferometry}
Putting two-pion BE correlations into event generators has been studied by
different authors~\cite{CGZP}. Putting multipion BE correlations into an event generator is
a very difficult task, as shown in Refs.~\cite{Za,FWW}.
Recently Fialkowski et al.~\cite{FWW} used the
methods of Bialas and Krzywicki~\cite{BK} and Wosiek~\cite{Wo} and approximately implemented
multipion BE
correlations in JETSET/PYTHIA to study multipion BE correlation effects on the W mass.
Since there is no space-time picture in the JETSET model,
they had to put in $\rho_{ij}$ by hand. On the other
hand, for a model which has a space-time picture, one can construct
$g(x,p)$ according to the following method~\cite{ZZZ,ZC}:
$D(x,p)$ is a totally classical function given by the output of the event generator
and can be written down as:
\begin{equation}
D(x,p)=\sum_i \delta(x-x(i))\delta(p-p(i)).
\end{equation}
Here $x(i)$ and $p(i)$ are the coordinate and momentum of the $i$-th particle.
The most natural and easiest way to construct the semiclassical
function $g(x,p)$ is to replace the
above delta functions with Gaussian wavepackets:
\begin{eqnarray}
g(y,k)&=&\sum_i \prod_{j=0}^{3} \exp\left(
-\frac{(y_j-x(i)_j)^2}{\delta x_j^2}\right)
\exp\left(-\frac{(k_j-p(i)_j)^2}{\delta p_j^2}
\right).
\end{eqnarray}
In general the $\delta x_j$ ($\delta p_j$) need not all be the same, but for simplicity one can
set them to a common value and take $\delta p_j=\frac{1}{\delta x_j}$.
The same discussion can be found
in Ref.~\cite{PGG90}.
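A minimal sketch of this construction is given below; common widths $\delta x$ and $\delta p = 1/\delta x$ are used as suggested above, and the array layout of the generator output is an assumption for illustration:
\begin{verbatim}
import numpy as np

def g_wavepacket(y, k, xs, ps, dx=1.0):
    """Smear the event-generator output
    D(x,p) = sum_i delta(x - x(i)) delta(p - p(i))
    with Gaussian wavepackets of common width dx (dp = 1/dx).
    xs, ps: (N, 4) arrays of particle coordinates and momenta;
    y, k: length-4 arrays at which g is evaluated."""
    dp = 1.0 / dx
    wx = np.exp(-np.sum((y - xs) ** 2, axis=1) / dx ** 2)
    wp = np.exp(-np.sum((k - ps) ** 2, axis=1) / dp ** 2)
    return float(np.sum(wx * wp))
\end{verbatim}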
Unfortunately, the
computational effort of the above methods for putting multipion BE
correlations into an event generator is so large that a new
method which enables a fast implementation of multipion correlations
in event generators is highly desirable in this field.
\begin{figure}[t]\epsfxsize=12cm \epsfysize=6cm
\centerline{\epsfbox{ha22.eps}}
\vskip -0.5cm
\end{figure}
\section{Conclusions}
In the above, we have said nothing about the effects of
flow~\cite{xxx} and resonances~\cite{yyy}
on pion interferometry, as our main interest is the
BE statistical effects, and flow and resonances
can be included in the function $g(x,p)$.
Certainly the multipion Coulomb effects are
still an open question in this field. Energy constraint effects
on pion interferometry can be studied by the method
presented in Ref.~\cite{ZCG}. In this paper, we have derived
any-order pion interferometry formulas for fixed pion multiplicity
events and for mixed events. In general pion interferometry formulas
depend not only on the correlator but also
on the pion multiplicity distribution, which was neglected
in previous studies. It is shown that as the pion
multiplicity becomes very large all pions concentrate in
a single phase space cell and a pion condensate
occurs, but the pion multiplicity distribution remains of BE form.
Methods to put multipion BE correlations into
event generators have been discussed.
\section*{Acknowledgments}
Q.H.Z. thanks Drs. U. Heinz, J. Zim\'anyi,
T. Cs\"org\H o, R. Lednicky, Y. Sinyukov, D. Mi\'skowiec, H. Egger,
B. Buschbeck, P. Scotto and Urs. Wiedemann
for helpful discussions.
Part of the talk is based on the work with U. Heinz and P. Scotto.
Q.H.Z was supported by the Alexander
von Humboldt Foundation.
\section*{References}
\section{Extending the derivation of normal equations for MLP to handle non-linear activation functions}
In our preliminary version~\cite{Gao2020RLS-RTMDNet}, we have claimed and demonstrated that each layer of MLP in the case of the squared-error loss function can be represented by the system of normal equations for RLS estimation by simply ignoring non-linear activation functions. There is no problem in applying the recursive LSE only to the final linear layer. However, without non-linear activation functions, a multi-layer MLP becomes just a linear function and there is no reason to have multiple layers. In the real implementation of our RLS-RTMDNet, we actually inherit the non-linear activation functions in the original RT-MDNet as shown in Figure~\ref{fig:MLP}. We hereby provide more explanations on how to extend the derivation of normal equations for MLP in~\cite{Gao2020RLS-RTMDNet} to generally handle other layers including non-linear activation functions. It is noted that we can connect the recursive LSE to each MLP layer with non-linear activation and derive the corresponding normal equation individually for different input-output pairs $(\mathbf{x}_i, \mathbf{y}_i)$ following the derivation in~\cite{Gao2020RLS-RTMDNet}.
More specifically, we can denote the \textit{squared-error loss} for a $L$-layered MLP neural network with non-linear activation functions using the input-output pair $(\mathbf{x}_i, \mathbf{y}_i)$ as:
\begin{align}
\vspace{-2mm}
\label{equ:lossSI} \mathscr{L}\left(\mathbf{y}_i, \mathbf{W}^{L-1}f^{L-2}(\mathbf{W}^{L-2} \cdots f^2(\mathbf{W}^{2}f^1(\mathbf{W}^{1}\mathbf{x}_i)))\right)~,
\vspace{-2mm}
\end{align}
where the activation functions $f^l(\cdot)$, such as the widely applied ReLU (Rectified Linear Unit) and Leaky ReLU, are \textit{applied to each node separately and the derivative of each activation function is just the diagonal matrix of the derivative on each node}. Backpropagation then gives the gradient of the input values at level $l$ as follows based on the chain rule:
\begin{align}
\vspace{-2mm}
\label{equ:delta1SI} \bm{\delta}^l_i \coloneqq &\left(\mathbf{W}^l\right)^{\top} \cdot diag\left((f^{l})'_i\right) \cdots \left(\mathbf{W}^{L-2}\right)^{\top} \cdot diag\left((f^{L-2})'_i\right)\cdot \nonumber\\
&\left(\mathbf{W}^{L-1}\right)^{\top} \cdot \nabla_{\mathbf{z}_i}\mathscr{L}~,
\vspace{-2mm}
\end{align}
where
\begin{align}
\vspace{-2mm}
\label{equ:z1SI} \nabla_{\mathbf{z}_i}\mathscr{L} &= \mathbf{z}_i - \mathbf{y}_i~, \\
\label{equ:z2SI}\mathbf{z}_i &= \mathbf{W}^{L-1}f^{L-2}(\mathbf{W}^{L-2} \cdots f^2(\mathbf{W}^{2}f^1(\mathbf{W}^{1}\mathbf{x}_i)))~.
\vspace{-2mm}
\end{align}
The $\bm{\delta}^l_i$ can also easily be computed recursively as:
\begin{align}
\vspace{-2mm}
\label{equ:delta2SI} \bm{\delta}^l_i = \left(\mathbf{W}^l\right)^{\top} \cdot diag\left((f^{l})'_i\right) \cdot \bm{\delta}^{l+1}_i~.
\vspace{-2mm}
\end{align}
The gradient of the weights in layer $l$ is then:
\begin{align}
\vspace{-2mm}
\label{equ:gradient1SI} \nabla_{\mathbf{W}^l}\mathscr{L} = diag\left((f^{l})'_i\right) \cdot \bm{\delta}^{l+1}_i\left(\mathbf{u}^l_i\right)^{\top}~.
\vspace{-2mm}
\end{align}
\textit{Note that the derivative of the weight decay regularization term can be directly added to the result of backpropagation when calculating the gradient of the weights.}
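For reference, the recursions above amount to the following backward-pass sketch, in which $diag\left((f^{l})'_i\right)\bm{\delta}^{l+1}_i$ is realized as an element-wise product; for the final linear layer the activation derivative is simply a vector of ones:
\begin{verbatim}
import numpy as np

def backward(Ws, us, fprimes, z, y):
    """Ws[l]: weight matrix of layer l; us[l]: its input u^l;
    fprimes[l]: per-node activation derivative (f^l)' there
    (ones for the final linear layer); z: network output."""
    delta = z - y                       # gradient at the output
    grads = [None] * len(Ws)
    for l in reversed(range(len(Ws))):
        g = fprimes[l] * delta          # diag((f^l)') delta^{l+1}
        grads[l] = np.outer(g, us[l])   # gradient of W^l
        delta = Ws[l].T @ g             # propagate to layer l
    return grads
\end{verbatim}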
Since the ReLU and Leaky ReLU activation functions are widely applied, we can also use the diagonal matrix of the derivative on each node for them to calculate $\mathbf{z}_i$, i.e.,
\begin{align}
\vspace{-2mm}
\label{equ:z3SI}\mathbf{z}_i = &\mathbf{W}^{L-1} \cdot diag\left((f^{L-2})'_i\right) \cdot \mathbf{W}^{L-2} \cdots diag\left((f^{2})'_i\right) \cdot \nonumber\\
&\mathbf{W}^{2} \cdot diag\left((f^{1})'_i\right) \cdot \mathbf{W}^{1}\mathbf{x}_i \\
= &\mathbf{W}^{L-1} \cdot diag\left((f^{L-2})'_i\right) \cdot \mathbf{W}^{L-2} \cdots diag\left((f^{l+1})'_i\right) \cdot \nonumber\\
&\mathbf{W}^{l+1} \cdot diag\left((f^{l})'_i\right) \cdot \mathbf{W}^{l}\mathbf{u}^l_i~.
\vspace{-2mm}
\end{align}
Isolating the term $\mathbf{W}^{l}$ from the rest of the product of the network weight matrices, i.e., let
\begin{align}
\vspace{-2mm}
\label{equ:replaceSI} \mathbf{W}^r_i = &\mathbf{W}^{L-1} \cdot diag\left((f^{L-2})'_i\right) \cdot \mathbf{W}^{L-2} \cdots diag\left((f^{l+1})'_i\right) \cdot \nonumber\\
&\mathbf{W}^{l+1} \cdot diag\left((f^{l})'_i\right)~,
\vspace{-2mm}
\end{align}
we may rewrite
\begin{align}
\vspace{-2mm}
\label{equ:gradient2SI} \nabla_{\mathbf{W}^l}\mathscr{L} &= \left(\mathbf{W}^r_i\right)^{\top}(\mathbf{z}_i - \mathbf{y}_i)\left(\mathbf{u}^l_i\right)^{\top} \\
&= \left(\mathbf{W}^r_i\right)^{\top}(\mathbf{W}^r_i\mathbf{W}^{l}\mathbf{u}^l_i - \mathbf{y}_i)\left(\mathbf{u}^l_i\right)^{\top}~.
\vspace{-2mm}
\end{align}
Solving for $\mathbf{W}^l$ for which $\nabla_{\mathbf{W}^l}\mathscr{L}$ is zero, we may write
\begin{align}
\vspace{-2mm}
\label{equ:gradient3SI} \mathbf{W}^{l}\mathbf{u}^l_i\left(\mathbf{u}^l_i\right)^{\top} = \left(\left(\mathbf{W}^r_i\right)^{\top}\mathbf{W}^r_i\right)^{-1}\left(\mathbf{W}^r_i\right)^{\top}\mathbf{y}_i\left(\mathbf{u}^l_i\right)^{\top} ~.
\vspace{-2mm}
\end{align}
The right side of this new equation can be written as a more intuitive term using some substitution operations, i.e.,
\begin{align}
\vspace{-2mm}
\label{equ:substitution} & \left(\left(\mathbf{W}^r_i\right)^{\top}\mathbf{W}^r_i\right)^{-1}\left(\mathbf{W}^r_i\right)^{\top}\left(\mathbf{z}_i - \nabla_{\mathbf{z}_i}\mathscr{L} \right)\left(\mathbf{u}^l_i\right)^{\top} \nonumber \\ = & \left(\left(\mathbf{W}^r_i\right)^{\!\!\top}\!\mathbf{W}^r_i\right)^{-1}\!\!\left(\left(\mathbf{W}^r_i\right)^{\!\!\top}\!\mathbf{W}^{r}_i\mathbf{z}^{l+1}_i \!-\!diag\!\left((f^{l})'_i\right)\bm{\delta}^{l+1}_i\!\right)\!\!\left(\mathbf{u}^l_i\right)^{\!\!\top} \nonumber \\ = & \left(\mathbf{z}^{l+1}_i - \left(\left(\mathbf{W}^r_i\right)^{\top}\mathbf{W}^r_i\right)^{-1}diag\left((f^{l})'_i\right)\bm{\delta}^{l+1}_i\right)\left(\mathbf{u}^l_i\right)^{\top}~\!\!\!.
\vspace{-2mm}
\end{align}
As can be seen, this new term includes the desired output $\mathbf{y}^{l+1}_i$ for layer $l+1$, i.e.,
\begin{align}
\vspace{-2mm}
\label{equ:desiredoutputSI} \mathbf{y}^{l+1}_i = \mathbf{z}^{l+1}_i - \left(\left(\mathbf{W}^r_i\right)^{\top}\mathbf{W}^r_i\right)^{-1}diag\left((f^{l})'_i\right)\bm{\delta}^{l+1}_i~,
\vspace{-2mm}
\end{align}
where $\mathbf{z}^{l+1}_i$ is the actual state of the $(l+1)$-th layer's output, and the last term is the ``normalized'' error term. Finally, the system of normal equations for layer $l+1$ is written as:
\begin{align}
\vspace{-2mm}
\label{equ:normalequ} \mathbf{W}^l = \mathbf{y}^{l+1}_i\left(\mathbf{u}^l_i\right)^{\top}\left(\mathbf{u}^l_i\left(\mathbf{u}_i^l\right)^{\top}\right)^{-1}~.
\vspace{-2mm}
\end{align}
The above derivation concludes that each layer of MLP with non-linear activation function can also be represented by the system of normal equations with respect to a specific input-output pair $(\mathbf{x}_i, \mathbf{y}_i)$ for solving the linear LSE problem.
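Note that for a single pair the matrix $\mathbf{u}^l_i\left(\mathbf{u}^l_i\right)^{\top}$ is rank-one; the system only becomes well-posed once contributions from many samples (plus a ridge term playing the role of the weight decay) are accumulated, which is what the recursive formulation achieves. A minimal batch sketch of this accumulation, with an illustrative regularization strength, is:
\begin{verbatim}
import numpy as np

def solve_normal_equations(us, ys, delta=1e-3):
    """Accumulate regularized normal equations over pairs
    (u_i, y_i) and solve W = (sum_i y_i u_i^T)
                            (sum_i u_i u_i^T + delta I)^{-1}."""
    A = sum(np.outer(y, u) for u, y in zip(us, ys))
    B = sum(np.outer(u, u) for u in us) + delta * np.eye(len(us[0]))
    return A @ np.linalg.inv(B)
\end{verbatim}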
\section{Derivation of normal equations for MLP in RT-MDNet in the case of cross-entropy loss}
\begin{figure*}
\centering
\includegraphics[width=0.90\linewidth]{MLP}
\caption{The implementation details of the online learned fully-connected MLP layers in RT-MDNet. It is the derivative of the cross-entropy loss $\mathscr{L}_{ce}$ with respect to $\mathbf{z}$, i.e., $\nabla_{\mathbf{z}}\mathscr{L}_{ce} = \mathbf{u} - \mathbf{y}$, that is propagated backwards from higher layers to lower layers. We refer the readers to~\cite{Saitoh2021DLB} for more details.} \label{fig:MLP}
\end{figure*}
\begin{figure*}
\centering
\subfloat[Softmax function]{\includegraphics[width=0.18\linewidth]{Softmax}
\label{softmax}}
\hfil
\subfloat[Identity function]{\includegraphics[width=0.18\linewidth]{Identity}
\label{identity}}
\caption{Different designs for the output layer: Softmax function for the classification problems and Identity function for the regression problems.} \label{fig:outputlayer}
\end{figure*}
According to~\cite{Saitoh2021DLB}, the product of matrices in forward propagation in a MLP neural network is called an ``affine transformation'' in the field of geometry. Therefore, the process that performs an affine transformation can be implemented as an ``Affine layer'' in RT-MDNet~\cite{Jung2018RTMDNet}. In Figure~\ref{fig:MLP}, we show the implementation details of the online learned fully-connected MLP layers (\verb'fc4-6') $\left\{\mathbf{W}^l\right\}_{l=4}^6$. In RT-MDNet, the output scores $\mathbf{z}$ of the single fully-connected binary classification layer $\mathbf{W}^6$ are first normalized by a Softmax output layer and then converted to the probability values $\mathbf{u}$ of belonging to different classes. Finally, the cross-entropy error function is applied for calculating the loss for online training. It can also be derived that it is the derivative of the cross-entropy loss $\mathscr{L}_{ce}$ with respect to $\mathbf{z}$, i.e., $\nabla_{\mathbf{z}}\mathscr{L}_{ce} = \mathbf{u} - \mathbf{y}$, that is propagated backwards from higher layers to lower layers. We refer the readers to~\cite{Saitoh2021DLB} for more details.
As~\cite{Saitoh2021DLB} has stated, using the cross-entropy loss function for the Softmax output layer (Figure~\ref{softmax}) makes backward propagation return the ``pretty'' result $\nabla_{\mathbf{z}}\mathscr{L}_{ce} = \mathbf{u} - \mathbf{y}$, which is not accidental; the cross-entropy error function is designed to do this, especially for the classification problems. In the regression problems, the sum of squared errors is used for calculating the loss for the same reason, as the output layer can be seen as an Identity function (Figure~\ref{identity}) and backward propagation returns the same ``pretty'' result $\nabla_{\mathbf{z}}\mathscr{L}_{se} = \mathbf{u} - \mathbf{y}$ for the squared-error loss $\mathscr{L}_{se}$. That means all the affine layers in the different cases of cross-entropy loss and squared-error loss will receive the gradient information from the same backpropagation of $\mathbf{u} - \mathbf{y}$ for the same input-output pair $(\mathbf{x}, \mathbf{y})$. This is the basic reason why we claimed in our preliminary conference version~\cite{Gao2020RLS-RTMDNet} that all the normal equation derivations for RLS estimation in the case of squared-error loss function can be easily transplanted to the case of cross-entropy loss.
More specifically, although we cannot directly use the cross-entropy loss for deriving the normal equations in RT-MDNet due to the special form of the Softmax-based output layer\footnote{It is very different from the ReLU-based non-linear activation layer, where the activation function is applied to each node separately.}, we can instead cast the cross-entropy loss along with its Softmax-based output layer as a black-box component, and let the affine layers believe they are receiving the gradient information from a squared-error loss with the Identity function as the output layer. In other words, the update of the affine layers will behave as in the case of squared-error loss to incorporate memory retention.
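To make the shared gradient concrete, the following minimal NumPy sketch (our own illustration, not part of the released RT-MDNet code) numerically verifies that the analytic result $\nabla_{\mathbf{z}}\mathscr{L}_{ce} = \mathbf{u} - \mathbf{y}$ agrees with a central finite-difference estimate of the cross-entropy gradient:
\begin{verbatim}
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift by max for numerical stability
    return e / e.sum()

def cross_entropy(z, y):
    return -np.sum(y * np.log(softmax(z) + 1e-12))

rng = np.random.default_rng(0)
z = rng.normal(size=5)       # raw scores of the output layer
y = np.eye(5)[2]             # one-hot ground-truth label

analytic = softmax(z) - y    # the "pretty" result u - y
eps = 1e-6
numeric = np.array([(cross_entropy(z + eps * e, y)
                     - cross_entropy(z - eps * e, y)) / (2 * eps)
                    for e in np.eye(5)])
assert np.allclose(analytic, numeric, atol=1e-5)
\end{verbatim}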
\section{Explanation of using virtual input}
We provide more detailed explanations of the virtual input defined in Eqs.~\eqref{equ:mean1}, \eqref{equ:mean2} and \eqref{equ:mean3} of the main paper as follows. We start by proving Eq.~\eqref{equ:submultiplicativity}, which implies that the minimization of Eq.~\eqref{equ:costfunctionLSnew2} also leads to the minimization of the following {\it cost function}:
\begin{align}
\label{appequ:costfunctionLSnew5} \mathscr{L}(\mathbf{W}_n) = \mathop{\sum}\limits _{i=1}^{n}\beta^{n-i}\!\left\Vert \overline{\mathbf{y}}_i - \mathbf{W}_n\overline{\mathbf{x}}_i\right\Vert^2 + \frac{\delta}{b}\beta^n\left\Vert \mathbf{W}_n\right\Vert^2~.
\end{align}
This helps to explain why it is feasible to update $\mathbf{P}_{n}$ approximately based on the virtual input in the case of $b > 1$.
\textit{Proof of Eq.~\eqref{equ:submultiplicativity}.} Based on the Cauchy-Schwarz inequality, it holds that
\begin{align}
\label{appequ:submultiplicativity} \left\Vert \mathop{\sum}\limits _{j=1}^{b}\left(\mathbf{y}_{i,j} - \mathbf{W}_n\mathbf{x}_{i,j}\right)\right\Vert^2 \leq b\mathop{\sum}\limits _{j=1}^{b}\left\Vert \mathbf{y}_{i,j} - \mathbf{W}_n\mathbf{x}_{i,j}\right\Vert^2~,
\end{align}
for all the $b$ samples in the $i_{th}$ {\it block}. Using Eq.~\eqref{equ:mean1}, we can derive the following inequality
\begin{align}
\label{appequ:inequality} \left\Vert \mathop{\sum}\limits _{j=1}^{b}\left(\mathbf{y}_{i,j} - \mathbf{W}_n\mathbf{x}_{i,j}\right)\right\Vert^2 &= b^2\left\Vert \overline{\mathbf{y}}_i - \mathbf{W}_n\overline{\mathbf{x}}_i\right\Vert^2\nonumber\\
&\leq b\mathop{\sum}\limits _{j=1}^{b}\left\Vert \mathbf{y}_{i,j} - \mathbf{W}_n\mathbf{x}_{i,j}\right\Vert^2
\end{align}
to have
\begin{align}
\label{appequ:inequality2} \left\Vert \overline{\mathbf{y}}_i - \mathbf{W}_n\overline{\mathbf{x}}_i\right\Vert^2 \leq \frac{1}{b}\left\Vert \bm{Y}_i - \bm{X}_i\mathbf{W}_n^{\top}\right\Vert^2~.
\end{align}
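As a quick numerical sanity check of this bound (a sketch of our own, with arbitrary dimensions and random data), one can draw $\bm{X}_i$, $\bm{Y}_i$ and $\mathbf{W}_n$ at random and confirm both Eq.~\eqref{appequ:inequality2} and its $\sqrt{b}$-scaled variant used below in Eq.~\eqref{appequ:inequality3}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
b, p, q = 8, 16, 4                    # block size, input/output dims
X = rng.normal(size=(b, p))           # rows are x_{i,j}^T
Y = rng.normal(size=(b, q))           # rows are y_{i,j}^T
W = rng.normal(size=(q, p))

x_bar, y_bar = X.mean(axis=0), Y.mean(axis=0)
lhs = np.sum((y_bar - W @ x_bar) ** 2)
rhs = np.sum((Y - X @ W.T) ** 2) / b  # mean squared residual
assert lhs <= rhs + 1e-12             # the bound above
assert b * lhs <= np.sum((Y - X @ W.T) ** 2) + 1e-12  # sqrt(b) variant
\end{verbatim}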
\textit{Explanation of Eq.~\eqref{equ:mean2} in Section~\ref{Sec:MLP}.} In the real implementation of RT-MDNet, it is the average of the derivatives of the loss function over all the $b$ samples in the mini-batch that is used for updating $\mathbf{W}^l$ at each optimization iteration of the MBSGD process. Since the recursive formula for improving this process has a {\it cost function} similar in form to Eq.~\eqref{equ:costfunctionLSnew2}, we can thus directly define the virtual input for each MLP layer in RT-MDNet at the time index $n$ according to Eq.~\eqref{equ:mean1} as follows:
\begin{align}
\label{appequ:mean2} \overline{\mathbf{x}}_n^l = \frac{1}{b}\mathop{\sum}\limits _{j=1}^{b}\mathbf{x}_{j}^l~.
\end{align}
\textit{Explanation of Eq.~\eqref{equ:mean3} in Section~\ref{Sec:CNN}.} In the real implementation of DiMP, it is the sum of all derivatives of the squared-error loss in Eq.~\eqref{equ:costfunctionLSATOM3} for all the $N$ samples that is used for updating $\mathbf{W}$ at each optimization iteration of the BGD process. According to a variation of Eq.~\eqref{equ:submultiplicativity}, i.e.,
\begin{align}
\label{appequ:inequality3} \left\Vert \sqrt{b}\overline{\mathbf{y}}_i - \mathbf{W}_n\left(\sqrt{b}\overline{\mathbf{x}}_i\right)\right\Vert^2 \leq \mathop{\sum}\limits _{j=1}^{b}\left\Vert \mathbf{y}_{i,j} - \mathbf{W}_n\mathbf{x}_{i,j}\right\Vert^2~,
\end{align}
we can define the virtual input for the convolutional layer in DiMP at the time index $n$ as follows:
\begin{align}
\label{appequ:mean3} \overline{\mathbf{x}}_n = \frac{\sqrt{NM}}{NM} \mathop{\sum}\limits _{j=1}^{N}\mathop{\sum}\limits _{k=1}^{M}\sqrt{\gamma_{jk}}\mathbf{x}_{jk} = \frac{1}{\sqrt{NM}} \mathop{\sum}\limits _{j=1}^{N}\mathop{\sum}\limits _{k=1}^{M}\sqrt{\gamma_{jk}}\mathbf{x}_{jk}.
\end{align}
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{G}{iven} an arbitrary detected or annotated object of interest in the initial video frame, visual object tracking aims at {\it recognizing} and {\it localizing} other instances of the same object in subsequent frames to facilitate understanding how it moves through this video sequence. Recently this paradigm of tracking visual objects from a single initial exemplar in the testing phase has been broadly cast as a one-/few-shot meta-learning problem~\cite{Bertinetto2016SiameseFC, Bertinetto2016Learnet, Yang2017RFL, Park2018MetaSDNet, Li2018SiamRPN, Zhu2018DaSiamRPN, Yang2018MemTrack, Li2019SiamRPN++, Bhat2019DiMP, Huang2019SSD-MAML, Choi2019MLT, Li2019GradNet, Zhang2019UpdateNet, Xu2020SiamFC++, Danelljan2020PrDiMP, Wang2020Retina-MAML, Jung2020MetaRTT, Dai2020LTMU, Huang2020GlobalTrack, Yang2020ROAM, Yang2021MemTrack}, which pushes forward the development of deep trackers and achieves unprecedented performance improvements.
It is well known that Deep Neural Networks (DNNs) with large capacity are typically data-hungry. However, a single exemplar in the initial frame can only provide a limited training set for sequence-specific adaptation of a deep tracker. In contrast to some non-meta-learning-based pioneering works, which only fine-tune several light-weight classification network layers~\cite{Nam2016MDNet, Jung2018RTMDNet} or train them from scratch~\cite{Danelljan2019ATOM} to perform the initial adaptation at the first frame for {\it recognizing}, many meta-learning-based deep trackers~\cite{Bertinetto2016SiameseFC, Bertinetto2016Learnet, Park2018MetaSDNet, Li2018SiamRPN, Zhu2018DaSiamRPN, Li2019SiamRPN++, Bhat2019DiMP, Huang2019SSD-MAML, Xu2020SiamFC++, Danelljan2020PrDiMP, Wang2020Retina-MAML, Jung2020MetaRTT, Huang2020GlobalTrack} are proposed to speed up this initial adaptation process and reduce the risk of overfitting. The major insight is to inject additional prior knowledge into the meta-learning-based one-shot adaptation process to achieve explicit model initialization $\bm{\theta}_{\textrm{init}}^0$ by solving massive amounts of small meta-learning tasks based on the single initial exemplar in the offline learning process.
It is also noteworthy that, in the visual tracking testing phase, it is possible to augment the initial limited training set to update the trackers online by exploiting the position predictions of the object in subsequent frames as additional samples. Despite the fact that introducing imperfect predictions into the learning process may accumulate errors and lead to tracking drift, the conditions under which a visual tracking system is set up are somewhat less extreme~\cite{Bertinetto2019} in comparison with one-/few-shot classification. As the number of arriving samples increases, an ideal and straightforward way to adapt the tracking models to new environments online and avoid overfitting is to fine-tune the model parameters over the whole increasingly larger sample set at each update step. For online deep trackers, however, this incurs a significant computational overhead due to the requirement of more gradient descent iterations to achieve good convergence, which prohibits high tracking speed. In practice, a handful of samples are maintained during the whole testing phase in a way similar to the {\it sliding-window} strategy with fixed window size. Again, to speed up this online adaptation and reduce the risk of overfitting, some of the aforementioned meta-learning-based one-shot initial adaptation methods are extended~\cite{Huang2019SSD-MAML, Wang2020Retina-MAML, Jung2020MetaRTT}, and some novel meta-learners are also carefully designed to offline-train updaters~\cite{Yang2017RFL, Yang2018MemTrack, Choi2019MLT, Li2019GradNet, Zhang2019UpdateNet, Yang2020ROAM, Yang2021MemTrack} for effective online adaptation, namely few-shot online adaptation.
Despite the success of both the meta-learning-based one-shot initial adaptation and few-shot online adaptation strategies in visual tracking to avoid overfitting and reduce memory or computation requirements based on fast domain adaptation, complex optimization is required in the offline training phase -- especially for the latter adaptation strategy. Specifically, both of them incorporate the prior knowledge from large amounts of annotated training data for fast domain adaptation, e.g., the training splits of \textit{ImageNet DET} and \textit{VID}~\cite{Russakovsky2015ImageNet}, \textit{MS-COCO}~\cite{Lin2014COCO}, \textit{GOT10k}~\cite{Huang2019GOT}, \textit{TrackingNet}~\cite{Muller2018TrackingNet} and \textit{LaSOT}~\cite{Fan2019LaSOT}. In this paper, we take a different perspective from the above methods for addressing the few-shot online adaptation problem, and propose a simple yet effective online adaptation method based on recursive least-squares (RLS) estimation~\cite{Haykin2002AFT}. The motivation behind this is that we find it feasible to sidestep the computational overhead of fine-tuning over the whole gradually enlarged sample set by exploiting the RLS estimator's ability to ``not forget" the old knowledge about the object after discarding the old data. Our approach can be treated as a widely applicable and integrable module for existing deep trackers: it improves the plain gradient descent optimization over the {\it sliding-window} data for tracking, while neither the optimization time nor the storage consumption increases significantly.
\begin{figure}[!t]
\captionsetup{font={small}}
\centering
\subfloat{\includegraphics[width=0.99\linewidth]{OnlineTracking.pdf}
\label{OnlineTracking}}
\caption{Online learning in visual tracking. Here we denote the collected $b$ training samples at time index $n$ as $o_n = \left\{\left(\mathbf{x}_{n,j}, \mathbf{y}_{n,j}\right )\right\}_{j=1}^b$.} \label{fig:OnlineTracking}
\end{figure}
In other words, we propose to use RLS estimation to aid the online adaptation. Our aim is to retain past memory while updating tracking models sequentially based on a handful of samples maintained at each update step. This resembles the emerging continual learning literature~\cite{Kirkpatrick2017EWC, He2018CAB, Zeng2019OWM, Parisi2018CLReview}, which has recently proved valuable in numerous typical supervised learning and reinforcement learning-based tasks where static models are offline trained based on continual learning with fixed volumes of data. The few-shot online adaptation is also closely related to generic online learning, which basically aims at updating the predictor for future data at each time index while a continuous stream of data becomes available in sequential order. It sometimes needs to adapt dynamically to new patterns in the stream as the data distribution changes over time, which is exactly the situation in visual tracking, as shown in Figure~\ref{fig:OnlineTracking}. Our approach thus departs from the prior arts in that we are the first to improve the online learning procedure by incorporating memory retention, in the spirit of continual learning in humans without catastrophic forgetting.
The concrete implementation to demonstrate the effectiveness of our approach is based on the fact that the calculations in deep networks, whether multi-layer perceptrons (MLPs) or convolutional neural networks (ConvNets), can be quickly performed using fast linear algebra routines by organizing the network parameters in matrices. We study two networks in the online learning families for tracking: the MLPs-based as in RT-MDNet~\cite{Jung2018RTMDNet}, and the ConvNets-based as in DiMP~\cite{Bhat2019DiMP}. The reason for choosing these baselines is that they both use the {\it sliding-window} strategy for online learning classification models and {\it recognizing} objects in tracking, and the exploited optimization methods only enable the learning to converge to a {\it local} point with respect to the fixed-sized sample set at each update step. Instead, our improved optimization method in a recursive fashion can enable the learning to approximately converge to a {\it global} point with respect to all the historical training samples ever seen, including the discarded old data. Moreover, we do not rely on offline training to improve the tracking performance, and all the validations are based on the off-the-shelf models of the baselines in their papers.
In summary, through network online learning we investigate a continual-learning-inspired online learning approach with RLS estimator-aided online adaptation in tracking, which is orthogonal to the investigation of meta-learning approaches. We believe the observations in this paper will improve the understanding of network online learning for visual tracking, and we expect our simple-to-implement approach will further advance the few-shot online adaptation research. Our improved code and raw results are released at \url{https://github.com/Amgao/RLS-OnlineTrack}.
\section{Related work}\label{sec:relatedwork}
\subsection{Online Learning in Visual Tracking}
Online learning has been an important part of visual tracking for about ten years, since the classic IVT tracker~\cite{Ross2008IVT} was first proposed in 2008. The subsequent research has concentrated on online robust classifier construction, including MIL~\cite{Babenko2011MIL}, Struck~\cite{Hare2016Struck}, MEEM~\cite{Zhang2014MEEM}, TGPR~\cite{Gao2020TGPR}, DCF-based approaches~\cite{Henriques2015KCF, Danelljan2017DSST, Danelljan2015SRDCF, Galoogahi2017BACF, YLi2019ARCF, Danelljan2016CCOT, Danelljan2017ECO, Xu2019LADCF, Li2018STRCF, Xu2019GFS-DCF, YLi2020AutoTrack}, deep classifiers~\cite{Nam2016MDNet, Jung2018RTMDNet, Danelljan2019ATOM}, and some recent long-term trackers~\cite{Yang2014Reacquisition, Dai2020LTMU}, just to name a few. This is very different from the typical supervised learning tasks trained with fixed volumes of data, e.g., image classification~\cite{Simonyan2015VGG} and object detection~\cite{He2017MaskRCNN}.
In general, adapting to new data continually may give the online tracking models a tremendous advantage, when discriminating background distractors, over the Siamese-style trackers~\cite{Bertinetto2016SiameseFC, Bertinetto2016Learnet, Li2018SiamRPN, Zhu2018DaSiamRPN, Li2019SiamRPN++, Xu2020SiamFC++, Chen2020SiamBAN}, which abandon online learning. This instance-level discrimination power largely relies on the convergence rate of optimization algorithms, especially within the limited optimization iterations allowed in the update stage for the sake of tracking speed. The challenges may stem from the learning rate selection, model initialization, non-convex cost function, and so on~\cite{Ruder2017GDReview}.
Besides providing a good model initialization to account for the convergence challenges in the update stage, as the aforementioned meta-learning-based one-shot initial adaptation methods~\cite{Park2018MetaSDNet, Bhat2019DiMP, Huang2019SSD-MAML, Danelljan2020PrDiMP, Wang2020Retina-MAML, Jung2020MetaRTT} have done, some novel LSTM model-based~\cite{Yang2017RFL, Yang2018MemTrack, Yang2020ROAM, Yang2021MemTrack}, ConvNet model-based~\cite{Zhang2019UpdateNet}, and gradient-based~\cite{Choi2019MLT, Li2019GradNet} meta-learners are proposed to efficiently adapt tracking models to appearance variations. Moreover, the Conjugate Gradient~\cite{Shewchuk1994CG} methodology-based online optimizers~\cite{Danelljan2016CCOT, Danelljan2017ECO, Danelljan2019ATOM} or Steepest Descent methodology-based ones~\cite{Bhat2019DiMP} are also carefully designed to improve convergence during online learning.
In addition, despite the benefits of improved instance-level discrimination power with frequent updates, the excessive updates with a limited memory size due to the discarding of old data may result in overfitting to recent training samples, deteriorating the robustness. Being robust is crucial for increasing the flexibility in tackling the different distortions of the object appearance over time (e.g., intrinsic object scale and pose variations, and variations caused by extrinsic illumination change, camera motion and occlusion). In other words, the plain online learning in visual tracking, e.g., the standard gradient descent (GD) or its momentum-based stochastic twin (SGD)~\cite{Qian1999Momentum} in the earlier online trackers~\cite{Danelljan2015SRDCF, Nam2016MDNet, Jung2018RTMDNet}, faces challenges in meeting both the instance-level discrimination and robustness demands at the same time, as they are sometimes contradictory~\cite{Wang2019SPM}. Apart from the aforementioned meta-learning-based few-shot online adaptation methods, some online trackers widely apply the moderately infrequent and underfitted update~\cite{Danelljan2016CCOT, Danelljan2017ECO, Danelljan2019ATOM, Bhat2019DiMP, Danelljan2020PrDiMP}, passive-aggressive update~\cite{Li2018STRCF, Xu2019LADCF, Xu2019GFS-DCF, YLi2020AutoTrack}, or long short-term complementary update~\cite{Nam2016MDNet, Jung2018RTMDNet} settings in the update stage to alleviate overfitting to recent, or even corrupted, training samples.
These observations inspire the design of our proposed RLS estimator-aided online learning approach for visual tracking, which not only has instance-level discrimination power in case of background distractors as other online trackers do, but also enhances robustness thanks to memory retention despite the discarding of old data.
\subsection{One-/Few-shot Learning in Visual Tracking}
It is well known that humans can effortlessly learn novel concepts after only one or a few exposures by exploiting the knowledge primitives accrued in a lifetime. Inspired by this one-/few-shot learning ability of humans, there has been a recent resurgence of interest in two kinds of machine learning problems, i.e., (a) learning meta-level knowledge across a large number of distinct training tasks to rapidly generalise to new concepts with small training data, which is termed meta-learning or learning-to-learn (e.g.,~\cite{Koch2015MetaSiamese, Bertinetto2016Learnet, Finn2017MAML}), and (b) learning consecutive tasks to continually adapt to new data without forgetting how to perform previously trained tasks, namely continual learning (e.g.,~\cite{Kirkpatrick2017EWC, He2018CAB, Zeng2019OWM, Parisi2018CLReview}). Since the limited-data regime that characterises the visual tracking setup has attracted much attention recently, a promising research direction is to apply the one-/few-shot learning paradigm to visual tracking.
Siamese networks were first exploited to formulate the one-shot object recognition task as image matching, via learning feature representations that preserve the class neighborhood structure and then inducing the $l_1$ component-wise distance metric to join the two Siamese twins for similarity computation~\cite{Koch2015MetaSiamese}. This metric-learning-based meta-learning approach is further extended to be capable of predicting network parameters in~\cite{Bertinetto2016Learnet}, and two more distance metrics are evaluated, i.e., the inner product and the Euclidean distance. In~\cite{Bertinetto2016SiameseFC}, a Siamese architecture that is fully-convolutional with respect to the search image by obtaining the convolution parameters from the template image, namely SiamFC, is designed for dense and efficient matching-based visual tracking in a one-shot manner. This efficiency essentially benefits from encoding the inner-product similarity between the template image and each spatial location in the search image. Following the SiamFC paradigm, many Siamese network-based one-shot learning methods for initial adaptation~\cite{Li2018SiamRPN, Zhu2018DaSiamRPN, Li2019SiamRPN++, Bhat2019DiMP, Xu2020SiamFC++, Huang2020GlobalTrack, Danelljan2020PrDiMP} and few-shot learning methods for online adaptation~\cite{Yang2017RFL, Yang2018MemTrack, Choi2019MLT, Li2019GradNet, Zhang2019UpdateNet, Yang2021MemTrack} in visual tracking are studied.
MAML~\cite{Finn2017MAML, antoniou2019how}, a model-agnostic meta-learning approach, aims to help the network learn a set of good initialization parameters that are suitable for fine-tuning to produce good generalization performance on a new task efficiently. Since it is compatible with different models trained with gradient descent without changing their architectures, some recent works~\cite{Park2018MetaSDNet, Huang2019SSD-MAML, Wang2020Retina-MAML, Jung2020MetaRTT} formulate tracking in a MAML-based one-shot initial adaptation or few-shot online adaptation framework. They either improve the existing trackers~\cite{Park2018MetaSDNet, Jung2020MetaRTT} or directly convert a modern object detector into a tracker~\cite{Huang2019SSD-MAML, Wang2020Retina-MAML}, leading to improvements in tracking speed and robustness, or reusability of the advancements in object detection.
Continual learning, an attractive route to artificial general intelligence, has been demonstrated to be capable of solving the classic image classification problem~\cite{Kirkpatrick2017EWC, He2018CAB, Zeng2019OWM} or Atari 2600 games~\cite{Kirkpatrick2017EWC} through sequential learning. Besides overcoming catastrophic forgetting in the offline training phase like these earlier works, a preliminary version of this paper published in CVPR'20~\cite{Gao2020RLS-RTMDNet} is the first to improve the online learning procedure by incorporating memory retention and demonstrate its success in visual tracking. It has theoretically and experimentally demonstrated that retaining past memory while updating tracking models sequentially based on RLS estimation can tackle the catastrophic forgetting incurred by plain SGD in the online learning of MLP layers in RT-MDNet~\cite{Jung2018RTMDNet}. The present work goes further to investigate the ability of RLS estimation to improve the online learning of a convolutional layer in tracking. To this end, a recent state-of-the-art tracker DiMP~\cite{Bhat2019DiMP} is exploited as another baseline, which enables us to demonstrate the effectiveness of our approach on the more challenging \textit{VOT2018/2019} benchmarks~\cite{Kristan2018VOTconf, Kristan2019VOTconf}, the large-scale short-term \textit{TrackingNet-test}~\cite{Muller2018TrackingNet} and \textit{GOT10k-test}~\cite{Huang2019GOT} tracking benchmarks, and the long-term \textit{LaSOT-test}~\cite{Fan2019LaSOT}, \textit{OxUvA-dev}~\cite{Valmadre2018OxUvA} and \textit{TLP}~\cite{Moudgil2019TLP} benchmarks. The consistent improvements over the plain GD on several challenging benchmarks demonstrate the efficacy of our online adaptation method without additional offline training or tedious parameter tuning. In the supplementary material, we also provide additional technical details to explain, for instance, how to extend the derivation of normal equations for MLP in~\cite{Gao2020RLS-RTMDNet} to handle non-linear activation functions and how that derivation can be easily transplanted to the case of cross-entropy loss.
\section{RLS Estimator-Aided Online Learning with Memory Retention}
\label{Sec:RLS}
Nowadays, online learning in visual tracking increasingly amounts to learning network parameters online from a carefully designed and time-varying sampling set~\cite{Jung2018RTMDNet, Bhat2019DiMP}. By organizing the network parameters in matrices, the calculations in the networks can be quickly performed using fast linear algebra routines. In this section, a simple least-squares estimation (LSE) problem is first investigated in matrix form to facilitate the interpretation of our approach.
\subsection{LSE in Online Learning}
\label{Sec:Problem}
Considering the time-varying training set $\left\{\bm{X}(n), \bm{Y}(n)\right\}$, consisting of all the observable input samples and desired responses up to the time index $n$, the method of least squares can be used to estimate a set of weighting coefficients in $\mathbf{W}_n$ at the time index $n$ in the online learning procedure, such that the following {\it cost function} is minimized:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:costfunctionLS} \mathscr{L}(\mathbf{W}_n) = \left\Vert \bm{\varLambda}(n)\left(\bm{Y}(n) - \bm{X}(n)\mathbf{W}_n^{\top}\right)\right\Vert^2\! + \delta\beta^n\left\Vert \mathbf{W}_n\right\Vert^2\!,
\end{align}
\end{shrinkeq}
where the left squared error term is defined based on the $l_2$ norm of the {\it residual} vectors, and the regularizing term, controlled by a positive constant $\delta$, is included to stabilize the solution to the recursive formulation of this LSE problem~\cite{Haykin2002AFT, Moustakides1997} as shown in Section~\ref{Sec:RLS-Formulation}.
Here we denote $\bm{X}(n)$, $\bm{Y}(n)$ and $\bm{\varLambda}(n)$ in a {\it block} manner. The sampling set $\left\{\bm{X}(n), \bm{Y}(n)\right\}$ has growing dimensions in the number of data {\it blocks} in rows (growing-window),
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:XY} \bm{X}(n) = \begin{pmatrix}
\bm{X}_1 \\
\vdots \\
\bm{X}_n\\
\end{pmatrix} \in \mathbb{R}^{nb \times p}~, ~~~ \bm{Y}(n) = \begin{pmatrix}
\bm{Y}_1 \\
\vdots \\
\bm{Y}_n\\
\end{pmatrix} \in \mathbb{R}^{nb \times q}~,
\end{align}
\end{shrinkeq}
with $\bm{X}_i \!\in\! \mathbb{R}^{b \times p}, \bm{Y}_i \!\in\! \mathbb{R}^{b \times q}$ and $i = 1,2,\cdots,n$. Here $b$, $p$ and $q$ denote the {\it block} size and the dimensions of the input and output of each sample, respectively. The diagonal weighting matrix
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:Lambda} \bm{\varLambda}(n) = diag\left(\sqrt{\beta}^{n-1}\mathbf{I}_b,\cdots, \sqrt{\beta}\mathbf{I}_b, \mathbf{I}_b\right)
\end{align}
\end{shrinkeq}
is to diminish the importance of those previous observations (rows) with $0 < \beta \leq 1$, which can measure the memory of the algorithm. The regularizing term is also reformulated to make its beneficial effect forgotten with time for $\beta$ less than unity.
According to the method of LSE~\cite{Haykin2002AFT}, the optimum value $\widehat{\mathbf{W}}_n$ at the time index $n$, for which Eq.~\eqref{equ:costfunctionLS} attains its minimum value, is defined by the normal equations. Before deriving them, we can rewrite the {\it cost function} in Eq.~\eqref{equ:costfunctionLS}, up to an immaterial constant scaling by $1/b$ that leaves the minimizer unchanged, as
\begin{shrinkeq}{-1ex}
\begin{gather}
\label{equ:costfunctionLSnew2} \mathscr{L}(\mathbf{W}_n) \!=\! \mathop{\sum}\limits _{i=1}^{n}\beta^{n-i}\!\left(\frac{1}{b}\left\Vert \bm{Y}_i - \bm{X}_i\mathbf{W}_n^{\top}\right\Vert^2\right) \!+\! \frac{\delta}{b}\beta^n\left\Vert \mathbf{W}_n\right\Vert^2~\!\!.
\end{gather}
\end{shrinkeq}
Solving for $\mathbf{W}_n$ for which $\nabla _{\mathbf{W}_n}\mathscr{L}$ is zero, we may write the normal equations as follows in matrix form:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:normalequ_memo1} \widehat{\mathbf{W}}_n = \mathbf{Z}_n\bm{\Phi}_n^{-1}~,
\end{align}
\end{shrinkeq}
where $\mathbf{Z}_n$ and $\bm{\Phi}_n$ are defined respectively by Eqs.~\eqref{equ:normalequ_memo2} and \eqref{equ:normalequ_memo3}, i.e., the time-average cross-correlation matrix $\mathbf{Z}_n$ between the desired output and the input is shown by the formula
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:normalequ_memo2} \mathbf{Z}_n = \mathop{\sum}\limits _{i=1}^{n}\beta^{n-i}\bm{Y}_i^{\top}\bm{X}_i~,
\end{align}
\end{shrinkeq}
and the time-average cross-correlation matrix $\bm{\Phi}_n$ of the input including the regularizing term is defined by
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:normalequ_memo3} \bm{\Phi}_n = \mathop{\sum}\limits _{i=1}^{n}\beta^{n-i}\bm{X}_i^{\top}\bm{X}_i + \delta\beta^{n}\mathbf{I}~.
\end{align}
\end{shrinkeq}
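For concreteness, the following NumPy sketch (our own illustration with arbitrary dimensions and random data) accumulates $\mathbf{Z}_n$ and $\bm{\Phi}_n$ from a stream of data {\it blocks} and recovers $\widehat{\mathbf{W}}_n$ via Eq.~\eqref{equ:normalequ_memo1}; the final assertion confirms that the gradient of Eq.~\eqref{equ:costfunctionLSnew2} vanishes at the solution:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, b, p, q = 6, 4, 8, 3              # number of blocks, block size, dims
beta, delta = 0.95, 1.0
blocks = [(rng.normal(size=(b, p)), rng.normal(size=(b, q)))
          for _ in range(n)]

Z = np.zeros((q, p))                 # cross-correlation Z_n
Phi = delta * beta**n * np.eye(p)    # regularizer delta * beta^n * I
for i, (Xi, Yi) in enumerate(blocks, start=1):
    w = beta ** (n - i)              # exponential forgetting weight
    Z += w * Yi.T @ Xi
    Phi += w * Xi.T @ Xi

W_hat = Z @ np.linalg.inv(Phi)       # normal equations
assert np.allclose(W_hat @ Phi - Z, 0.0, atol=1e-8)
\end{verbatim}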
In practice, SGD or its mini-batch version (MBSGD) is commonly used to optimize the parameters in $\mathbf{W}_n$ to obtain an approximate optimum value $\widehat{\mathbf{W}}_{n}^{appr}$ for problems similar to Eq.~\eqref{equ:costfunctionLS} or \eqref{equ:costfunctionLSnew2}. The initialization of this optimization at the time index $n$ is commonly set to $\widehat{\mathbf{W}}_{n-1}^{appr}$, the approximate optimum value at the time index $n-1$. This iterative optimization procedure for online learning not only avoids the computationally expensive inverse operation in Eq.~\eqref{equ:normalequ_memo1} for high-dimensional input data, but also facilitates the training of networks in deep learning by allowing the continuously growing dataset to be processed with limited GPU memory. As the number of arriving {\it blocks} increases, however, fine-tuning the parameters over the whole increasingly larger training set $\left\{\bm{X}(n), \bm{Y}(n)\right\}$ requires more gradient descent iterations to achieve good convergence, which leads to a significant computational overhead and thus prohibits high tracking speed. Moreover, the growing training set results in growing storage consumption.
In order to make a compromise between the optimization and tracking speed, traditional online learning in visual tracking commonly uses a carefully designed {\it sliding-window} strategy with fixed-window size to incorporate the new data segments and also remove the influence of the obsolete data. Similar to this traditional online learning paradigm, a strategy to solve Eq.~\eqref{equ:costfunctionLS} can be derived as follows. First, if we denote $s$ as the number of {\it blocks} of the fixed-window size, then for $n \geq s$, the fixed-window data can be written from the growing-window data given in Eq.~\eqref{equ:XY} by discarding the oldest $n-s$ {\it blocks} of data, i.e.,
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:XYnew} \bm{X}(n) = \begin{pmatrix}
\bm{X}_{n-s+1} \\
\vdots \\
\bm{X}_n\\
\end{pmatrix} ~, ~~~ \bm{Y}(n) = \begin{pmatrix}
\bm{Y}_{n-s+1} \\
\vdots \\
\bm{Y}_n\\
\end{pmatrix}~.
\end{align}
\end{shrinkeq}
Second, the approximate optimum value $\widehat{\mathbf{W}}_n^{appr}$ can be obtained by minimizing Eq.~\eqref{equ:costfunctionLS} based on some optimization strategies, e.g., standard gradient descent or its stochastic twin SGD as in RT-MDNet~\cite{Jung2018RTMDNet}, and Steepest Descent (SD) methodology as in DiMP~\cite{Bhat2019DiMP}. The initialization at the time index $n$ is also set to $\widehat{\mathbf{W}}_{n-1}^{appr}$. For instance, the standard batch gradient descent (BGD) can be used to optimize $\mathbf{W}_n$ based on the following iterations:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:derivativeG} \mathbf{W}_n^{iter+1} &= \mathbf{W}_n^{iter} - \eta \left(\mathbf{W}_n^{iter}\bm{\Phi}_n - \mathbf{Z}_n\right)~,\\
\label{equ:updateG} \mathbf{W}_n^{0} &= \widehat{\mathbf{W}}_{n-1}^{appr}~.
\end{align}
\end{shrinkeq}
Here $\eta$ is a pre-defined learning rate, and $\bm{\Phi}_n$ and $\mathbf{Z}_n$ are obtained by summing over the fixed-window data.
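A minimal sketch of this fixed-window baseline (our own illustration; the actual trackers employ more elaborate optimizers and losses) implements Eqs.~\eqref{equ:derivativeG} and \eqref{equ:updateG} directly:
\begin{verbatim}
import numpy as np

def bgd_window_update(W_prev, window, beta, delta, eta, iters=100):
    """One update step of BGD over the fixed-window blocks."""
    s = len(window)                  # number of blocks in the window
    q, p = W_prev.shape
    Z, Phi = np.zeros((q, p)), delta * beta**s * np.eye(p)
    for i, (Xi, Yi) in enumerate(window, start=1):
        w = beta ** (s - i)
        Z += w * Yi.T @ Xi
        Phi += w * Xi.T @ Xi
    W = W_prev.copy()                # warm start, cf. Eq. (updateG)
    for _ in range(iters):
        W -= eta * (W @ Phi - Z)     # cf. Eq. (derivativeG)
    return W

rng = np.random.default_rng(3)
window = [(rng.normal(size=(4, 8)), rng.normal(size=(4, 3)))
          for _ in range(5)]
W = bgd_window_update(np.zeros((3, 8)), window,
                      beta=0.95, delta=1.0, eta=0.01)
\end{verbatim}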
Despite the success of this {\it sliding-window} strategy with fixed-window size, the loss of historical memory limits the power of online deep tracking in~\cite{Jung2018RTMDNet, Bhat2019DiMP}, because this strategy is prone to overfitting to the recent fixed-window data, especially when the window size is limited. This raises an interesting question that we discuss in the next subsection: Can we find a more elegant path to retain the memory without significantly increasing either the optimization time or the storage consumption? We hope our work and the presented online learning approach allow researchers to focus on the still unsolved critical challenges of online deep tracking.
\subsection{Recursive Formulation with Memory Retention}
\label{Sec:RLS-Formulation}
This section investigates the relationship between $\widehat{\mathbf{W}}_{n}$ and $\widehat{\mathbf{W}}_{n-1}$ at consecutive time indices and presents an interesting alternative to solving Eq.~\eqref{equ:costfunctionLS} in a recursive fashion~\cite{Haykin2002AFT}. In contrast to the aggressive online learning in the previous {\it sliding-window} strategy, the characteristic of this recursive online learning can spontaneously reduce the risk of overfitting thanks to the memory retention, even though the oldest {\it blocks} of data are discarded. In other words, the catastrophic forgetting of old knowledge about discriminating the discarded training samples during the independent optimization over the newly updated fixed-window sample set in Eqs.~\eqref{equ:derivativeG} and \eqref{equ:updateG} is mitigated.
To simplify the derivation, we only consider one sample pair in each {\it block}, which means $b=1$ in Eq.~\eqref{equ:costfunctionLS}. Letting $\bm{X}_i = \mathbf{x}_i^{\top}$ and $\bm{Y}_i = \mathbf{y}_i^{\top}$, the {\it cost function} of Eq.~\eqref{equ:costfunctionLS} reduces to
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:costfunctionLSnew} \mathscr{L}(\mathbf{W}_n) = \mathop{\sum}\limits _{i=1}^{n}\beta^{n-i}\left\Vert \mathbf{y}_i - \mathbf{W}_n\mathbf{x}_i\right\Vert^2 + \delta\beta^n\left\Vert \mathbf{W}_n\right\Vert^2~,
\end{align}
\end{shrinkeq}
and the cross-correlation matrices in Eq.~\eqref{equ:normalequ_memo1} are defined by
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:normalequ_memo2new} \mathbf{Z}_n &= \mathop{\sum}\limits _{i=1}^{n}\beta^{n-i}\mathbf{y}_{i}\mathbf{x}_i^{\top}\\
\label{equ:normalequ_memo3new} \bm{\Phi}_n &= \mathop{\sum}\limits _{i=1}^{n}\beta^{n-i}\mathbf{x}_i\mathbf{x}_i^{\top} + \delta\beta^{n}\mathbf{I}~.
\end{align}
\end{shrinkeq}
The normal equations in the case of $b = 1$ can be seen as an expansion, by summing over all the historical observations, of the following single-sample solution:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:normalequ_singlesample} \widehat{\mathbf{W}}_n = \mathbf{y}_{n}\mathbf{x}_n^{\top}\left(\mathbf{x}_n\mathbf{x}_n^{\top} + \delta\mathbf{I}\right)^{-1}~.
\end{align}
\end{shrinkeq}
Isolating the terms corresponding to $i = n$ from the rest of the summations in Eqs.~\eqref{equ:normalequ_memo2new} and \eqref{equ:normalequ_memo3new} yields the following recursions for updating $\mathbf{Z}_n$ and $\bm{\Phi}_n$ respectively:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:normalequ_memo4} \mathbf{Z}_n &= \beta\mathbf{Z}_{n-1} + \mathbf{y}_{n}\mathbf{x}_n^{\top}~,\\
\label{equ:normalequ_memo5} \bm{\Phi}_n &= \beta\bm{\Phi}_{n-1} + \mathbf{x}_n\mathbf{x}_n^{\top}~.
\end{align}
\end{shrinkeq}
With $\bm{\Phi}_n$ assumed to be non-singular and thus invertible, we may obtain the following recursive equation for the inverse of $\bm{\Phi}_n$ by applying the Sherman-Morrison formula~\cite{Hager1989Inverse}
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:inverse_Phi} \bm{\Phi}_n^{-1} = \beta^{-1}\bm{\Phi}_{n-1}^{-1} - \frac{\beta^{-2}\bm{\Phi}_{n-1}^{-1}\mathbf{x}_n\mathbf{x}_n^{\top}\bm{\Phi}_{n-1}^{-1}}{1 + \beta^{-1}\mathbf{x}_n^{\top}\bm{\Phi}_{n-1}^{-1}\mathbf{x}_n}~.
\end{align}
\end{shrinkeq}
If we denote $\mathbf{P}_{n} = \bm{\Phi}_n^{-1}$ and let
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:kn} \mathbf{k}_{n} = \frac{\beta^{-1}\mathbf{x}_n^{\top}\mathbf{P}_{n-1}}{1 + \beta^{-1}\mathbf{x}_n^{\top}\mathbf{P}_{n-1}\mathbf{x}_n}~,
\end{align}
\end{shrinkeq}
then Eq.~\eqref{equ:inverse_Phi} can be rewritten as
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:inverse_Phi2} \mathbf{P}_{n} = \beta^{-1}\mathbf{P}_{n-1} - \beta^{-1}\mathbf{P}_{n-1}\mathbf{x}_n\mathbf{k}_{n}~.
\end{align}
\end{shrinkeq}
Interestingly, premultiplying each side of Eq.~\eqref{equ:inverse_Phi2} by $\mathbf{x}_n^{\top}$ yields the following simple expression
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:kn2} \mathbf{x}_n^{\top}\mathbf{P}_{n} &\!=\! \beta^{-1}\mathbf{x}_n^{\top}\mathbf{P}_{n-1} - \beta^{-1}\mathbf{x}_n^{\top}\mathbf{P}_{n-1}\mathbf{x}_n\mathbf{k}_{n} \!=\! \mathbf{k}_{n}~.
\end{align}
\end{shrinkeq}
This facilitates the derivation for obtaining a recursive formula for the normal equations in the case of $b = 1$ when we substitute Eqs.~\eqref{equ:normalequ_memo4} and \eqref{equ:inverse_Phi2} into Eq.~\eqref{equ:normalequ_memo1}
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:normalequ_memo6} \widehat{\mathbf{W}}_n &= \beta\mathbf{Z}_{n-1}\mathbf{P}_{n} + \mathbf{y}_{n}\mathbf{x}_n^{\top}\mathbf{P}_{n}\nonumber\\
&= \mathbf{Z}_{n-1}\mathbf{P}_{n-1} - \mathbf{Z}_{n-1}\mathbf{P}_{n-1}\mathbf{x}_n\mathbf{k}_{n} + \mathbf{y}_{n}\mathbf{k}_{n}\nonumber\\
&= \widehat{\mathbf{W}}_{n-1} + \left(\mathbf{y}_{n} - \widehat{\mathbf{W}}_{n-1}\mathbf{x}_n\right)\mathbf{k}_{n}\nonumber\\
&= \widehat{\mathbf{W}}_{n-1} - \left(\widehat{\mathbf{W}}_{n-1}\mathbf{x}_n - \mathbf{y}_{n}\right)\mathbf{x}_n^{\top}\mathbf{P}_{n}~,
\end{align}
\end{shrinkeq}
where the expression inside the brackets on the right-hand side of the last line represents the {\it residual} based on the old least-squares estimate of the parameters to be learned.
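The recursion can be verified numerically: the following sketch (our own, with arbitrary dimensions and random data) runs Eqs.~\eqref{equ:kn}, \eqref{equ:inverse_Phi2} and \eqref{equ:normalequ_memo6} over a stream of single samples and checks agreement with the batch normal-equation solution at every step:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
p, q, n = 8, 3, 20
beta, delta = 0.95, 1.0

P = np.eye(p) / delta                # P_0 = I / delta
W = np.zeros((q, p))
Z, Phi = np.zeros((q, p)), delta * np.eye(p)

for _ in range(n):
    x, y = rng.normal(size=p), rng.normal(size=q)
    # recursive RLS update
    k = (x @ P / beta) / (1.0 + x @ P @ x / beta)  # Eq. (kn)
    P = P / beta - np.outer(P @ x, k) / beta       # Eq. (inverse_Phi2)
    W = W - np.outer(W @ x - y, k)                 # Eq. (normalequ_memo6)
    # batch reference via the normal equations
    Z = beta * Z + np.outer(y, x)
    Phi = beta * Phi + np.outer(x, x)
    assert np.allclose(W, Z @ np.linalg.inv(Phi), atol=1e-6)
\end{verbatim}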
Extending the above derivation to the case of $b > 1$ is not straightforward, because Eqs.~\eqref{equ:normalequ_memo4} and \eqref{equ:normalequ_memo5} expand to
\begin{shrinkeq}{-1ex}
\begin{gather}
\label{equ:normalequ_memo4new} \mathbf{Z}_n = \beta\mathbf{Z}_{n-1} + \bm{Y}_n^{\top}\bm{X}_n~,\\
\label{equ:normalequ_memo5new} \bm{\Phi}_n = \beta\bm{\Phi}_{n-1} + \bm{X}_n^{\top}\bm{X}_n~,
\end{gather}
\end{shrinkeq}
where
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:XYmean} \bm{X}_i = \begin{pmatrix}
\mathbf{x}_{i,1}^{\top} \\
\vdots \\
\mathbf{x}_{i,b}^{\top}\\
\end{pmatrix} ~~~\mathrm{and}~~~ \bm{Y}_i = \begin{pmatrix}
\mathbf{y}_{i,1}^{\top} \\
\vdots \\
\mathbf{y}_{i,b}^{\top}\\
\end{pmatrix}~.
\end{align}
\end{shrinkeq}
This obviously prohibits the derivation of Eq.~\eqref{equ:inverse_Phi}. Alternatively, using the fact that the mean squared error inside the brackets of Eq.~\eqref{equ:costfunctionLSnew2}, which is based on the $l_2$ norm of the {\it residual} vectors for all of the sample pairs in the $i_{th}$ {\it block}, serves as an upper bound of the error term for the virtual sample pair $\left\{\overline{\mathbf{x}}_i, \overline{\mathbf{y}}_i\right\}$, i.e.,
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:submultiplicativity} \left\Vert \overline{\mathbf{y}}_i - \mathbf{W}_n\overline{\mathbf{x}}_i\right\Vert^2 \leq \frac{1}{b}\left\Vert \bm{Y}_i - \bm{X}_i\mathbf{W}_n^{\top}\right\Vert^2~,
\end{align}
\end{shrinkeq}
where
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:mean1} \overline{\mathbf{x}}_i = \frac{1}{b}\mathop{\sum}\limits _{j=1}^{b}\mathbf{x}_{i, j} ~~~\mathrm{and}~~~ \overline{\mathbf{y}}_i = \frac{1}{b}\mathop{\sum}\limits _{j=1}^{b}\mathbf{y}_{i, j}~,
\end{align}
\end{shrinkeq}
we can instead approximately obtain the recursive formula for the normal equations in the case of $b > 1$ based on this virtual sample pair:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:RLS_update1_2} \widehat{\mathbf{W}}_n = \widehat{\mathbf{W}}_{n-1} - \left(\widehat{\mathbf{W}}_{n-1}\overline{\mathbf{x}}_n - \overline{\mathbf{y}}_{n}\right)\overline{\mathbf{x}}_n^{\top}\mathbf{P}_{n}~,
\end{align}
\end{shrinkeq}
where $\mathbf{P}_{n}$ is updated based on $\overline{\mathbf{x}}_n$ using Eqs.~\eqref{equ:kn} and \eqref{equ:inverse_Phi2}.
In practice, the exact optimum value $\widehat{\mathbf{W}}_{n-1}$ can hardly be obtained, and an approximate optimum value $\widehat{\mathbf{W}}_{n-1}^{appr}$ is commonly obtained from some optimization algorithm. However, we cannot directly use Eq.~\eqref{equ:RLS_update1_2} to obtain $\widehat{\mathbf{W}}_{n}^{appr}$, as the relationship between $\widehat{\mathbf{W}}_{n}^{appr}$ and $\widehat{\mathbf{W}}_{n-1}^{appr}$ does not rigorously comply with Eq.~\eqref{equ:RLS_update1_2}. Inspired by the gradient descent paradigm, we can, for instance, model the change between $\widehat{\mathbf{W}}_{n-1}^{appr}$ and $\widehat{\mathbf{W}}_{n}^{appr}$ by the following iterations:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:derivativenew} \mathbf{W}_n^{iter+1} &= \mathbf{W}_n^{iter} - \eta \left(\mathbf{W}_n^{iter}\overline{\mathbf{x}}_n - \overline{\mathbf{y}}_{n}\right)\overline{\mathbf{x}}_n^{\top}\mathbf{P}_{n}~,\\
\label{equ:updatenew} \mathbf{W}_n^{0} &= \widehat{\mathbf{W}}_{n-1}^{appr}~,
\end{align}
\end{shrinkeq}
where $\eta$ is another pre-defined learning rate. From the above equations, we see that the approximate optimum value $\widehat{\mathbf{W}}_{n}^{appr}$ is obtained based only on the most recently arrived data {\it block} $n$, though it is dedicated to satisfying the {\it cost function} in Eq.~\eqref{equ:costfunctionLSnew2} and hence retaining the historical memory. Since the matrix product $\left(\mathbf{W}_{n}^{iter}\overline{\mathbf{x}}_n - \overline{\mathbf{y}}_{n}\right)\overline{\mathbf{x}}_n^{\top}$ represents the gradient with respect to the weighting parameters $\mathbf{W}_{n}^{iter}$ for the virtual input $\overline{\mathbf{x}}_n$ of the $n_{th}$ data {\it block} under the squared-error loss $\frac{1}{2}\left\Vert \overline{\mathbf{y}}_n - \mathbf{W}_n^{iter}\overline{\mathbf{x}}_n\right\Vert^2$ (without the $l_2$ regularization term), we can consequently cast the iterations in Eqs.~\eqref{equ:derivativenew} and \eqref{equ:updatenew} as an improved gradient descent algorithm for online learning with memory retention. Note that the initial condition of $\mathbf{P}_{n}$ is set to
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:initialcondition} \mathbf{P}_{0} = \bm{\Phi}_0^{-1} = \left(\delta\mathbf{I}\right)^{-1} = \left.\mathbf{I}\middle/\delta\right.~.
\end{align}
\end{shrinkeq}
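Putting the pieces together, a compact sketch of Eqs.~\eqref{equ:derivativenew}--\eqref{equ:initialcondition} (our own illustration, with arbitrary dimensions and random data) reads as follows; note that with $\eta = 1$ and a single iteration per {\it block} it coincides with the exact recursion of Eq.~\eqref{equ:RLS_update1_2}:
\begin{verbatim}
import numpy as np

def rls_aided_step(W_prev, X_blk, Y_blk, P, beta, eta, iters=1):
    """One RLS estimator-aided update on a new data block (b > 1)."""
    x_bar = X_blk.mean(axis=0)       # virtual input, Eq. (mean1)
    y_bar = Y_blk.mean(axis=0)       # virtual output, Eq. (mean1)
    k = (x_bar @ P / beta) / (1.0 + x_bar @ P @ x_bar / beta)
    P = P / beta - np.outer(P @ x_bar, k) / beta
    W = W_prev.copy()                # warm start, Eq. (updatenew)
    for _ in range(iters):           # Eq. (derivativenew)
        W -= eta * np.outer(W @ x_bar - y_bar, x_bar) @ P
    return W, P

rng = np.random.default_rng(5)
p, q, b, delta = 8, 3, 4, 1.0
W, P = np.zeros((q, p)), np.eye(p) / delta   # Eq. (initialcondition)
for _ in range(10):                          # stream of data blocks
    Xb, Yb = rng.normal(size=(b, p)), rng.normal(size=(b, q))
    W, P = rls_aided_step(W, Xb, Yb, P, beta=0.95, eta=1.0)
\end{verbatim}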
\textit{Discussion.} The traditional gradient descent algorithms following the {\it sliding-window} strategy for online learning compute the gradient at each optimization iteration based on a {\it cost function} similar to Eq.~\eqref{equ:costfunctionLS} for the whole batch of fixed-window data (BGD) or a mini-batch thereof after a random shuffle operation (MBSGD). An alternative strategy for improving these traditional algorithms to retain memory, with minimal changes to their settings and implementations, is to cast the whole batch (for BGD) or one mini-batch (for MBSGD) of data as a data {\it block} at some time index in Eq.~\eqref{equ:costfunctionLSnew2}, and then improve the optimization iterations based on our proposed approach in Eqs.~\eqref{equ:derivativenew} and \eqref{equ:updatenew}. Note that the gradient term based on the virtual sample pair in Eq.~\eqref{equ:derivativenew} can be approximately replaced with the real gradient at each optimization iteration computed from BGD or MBSGD.
\section{Recursive Formulation for Multiple Fully-connected Layers in RT-MDNet}
\label{Sec:MLP}
In the online deep tracker RT-MDNet~\cite{Jung2018RTMDNet} that is dedicated to target classification, the multi-layer perceptron (MLP) with several fully-connected $1$-$D$ layers is applied to the last stage of the deep learning architecture for classification. The features present in the final $2$-$D$ feature maps are concatenated into one long input vector to the following MLP. In this section, we will show how our proposed RLS estimator-aided online learning approach for memory retention can be realized in this single-head real-time tracker to online learn those MLP layers.
\textit{Online Learning in RT-MDNet.} Identically to MDNet~\cite{Nam2016MDNet}, RT-MDNet only updates the fully-connected MLP layers (\verb'fc4-6') $\left\{\mathbf{W}^l\right\}_{l=4}^6$ in an online manner while keeping the convolutional feature extraction layers (\verb'conv1-3') $\left\{\mathbf{W}^l\right\}_{l=1}^3$ fixed, namely the update stage. Before that, the MLP layers also need to be fine-tuned using the samples from the initial frame to customize themselves to a new testing domain, namely the initialization stage. Two types of memory are maintained in the update stage to make a compromise between robustness and adaptiveness: the long-term memory, used for regular updates with samples collected over a long period of time, and the recent short-term memory, used for occasional updates triggered whenever the score of the estimated target falls below a threshold, indicating unreliable tracking.
During online tracking, the cross-entropy loss based on the softmax output of the single fully-connected binary classification layer $\mathbf{W}^6$ is used to fine-tune or update all the MLP layers, and the MBSGD optimization algorithm is exploited, which is a compromise between BGD and single-instance SGD. Specifically, supposing there are $b$ samples in one mini-batch used for one optimization iteration of the MBSGD process, it is the following partial derivative of the mean Binary Cross Entropy (BCE) loss over all the $b$ samples that is used for updating $\mathbf{W}^l$ of each layer:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:derivative} \Delta \mathbf{W}^l &\!=\! \nabla _{\mathbf{W}^l}\left[\frac{1}{b}\mathop{\sum}\limits _{j=1}^{b}BCE\left(\mathbf{x}_j, \mathbf{y}_{j}, \left\{\mathbf{W}^l\right\}_{l=4}^6\right)\right] \!+\! \lambda^l_{r}\mathbf{W}^l~,\\
\label{equ:update} \mathbf{W}^l &\longleftarrow \mathbf{W}^l - \eta^l_{r} \Delta \mathbf{W}^l~,
\end{align}
\end{shrinkeq}
where $\mathbf{x}_j$ is the long input vector of the sample $j$ in the current mini-batch, $\mathbf{y}_j$ is the class (positive/negative) prediction vector, and $\lambda^l_{r}$ and $\eta^l_r$ are the weight decay and pre-defined learning rate, respectively, in the MBSGD process of RT-MDNet. The weight decay encourages the learned weights to be small in magnitude to improve their generalization performance~\cite{Zhang2019WeightDecay}.
\begin{algorithm}[!t]
\setstretch{1.1}
\small
\DontPrintSemicolon
\KwIn{Off-the-shelf deep RT-MDNet tracking model with multiple layers $\left\{\mathbf{W}^l\right\}_{l=1}^6$, initial $\mathbf{P}^l_0 = \left.\mathbf{I}\middle/\delta^l_r\right.$ for each fully-connected layer ($4\leq l \leq 6$), and test sequence with first frame annotated}
\KwOut{Estimated target states in the rest frames}
\lnl{InRes1}Pre-process identical to RT-MDNet, e.g., draw positive sample set $S_1^+$ and negative sample set $S_1^-$\;
\lnl{InRes2}Fine-tune $\left\{\mathbf{W}^l\right\}_{l=4}^6$ using $S_1^+ \!\cup\! S_1^-$ in initialization stage:\;
\For{the $n_{\mathrm{th}}$ improved MBSGD iteration}{
Update $\mathbf{P}^l_n$ based on the virtual input $\overline{\mathbf{x}}_n^l$ using Eqs.~\eqref{equ:kn} and \eqref{equ:inverse_Phi2}\;
Update $\mathbf{W}^l \leftarrow \mathbf{W}^l_n$ using Eqs.~\eqref{equ:derivative} and \eqref{equ:RLS_update_RTMDNet}\;
}
\lnl{InRes3}Initialize the frame index set $\mathcal{T} \leftarrow \{1\}$\;
\Repeat{end of sequence}{
\lnl{InRes4}Find the optimal target state like in RT-MDNet\;
\If{\texttt{target score} $> 0$}{\lnl{InRes5}Draw $S_t^+$ and $S_t^-$, and set $\mathcal{T} \leftarrow \mathcal{T} \cup \{t\}$\;
\lnl{InRes6}\lIf{$|\mathcal{T}| > \tau$}{$\mathcal{T} \leftarrow \mathcal{T} \setminus \left\{\min_{\upsilon \in \mathcal{T}}\upsilon\right\}$}
}
\If{\texttt{target score} $\leq 0$}{\lnl{InRes7}\lIf{$\left\{\mathbf{W}^l_{\mathrm{bk}}\right\}_{l=4}^6 = \varnothing$}{create backups $\left\{\mathbf{W}^l_{\mathrm{bk}}\right\}_{l=4}^6 \leftarrow \left\{\mathbf{W}^l\right\}_{l=4}^6$}
\lnl{InRes8}Occasionally update $\left\{\mathbf{W}^l\right\}_{l=4}^6$ using $S_{\upsilon \in \mathcal{T}}^+ \cup S_{\upsilon \in \mathcal{T}}^-$ based on the original MBSGD
}
\ElseIf{$t$ \textbf{\texttt{mod}} $10 = 0$}{\lnl{InRes9}\lIf{$\left\{\mathbf{W}^l_{\mathrm{bk}}\right\}_{l=4}^6\! \neq \!\varnothing$}{recover backups $\left\{\mathbf{W}^l\right\}_{l=4}^6 \!\! \leftarrow \!\! \left\{\mathbf{W}^l_{\mathrm{bk}}\right\}_{l=4}^6$, and set $\left\{\mathbf{W}^l_{\mathrm{bk}}\right\}_{l=4}^6 = \varnothing$}
\lnl{InRes10}Regularly update $\left\{\mathbf{W}^l\right\}_{l=4}^6$ using $S_{\upsilon \in \mathcal{T}}^+ \cup S_{\upsilon \in \mathcal{T}}^-$:\;
\For{the $n_{\mathrm{th}}$ improved MBSGD iteration}{
Update $\mathbf{P}^l_n$ based on the virtual input $\overline{\mathbf{x}}_n^l$ using Eqs.~\eqref{equ:kn} and \eqref{equ:inverse_Phi2}\;
Update $\mathbf{W}^l \leftarrow \mathbf{W}^l_n$ using Eqs.~\eqref{equ:derivative} and \eqref{equ:RLS_update_RTMDNet}\;
}
}
}
\caption{RLS Estimator-Aided RT-MDNet}
\label{algo:RLS_RTMDNet}
\end{algorithm}
\textit{RLS Estimator-Aided RT-MDNet.} In our preliminary version~\cite{Gao2020RLS-RTMDNet}, we have claimed and demonstrated that each layer of an MLP in the case of the squared-error loss function can be represented by a system of normal equations for RLS estimation by simply ignoring the non-linear activation functions. In the present work, we add more explanations on how to extend the derivation of normal equations for MLP in~\cite{Gao2020RLS-RTMDNet} to generally handle other layers, including non-linear activation functions, and leave them in the supplementary material due to space constraints. We also provide more explanations there on why all the derivations in the demonstration can be easily transplanted to the case of cross-entropy loss, and refer the readers to~\cite{Saitoh2021DLB} for more details. This means we can exploit our proposed RLS estimator-aided online learning approach to improve the updating of each MLP layer in RT-MDNet in order to retain memory. Specifically, if we cast one mini-batch of samples used for one optimization iteration of the MBSGD process as a data {\it block} at some time index $n$ as in Section~\ref{Sec:RLS}, then this optimization iteration can be carried out as follows
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:RLS_update_RTMDNet} \mathbf{W}_n^{l} &= \mathbf{W}_{n-1}^{l} - \widetilde{\eta}^l_{r} \Delta \mathbf{W}_n^{l}\mathbf{P}_{n}^l~,\\
\label{equ:RLS_update2_RTMDNet} \mathbf{P}_{0}^l &= \left.\mathbf{I}\middle/\delta^l_r\right.~,
\end{align}
\end{shrinkeq}
for memory retention, where $\widetilde{\eta}^l_{r}$ is the new learning rate, $\delta^l_r$ is the positive constant for RLS estimation and the term $\mathbf{W}_{n-1}^l\left(\mathbf{I} - \widetilde{\eta}^l_{r}\lambda^l_{r}\mathbf{P}_{n}^l\right)$ serves as the weight decay for optimization. Note that Eq.~\eqref{equ:RLS_update_RTMDNet} is only implemented once for improving the MBSGD process, in contrast to Eq.~\eqref{equ:derivativenew}, as the same mini-batch of samples may appear several times for optimization at each update step. $\mathbf{P}_{n}^l$ for each layer is updated using Eqs.~\eqref{equ:kn} and \eqref{equ:inverse_Phi2} based on each layer's virtual input
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:mean2} \overline{\mathbf{x}}_n^l = \frac{1}{b}\mathop{\sum}\limits _{j=1}^{b}\mathbf{x}_{j}^l~,
\end{align}
\end{shrinkeq}
where $\mathbf{x}_{j}^l$ is the input to layer $l$ for the sample $j$.
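In NumPy-like form, one improved MBSGD iteration for a single fully-connected layer can thus be sketched as follows (our own illustration; the variable \verb'grad_W' is a hypothetical placeholder for the full derivative $\Delta\mathbf{W}^l_n$ of Eq.~\eqref{equ:derivative} returned by backpropagation, including weight decay):
\begin{verbatim}
import numpy as np

def rls_mbsgd_layer_step(W, P, layer_inputs, grad_W, beta, eta_tilde):
    """RLS estimator-aided update of one MLP layer.

    layer_inputs: (b, p) array of inputs x_j^l in the mini-batch
    grad_W:       (q, p) backprop gradient, incl. weight decay
    """
    x_bar = layer_inputs.mean(axis=0)  # virtual input, Eq. (mean2)
    k = (x_bar @ P / beta) / (1.0 + x_bar @ P @ x_bar / beta)
    P = P / beta - np.outer(P @ x_bar, k) / beta
    W = W - eta_tilde * grad_W @ P     # Eq. (RLS_update_RTMDNet)
    return W, P
\end{verbatim}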
Since we only concentrate on the online tracking part to improve RT-MDNet, we leave all the other parts of RT-MDNet almost unchanged, including, for example, the improved RoIAlign technique, all the hyper-parameter counterparts, and the off-the-shelf model trained with the additional instance embedding loss in their paper. As the RLS estimator-aided online learning can overcome catastrophic forgetting of historical memory, we no longer need to maintain the long-term memory to achieve robustness. That is to say, we still use the same recent short-term memory for the RLS estimator-aided regular updates, while the setting of the occasional updates triggered by unreliable tracking is preserved intact, because there may be failure cases for the regularly updated model, whereas the occasionally updated model, customized (overfitted) to the recent short-term memory, may work well with more discriminative power. Note that the regularly updated model is fixed during the occasional updates. More details about the improved RT-MDNet with the RLS estimator-aided online learning, namely RLS-RTMDNet, are given in Algorithm~\ref{algo:RLS_RTMDNet}.
\section{Recursive Formulation for Convolutional Layers in DiMP}
\label{Sec:CNN}
In the recent state-of-the-art online tracker DiMP~\cite{Bhat2019DiMP}, which is dedicated to both target classification and location estimation, high-level knowledge is incorporated to train the target location estimation component, following its pioneering work ATOM~\cite{Danelljan2019ATOM}, through extensive offline learning so as to predict the overlap between the target object and an estimated bounding box; meanwhile, a more powerful classification component is carefully designed to incorporate additional prior knowledge for fast initial adaptation through the offline meta-training process, and is finally updated online to guarantee high discriminative power in the presence of distractors. Specifically, the target model of the target classification component is constituted by the filter weights of a convolutional layer, and it provides target classification scores as regression output without a non-linear output activation. It also employs the block features from a ResNet-based backbone, which is pre-trained on ImageNet~\cite{Russakovsky2015ImageNet} and fine-tuned on tracking datasets along with the target classification and location estimation components. In this section, we will show how our proposed RLS estimator-aided online learning approach for memory retention can be realized in this multi-head real-time tracker to online learn the convolutional layer.
\textit{Online Learning in DiMP.} Besides the target model's filter weights, the target classification component also includes an initializer network for initial filter prediction and several free parameter predictors for setting the target mask, spatial weight, regularization factor and even the regression target in the discriminative learning loss. In the first frame, the filter weights are initialized by the initializer network and then fine-tuned on the initial training samples from data augmentation, namely the initialization stage. Subsequently, the extracted feature map annotated by a Gaussian centered at the estimated target location in every frame is added to the fixed-window sample set as a training sample. During online tracking, the convolutional layer with a single output channel is optimized online every $20_{th}$ frame, namely the update stage.
Given the training set of samples, we can approximately formulate the online learning objective in DiMP as follows based on the $l_2$ norm of the {\it residuals}:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:costfunctionLSATOM} \mathscr{L}(\mathbf{W}) = \mathop{\sum}\limits _{j=1}^{N}\left\Vert \bm{\Gamma}_j \odot \left(\mathbf{Y}_j - \mathbf{W}\ast\mathbf{X}_j\right)\right\Vert^2 + \frac{\lambda_d}{2}\left\Vert \mathbf{W}\right\Vert^2~.
\end{align}
\end{shrinkeq}
Here, $\mathbf{X}_j$ is the extracted feature map of the sample $j$, $\mathbf{Y}_j$ is its annotated Gaussian confidence map, $\mathbf{W}$ represents the weights of the convolutional layer, $\ast$ denotes standard multi-channel convolution, $\lambda_d$ is the weight decay parameter, $N$ is the fixed-window size, $\odot$ denotes Hadamard product, and the impact of each training sample along with its spatial weight is controlled by the weighting factor $\bm{\Gamma}_j$.
\begin{algorithm}[!t]
\setstretch{1.1}
\small
\DontPrintSemicolon
\KwIn{Off-the-shelf deep DiMP tracking model with one convolutional layer $\mathbf{W}$ for target classification, initial $\mathbf{P}_0 = \left.\mathbf{I}\middle/\delta_d\right.$ for this layer $\mathbf{W}$, and test sequence with first frame annotated}
\KwOut{Estimated target states in the rest frames}
\lnl{InRes1}Initialize the fixed-window sample set $S$ with the data augmented samples extracted from the first frame\;
\lnl{InRes2}Predict a reasonable initial estimate of $\mathbf{W}$ and then fine-tune it using the SD methodology identical to DiMP in initialization stage, and set $\widehat{\mathbf{W}}_{0}^{appr} = \mathbf{W}$ in the following\;
\Repeat{end of sequence}{
\lnl{InRes3}Find the optimal target state like in DiMP\;
\If{\texttt{update\_flag}}{\lnl{InRes4}Update the sample set $S$ along with its sample weights identical to DiMP\;
}
\If{$t-1$ \textbf{\texttt{mod}} $20 = 0$ or \texttt{hard\_negative}}{
\For{the $n_{\mathrm{th}}$ improved BGD procedure}{
Set $\mathbf{W}_n^{0} = \widehat{\mathbf{W}}_{n-1}^{appr}$\;
Update $\mathbf{P}_n$ based on the virtual input $\overline{\mathbf{x}}_n$ using Eqs.~\eqref{equ:kn} and \eqref{equ:inverse_Phi2}\;
\Repeat{end of iteration}{
Compute the current gradient $\Delta \mathbf{W}_n^{iter}$\;
Update $\mathbf{W}_n^{iter}$ based on Eq.~\eqref{equ:derivativenewATOM}\;
}
Update $\mathbf{W} \leftarrow \widehat{\mathbf{W}}_{n}^{appr}$\;
}
}
}
\caption{Recursive LSE-Aided DiMP}
\label{algo:RLS_DiMP}
\end{algorithm}
In modern deep learning architectures, each convolutional layer is usually converted to a general matrix-matrix multiplication (GEMM) operation for fast execution on modern CPUs or GPUs. This is achieved by an unfolding and duplication of the input and an unrolling and concatenation of the kernels to produce the input matrix and kernel matrix. These transformations are realized mainly by a function $f(\cdot)=\mathrm{img2col}(\cdot)$ (commonly known as \texttt{im2col}) in modern deep learning libraries. Consequently, the $l_2$ norm term in Eq.~\eqref{equ:costfunctionLSATOM} can be rewritten as
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:costfunctionLSATOM2} &\left\Vert \bm{\Gamma}_j \odot\left(\mathbf{Y}_j - \mathbf{W}\ast\mathbf{X}_j\right)\right\Vert^2\nonumber\\=& \left\Vert f^{\top}\left(\bm{\Gamma}_j\right) \odot \left(f^{\top}\left(\mathbf{Y}_j\right) - f^{\top}\left(\mathbf{W}\right)f\left(\mathbf{X}_j\right)\right)\right\Vert^2~.
\end{align}
\end{shrinkeq}
If we denote the number of columns of $f\left(\mathbf{X}_j\right)$ as $M$, the $k^{\mathrm{th}}$ column in $f\left(\mathbf{X}_j\right)$ as $\mathbf{x}_{jk}$, the $k^{\mathrm{th}}$ element in $f\left(\mathbf{Y}_j\right)$ as $y_{jk}$ and the $k^{\mathrm{th}}$ element in $f\left(\bm{\Gamma}_j\right)$ as $\gamma_{jk}$, then the learning objective of Eq.~\eqref{equ:costfunctionLSATOM} can be rewritten as
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:costfunctionLSATOM3} \mathscr{L}(\mathbf{W}) \!=\! \mathop{\sum}\limits _{j=1}^{N}\mathop{\sum}\limits _{k=1}^{M}\left\Vert \sqrt{\gamma_{jk}}\left(y_{jk} \!-\! f^{\top}\left(\mathbf{W}\right)\mathbf{x}_{jk}\right)\right\Vert^2 \!+\! \frac{\lambda_d}{2}\left\Vert \mathbf{W}\right\Vert^2\!.
\end{align}
\end{shrinkeq}
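The identity in Eq.~\eqref{equ:costfunctionLSATOM2} is easy to verify numerically, with PyTorch's \texttt{unfold} playing the role of the $img2col$ function; all shapes below are hypothetical, and double precision is used only to make the comparison exact.
\begin{verbatim}
import torch
import torch.nn.functional as F

C, H, W_sp, k = 512, 18, 18, 4        # hypothetical feature/kernel sizes
X = torch.randn(1, C, H, W_sp, dtype=torch.float64)
W = torch.randn(1, C, k, k, dtype=torch.float64)

cols = F.unfold(X, kernel_size=k)[0]  # f(X_j): (C*k*k, M) column matrix
w_row = W.reshape(1, -1)              # f(W)^T : 1 x (C*k*k) row vector

scores_gemm = w_row @ cols                     # convolution as a single GEMM
scores_conv = F.conv2d(X, W).reshape(1, -1)    # reference convolution
assert torch.allclose(scores_gemm, scores_conv)
\end{verbatim}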
During online tracking, the BGD optimization or the SD methodology can be used to update $\mathbf{W}$ based on the above loss function. Specifically, in the case of BGD optimization, the gradient of the above squared-error loss over all $N$ samples is used to update $\mathbf{W}$ at each optimization iteration of the BGD process:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:derivativeATOM} \Delta \mathbf{W} &= \nabla _{\mathbf{W}}\mathscr{L}(\mathbf{W})~,\\
\label{equ:updateATOM} \mathbf{W} &\longleftarrow \mathbf{W} - \eta_d \Delta \mathbf{W}~,
\end{align}
\end{shrinkeq}
where $\eta_d$ is a pre-defined learning rate.
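A minimal sketch of this vanilla BGD update, reusing \texttt{dimp\_online\_loss} from the sketch above ($\eta_d$ and the iteration count are illustrative values):
\begin{verbatim}
import torch

def bgd_update(W, X, Y, Gamma, lambda_d, eta_d=3e-2, n_iter=5):
    W = W.clone().requires_grad_(True)
    for _ in range(n_iter):
        loss = dimp_online_loss(W, X, Y, Gamma, lambda_d)
        (grad,) = torch.autograd.grad(loss, W)   # Delta W of Eq. (derivativeATOM)
        with torch.no_grad():
            W -= eta_d * grad                    # Eq. (updateATOM)
    return W.detach()
\end{verbatim}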
\textit{RLS Estimator-Aided DiMP.} From Eq.~\eqref{equ:costfunctionLSATOM3} we can easily see that our proposed RLS estimator-aided online learning approach is also able to improve the updating of the convolutional layer $\mathbf{W}$ in DiMP in order to retain memory. This is achieved by applying the BGD optimization method in the update stage of DiMP and then improving the updating process. Specifically, if we cast all columns of $f\left(\mathbf{X}_j\right)$ for all the training samples in the $N$-sized sample set as a data {\it block} at some time index $n$ as in Section~\ref{Sec:RLS}, then the optimization of our improved BGD process can be carried out as follows
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:derivativenewATOM} f\left(\mathbf{W}_n^{iter+1}\right) &= f\left(\mathbf{W}_n^{iter}\right) - \widetilde{\eta}_d f\left(\Delta \mathbf{W}_n^{iter+1}\right) \mathbf{P}_{n}~,\\
\label{equ:updatenewATOM} \mathbf{W}_n^{0} &= \widehat{\mathbf{W}}_{n-1}^{appr}~,\\
\label{equ:updatenewATOM2} \mathbf{P}_{0} &= \left.\mathbf{I}\middle/\delta_d\right.~,
\end{align}
\end{shrinkeq}
where $\widehat{\mathbf{W}}_{n-1}^{appr}$ is the last approximate optimum value, $\widetilde{\eta}_d$ is the new learning rate, $\delta_d$ is the positive constant for RLS estimation, the term $f\left(\mathbf{W}_n^{iter}\right)\left(\mathbf{I} - \widetilde{\eta}_d\lambda_d\mathbf{P}_{n}\right)$ serves as the weight decay for optimization, and $\mathbf{P}_{n}$ is updated using Eqs.~\eqref{equ:kn} and \eqref{equ:inverse_Phi2} based on the virtual input, which is the mean value of the data {\it block} at the current time index $n$:
\begin{shrinkeq}{-1ex}
\begin{align}
\label{equ:mean3} \overline{\mathbf{x}}_n =\frac{1}{\sqrt{NM}} \mathop{\sum}\limits _{j=1}^{N}\mathop{\sum}\limits _{k=1}^{M}\sqrt{\gamma_{jk}}\mathbf{x}_{jk}.
\end{align}
\end{shrinkeq}
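The sketch below illustrates these updates; since Eqs.~\eqref{equ:kn} and \eqref{equ:inverse_Phi2} are not restated here, we assume the standard (textbook) recursive least-squares recursion for $\mathbf{P}_n$, and all function names are our own.
\begin{verbatim}
import torch

def virtual_input(cols, gamma, N, M):
    # Eq. (mean3): cols is the (p, N*M) matrix of all columns x_jk,
    # gamma the matching (N*M,) weights gamma_jk
    return (cols * gamma.sqrt()).sum(dim=1) / (N * M) ** 0.5

def rls_update_P(P, x_bar, lam=1.0):
    # Standard RLS recursion for the inverse correlation matrix P_n
    # (assumed form of Eqs. (kn) and (inverse_Phi2); lam = 1.0 matches
    # the memory setting used in this paper)
    Px = P @ x_bar
    k = Px / (lam + x_bar @ Px)          # gain vector k_n
    return (P - torch.outer(k, Px)) / lam

def improved_bgd_step(W, grad, P, eta_tilde):
    # Eq. (derivativenewATOM): precondition the gradient row f(Delta W) by P_n
    w_row, g_row = W.reshape(1, -1), grad.reshape(1, -1)
    return (w_row - eta_tilde * g_row @ P).reshape(W.shape)
\end{verbatim}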
Since we only concentrate on the online tracking part to improve DiMP, we leave all the other parts of DiMP almost unchanged, including, for example, the default off-the-shelf model trained on the train splits of LaSOT, TrackingNet, GOT10k and MS-COCO, all the shared hyper-parameters, and the settings of the initialization stage used to obtain $\widehat{\mathbf{W}}_{0}^{appr}$. It is noteworthy that the optimization strategy of the SD methodology is preserved intact along with all its parameter settings in the initialization stage, because the SD methodology exhibits superior convergence speed compared to the BGD optimization strategy as claimed in~\cite{Bhat2019DiMP}. More details about the improved DiMP with the RLS estimator-aided online learning, namely RLS-DiMP, are given in Algorithm~\ref{algo:RLS_DiMP}.
\section{Experiments}
\label{Sec:EXP}
\subsection{Implementation Details and Baseline Setup}
\label{Section:setup}
We here only show the hyper-parameters specific to our recursive LSE-aided online learning, and leave all the other hyper-parameters and the off-the-shelf tracking models shared with the original RT-MDNet and DiMP trackers unchanged\footnote {We refer the reader to~\cite{Jung2018RTMDNet} and \cite{Bhat2019DiMP} for their parameter settings and their default off-the-shelf tracking models can be achieved at \texttt{\url{https://github.com/IlchaeJung/RT-MDNet}} and \texttt{\url{https://github.com/visionml/pytracking/blob/master/MODEL_ZOO.md\#Models}} respectively.} unless otherwise specified.
In our previous implementation of RLS-RTMDNet in~\cite{Gao2020RLS-RTMDNet}, $\delta^l_r$ in Algorithm~\ref{algo:RLS_RTMDNet} and the parameter measuring the memory in Eq.~\eqref{equ:normalequ_memo3new} for each of the fully-connected layers $\mathbf{W}^4$, $\mathbf{W}^5$ and $\mathbf{W}^6$ were simply set to $1.0$; the learning rates $\widetilde{\eta}^l_{r}$ for them in Eq.~\eqref{equ:RLS_update_RTMDNet} in both initialization and update stages were fixed to $0.01$, $0.01$ and $0.1$. According to~\cite{Haykin2002AFT, Moustakides1997}, however, initializing $\bm{\Phi}_0$ with a ``small" value helps the recursive LSE-aided online learning achieve its optimal performance in most application scenarios, and the performance is insensitive to the exact ``small" value used. We therefore experimentally found new configurations for RLS-RTMDNet in this paper: roughly setting $\delta^l_r$ of each layer to $5e^{-4}$ and directly inheriting the learning rates $\widetilde{\eta}^l_{r}$ from the original RT-MDNet tracker gives rise to a further performance improvement over our previous tracking results in~\cite{Gao2020RLS-RTMDNet}, especially a significant improvement on \textit{UAV123}~\cite{Mueller2016UAV}. We re-implemented the public source code of the original RT-MDNet as a baseline and followed all of its default settings to demonstrate the effectiveness of our proposed online learning framework in the case of the MLP layers. Although we leave many settings of RT-MDNet unchanged in our RLS-RTMDNet tracker, there are still some differences in the online tracking part related to the solely maintained short-term memory and the weight backups, in addition to the improved MBSGD. We therefore prepared another baseline by replacing only the improved MBSGD with the original MBSGD in Algorithm~\ref{algo:RLS_RTMDNet}, leading to a degraded version, namely simpleRT-MDNet.
We also selected DiMP to demonstrate the effectiveness of our proposed online learning framework in the case of the convolutional layer. For the updating of $\mathbf{W}$ in Algorithm~\ref{algo:RLS_DiMP}, we simply set $\delta_d$ to $0.1$ and the parameter measuring the memory in Eq.~\eqref{equ:normalequ_memo3new} to $1.0$. The feature dimensionality of $\mathbf{X}_j$ in Eq.~\eqref{equ:costfunctionLSATOM} is $512$, which makes the dimension of $\mathbf{P}_{n}$ large enough for memory retention. Besides re-implementing the public source code of DiMP as one baseline, we add three more kinds of baselines for a more in-depth analysis. First, replacing only the improved BGD with vanilla BGD in Algorithm~\ref{algo:RLS_DiMP} leads to the baseline termed BGD-DiMP, which can be used to directly validate the effectiveness of our approach. We experimentally found the optimal learning rates for BGD-DiMP while performing the BGD optimization with five recursions every $20^{\mathrm{th}}$ frame, i.e., $3e^{-2}$ for short-term tracking benchmarks (where the average sequence length does not exceed 600 frames) and $3e^{-3}$ for long-term ones. This learning rate setting is also applied in our RLS-DiMP for fair comparison. Second, increasing the SD optimizer recursion number from two to five for updating $\mathbf{W}$ every $20^{\mathrm{th}}$ frame in the original DiMP leads to the baseline denoted by DiMP$^*$, which can be used to study the overfitting issue caused by fast convergence to a {\it local} point with respect to the fixed-sized sample set at each update step. Finally, considering that the exponential moving average (EMA) strategy is widely applied in a slow-moving fashion for striking a compromise between adapting quickly and maintaining robustness in visual tracking~\cite{Bolme2010MOSSE, Henriques2015KCF, Danelljan2015SRDCF, Danelljan2016CCOT, Bertinetto2016Staple, Danelljan2017DSST, Bertinetto2017CFNet, Galoogahi2017BACF, YLi2019ARCF, Xu2019LADCF, Xu2019GFS-DCF, YLi2020AMCF, Wang2019UDT}, or for stabilizing the network representation with smoother changes in self-supervised learning~\cite{meanteacher, mocov1, BYOL}, we can also use this strategy to combine the newly updated parameters with previous ones to retain historical information and avoid overfitting. By setting different EMA coefficients for discounting older parameters in slow-moving, moderate-moving and fast-moving ways, we obtain a new kind of baseline for comparison (see Sec.~\ref{sec:Ablation_RLS-DiMP} for more details). Our RLS-DiMP and all the above baselines are implemented with a ResNet50 backbone~\cite{He2016ResNet}.
Note that all the evaluations of our methods and the re-implemented baselines are conducted by running them several times due to the stochastic nature of RT-MDNet and DiMP, and performed on a single Nvidia RTX 3090 GPU accompanied by an Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz.
\subsection{Experimental Analyses of RLS-RTMDNet}
\label{Section:experimentRLS-RTMDNet}
We evaluate the proposed RLS estimator-aided RLS-RTMDNet tracker thoroughly over the \textit{OTB-2015}~\cite{Wu2015OTB}, \textit{UAV123}~\cite{Mueller2016UAV} and \textit{VOT2016/2017}~\cite{Kristan2016VOTconf, Kristan2017VOTconf} benchmarks by rigorously following the evaluation protocols. We choose these small-scale tracking benchmarks because the off-the-shelf RT-MDNet tracking model is trained without additionally relying on the \textit{MS-COCO} dataset~\cite{Lin2014COCO} or recently proposed large-scale tracking datasets, which makes it hard to achieve good results on the more challenging large-scale tracking benchmarks and thus to perform meaningful analyses.
\subsubsection{Ablation study}
We start by summarizing the ablation study comparison of our proposed RLS-RTMDNet with its corresponding baselines on the \textit{OTB-2015} and \textit{UAV123} benchmarks in Table~\ref{table:ablationstudy_RLS_RTMDNet}. \textit{OTB-2015} includes 100 objects to be tracked in 98 challenging sequences, and provides a no-reset One-Pass Evaluation (OPE) criterion by running test trackers until the end of a sequence. In its success plot for quantifying performance, the success rate ($SR$) refers to the mean overlap precision ($OP$) score over all sequences and is plotted against a uniform range of thresholds between 0 and 1. An area-under-the-curve ($AUC$) metric can also be computed to rank the trackers. The $OP$ score is the fraction of frames in a sequence where the intersection-over-union overlap of the predicted and ground-truth rectangles exceeds a given threshold. In its precision plot, the precision encodes the proportion of frames where the Euclidean distance between the predicted and ground-truth centroids is smaller than a given threshold. \textit{UAV123} compiles a larger set of 123 videos (with more than $110K$ frames) captured by low-altitude UAVs; although it uses the same evaluation protocol as \textit{OTB-2015}, it is inherently different from it.
\begin{table}
\small
\setlength{\tabcolsep}{2.6pt} \small
\fontsize{8pt}{10pt}\selectfont
\centering
\caption{Ablation Study Comparison of RLS-RTMDNet with Its Corresponding Baselines RT-MDNet and simpleRT-MDNet on the \textit{OTB-2015} and \textit{UAV123} Benchmarks. $Prec.$ and $Succ.$ Denote Precision Score at 20 Pixels and $AUC$ of Success Plot Respectively. The Average Over 50 Runs Is Reported for These MDNet-Style Trackers.}
\begin{tabular}{c!{\vrule}cc!{\vrule}cc!{\vrule}c!{\vrule}c}
\toprule
\multirow{2}{*}{Trackers} & \multicolumn{2}{c!{\vrule}}{\textit{OTB-2015}} & \multicolumn{2}{c!{\vrule}}{\textit{UAV123}} & Running & GPU Memory\\
\cmidrule(lr){2-5}
& $Prec.$ & $Succ.$ & $Prec.$ & $Succ.$ & Speed & Consumption \\
\midrule
RT-MDNet & 0.857 & 0.634 & 0.723 & 0.513 & $\sim$32 \emph{fps} & $<$2400MiB \\
\midrule
simpleRT-MDNet & 0.854 & 0.630 & 0.722 & 0.509 & $\sim$32 \emph{fps} & $<$2400MiB \\
\textbf{RLS-RTMDNet} & 0.871 & 0.644 & 0.727 & 0.525 & $\sim$31 \emph{fps} & $<$2400MiB \\
\end{tabular}
\label{table:ablationstudy_RLS_RTMDNet}
\end{table}
It is clearly and statistically shown in Table~\ref{table:ablationstudy_RLS_RTMDNet} that our online learning approach can unveil the power of RT-MDNet without incurring much extra computational and memory cost. Specifically, without bells and whistles, our RLS-RTMDNet improves all the precision (at 20 pixels) and $AUC$ scores over the baseline RT-MDNet on both the \textit{OTB-2015} and \textit{UAV123} benchmarks, giving absolute $AUC$ gains of no less than $1.0\%$. Our degraded version simpleRT-MDNet, however, performs slightly worse than RT-MDNet because its update is based solely upon the recent short-term memory. It is noteworthy that our improvement of the precision score on \textit{UAV123} is much smaller than on \textit{OTB-2015}, which can be attributed to the fact that the ground-truth bounding boxes of most objects in \textit{UAV123} are so small that deviations from the object area are barely penalized by the precision score at 20 pixels. This is also why the metric of normalized precision was proposed in~\cite{Muller2018TrackingNet}.
\subsubsection{Comparison with others}
We perform the state-of-the-art comparison on \textit{UAV123} and \textit{VOT2016/VOT2017} by comparing our proposed RLS-RTMDNet to the state-of-the-art CPU-based trackers, i.e., MEEM~\cite{Zhang2014MEEM}, SRDCF~\cite{Danelljan2015SRDCF}, Staple~\cite{Bertinetto2016Staple}, CSRDCF with its real-time version CSRDCF++~\cite{Lukezic2018CSRDCF}, fDSST~\cite{Danelljan2017DSST}, BACF~\cite{Galoogahi2017BACF}, ARCF~\cite{YLi2019ARCF}, AutoTrack~\cite{YLi2020AutoTrack}, ECOhc~\cite{Danelljan2017ECO}, and some deep trackers trained without additionally relying on the \textit{MS-COCO} dataset or recently proposed large-scale tracking datasets, i.e., SiamFC~\cite{Bertinetto2016SiameseFC}, C-COT~\cite{Danelljan2016CCOT}, RT-MDNet~\cite{Jung2018RTMDNet}, MetaSDNet~\cite{Park2018MetaSDNet}, DeepSTRCF~\cite{Li2018STRCF}, UDT with its improved version UDT+~\cite{Wang2019UDT}, TADT~\cite{Li2019TADT}, ECO~\cite{Danelljan2017ECO}, LADCF-R50~\cite{Xu2019LADCF} with a ResNet50 backbone. Results for our degraded version simpleRT-MDNet are also included.
\begin{table}
\small
\setlength{\tabcolsep}{3.2pt} \small
\fontsize{8pt}{10pt}\selectfont
\centering
\caption{Comparison with Some Top-Performing Deep Trackers on UAV123 in Terms of Mean $OP$ Given Thresholds $0.50$ ($SR_{0.5}$) and $0.75$ ($SR_{0.75}$).}
\begin{tabular}{c!{\vrule width1.2pt}ccccc}
\Xhline{1.1pt}
\textit{UAV123} & RT-MDNet & DeepSTRCF & \textbf{RLS-RTMDNet} & TADT & ECO \\
\Xhline{0.6pt}
$SR_{0.5}$ & 0.627 & 0.612 & 0.648 & 0.634 & 0.640 \\
$SR_{0.75}$ & 0.308 & 0.342 & 0.326 & 0.307 & 0.328 \\
\Xhline{1.1pt}
\end{tabular}
\label{table:uav1}
\end{table}
\begin{figure}[!t]
\captionsetup{font={small}}
\centering
\subfloat{\includegraphics[width=0.9\linewidth]{UAV_OPE_AUCnew.pdf}
\label{UAV123}}
\vspace{-2.5mm}
\caption{Success plots for the whole \textit{UAV123} benchmark. We report the average over 50 runs for our tracker RLS-RTMDNet and its corresponding baselines. The legends show the $AUC$ scores. Best viewed in color.} \label{fig:stateofartstudyUAV1}
\vspace{-2.5mm}
\end{figure}
\begin{table*}
\captionsetup{font={small}}
\setlength{\tabcolsep}{3pt} \scriptsize
\fontsize{7pt}{9pt}\selectfont
\centering
\caption{Comparison of Our RLS-RTMDNet with Its Corresponding Baselines and Some Related Competing Trackers on the \textit{VOT2016/2017} Benchmarks; the Results Are Reported as $EAO$, $A$, $R_{S}$ ($S=100$) and Real-Time $EAO$ ($rtEAO$). For All These Metrics Except $rtEAO$, the Stochastic Trackers Are Run 15 Times on Each Sequence to Reduce the Variance of Their Results. Best Viewed in Color.}
\vspace{-2mm}
\begin{tabular}{cc!{\vrule width1.2pt}ccccccccccccc}
\Xhline{1.1pt}
\multicolumn{2}{c!{\vrule width1.2pt}}{\textit{VOT}} & Staple & SiamFC & MEEM & RT-MDNet & simpleRT-MDNet & MetaSDNet & CSRDCF++ & ECOhc & \textbf{RLS-RTMDNet} & CSRDCF & C-COT & ECO & LADCF-R50\\
\Xhline{1.1pt}
\multirow{3}{*}{2016} & $EAO\uparrow$ & 0.295 & 0.277 & - & 0.302 & 0.304 & 0.314 & - & 0.322 & \textcolor{green}{0.336} & \textcolor{blue}{0.338} & 0.331 & \textcolor{red}{0.374} & - \\
& $A\uparrow$ & 0.547 & 0.550 & - & \textcolor{blue}{0.559} & 0.550 & 0.539 & - & 0.542 & \textcolor{red}{0.573} & 0.524 & 0.541 & \textcolor{green}{0.555} & - \\
& $-\ln R_S\downarrow$ & 0.378 & 0.382 & - & 0.305 & 0.294 & 0.261 & - & 0.303 & \textcolor{green}{0.255} & \textcolor{blue}{0.238} & \textcolor{blue}{0.238} & \textcolor{red}{0.200} & - \\
\Xhline{1.1pt}
\multirow{4}{*}{2017} & $EAO\uparrow$ & 0.169 & 0.188 & 0.192 & 0.223 & 0.218 & - & 0.229 & 0.238 & 0.241 & 0.256 & \textcolor{green}{0.267} & \textcolor{blue}{0.280} & \textcolor{red}{0.389} \\
& $A\uparrow$ & \textcolor{red}{0.530} & 0.503 & 0.463 & \textcolor{green}{0.514} & 0.508 & - & 0.453 & 0.496 & \textcolor{blue}{0.516} & 0.491 & 0.494 & 0.484 & 0.503 \\
& $-\ln R_S\downarrow$ & 0.688 & 0.585 & 0.534 & 0.450 & 0.464 & - & 0.370 & 0.435 & 0.392 & 0.356 & \textcolor{green}{0.318} & \textcolor{blue}{0.276} & \textcolor{red}{0.159} \\
& $rtEAO\uparrow$ & 0.170 & \textcolor{blue}{0.182} & 0.072 & 0.162 & 0.154 & - & \textcolor{red}{0.212} & \textcolor{green}{0.177} & 0.150 & 0.099 & 0.058 & 0.078 & 0.066 \\
\Xhline{1.1pt}
\end{tabular}
\label{table:vot1}
\vspace{-4.5mm}
\end{table*}
\textit{VOT} has created a platform for organizing tracking challenges every year since 2013 by establishing datasets, evaluation measures and toolkits. \textit{VOT2017} departs from \textit{VOT2016} in two aspects: the 10 least challenging sequences in \textit{VOT2016} are replaced with new ones, and a new experiment for evaluating real-time performance is introduced in \textit{VOT2017}. Different from \textit{UAV123}, all these \textit{VOT} challenges apply a reset-based methodology in the toolkit. Whenever a failure is detected, the tracker is re-initialized five frames after the failure. Thus, two weakly correlated performance measures can be used: the accuracy ($A$) measures the average overlap between the predicted bounding box and the ground truth computed over the successfully tracked frames; the robustness is estimated by considering the reliability ($R_S$), which gives the probability that the tracker will still successfully track the object up to $S$ frames since the last failure and is computed from the failure rate measure ($R_{\mathrm{fr}}$, the average number of failures) as follows $$R_S = \exp\left(-S\frac{R_{\mathrm{fr}}}{N_{\mathrm{frames}}}\right),$$ where $N_{\mathrm{frames}}$ is the average length of the sequences.
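For a quick worked example with hypothetical numbers: a tracker that fails on average $R_{\mathrm{fr}} = 1.5$ times per sequence of average length 350 frames yields
\begin{verbatim}
import math
R_fr, N_frames, S = 1.5, 350.0, 100
R_S = math.exp(-S * R_fr / N_frames)
print(R_S, -math.log(R_S))  # R_100 ~ 0.651, -ln R_100 ~ 0.429
\end{verbatim}
i.e., roughly a 65\% chance of tracking 100 frames without failure; $-\ln R_S$ is the quantity tabulated in Tables~\ref{table:vot1} and \ref{table:vot2}.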
A more principled expected average overlap ($EAO$) measure is also proposed to measure the expected no-reset average overlap ($AO$) of a tracker run on a short-term sequence, although it is computed from the reset-based methodology.
We first show the comparison on \textit{UAV123} in Figure~\ref{fig:stateofartstudyUAV1} and Table~\ref{table:uav1}. All the entries in this comparison can be roughly categorized into DCF-based, Siamese-style and MDNet-style approaches. Specifically, the DCF-based trackers ECO~\cite{Danelljan2017ECO}, DeepSTRCF~\cite{Li2018STRCF} and C-COT~\cite{Danelljan2016CCOT} all use pre-trained deep models to extract powerful features for more robust and accurate tracking. SiamFC~\cite{Bertinetto2016SiameseFC} is the first Siamese-style tracker; it tracks objects very efficiently by abandoning online learning, which is made possible by end-to-end offline training of the tracking model enabled by the advent of deep learning. TADT~\cite{Li2019TADT} further improves the Siamese-style tracking framework by introducing a target-aware feature module to guide the generation of target-active and scale-sensitive features. UDT~\cite{Wang2019UDT} explores a feasible unsupervised training approach for deep tracking. AutoTrack~\cite{YLi2020AutoTrack} and ARCF~\cite{YLi2019ARCF} are both DCF-based trackers dedicated to real-world low-cost UAV tracking scenarios for more efficient tracking, which has been validated in a real-world UAV localization system. As can be seen, the classical DCF-based tracker ECO with deep features and the Siamese-style tracker TADT both surpass RT-MDNet and simpleRT-MDNet by large margins, while our approach unveils the power of RT-MDNet to achieve performance comparable to both of them and even the best $SR_{0.5}$ score.
The comparison on \textit{VOT2016/2017} is shown in Table~\ref{table:vot1}. Similar to the analyses in the ablation study, our RLS-RTMDNet consistently improves upon its baselines (i.e., RT-MDNet and simpleRT-MDNet) by achieving significant gains on \textit{VOT2016/2017}, and our degraded version simpleRT-MDNet achieves similar or slightly worse performance than RT-MDNet due to the lack of long-term memory. Moreover, our performance gains are larger than those of MetaSDNet~\cite{Park2018MetaSDNet}, which uses sophisticated meta-training techniques to train the MDNet tracker using more datasets than RT-MDNet.
\subsection{Experimental Analyses of RLS-DiMP}
\label{Section:experimentRLS-DiMP}
We evaluate the proposed RLS estimator-aided online RLS-DiMP tracker thoroughly over the large-scale short-term \textit{TrackingNet-test}~\cite{Muller2018TrackingNet}, \textit{GOT10k-test}~\cite{Huang2019GOT} and \textit{VOT2018/2019}~\cite{Kristan2018VOTconf, Kristan2019VOTconf} tracking benchmarks, and long-term \textit{LaSOT-test}~\cite{Fan2019LaSOT}, \textit{OxUvA-dev}~\cite{Valmadre2018OxUvA} and \textit{TLP}~\cite{Moudgil2019TLP} benchmarks by following rigorously the evaluation protocols. As the off-the-shelf DiMP tracking model is fully trained with the training splits of the \textit{TrackingNet}, \textit{LaSOT}, \textit{GOT10k} and \textit{MS-COCO}~\cite{Lin2014COCO} datasets, we believe this evaluation on more challenging and larger benchmarks can clearly and statistically show the effectiveness of our online learning approach by unveiling the power of the state-of-the-art DiMP tracker.
\subsubsection{Ablation study}
\label{sec:Ablation_RLS-DiMP}
We start by summarizing the ablation study comparison of our proposed RLS-DiMP with its corresponding baselines on the long-term \textit{LaSOT-test} benchmark in Table~\ref{table:ablationstudy_RLS_DiMP} and Figure~\ref{fig:ablationstudyLaSOT}. \textit{LaSOT} has much longer video sequences, with an average duration of 84 seconds (2500 frames on average) and at least 1000 frames per sequence, and is dedicated to the long-term tracking principle. It is split into training and testing subsets, where the testing subset consists of 280 sequences with $690K$ frames. It also uses the same evaluation protocol as \textit{OTB-2015}. However, because the precision metric is sensitive to the resolution of the images and the size of the bounding boxes, a precision metric normalized over the size of the ground-truth bounding box~\cite{Muller2018TrackingNet} can be exploited, and the trackers can then be ranked using the $AUC$ for normalized precision between 0 and 0.5.
As stated in Sec.~\ref{Section:setup}, we add the baselines using the EMA strategy for saving historical information in addition to BGD-DiMP and DiMP$^*$. Because our memory retention is applied to improve the vanilla BGD in BGD-DiMP and BGD-DiMP performs better than the original DiMP on \textit{LaSOT-test}, we apply the EMA strategy to the baseline BGD-DiMP to construct the new baselines. Specifically, let $\alpha$ denote the EMA coefficient; then the approximate optimum value $\widehat{\mathbf{W}}_{n}^{appr}$ after the vanilla BGD-based optimization at each update stage of BGD-DiMP can be further updated as follows
$$
\widehat{\mathbf{W}}_{n}^{appr} \longleftarrow \left(1-\alpha\right)\widehat{\mathbf{W}}_{n-1}^{appr} + \alpha\widehat{\mathbf{W}}_{n}^{appr}~,
$$
where $\alpha \in [0, 1]$ represents the degree of weighting decrease. A higher $\alpha$ discounts older parameters faster.
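This update is a one-liner; the sketch below also records the three coefficient choices used for the baselines introduced next.
\begin{verbatim}
def ema_update(W_prev, W_new, alpha):
    # alpha -> 0: history dominates (slow-moving); alpha -> 1: the new
    # parameters dominate (fast-moving); alpha = 1.0 recovers plain BGD-DiMP
    return (1.0 - alpha) * W_prev + alpha * W_new

# alpha = 0.01 (BGD-DiMP^sm), 0.5 (BGD-DiMP^mm), 0.99 (BGD-DiMP^fm)
\end{verbatim}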
The commonly used slow-moving average strategy mentioned in Sec.~\ref{Section:setup} always sets $\alpha$ to less than 0.05 so that the online tracking model's parameters, or the network representation in self-supervised learning, evolve smoothly. However, applying this kind of EMA strategy to BGD-DiMP lowers its adaptive capacity while saving historical information, leading to a decrease of performance on \textit{LaSOT-test}. It is also impractical to do an exhaustive grid search to obtain the optimum value of $\alpha$. So we roughly set three baselines as follows: BGD-DiMP$^{\rm{sm}}$ discounts older parameters in a slow-moving way with $\alpha$ set to 0.01, BGD-DiMP$^{\rm{mm}}$ in a moderate-moving way with $\alpha$ set to 0.5, and BGD-DiMP$^{\rm{fm}}$ in a fast-moving way with $\alpha$ set to 0.99. A similar practice can be observed in LADCF~\cite{Xu2019LADCF}, where the EMA coefficient is set to 0.95 and 0.13 for the hand-crafted and deep versions respectively. It is noteworthy that the baseline BGD-DiMP can be seen as a special case of the EMA strategy with $\alpha$ set to 1.0.
\begin{table}
\small
\setlength{\tabcolsep}{2.6pt} \small
\fontsize{8pt}{10pt}\selectfont
\centering
\caption{Ablation Study Comparison of RLS-DiMP with Its Corresponding Baselines on the Long-Term \textit{LaSOT-test} Benchmark. $Prec.$, $Norm.~Prec.$ and $Succ.$ Denote Precision Score at 20 Pixels, Normalized Precision Score and $AUC$ of Success Plot Respectively. The Average Over 20 Runs Is Reported for These DiMP-Style Trackers Except That We Report the Publicly Available 5-Run Results (Indicated by $^{\dagger}$) for DiMP.}
\begin{tabular}{c!{\vrule}ccc!{\vrule}c!{\vrule}c}
\toprule
\multirow{2}{*}{Trackers} & \multicolumn{3}{c!{\vrule}}{\textit{LaSOT-test}} & Running & GPU Memory\\
\cmidrule(lr){2-4}
& $Prec.$ & $Norm.~Prec.$ & $Succ.$ & Speed & Consumption \\
\midrule
DiMP & 0.567$^{\dagger}$ & 0.650$^{\dagger}$ & 0.569$^{\dagger}$ & $\sim$41 \emph{fps} & \multirow{2}{*}{$\sim$2211 MiB} \\
DiMP$^*$ & 0.556 & 0.636 & 0.557 & $\sim$40.6 \emph{fps} & \\
\midrule
BGD-DiMP & 0.575 & 0.673 & 0.574 & \multirow{4}{*}{$\sim$41 \emph{fps}}& \multirow{4}{*}{$\sim$2211 MiB} \\
BGD-DiMP$^{\rm{fm}}$ & 0.574 & 0.673 & 0.574 & & \\
BGD-DiMP$^{\rm{mm}}$ & 0.576 & 0.675 & 0.575 & & \\
BGD-DiMP$^{\rm{sm}}$ & 0.558 & 0.659 & 0.561 & & \\
\midrule
\textbf{RLS-DiMP} & 0.578 & 0.676 & 0.577 & $\sim$40 \emph{fps} & $\sim$2339 MiB \\
\end{tabular}
\label{table:ablationstudy_RLS_DiMP}
\end{table}
It is clearly and statistically shown in Table~\ref{table:ablationstudy_RLS_DiMP} that our online learning approach can also unveil the power of the state-of-the-art DiMP tracker without incurring much extra computational and memory cost. As \textit{LaSOT-test} is dedicated to the long-term tracking principle, increasing the SD optimizer recursion number from two to five in DiMP$^*$ may aggravate the overfitting issue caused by fast convergence to a \textit{local} point with respect to the fixed-sized sample set, and harms the overall performance on \textit{LaSOT-test}. By experimentally setting an appropriately small learning rate for BGD-DiMP, we can reduce overfitting and achieve superior results compared to the original DiMP tracker. Thanks to the retention of historical information by the EMA strategy, the baseline BGD-DiMP$^{\rm{mm}}$ with the coefficient $\alpha$ roughly set to 0.5 can further enhance this superiority. We believe a carefully chosen optimum value of $\alpha$ for the EMA strategy could strike a compromise between adapting quickly and maintaining robustness, and thus improve BGD-DiMP by a larger margin. However, our proposed online learning approach has been demonstrated theoretically to retain the memory in a more elegant way, and thus reduces the risk of overfitting while maintaining adaptive capacity. That means our RLS-DiMP can improve the performance on \textit{LaSOT-test} without requiring fine-grained hand-tuning of hyper-parameters.
Since some sequences in \textit{LaSOT-test} are not challenging enough for the DiMP-style trackers to differentiate themselves, we selected the difficult ones for an in-depth analysis as shown in Figure~\ref{fig:ablationstudyLaSOT}, which also makes the abovementioned phenomena clearer. As each sequence was evaluated with several runs by each entry in Table~\ref{table:ablationstudy_RLS_DiMP}, we define a sequence for which the $AUC$ scores of all runs of our RLS-DiMP are above $0.75$ as an easy sequence and remove it from the evaluation, leaving 156 hard sequences of \textit{LaSOT-test} in total for the success plots. From Figure~\ref{fig:ablationstudyLaSOT} we can see that our RLS-DiMP improves the baseline BGD-DiMP with an absolute $AUC$ gain of $0.5\%$ on these hard sequences, computed over 20 runs of the test sequences due to the stochastic nature of DiMP-style trackers. By comparing the performance gains between Table~\ref{table:ablationstudy_RLS_DiMP} and Figure~\ref{fig:ablationstudyLaSOT}, we can also find that introducing memory retention mainly helps to improve the tracking performance on the more challenging sequences, which is preferable as we aim to handle the challenging factors in tracking. It is noteworthy that concentrating only on saving historical information without maintaining adaptive capacity also harms the overall performance on \textit{LaSOT-test}.
\subsubsection{Comparison with others}
We perform the state-of-the-art comparison on \textit{TrackingNet-test}, \textit{GOT10k-test}, \textit{LaSOT-test}, \textit{OxUvA-dev}, \textit{TLP} and \textit{VOT2018/2019} by comparing our proposed RLS-DiMP to some short-term trackers, i.e., SiamFC~\cite{Bertinetto2016SiameseFC}, MDNet~\cite{Nam2016MDNet}, ECO~\cite{Danelljan2017ECO}, DaSiamRPN~\cite{Zhu2018DaSiamRPN}, ATOM~\cite{Danelljan2019ATOM}, GradNet~\cite{Li2019GradNet}, SiamMask~\cite{Wang2019SiamMask}, UpdateNet-DaSiamRPN~\cite{Zhang2019UpdateNet}, SiamRPN++~\cite{Li2019SiamRPN++}, SiamFC++-GoogLeNet~\cite{Xu2020SiamFC++}, D3S~\cite{Lukezic2020D3S}, FCOS-MAML~\cite{Wang2020Retina-MAML}, SiamBAN~\cite{Chen2020SiamBAN}, ROAM++~\cite{Yang2020ROAM}, PrDiMP~\cite{Danelljan2020PrDiMP}, and long-term trackers, i.e., SPLT~\cite{Yan2019SPLT}, GlobalTrack~\cite{Huang2020GlobalTrack}, Siam R-CNN~\cite{Voigtlaender2020SiamRCNN}, LTMU~\cite{Dai2020LTMU}. Most of these competitors are trained additionally relying on the \textit{MS-COCO} dataset or recently proposed large-scale tracking datasets. Results for the re-implemented baselines are also included.
\begin{figure}[!t]
\captionsetup{font={small}}
\centering
\subfloat{\includegraphics[width=0.90\linewidth]{LaSOT_OPE_AUCnew.pdf}
\label{LaSOT}}
\vspace{-2.5mm}
\caption{Success plots for the hard sequences of the \textit{LaSOT-test} dataset. We report the average over 20 runs for our RLS-DiMP and the re-implemented baselines, except that the publicly available 5-run results are used for DiMP. The legends show the $AUC$ scores. Best viewed in color.} \label{fig:ablationstudyLaSOT}
\vspace{-2.5mm}
\end{figure}
\begin{table*}[!t]
\begin{center}
\fontsize{8pt}{10pt}\selectfont
\caption{Comparison of Our RLS-DiMP with Its Corresponding Baselines and Some State-of-the-Art Competing Trackers on \textit{TrackingNet-test}, \textit{GOT10k-test}, \textit{LaSOT-test}, \textit{OxUvA-dev}, \textit{TLP}; the Results of Our RLS-DiMP and Its Corresponding Baselines Are Reported Based on 20 Runs on Each Sequence. $Prec.$ and $Succ.$ Denote Precision Score at 20 Pixels and $AUC$ of Success Plot Respectively. ($\cdot$, $\cdot$) Denotes the Number of Video Sequences and Frames for Evaluation in the Corresponding Dataset.}
\begin{threeparttable}
\begin{tabular}{llccccccccccc}
\toprule
\multirow{3}{*}{Trackers}&\multirow{3}{*}{Year}&\multicolumn{2}{c}{\textit{TrackingNet-test}}&\multicolumn{3}{c}{\textit{GOT10k-test}}&\multicolumn{2}{c}{\textit{LaSOT-test}}&\multicolumn{2}{c}{\textit{OxUvA-dev}}&\multicolumn{2}{c}{\textit{TLP}}\cr
& & \multicolumn{2}{c}{($511, 226K$)}&\multicolumn{3}{c}{($420, 56K$)}&\multicolumn{2}{c}{($280, 685K$)}&\multicolumn{2}{c}{($200, 547K$)}&\multicolumn{2}{c}{($50, 676K$)}\cr
\cmidrule(lr){3-4} \cmidrule(lr){5-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11} \cmidrule(lr){12-13}
& &$Succ.$ &$Prec.$&$AO$&$SR_{0.5}$&$SR_{0.75}$&$Succ.$ &$Prec.$&$TPR$&$TNR$&$Succ.$ &$Prec.$\cr
\cmidrule(lr){1-13}
\multicolumn{2}{l}{\textit{Short-term Trackers}}&&&&&&&&&&&\vspace*{5pt}\cr
SiamFC&2016&0.571&0.663&0.348&0.353&0.098&0.336&0.339&0.391&0.0&0.237&0.280 \cr
MDNet&2016&0.606&0.565&0.299&0.303&0.099&0.397&0.373&0.472&0.0&0.372&0.381 \cr
ECO&2017&0.554&0.492&0.316&0.309&0.111&0.324&0.301&-&-&0.205&0.211 \cr
ATOM&2019&0.703&0.648&0.556&0.634&0.402&0.515&0.505&-&-&-&-\cr
UpdateNet-DaSiamRPN&2019&0.677&0.625&-&-&-&0.475&-&-&-&-&-\cr
SiamRPN++&2019&0.733&0.694&0.517&0.616&0.325&0.496&0.491&-&-&-&-\cr
SiamFC++-GoogLeNet&2020&0.754&0.705&0.595&0.695&0.479&0.544&0.547&-&-&-&- \cr
D3S&2020&0.728&0.664&0.597&0.676&0.462&-&-&-&-&-&- \cr
FCOS-MAML&2020&0.757&0.725&-&-&-&0.523&0.531&-&-&-&-\cr
SiamBAN&2020&-&-&-&-&-&0.514&0.521&-&-&-&-\cr
ROAM++&2020&0.670&0.623&0.465&0.532&0.236&0.447&0.445&-&-&-&-\cr
PrDiMP&2020&0.758&0.704&0.634&0.738&0.543&0.598&0.608&-&-&-&-\cr
\cmidrule(lr){1-13}
\multicolumn{2}{l}{\textit{Long-term Trackers}}&&&&&&&&&&&\vspace*{5pt}\cr
SPLT&2019&-&-&-&-&-&0.426&0.396&0.498&0.776&0.416&0.403\cr
GlobalTrack&2020&0.704&0.656&-&-&-&0.521&0.527&0.574&0.633&0.520&0.556\cr
Siam R-CNN&2020&0.812&0.800&0.649&0.728&0.597&0.648&-&0.701&0.745&-&-\cr
LTMU&2020&-&-&-&-&-&0.572&0.572&0.749&0.754&0.571&0.608\cr
\cmidrule(lr){1-13}
\multicolumn{2}{l}{\textit{Our Implementations}}&&&&&&&&&&&\vspace*{5pt}\cr
DiMP&2019&0.740&0.689&0.609&0.709&0.491&0.569$^{\dagger}$&0.567$^{\dagger}$&0.727&0.0&0.502&0.513 \cr
DiMP$^*$&2019&0.742&0.690&0.608&0.709&0.490&0.557&0.556&0.714&0.0&0.497&0.507 \cr
BGD-DiMP&Ours&0.741&0.689&0.606&0.713&0.495&0.574&0.575&0.731&0.0&0.508&0.506 \cr
\textbf{RLS-DiMP}&Ours&0.740&0.688&0.611&0.713&0.492&0.577&0.578&0.735&0.0&0.519&0.518 \cr
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[$\dagger$] We report the publicly available 5-run results for DiMP.
\end{tablenotes}
\end{threeparttable}
\vspace{-4mm}
\label{tab:RLS_DiMPresults}
\end{center}
\end{table*}
As most researchers have designed tracking methods tailored to the short-term scenario, which is poorly representative of practitioners' needs, \textit{TLP} and \textit{OxUvA} were recently proposed to address this disparity. The former compiles 50 HD videos from real-world scenarios with a total duration of over 400 minutes (676K frames), while the latter comprises 366 sequences spanning 14 hours of video. \textit{OxUvA} is split into \textit{dev} and \textit{test} sets of 200 and 166 tracks respectively, on which trackers can be evaluated through an online server. Moreover, \textit{OxUvA} contains an average of 2.2 absent labels per track and at least one labelled disappearance in 52$\%$ of the tracks, which enables evaluating trackers in terms of $TPR$ (fraction of present objects that are reported present and correctly located) and $TNR$ (fraction of absent objects that are reported absent). A problem with \textit{OxUvA} is that it does not provide dense annotations in consecutive frames, i.e., each video in \textit{OxUvA} is annotated every 30 frames, ignoring the rich temporal context between consecutive frames for developing a tracker. The aforementioned long-term \textit{LaSOT} benchmark with high-quality dense annotations was proposed to bridge this gap, leading to the largest high-quality dense tracking benchmark, which also provides a training subset with 2.8 million boxes for developing a tracker.
Before \textit{LaSOT}, \textit{TrackingNet} was the first large-scale tracking benchmark to simultaneously provide a training subset for developing a tracker and a test subset for evaluating different trackers. An online server is also provided for this evaluation. However, \textit{TrackingNet} is manually labeled at 1 \textit{fps}, while all its other annotations are automatically generated using correlation filter-based tracking. The most recently proposed \textit{GOT10k} benchmark offers a much wider coverage of object classes and first introduces the one-shot protocol to avoid evaluation bias towards seen familiar object classes, i.e., the training and test classes have zero overlap, which can be used to evaluate the generalization of different trackers. This is very different from \textit{LaSOT} and \textit{TrackingNet}. Besides $SR$, \textit{GOT10k} also exploits $AO$ to denote the average of overlaps between all ground-truth and estimated bounding boxes, which has been proven equivalent to the $AUC$ metric.
We first show the comparison on these large-scale tracking benchmarks in Table~\ref{tab:RLS_DiMPresults}. Since \textit{GOT10k} requires the trackers to be trained on its training set with no extra training data used, and the DiMP tracking model following this protocol is not publicly available, we re-trained the DiMP tracking model for the evaluation on \textit{GOT10k}. Our implementation results of RLS-DiMP and its corresponding baselines are reported based on 20 runs on each sequence. The table clearly shows that our online learning approach can effectively unveil the power of DiMP and surpass the baseline BGD-DiMP by achieving better performance on all of the long-term \textit{LaSOT-test}, \textit{OxUvA-dev} and \textit{TLP} datasets. This is desirable as our framework is designed for memory retention and overfitting alleviation. Moreover, the original DiMP tracker based on the SD optimizer performs worse on all these long-term datasets, with a particularly significant performance decrease for DiMP$^*$. This is consistent with the analysis in Sec.~\ref{sec:Ablation_RLS-DiMP} that DiMP$^*$ may aggravate the overfitting issue and harm the overall performance in long-term tracking.
However, short-term tracking may prefer overfitting more than our memory retention, which makes DiMP$^*$ perform best on \textit{TrackingNet-test} in our implementations. This phenomenon can also be attributed to the reason that the object classes between training and test sets in \textit{TrackingNet} are fully overlapped with close distribution, and the DiMP tracking model trained with \textit{TrackingNet-train} used may lead to biased evaluation results towards familiar objects on \textit{TrackingNet-test}. This is further validated by the evaluation on \textit{GOT10k-test}, which shows that our online learning approach can make the DiMP tracking model generalize to the unseen object classes better than the other SD and vanilla BGD optimization methods in the one-shot tracker evaluation protocol.
Table~\ref{tab:RLS_DiMPresults} also shows that the novel ROAM++~\cite{Yang2020ROAM} and UpdateNet-DaSiamRPN~\cite{Zhang2019UpdateNet} trackers, whose LSTM-based and ConvNet model-based meta-learners are exhaustively trained offline, achieve substantially inferior results compared to our RLS-DiMP because of the modest baselines they build upon. Our approach, in contrast, effectively unveils the power of the top-performing baseline tracker DiMP without incurring much extra computational cost. In spite of its dominant performance on \textit{TrackingNet-test}, \textit{GOT10k-test} and \textit{LaSOT-test}, Siam R-CNN~\cite{Voigtlaender2020SiamRCNN} still does not generalize well on \textit{OxUvA-dev} compared with LTMU~\cite{Dai2020LTMU}, due to the lack of online learning.
\begin{table*}
\captionsetup{font={small}}
\setlength{\tabcolsep}{2.4pt} \scriptsize
\fontsize{7pt}{9pt}\selectfont
\centering
\caption{Comparison of Our RLS-DiMP with Its Corresponding Baselines and Some State-of-the-Art Competing Trackers on \textit{VOT2018/2019}; the Results Are Reported as $EAO$, $A$, $R_{S}$ ($S=100$) and Real-Time $EAO$ ($rtEAO$). For All These Metrics Except $rtEAO$, the Stochastic Trackers Are Run 15 Times on Each Sequence to Reduce the Variance of Their Results. Best Viewed in Color.}
\vspace{-2mm}
\begin{tabular}{cc!{\vrule width1.2pt}ccccccccccccc}
\Xhline{1.1pt}
\multicolumn{2}{c!{\vrule width1.2pt}}{\multirow{2}{*}{\textit{VOT}}} & GradNet & DaSiamRPN & ROAM++ & SiamMask & UpdateNet & MAML & SiamRPN++ & SiamFC++ & BGD-DiMP & DiMP & DiMP$^*$ & \textbf{RLS-DiMP} & PrDiMP \\
& & & & & & (-DaSiamRPN) & (-FCOS) & & (-GoogLeNet) & & & & & \\
\Xhline{1.1pt}
\multirow{4}{*}{2018} & $EAO\uparrow$ & 0.247 & 0.326 & 0.380 & 0.384 & 0.393 & 0.392 & 0.415 & \textcolor{green}{0.426} & 0.415 & 0.423 & 0.425 & \textcolor{red}{0.443} & \textcolor{blue}{0.442}\\
& $A\uparrow$ & 0.507 & 0.570 & 0.543 & \textcolor{green}{0.609} & - & \textcolor{red}{0.635} & 0.600 & 0.587 & 0.598 & 0.602 & 0.604 & 0.602 & \textcolor{blue}{0.618}\\
& $-\ln R_S\downarrow$ & 0.375 & 0.337 & 0.195 & 0.281 & - & 0.220 & 0.234 & 0.183 & \textcolor{blue}{0.163} & 0.170 & \textcolor{green}{0.164} & \textcolor{red}{0.138} & 0.165\\
& $rtEAO\uparrow$ & - & - & - & 0.384 & - & - & 0.415 & - & 0.393 & \textcolor{green}{0.418} & \textcolor{red}{0.450} & \textcolor{blue}{0.422} & -\\
\Xhline{1.1pt}
\multirow{4}{*}{2019} & $EAO\uparrow$ & - & - & 0.281 & 0.287 & - & 0.295 & 0.285 & - & 0.324 & \textcolor{green}{0.357} & \textcolor{blue}{0.360} & \textcolor{red}{0.371} & -\\
& $A\uparrow$ & - & - & 0.561 & \textcolor{green}{0.594} & - & \textcolor{red}{0.637} & \textcolor{blue}{0.599} & - & \textcolor{blue}{0.599} & 0.588 & 0.583 & \textcolor{green}{0.594} & -\\
& $-\ln R_S\downarrow$ & - & - & 0.438 & 0.461 & - & 0.421 & 0.482 & - & 0.323 & \textcolor{green}{0.284} & \textcolor{blue}{0.280} & \textcolor{red}{0.265} & -\\
& $rtEAO\uparrow$ & - & - & 0.110 & 0.287 & - & - & 0.285 & - & 0.328 & \textcolor{blue}{0.353} & \textcolor{green}{0.340} & \textcolor{red}{0.354} & - \\
\Xhline{1.1pt}
\end{tabular}
\vspace{-2.5mm}
\label{table:vot2}
\end{table*}
We also report the comparison results on the \textit{VOT2018/2019} short-term challenges in Table~\ref{table:vot2}. Since the \textit{VOT2017} dataset had not been saturated by the time the \textit{VOT2018} challenge was held, the dataset was used unchanged in the \textit{VOT2018} short-term challenge. In the \textit{VOT2019} short-term challenge, it was decided to refresh the public dataset by replacing the 12 least difficult sequences with more challenging ones. As can be seen in Table~\ref{table:vot2}, our RLS-DiMP surpasses all its corresponding baselines on both \textit{VOT2018} and \textit{VOT2019} by significantly reducing the number of tracking failures, thanks to the proposed memory retention mechanism.
\subsection{Complexity Analyses}
\label{Section:experimentComplexity}
In Table~\ref{table:ablationstudy_RLS_RTMDNet} and Table~\ref{table:ablationstudy_RLS_DiMP}, we also show some detailed comparisons regarding the efficiency of our proposed approach by reporting the running speed and GPU memory consumption for both our proposed methods and the re-implemented baselines. It can be seen that our approach enables us to unveil the power of RT-MDNet and DiMP trackers without incurring too much extra computational cost.
From Section~\ref{Sec:RLS-Formulation} we can see that our approach involves some matrix multiplication operations, which leads to a runtime of ${\mathcal {O}}(p^2)$ with respect to the dimension $p$ of the input $\mathbf{x}$. However, all these operations are executed by the GPU along with the other neural network operations, so the final tracking speed is only slightly affected. To achieve memory retention, our approach requires the trackers to maintain the matrix $\mathbf{P}_{n}$ throughout the whole tracking phase and to update it with the above matrix multiplication operations, which results in an ${\mathcal {O}}(p^2)$ memory requirement on the GPU. This factor has little impact on the GPU memory consumption of our RLS-RTMDNet, while it affects RLS-DiMP much more due to the $img2col$ operation and its resulting high-dimensional input for online learning ($p = 4\times 4 \times 512$). We thus use half-float precision for maintaining and updating $\mathbf{P}_{n}$, which reduces the RLS's GPU memory usage significantly.
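The numbers are easy to verify: with $p = 4\times4\times512 = 8192$, a dense $\mathbf{P}_{n}$ occupies
\begin{verbatim}
p = 4 * 4 * 512             # 8192
print(p * p * 4 / 2**20)    # 256.0 MiB in single precision
print(p * p * 2 / 2**20)    # 128.0 MiB in half precision
\end{verbatim}
and the roughly 128 MiB figure is consistent with the gap in GPU memory consumption between RLS-DiMP and its baselines in Table~\ref{table:ablationstudy_RLS_DiMP}.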
Finally, we can use some code optimization techniques to further improve the efficiency, though it is outside the scope of this work.
\section{Conclusion}
We present a recursive least-squares estimator-aided network online learning approach that allows in-built memory retention mechanism for improving few-shot online adaptation in tracking. We apply it to two networks in the online learning families for tracking: the MLPs-based as in RT-MDNet and the ConvNets-based as in DiMP. The experiments demonstrate its efficiency in reducing the risk of overfitting through memory retention while maintaining adaptive capacity. This can be attributed to the reason that this recursive learning approach enables the optimization to approximately converge to a {\it global} point with respect to all the historical training samples ever seen including the discarded old data. An interesting direction for future work is to integrate it into the meta-learning-based offline training framework to gain further improvements.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
% regular IEEE prefers the singular form
\section*{Acknowledgment}
\fi
\HLredrevision{The authors would like to thank the anonymous reviewers for their valuable comments and suggestions.}
\bibliographystyle{IEEEtran}
\section{Introduction}
Accurate and efficient simulation of waves is important in many areas in science and engineering due to the ability of waves to carry information over large distances. This ability stems from the fact that waves do not change shape in free space. On the other hand when the background medium is changing this induces a change in the wave forms that propagate through the medium and the waves can be used for probing the interior material properties of objects.
In order to preserve the properties of waves from the continuous setting it is preferable to use high order accurate discretizations that are able to control dispersive errors. The development of high order methods for wave propagation problems has been an active area of research for a long time and there are by now many attractive methods. Examples include (but are not limited to) finite difference methods, \cite{SBP,Virta2014,Wang2016,Petersson2018,Hagstrom2012}, embedded boundary finite differences, \cite{appelo2012fourth,FCAD1,FCAD2,Li2004295,Wandzura2004763}, element based methods like discontinuous Galerkin (DG) methods, \cite{Wilcox:2010uq,Upwind2,ChouShuXing2014,ChungEngquist06,ChungEngquist09,GSSwave}, hybridized discontinuous Galerkin (HDG) methods, \cite{Nguyen2011,Stanglmeier2016}, cut-cell finite elements \cite{STICKO2016364,sticko2016higher} and Galerkin-difference methods \cite{BANKS2016310}.
An advantage of summation-by-parts finite differences and Galerkin type methods is that stability is guaranteed, but this guarantee also comes with some drawbacks. For diagonal norm summation-by-parts finite differences, the order of accuracy near boundaries is reduced to roughly half of that in the interior. Further, the need for multi-block grids also restricts the geometrical flexibility.
As DG and HDG methods are naturally formulated on unstructured grids they have good geometric flexibility. However, Galerkin based polynomial methods often have the drawback that they require small timesteps when combined with explicit timestepping methods (the Galerkin-difference and cut-cell finite element methods are less affected by this); on the other hand they preserve high order accuracy all the way up to the boundary and it is easy to implement boundary conditions independent of the order of the method.
The pioneering work by Henshaw and co-authors, see for example \cite{chess1990}, describe techniques for generating overset grids as well as how they can be used to solve elliptic and first order time-dependent partial differential equations (PDE) by second order accurate finite differences.
In an overset grid method the geometry is discretized by narrow body-fitted curvilinear grids while the volume is discretized on one or more Cartesian grids. The generation of such body-fitted grids is local and typically produces grids of very high quality, \cite{OGEN}. The grids overlap (we say that they are overset) so that the solution on an interior (often referred to as non-physical or ghost) boundary can be transferred from the interior of another grid. In \cite{chess1990} and in most other overset grid methods the transfer of solutions between grids is done by interpolation.
Since the bulk of the domain can be discretized on a Cartesian grid the efficiency asymptotically approaches that of a Cartesian solver but still retains the geometrical flexibility of an unstructured grid method.
\rr{We note that the same type of efficiency can be expected for embedded boundary and cut-cell finite elements. A difference is that overset grid methods typically have smoother errors near physical boundaries and this may be important if quantities that include derivatives of the solution, such as traction or strain, are needed. }
Here we are concerned with the approximation of the scalar wave equation on overset grids. To our knowledge, high order overset grid methods for wave equations in second order form have been restricted to finite difference discretizations. For example, in \cite{henshaw:1730} high order centered finite difference approximations to Maxwell's equations (written as a system of second order wave equations) was introduced. More recently, in \cite{ANGEL2018534}, the upwind discretizations by Banks and Henshaw introduced in \cite{BANKS20125854} were generalized to overset grids. \bb{In \cite{Hagstrom2012} convergence at 11th order for a finite difference method is demonstrated.} A second order accurate overset grid method for elastic waves can be found in \cite{smog}.
We use the recently introduced dissipative Hermite methods for the scalar wave equation in second order form, \cite{secondHermite}, for the approximation on Cartesian grids. To handle geometry we use the energy based DG methods of \cite{Upwind2} on thin grids that are grown out from physical boundaries. We use projection to transfer the solutions between grids rather than interpolation.
Both the Hermite and DG methods we employ increase the order of accuracy by increasing the number of degrees of freedom on an element or cell. This has practical implications for grid generation as a single grid with minimal overlap can be used independent of order, reducing the complexity of the grid generation step. This can be important for example in problems like optimal shape design, where the boundary changes throughout the optimization. This is different from the finite difference methods where, due to the wider finite difference stencils, the overlap must grow as the order is increased.
The transfer of solutions between overset grids typically causes a perturbation to the discrete operators which, especially for hyperbolic problems, results in instabilities, see \cite{smog} for example. These instabilities are often weak and can thus be suppressed by a small amount of artificial dissipation. There are two drawbacks of this added dissipation, first it is often not easy to determine the suitable amount needed, i.e. big enough to suppress instabilities but small enough not to reduce the accuracy or timestep too severely. Second, in certain cases the instabilities are strong enough that the dissipation must scale with the discretization parameter (the grid size) in such a way that the order of accuracy of the overall method is reduced by one.
Similar to~\cite{ANGEL2018534}, we use a dissipative method whose naturally built-in damping is sufficient to suppress the weak instabilities caused by the overset grids. The order of the hybrid overset grid method is the design order of the Hermite method or the DG method, whichever is smaller.
In the hybrid H--DG overset grid method the Hermite method is used on a Cartesian grid in the interior of the domain, and the discontinuous Galerkin method on another, curvilinear grid at the boundary. The numerical solution is evolved independently on these grids for one timestep of the Hermite method. By using the Hermite method in the interior the strict timestep constraints of the DG method are relaxed by a factor that grows with the order of the method. Asymptotically, as discussed above, the complexity of the hybrid H--DG solver approaches that of the Cartesian Hermite solver~\cite{secondHermite}.
The paper is organized as follows. The Hermite method is described in the next section. We first explain the method in simple one dimensional case and then explain how the method generalizes to two dimensions. The DG method is described in section~\ref{sec:dg}. The details of the overset grids and a hybridization of the DG and the Hermite methods are described in section~\ref{sec:overset}. We illustrate the hybrid H--DG method with numerical simulations in the section~\ref{sec:experiment}.
\section{Dissipative Hermite method for the scalar wave equation} \label{sec:Hermite}
We present the Hermite method in some detail here and refer the reader to the original work~\cite{secondHermite} for convergence analysis and error estimates.
Consider the one dimensional wave equation in second order form in space and first order in time
\begin{eqnarray}
u_t &=& v, \label{eq1a} \\
v_t &=& c^2 u_{xx} + f, \ \ x\in \Omega, \ \ t \in (0, T). \label{eq1b}
\end{eqnarray}
Here $u \in C^{2m+3}\left(\Omega \times [0, T]\right)$, $v \in C^{2m+1} \left(\Omega \times [0,T] \right)$ and $f \in C^{2m+1}(\Omega \times [0, T])$ for optimal convergence.
We refer to $u$ as the displacement, and $v$ as the velocity. The speed of sound is $c$. We consider boundary conditions of Dirichlet or Neumann type
\begin{eqnarray}
u(t,x) &=& h_0(t,x), \ \ x \in \partial \Omega_D, \nonumber \\
u_x(t,x) &=& h_1(t,x), \ \ x \in \partial \Omega_N, \nonumber
\end{eqnarray}
and initial conditions
\begin{eqnarray*}
u(0,x) &=& g_0(x), \\
v(0,x) &=& g_1(x).
\end{eqnarray*}
Let the spatial domain be $\Omega = [a, b]$. The domain will be discretized by a primal grid
\[
x_{i} = a + ih, \ \ h = (b-a)/N,\ \ i = 0,\dots, N,
\]
and a dual grid
\[
x_i = a + ih, \ \, i = \frac12,\dots,N-\frac12.
\]
The use of staggered grids \rr{allows us to evaluate the derivatives of the polynomial approximations at the cell center, rather than throughout the cell as in most other element based methods. The slow growth with polynomial degree of the derivative approximations near the cell centers (see \cite{TAMECFL}) allows us to use timesteps that are bounded by the speed of sound and not by the degree of the polynomial.}
In time we discretize using a uniform grid with increments $\Delta t / 2$, that is
\begin{equation}
t_n = n \Delta t, \ n = 0,1/2,1,\ldots. \nonumber
\end{equation}
At each grid point $x_i$ the approximation to the solution is represented by its degrees of freedom (DOF) that approximate the values and spatial derivatives of $u$ and $v$. Equivalently, the approximations to $u$ and $v$ can be represented as polynomials centered at grid points $x_i$. The Taylor coefficients of these polynomials are scaled versions of the degrees of freedom. To achieve the optimal order of accuracy $(2m+1)$ we require the $(m+1)$ and $m$ first derivatives of $u$ and $v$ respectively to be stored at each grid point.
At the initial time (which we take to be $t = 0$) these polynomials are approximations to the initial condition on the primal grid
\begin{align}
u(x,0) &\approx \sum_{l=0}^{m+1} \hat{u}_{l} \bigg(\frac{x-x_i}h\bigg)^l \equiv p_i(x), \ \ i = 0,\dots,N,\nonumber \\
v(x,0) &\approx \sum_{l=0}^{m} \hat{v}_l \bigg(\frac{x-x_i}h\bigg)^l \equiv q_i(x),
\ \ i = 0,\dots,N. \nonumber
\end{align}
The coefficients $\hat{u}_{l}$ and $\hat{v}_{l}$ are assumed to be accurate approximations to the scaled Taylor coefficients of the initial data. If expressions for the derivatives of the initial data are known we simply set
\begin{equation}
\hat{u}_l = \frac{h^l}{l!}\frac{d^l g_0}{dx^l}\bigg|_{x=x_i},\ \
\hat{v}_l = \frac{h^l}{l!}\frac{d^l g_1}{dx^l}\bigg|_{x=x_i}. \label{proj}
\end{equation}
Alternatively, if only the functions $g_0$ and $g_1$ are known, we may use a projection or interpolation procedure to find the coefficients in (\ref{proj}).
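As a concrete illustration, the following sketch (a minimal example of ours, using the \texttt{sympy} library for the symbolic differentiation; the function name is hypothetical) computes the scaled coefficients in (\ref{proj}) for given initial data:
\begin{verbatim}
import sympy as sp

def initial_dofs(g, x, xi, h, num):
    # Scaled Taylor coefficients u_l = h^l/l! * d^l g/dx^l at x = xi,
    # for l = 0, ..., num-1, cf. equation (proj) above.
    return [float(h**l / sp.factorial(l) * sp.diff(g, x, l).subs(x, xi))
            for l in range(num)]

x = sp.symbols('x')
g0 = sp.sin(sp.Rational(15, 2) * sp.pi * x)  # an example initial displacement
print(initial_dofs(g0, x, 0.0, 0.1, 4))      # m+2 = 4 coefficients for m = 2
\end{verbatim}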
The numerical algorithm for a single timestep consists of two phases, an interpolation step and an evolution step. First, during the interpolation phase the spatial piecewise polynomials are constructed to approximate the solution at the current time. Then, in the time evolution phase we use the spatial derivatives of the interpolation polynomials to compute time derivatives of the solution using the PDE. We compute new values of the DOF on the next time level by evaluating the obtained Taylor series. We now describe each step separately.
\subsection{Hermite interpolation}\label{ssec:Hermite:int}
At the beginning of a timestep at time $t_n$ (or at the initial time) we consider a cell $[x_i, x_{i+1}]$ and construct the unique local Hermite interpolant of degree $(2m+3)$ for the displacement and degree $(2m+1)$ for the velocity. The interpolating polynomials are centered at the dual grid points $x_{i+\frac12}$ and can be written in Taylor form
\begin{eqnarray}
p_{i+\frac12}(x) &=& \sum_{l=0}^{2m+3} \hat{u}_{l,0} \bigg(\frac{x-x_{i+\frac12}}h\bigg)^l,\ \ x\in[x_{i}, x_{i+1}],
\ \ i = 0,\dots,N-1, \label{hi:p} \\
q_{i+\frac12}(x) &=& \sum_{l=0}^{2m+1} \hat{v}_{l,0} \bigg(\frac{x-x_{i+\frac12}}h\bigg)^l,\ \ x\in[x_{i}, x_{i+1}],
\ \ i = 0,\dots,N-1. \label{hi:q}
\end{eqnarray}
The interpolants $p_{i+\frac12}$ and $q_{i+\frac12}$ are determined by the local interpolation conditions:
\begin{align}
\frac{d^l p_{i+\frac12}}{dx^l} &= \frac{d^l p_i}{dx^l}\bigg|_{x=x_i},\ \
\frac{d^l p_{i+\frac12}}{dx^l} = \frac{d^l p_{i+1}}{dx^l}\bigg|_{x=x_{i+1}}, \ \ l = 0,\dots,m+1,
\nonumber \\
\frac{d^l q_{i+\frac12}}{dx^l} &= \frac{d^l q_i}{dx^l}\bigg|_{x=x_i},\ \
\frac{d^l q_{i+\frac12}}{dx^l} = \frac{d^l q_{i+1}}{dx^l}\bigg|_{x=x_{i+1}}, \ \ l = 0,\dots,m.
\nonumber
\end{align}
We find the coefficients in \eqref{hi:p} and \eqref{hi:q} by forming a generalized Newton table as described in \cite{hagstrom2015solving}.
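For concreteness, the following sketch determines the coefficients of the one-dimensional interpolant \eqref{hi:p} by solving the small linear system defined by the interpolation conditions directly (a minimal illustration of ours; the generalized Newton table of \cite{hagstrom2015solving} is the faster and more stable choice in practice):
\begin{verbatim}
import numpy as np
from math import factorial

def hermite_coefficients(du_left, du_right, h):
    # Coefficients c_k of p(x) = sum_k c_k ((x - x_mid)/h)^k on a cell of
    # width h, matching du_left[l] = d^l u/dx^l at the left endpoint and
    # du_right[l] at the right endpoint, l = 0, ..., m+1.
    m2 = len(du_left)        # m+2 conditions per endpoint
    n = 2 * m2               # number of coefficients, degree 2m+3
    A = np.zeros((n, n)); rhs = np.zeros(n)
    for row0, z, data in ((0, -0.5, du_left), (m2, 0.5, du_right)):
        for l in range(m2):  # l-th derivative condition at x_mid + z*h
            for k in range(l, n):
                A[row0 + l, k] = (factorial(k) // factorial(k - l)
                                  * z ** (k - l) / h ** l)
            rhs[row0 + l] = data[l]
    return np.linalg.solve(A, rhs)
\end{verbatim}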
\subsection{Time evolution}\label{ssec:Hermite:ts}
To evolve the solution in time we further expand the coefficients of $p_{i+\frac12}$ and $q_{i+\frac12}$. At each point on the dual grid, $x_{i+\frac12}$ we seek temporal Taylor series
\begin{eqnarray}
p_{i+\frac12}(x,t) &=& \sum_{l=0}^{2m+3}\sum_{s=0}^{\kappa_p}
\hat{u}_{l,s} \bigg(\frac{x-x_{i+\frac12}}h\bigg)^l\bigg(\frac{t}{\Delta t}\bigg)^s,\label{tts:p} \\
q_{i+\frac12}(x,t) &=& \sum_{l=0}^{2m+1}\sum_{s=0}^{\kappa_q}
\hat{v}_{l,s} \bigg(\frac{x-x_{i+\frac12}}h\bigg)^l\bigg(\frac{t}{\Delta t}\bigg)^s,\label{tts:q}
\end{eqnarray}
where $\kappa_p = (2m+3-2\lceil \frac l2 \rceil)$ and $\kappa_q=(2m+1-2\lfloor \frac l2 \rfloor)$. The coefficients $ \hat{u}_{l,0}$ and $ \hat{v}_{l,0}$ are given
by the coefficients of \eqref{hi:p} and \eqref{hi:q}. At this time the scaled time derivatives, $ \hat{u}_{l,s}$ and $ \hat{v}_{l,s}$ for $s > 0$, are unknown and must be determined. Once they are determined we may simply evaluate (\ref{tts:p}) and (\ref{tts:q}) at $t = t_n + \Delta t /2$ to find the solution at the next half timestep.
In Hermite methods the coefficients of temporal Taylor polynomials are determined by collocating the differential equation, \cite{good2006,secondHermite,hagstrom2015solving}. In particular, by differentiating \eqref{eq1a} and \eqref{eq1b} in space and time the time derivatives of the solution can be directly expressed in terms of spatial derivatives
\begin{eqnarray}
\frac{\partial^{s+1+r} u }{\partial t^{s+1} \partial x^{r}} &=& \frac{\partial^{s+r} v}{\partial t^{s} \partial x^{r}}, \label{deq1a} \\
\frac{\partial^{s+1+r} v }{\partial t^{s+1} \partial x^{r}} &=& c^2 \frac{\partial^{s + r + 2} u }{\partial t^{s} \partial x^{r+2}} + \frac{\partial^{s+r} f }{\partial t^{s} \partial x^{r}}. \label{deq1b}
\end{eqnarray}
Substituting \eqref{tts:p} and \eqref{tts:q} into \eqref{deq1a}
and \eqref{deq1b} and evaluating at $x = x_{i+\frac12}$ and $t = t_n$,
we can match the powers of the coefficients to find the recursion relations
\begin{eqnarray}
\hat{u}_{l,s+1} &=& \frac{\Delta t}{s+1} \hat{v}_{l,s}, \label{tcofsa} \\
\hat{v}_{l,s+1} &=& c^2\frac{(l+1)(l+2)}{h^2} \frac{\Delta t}{s+1} \hat{u}_{l+2,s} + \frac{\Delta t}{s+1} \hat{f}_{l,s}. \label{tcofsb}
\end{eqnarray}
Here $\hat{f}_{l,s}$ are the coefficients of the Taylor expansion of $f$, or of the polynomial which interpolates $f(t, x_{i+1/2})$ in time around $t = t_n$. Note that since there are a finite number of coefficients, representing the spatial derivatives at the time $t_n$, the recursions truncate and only $\kappa_p$ and $\kappa_q$ terms need to be considered.
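The recursion takes only a few lines to implement. The sketch below (a minimal one-dimensional illustration of ours, with the forcing taken to be zero) fills in the space-time Taylor coefficients from \eqref{tcofsa} and \eqref{tcofsb}:
\begin{verbatim}
import numpy as np

def space_time_coefficients(u0, v0, c, h, dt):
    # Fill u_hat[l,s], v_hat[l,s] from the recursion, starting from the
    # spatial coefficients u0[l], l = 0..2m+3 and v0[l], l = 0..2m+1.
    # The recursion truncates by itself once the spatial degree is used up.
    nu, nv = len(u0), len(v0)
    ns = nu                                    # at least kappa_p stages
    u_hat = np.zeros((nu, ns + 1)); u_hat[:, 0] = u0
    v_hat = np.zeros((nv, ns + 1)); v_hat[:, 0] = v0
    for s in range(ns):
        for l in range(nv):
            u_hat[l, s + 1] = dt / (s + 1) * v_hat[l, s]
            u2 = u_hat[l + 2, s] if l + 2 < nu else 0.0
            v_hat[l, s + 1] = (c**2 * (l + 1) * (l + 2) / h**2
                               * dt / (s + 1) * u2)
    return u_hat, v_hat
\end{verbatim}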
To complete a half timestep we evaluate the approximation at $t =t_n+ \frac{\Delta t}{2}$ for the $(m+1)$ and $m$ first derivatives
\begin{align}
& \frac{\partial^l u}{\partial x^l} (x_{i+\frac12},t_{n+\frac{1}{2}}) \approx \frac{\partial^l p_{i+\frac12}}{\partial x^l} (x_{i+\frac12},t_{n+\frac{1}{2}})
= \frac{l!}{h^l} \sum_{s=0}^{\kappa_p} \frac{\hat{u}_{l,s}}{2^s} , \ \ l = 0,\dots, m+1,
\label{tsa}\\
& \frac{\partial^l v}{\partial x^l} (x_{i+\frac12},t_{n+\frac{1}{2}}) \approx \frac{\partial^l q_{i+\frac12}}{\partial x^l} (x_{i+\frac12},t_{n+\frac{1}{2}})
=\frac{l!}{h^l}
\sum_{s=0}^{\kappa_q} \frac{\hat{v}_{l,s}}{2^s}, \ \ l = 0,\dots, m. \label{tsb}
\end{align}
\begin{rem}
\rr{A remarkable feature of Hermite methods is that (independent of order of accuracy) since the initial data for each cell is a polynomial the time evolution is exact whenever the following conditions are met: 1.) The recursion relations (\ref{tcofsa}) and (\ref{tcofsb}) are run until they truncate, 2.) The forcing is zero (or a polynomial of degree $2m+1$), 3.) Each cell $[x_i, x_{i+1}]$ includes the base of the domain of dependence of the solution at a dual grid point $x_{i+\frac12}$ at time $t = \frac {\Delta t}2$ (see e.g. \cite{secondHermite}). The latter condition can also be stated as a CFL condition}
\[
c\frac{\Delta t}{2} \le \frac{h}{2}.
\]
\rr{
In the present method we do not quite achieve this optimal CFL condition but have verified numerically that our solvers of orders of accuracy 3, 5 and 7 are stable for $c\Delta t \le 0.75 h$.}
\end{rem}
\bb{
\subsubsection{Variable coefficients}
For problems with a variable wave speed the acceleration is governed by
\begin{equation}
v_t = \underbrace{(c^2(x) u_x)_x}_{s(x)}. \label{eq:vc1}
\end{equation}
To compute $v_t, v_{tt}, v_{ttt}, $ etc., needed to evolve the solution by a Taylor series method, we must evaluate the right hand side of (\ref{eq:vc1}). This is done in sequence by forming the polynomial $s(x)$ by operations on polynomials. Let $p \approx u$ and $a \approx c^2$ be polynomials approximating $u(t,x)$ and $c^2(x)$ then
\[
s(x) = \mathcal{D} (a(x) \otimes (\mathcal{D} p(t,x))).
\]
Here $\mathcal{D}$ denotes polynomial differentiation and $\otimes$ represents polynomial multiplication with degree truncation to the degree of $p$. Computation of $v_t, v_{tt}$ etc. can now be done by forming $s(x)$ with $p \approx u, u_t, u_{tt}$, etc.
}
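In one dimension this is a couple of lines of polynomial algebra. The following sketch (a minimal illustration of ours, using \texttt{numpy}'s polynomial utilities on monomial coefficient arrays) forms the polynomial $s(x)$:
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P

def rhs_poly(p, a):
    # s(x) = D( a(x) * (D p)(x) ), with the product truncated back to the
    # degree of p. p and a are coefficient arrays, lowest degree first.
    dp = P.polyder(p)          # D p
    prod = P.polymul(a, dp)    # a * D p
    prod = prod[: len(p)]      # degree truncation
    return P.polyder(prod)     # D( a * D p )
\end{verbatim}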
\subsection{Imposing boundary conditions for the Hermite method}\label{ssec:Hermite:bc}
\bb{In the hybrid Hermite-DG overset grid method, physical boundary conditions can be imposed on any grid that discretizes the boundary. For example, in the numerical experiments in Section \ref{ss:body}, the boundary conditions are imposed on both grids. In this section we explain how physical boundary conditions are imposed for the Hermite method and a Cartesian grid.}
Physical boundary conditions are enforced at the half time level, i.e.\ when the solution on the dual grid is to be advanced back to the primal grid. As there are many degrees of freedom that are located on the boundary, the physical boundary condition must be augmented by the differential equation to generate more independent conditions so that the degrees of freedom can be uniquely determined. The basic principle, often referred to as compatibility boundary conditions (see e.g. \cite{henshaw:1730}), is to take tangential derivatives of the boundary conditions and combine these with the PDE.
For example, assume we want to impose the boundary condition
\begin{equation}
u(t,0) = g(t). \label{e:bc}
\end{equation}
\rr{Then, as $x_0=0$ is a boundary grid point, the Taylor polynomials \eqref{tts:p}--\eqref{tts:q}, centered at $x_0$, should satisfy the boundary condition \eqref{e:bc} and compatibility conditions (i.e.\ conditions for the derivatives) that one obtains by differentiating \eqref{e:bc} in time and then replacing time derivatives of $u$ in favor of spatial derivatives by using the wave equation.}
We thus seek a polynomial outside the domain which together with the polynomial just inside the boundary forms a Hermite interpolant that satisfies the boundary and compatibility conditions.
\rr{Precisely, to evolve the solution on the boundary we must determine the $2(m+2)$ and $2(m+1)$ coefficients of the polynomials approximating $u$ and $v$ at the boundary. For example, for $u$ this polynomial must interpolate the $(m+2)$ data describing the current approximation of $u$ at the dual grid point next to the boundary; this yields $(m+2)$ independent linear equations. The remaining $(m+2)$ independent linear equations can be obtained by requiring that the polynomial satisfies the boundary condition $u(0,t) = g(t)$ and its time derivatives as described above.}
Once the interpolant is determined on the boundary we evolve it as in the interior (see section \ref{ssec:Hermite:ts}).
\begin{rem}
We note that in the special case of a flat boundary and homogeneous Dirichlet or Neumann boundary conditions, enforcing the boundary conditions reduces to enforcing that the polynomial on the boundary is either odd or even, respectively, in the normal direction. The correct odd polynomial can then be obtained by constructing the polynomial outside the domain $\Omega$ (often referred to as a ghost polynomial) by mirroring the coefficients: those corresponding to even powers in the normal coordinate variable with a negative sign and those corresponding to odd powers with the same sign.
\end{rem}
Boundary conditions at interior overset grid boundaries are supplied by projection of the known solutions from other grids and will be discussed below.
\subsection{Higher dimensions}
In higher dimensions the approximations to $u$ and $v$ take the form of centered tensor product Taylor polynomials. In two dimensions (plus time) the coefficients would be of the form \bb{$\hat u_{k,l,s}$}, with the two first indices representing the powers in the two spatial directions, and the third representing time.
For the scalar wave equation
\begin{align*}
u_t &= v, \\
v_t &= c^2 (u_{xx}+u_{yy}), \ \ (x,y) \in \Omega, \ \ t > 0,
\end{align*}
the recursion relations for computing the time derivatives are a straightforward generalization of the one dimensional case
\begin{align}
& \bb{\hat u}_{k,l,s} = \frac{\Delta t}{s} \bb{\hat v}_{k, l, s-1}, \label{eq:recursion_ut} \\
& \bb{\hat v}_{k, l, s} =
c^2\frac{(k+2)(k+1)}{s} \frac{\Delta t}{h_x^2} \bb{\hat u}_{k+2, l, s-1} +
c^2\frac{(l+2)(l+1)}{s} \frac{\Delta t}{h_y^2} \bb{\hat u}_{k, l+2, s-1}. \label{eq:recursion_vt}
\end{align}
As noted in \cite{secondHermite}, using this recursion for all the time derivatives does not produce a method with an order-independent CFL condition but a method whose timestep size decreases slightly as the order increases. For optimally large timesteps it is necessary to use the special start up procedure
\begin{align*}
\bb{\hat u}_{k,l,1} &= \Delta t\, \bb{\hat v}_{k, l,0}, \\
\bb{\hat v}_{k, l, 1} &= \Delta t c^2 \left( \frac{(k+2)(k+1)}{h_x^2} \bb{\hat u}^X_{k+2, l} +
\frac{(l+2)(l+1)}{h_y^2} \bb{\hat u}^Y_{k,l+2} \right).
\end{align*}
Here $\bb{\hat u}^X_{k, l}$ are the $(2m+4) \times (2m+2)$ coefficients of the interpolating polynomial of degree $(2m+3)$ in $x$ and degree $(2m+1)$ in $y$ and $\bb{\hat u}^Y_{k, l}$ are the $(2m+4) \times (2m+2)$ coefficients of the interpolating polynomial of degree $(2m+3)$ in $y$ and degree $(2m+1)$ in $x$. For the remaining coefficients $s = 2\ldots, 4m+3$ we use (\ref{eq:recursion_ut}) and (\ref{eq:recursion_vt}) with $k,l = 0,\ldots,2m+1$. Further details of the two dimensional method can be found in \cite{secondHermite}.
\section{Energy based discontinuous Galerkin methods for the wave equation} \label{sec:dg}
Our spatial discontinuous Galerkin discretization is a direct application of the energy based formulation described for general second order wave equations in \cite{Upwind2,el_dg_dath,fluid_solid_DG}. Here, our energy based DG method starts from the energy of the scalar wave equation
\[
H(t) = \int_{\Omega} \frac{v^2}{2} + G(x,y,\nabla u) \, d \Omega,
\]
where $ G(x,y,\nabla u) = \frac{c^2(x,y)}{2} |\nabla u|^2 $ is the potential energy density, $v$ is the velocity or the time derivative of the displacement, $v = u_t$.
Now, the wave equation, written as a second order equation in space and first order in time takes the form
\[
u_t = v, \ \ v_t = - \delta G,
\]
where $\delta G$ is the variational derivative of the potential energy
\[
\delta G = - \nabla \cdot (c^2(x,y) \nabla u).
\]
For the continuous problem the change in energy is
\begin{equation}
\frac{d H(t)}{dt} = \int_{\Omega} v v_t + u_t \left[ \nabla \cdot (c^2(x,y) \nabla u) \right] \,d \Omega = [ u_t (n \cdot (c^2(x,y) \nabla u))]_{\partial \Omega}, \nonumber
\end{equation}
where the last equality follows from integration by parts together with the wave equation.
A variational formulation that mimics the above energy identity can be obtained if the equation $v-u_t=0$ is tested with the variational derivative of the potential energy. Let $\Omega_j$ be an element and $(\Pi^{q_u}(\Omega_j))^2$ and $(\Pi^{q_v}(\Omega_j))^2$ be the spaces of tensor product polynomials of degrees $q_u$ and $q_v=q_u-1$. Then, the variational formulation on that element is:
\begin{problem}
Find $v^h \in (\Pi^{q_v}(\Omega_j))^2$, $u^h \in (\Pi^{q_u}(\Omega_j))^2$ such that for all
$\psi \in (\Pi^{q_v}(\Omega_j))^2$, $\phi \in (\Pi^{q_u}(\Omega_j))^2$
\begin{align}
\int_{\Omega_j} (c^2 \nabla \phi) \cdot \left( \frac {\partial \nabla u^h}{\partial t}-\nabla v^h \right) d \Omega & =
[ (c^2 \nabla \phi ) \cdot n \left( v^{\ast}-v^h \right)]_{\partial \Omega_j},
\label{var1} \\
\int_{\Omega_j} \psi \frac {\partial v^h}{\partial t} + c^2 \nabla \psi \cdot \nabla u^h \, d\Omega& =
[\psi \, (c^2\,\nabla u \cdot n)^{\ast}]_{\partial \Omega_j}. \label{var2}
\end{align}
\end{problem}
Let $[[f]]$ and $\{f\}$ denote the jump and average of a quantity $f$ at the interface between two elements, then, choosing the numerical fluxes as
\begin{align}
v^{\ast} &= \{v^h\} -\tau_1 [[c^2\,\nabla u^h \cdot n]], \nonumber\\
(c^2\,\nabla u \cdot n)^{\ast} &= \{ c^2\,\nabla u^h \cdot n \} -\tau_2 [[v^h]], \nonumber
\end{align}
yields a contribution $ -\tau_1 ([[c^2\,\nabla u^h \cdot n]])^2 -\tau_2 ([[v^h]])^2$ from each element face to the change of the discrete energy, guaranteeing that
\[
\frac{d H^h(t)}{dt} \equiv \frac{d}{dt} \sum_{j} \int_{\Omega_j} \frac{(v^h)^2}{2} + G(x,y,\nabla u^h) \, d\Omega \le 0.
\]
Physical boundary conditions are enforced through the numerical fluxes, see \cite{Upwind2} for details.
Note that the above energy estimate follows directly from the formulation \eqref{var1} - \eqref{var2} but as the energy is invariant to constants equation \eqref{var1} must be supplemented by the equation
\begin{equation}
\int_{\Omega_j} \left( \frac {\partial u^h}{\partial t} - v^h \right) d \Omega = 0. \nonumber
\end{equation}
Our implementation uses quadrilaterals and approximations by tensor product Chebyshev polynomials of the solution on the reference element $(r,s) \in [-1,1]^2$. That is, on each quadrilateral we have approximations of the form
\begin{align}
u(x(r,s),y(r,s),t_n) &\approx \sum_{l=0}^{q_u}
\sum_{k=0}^{q_u} c_{lk}
T_l (r)
T_k(s), \nonumber \\
v(x(r,s),y(r,s),t_n) &\approx \sum_{l=0}^{q_v}
\sum_{k=0}^{q_v} d_{lk}
T_l (r)
T_k(s). \nonumber
\end{align}
We choose $\tau_1 = \tau_2 = 1/2$ (so called upwind or Sommerfeld fluxes), which results in methods where $u$ is observed to be $q_u+1$ order accurate in space \cite{Upwind2}. \bb{We note that another basis, like Legendre polynomials, could also be used. In fact we have repeated some of the long time computations in the numerical experiments section below to confirm that a change of basis to Legendre polynomials does not affect the stability or accuracy properties of the method.}
\subsection{Taylor series time-stepping}\label{ssec:DG:ts}
In order to match the order of accuracy in space and time for the DG method we employ Taylor series time-stepping. Assuming that all the degrees of freedom have been assembled into a vector ${\bf w}$ we can write the semi-discrete method as ${\bf w}_t = A {\bf w} $ with $A$ being the matrix representing the spatial discretization. If we know the discrete solution at the time $t_n$ we can advance it to the next time step $t_{n+1} = t_n + \Delta t$ by the simple formula
\begin{eqnarray*}
{\bf w}(t_{n}+\Delta t) &=& {\bf w}(t_{n}) + \Delta t {\bf w}_t(t_{n}) + \frac{(\Delta t)^2}{2!}{\bf w}_{tt}(t_{n}) + \ldots \\
&=& {\bf w}(t_{n}) + \Delta t A {\bf w}(t_{n}) + \frac{(\Delta t)^2}{2!} A^2 {\bf w}(t_{n}) + \ldots
As we use dissipative fluxes, this timestepping method is stable as long as the number of stages in the Taylor series is greater than the order of accuracy in space and the timestep is small enough.
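For a sparse or matrix-free $A$ the series is best evaluated with repeated matrix-vector products rather than explicit powers of $A$. A minimal sketch of ours:
\begin{verbatim}
import numpy as np

def taylor_step(A, w, dt, stages):
    # One Taylor series timestep: w(t+dt) ~ sum_{k=0}^{stages} dt^k/k! A^k w.
    # Each term is built from the previous one with a single mat-vec product.
    result = w.copy()
    term = w.copy()
    for k in range(1, stages + 1):
        term = (dt / k) * (A @ term)
        result = result + term
    return result
\end{verbatim}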
\section{Overset grid methods} \label{sec:overset}
In this section we explain how we use the two discretization techniques described above on overset grids to approximate solutions to the scalar wave equation.
The idea behind the overset grid methods is to cover the bulk of the domain with a Cartesian grid, where efficient methods can be employed, and to discretize the geometry with narrow body-fitted grids. In Figure \ref{fig:composite_grid} we display two overset grids, a blue Cartesian grid, which we denote $a$, and a red curvilinear grid, which we denote $b$, that are used to discretize a geometry consisting of a circular hole cut out from a square region. Note that the grids overlap, hence the name overset grids. Also, note that the annular grid cuts out a part of the Cartesian grid. This cut of the Cartesian grid creates \rr{an} internal, non-physical boundary in the blue grid.
Here physical boundary conditions are enforced on the red grid at the black boundary which defines the inner circle and on the outermost boundary on the blue grid.
In order to use the Hermite or DG methods on the grids we will need to supply boundary conditions at the interior boundaries. In the example in Figure \ref{fig:composite_grid} this means that we would have to specify the solution on the outer part of the annular grid and on the staircase boundary (marked with filled black circles) that has been cut out from the Cartesian grid.
\begin{figure}[htb]
\begin{center}
\setlength{\unitlength}{0.4mm}
\begin{picture}(130,130)(-65,-65)
\multiput(-65,-65)(10,0){6}{\color{blue}\line(0,1){130}}
\multiput(-65,-65)(0,10){6}{\color{blue}\line(1,0){130}}
\multiput(65,65)(-10,0){6} {\color{blue}\line(0,-1){130}}
\multiput(65,65)(0,-10){6} {\color{blue}\line(-1,0){130}}
\multiput(5,15)(-10,0){2} {\color{blue}\line(0,1){50}}
\multiput(5,-15)(-10,0){2} {\color{blue}\line(0,-1){50}}
\multiput(15,5)(0,-10){2} {\color{blue}\line(1,0){50}}
\multiput(-15,5)(0,-10){2} {\color{blue}\line(-1,0){50}}
\put(0,0){\color{black}\linethickness{0.4mm}\circle{24}}
\put(0,0){\color{red}\circle{40}}
\put(0,0){\color{red}\circle{60}}
\put(0,0){\color{red}\linethickness{0.4mm}\circle{80}}
\put(0,0){\color{red}\linethickness{0.4mm}\circle{100}}
\put( 12.0000, 0){\color{red}\line( 12.0000, 0){38}}
\put( 10.9625, 4.8808){\color{red}\line( 10.9625, 4.8808){34.5}}
\put( 8.0296, 8.9177){\color{red}\line( 8.0296, 8.9177){25.5}}
\put( 3.7082, 11.4127){\color{red}\line( 3.7082, 11.4127){11.7}}
\put( -1.2543, 11.9343){\color{red}\line( -1.2543, 11.9343){4}}
\put( -6.0000, 10.3923){\color{red}\line( -6.0000, 10.3923){19}}
\put( -9.7082, 7.0534){\color{red}\line( -9.7082, 7.0534){30.6}}
\put(-11.7378, 2.4949){\color{red}\line(-11.7378, 2.4949){37.5}}
\put(-11.7378, -2.4949){\color{red}\line(-11.7378, -2.4949){37.5}}
\put( -9.7082, -7.0534){\color{red}\line( -9.7082, -7.0534){30.6}}
\put( -6.0000, -10.3923){\color{red}\line( -6.0000, -10.3923){19}}
\put( -1.2543, -11.9343){\color{red}\line( -1.2543, -11.9343){4}}
\put( 3.7082, -11.4127){\color{red}\line( 3.7082, -11.4127){11.7}}
\put( 8.0296, -8.9177){\color{red}\line( 8.0296, -8.9177){25.5}}
\put( 10.9625, -4.8808){\color{red}\line( 10.9625, -4.8808){34.5}}
\put( 40.0000, 0){\linethickness{0.4mm}\color{red}\line( 12.0000, 0){ 9.5000 }}
\put( 36.5418, 16.2695){\linethickness{0.4mm}\color{red}\line( 10.9625, 4.8808){ 8.6250}}
\put( 26.7652, 29.7258){\linethickness{0.4mm}\color{red}\line( 8.0296, 8.9177){ 6.3750}}
\put( 12.3607, 38.0423){\linethickness{0.4mm}\color{red}\line( 3.7082, 11.4127){ 2.9250}}
\put( -4.1811, 39.7809){\linethickness{0.4mm}\color{red}\line( -1.2543, 11.9343){ 1.0000}}
\put( -20.0000, 34.6410){\linethickness{0.4mm}\color{red}\line( -6.0000, 10.3923){ 4.7500}}
\put( -32.3607, 23.5114){\linethickness{0.4mm}\color{red}\line( -9.7082, 7.0534){ 7.6500}}
\put( -39.1259, 8.3165){\linethickness{0.4mm}\color{red}\line(-11.7378, 2.4949){ 9.3750}}
\put( -39.1259, -8.3165){\linethickness{0.4mm}\color{red}\line(-11.7378, -2.4949){ 9.3750}}
\put( -32.3607, -23.5114){\linethickness{0.4mm}\color{red}\line( -9.7082, -7.0534){ 7.6500}}
\put( -20.0000, -34.6410){\linethickness{0.4mm}\color{red}\line( -6.0000, -10.3923){ 4.7500}}
\put( -4.1811, -39.7809){\linethickness{0.4mm}\color{red}\line( -1.2543, -11.9343){ 1.0000}}
\put( 12.3607, -38.0423){\linethickness{0.4mm}\color{red}\line( 3.7082, -11.4127){ 2.9250}}
\put( 26.7652, -29.7258){\linethickness{0.4mm}\color{red}\line( 8.0296, -8.9177){ 6.3750}}
\put( 36.5418, -16.2695){\linethickness{0.4mm}\color{red}\line( 10.9625, -4.8808){ 8.6250}}
\multiput(-15,-15)(10,0){4}{\circle*{2}}
\multiput(-15,15)(10,0){4}{\circle*{2}}
\multiput(-15,-5)(0,10){2}{\circle*{2}}
\multiput(15,-5)(0,10){2}{\circle*{2}}
\end{picture}
\caption{An example of an overset grid for a circular boundary inside a square. The red grid is curvilinear and the blue grid is Cartesian (in a realistic problem the red grid would be significantly thinner). The black filled circles indicate the cut out domain boundary. \label{fig:composite_grid}}
\end{center}
\end{figure}
\begin{figure}[H]
\includegraphics[width=0.49\textwidth]{HDG_1_1.pdf}
\includegraphics[width=0.49\textwidth]{HDG_1_2.pdf}
\caption{Typical setup for communication. In the left subfigure the local tensor product GLL grid around a Hermite grid point is marked with filled blue circles. The points in the GLL grid may be covered by different DG elements. In the right subfigure the tensor product grid inside the DG element is marked with filled red circles. The points in the GLL grid may be contained in different Hermite cells. \label{f:ogm1}}
\end{figure}
In most methods that use overset grids, in particular those using finite differences, the communication of the solution on the interior boundaries is done by interpolation, see e.g. \cite{chess1990}. For the methods we use here we have found that the stability properties are greatly enhanced if we instead transfer volumetric data (the numerical solution) in the elements / gridpoints near the internal boundaries by projection rather than by interpolation. In fact, when we use volume data the resulting methods are stable without adding artificial dissipation; when we use interpolation they are not. \rr{At the end of this section we discuss a possible reason why the projection behaves better than interpolation.}
As mentioned above, in a Hermite method, we can think of the degrees of freedom as either being nodal data, consisting of function and derivative values, or as coefficients in a Taylor polynomial. Thus, when transferring data to a grid where a Hermite method is used (like the example in the left subfigure of Figure \ref{f:ogm1}) we must determine a tensor product polynomial centered around a gridpoint local to that grid (the points we would center around are indicated by black points in Figure \ref{fig:composite_grid}). Below we will explain in detail how we determine this polynomial.
For elements with an internal boundary face (denoted by thick red lines in Figure~\ref{fig:composite_grid}) we could in principle transfer the solution by specifying a numerical flux on that face, however we have found that this approach results in weakly unstable methods. Instead we transfer volumetric data to each element that has an internal boundary face, we give details below. Given the timestep constraints of DG methods we must march the DG solution using much smaller timesteps than those used for the Hermite method. This necessitates the evaluation of the Hermite data not only at the beginning of a Hermite timestep but at many intermediate times.
\subsection{Determining internal boundary data for the Hermite solver}\label{ssec:Overset:hdg}
We first consider the problem of determining the internal boundary data required by the Hermite method. An example of how to compute solution data at the gridpoints $(x_i, y_{j})$ at the boundary of the Cartesian grid (filled black circles) is depicted in Figure \ref{fig:composite_grid}.
In general, the tensor product polynomial centered around $(x_i, y_{j})$
is found by a two step procedure. First we project into a local $L_2$ basis spanned by Legendre polynomials and perform a numerically stable and fast change of basis into the monomial basis. Then we truncate the monomial expansion to the degree required by the Hermite method.
To carry out the $L_2$ projection we introduce a local tensor product Gauss-Legendre-Lobatto (GLL) grid centered around $(x_{i}, y_{j})$. These points are marked as filled blue circles in the left subfigure of Figure~\ref{f:ogm1}. The number of grid points in the local grids is determined by the order of the projection. To maintain the order of the method, the order of the projection should be at least the same as the order of the spatial discretization, thus it is sufficient to have $2m+4$ points in each direction. The GLL quadrature nodes are defined on the reference element $(r,s) \in [-1,1]^2$ that maps to a cell defined by the dual gridpoints closest to $(x_{i}, y_{j})$.
Let $\tilde{u}$ be the numerical solution on the red grid. In the first step of the communication we compute the coefficients of a polynomial $\tilde{p}$ approximating $\tilde{u}$ by projecting $\tilde{u}$ on the space of tensor product Legendre polynomials $P_lP_k$, that is
\begin{equation}
\tilde{p}(r,s) = \sum_{l=0}^{2m+3}
\sum_{k=0}^{2m+3} c_{lk}
P_l (r)
P_k(s),\
c_{lk} =
\frac{(\tilde{u}, P_l P_k)}
{\| P_l P_k \|^2}. \label{eq:leg_proj}
\end{equation}
Here $(f,g)$ denotes the $L_2$ inner product on $(r,s) \in [-1,1]^2$ and $\| f \|^2_2 = (f,f)$ is the norm induced by the inner product. Note that the expression (\ref{eq:leg_proj}) is particularly simple since the Legendre polynomials are orthogonal on the domain of integration. To do this we evaluate $\tilde{u}$ at the underlying blue quadrature points in the left subfigure of Figure~\ref{f:ogm1}.
Once the polynomial (\ref{eq:leg_proj}) has been found we perform a change of basis into the local monomial basis used by the Hermite method. Such a change of basis can be done by the fast Vandermonde techniques of Bj\"{o}rck and Pereyra, see e.g. \cite{bp70,Dahlquist:2008fu}. At this stage the polynomial is of total degree $2m+3$, so the final step is to truncate it to total degree $m+1$ or $m$ depending on whether we are considering the displacement or the velocity. With the $(m+1)^2$ and $(m+2)^2$ degrees of freedom determined everywhere on a Hermite grid we may evolve the solution as described in Section \ref{sec:Hermite}.
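A one-dimensional sketch of the two-step procedure (projection by Gauss--Legendre quadrature followed by a change of basis; here we substitute \texttt{numpy}'s \texttt{leg2poly} for the Vandermonde solvers of \cite{bp70}, which is adequate for the small degrees involved):
\begin{verbatim}
import numpy as np

def project_to_monomials(u_fn, degree):
    # L2-project u_fn on [-1,1] onto Legendre polynomials up to the given
    # degree via Gauss-Legendre quadrature, then convert to monomials.
    x, w = np.polynomial.legendre.leggauss(degree + 1)
    u = u_fn(x)
    c = np.zeros(degree + 1)
    for l in range(degree + 1):
        Pl = np.polynomial.legendre.Legendre.basis(l)(x)
        c[l] = np.sum(w * u * Pl) * (2 * l + 1) / 2  # (u,P_l)/||P_l||^2
    return np.polynomial.legendre.leg2poly(c)        # monomial coefficients
\end{verbatim}
The quadrature is exact when $u$ is itself a polynomial of the given degree, which is the case for the data transferred here.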
\subsection{Determining data for DG elements with internal boundary faces}\label{ssec:Overset:dgh}
We now consider the problem of determining the data required by the DG method. Here we show how to obtain the data at a single DG element with at least one internal boundary face. As the timesteps of the DG method are significantly smaller than for the Hermite method we must repeat the transfer of data many times. We must also explicitly transfer time derivative data in order to use a Taylor series timestepping approach.
The tensor product polynomials in our implementation of the DG method are composed of products of Chebyshev polynomials $T_j(z) = \cos (j \cos^{-1}(z))$ expressed on the reference element $(r,s) \in [-1,1]^2$. Precisely, we seek
\begin{equation}
{p}(r,s) = \sum_{l=0}^{q}
\sum_{k=0}^{q} c_{lk}
T_l (r)
T_k(s). \nonumber
\end{equation}
To determine such polynomials we perform a projection of the solution on the Cartesian grid, which we again denote by $\tilde{u}$,
\begin{equation}
c_{lk} =
\frac{(\tilde{u}, T_l T_k)_{\rm C}}
{\| T_l T_k \|_{\rm C}^2},\nonumber
\end{equation}
but in this case the weighted inner product is
\begin{equation*}
(f,g)_{\rm C} = \int_{-1}^1\int_{-1}^{1} \frac{f(r,s) g(r,s)}{\sqrt {1-r^2} \sqrt{1-s^2}}\, dr ds,
\end{equation*}
where the Chebyshev polynomials are orthogonal. To carry out this projection we use local tensor product Chebyshev quadrature nodes, $2m+2$ in each dimension, as shown in the right subfigure of Figure~\ref{f:ogm1}.
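The corresponding one-dimensional sketch for the weighted projection (our illustration; Gauss--Chebyshev quadrature builds the weight into the quadrature weights):
\begin{verbatim}
import numpy as np

def cheb_project(u_fn, q):
    # Project u_fn onto T_0, ..., T_q in the Chebyshev-weighted inner
    # product, using Gauss-Chebyshev quadrature on [-1,1].
    x, w = np.polynomial.chebyshev.chebgauss(q + 1)  # weight 1/sqrt(1-x^2)
    u = u_fn(x)
    c = np.zeros(q + 1)
    for l in range(q + 1):
        Tl = np.cos(l * np.arccos(x))
        norm2 = np.pi if l == 0 else np.pi / 2       # ||T_l||_C^2
        c[l] = np.sum(w * u * Tl) / norm2
    return c
\end{verbatim}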
The local time levels used by the DG solver during the $n$th Hermite timestep are defined to be
\begin{equation}
t_{n,\nu} = t_{n,0}+\nu \Delta t_b,\ \nu = 0,\dots,N_{\rm DG}, \nonumber
\end{equation}
where $\Delta t_b$, and similarly $\Delta t_a$, are the timesteps taken on grids $b$ (curvilinear) and $a$ (Cartesian) respectively. For simplicity the starting and final local time levels coincide with consecutive timesteps on the Hermite grid, $t_n$ and $t_{n+1}$
\begin{equation}
t_{n,0} = t_n,\ \ t_{n, N_{\rm DG}} = t_{n+1}. \nonumber
\end{equation}
To transfer the solution values and the time derivatives needed at each of the quadrature points and at each $t_{n,\nu}$ we carry out the following ``start up'' procedure at $t_{n,0}$. For each of the quadrature points we re-center the Hermite interpolants closest to it and compute the time derivatives precisely by the recursion relations described in section~\ref{sec:Hermite}. We note that this is an inexpensive computation as the interpolants have already been found as a step in the evolution of the Hermite solution; the only added operation is the re-centering.
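Re-centering is a simple transformation of the Taylor coefficients. In one dimension (a minimal sketch of ours):
\begin{verbatim}
import numpy as np
from math import comb

def recenter(c, z0):
    # Coefficients of p(z0 + w) in powers of w, given p(z) = sum_k c[k] z^k.
    # Used to re-center a Hermite interpolant at a quadrature point.
    n = len(c)
    out = np.zeros(n)
    for l in range(n):
        out[l] = sum(c[k] * comb(k, l) * z0 ** (k - l) for k in range(l, n))
    return out
\end{verbatim}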
\rr{\subsection{Discussion of projection and interpolation}
One of the differences between the present method and a finite difference method is that during the transfer of data to the Hermite method there is a degree truncation of (in one dimension and for $u$) a polynomial of degree $(2m+3)$ to a polynomial of degree $(m+1)$. It is natural to ask how the truncated polynomial depends on whether projection or interpolation was used to find the un-truncated polynomial.
Suppose the same $(2m+4)$ data has been used to determine two polynomials
\[
p_{\rm interp.}(z)= \sum_{l = 0}^{2m+3} a_l z^l, \ \ z \in [-1/2,1/2],
\]
and
\[
p_{\rm proj.}(z) = \sum_{l = 0}^{2m+3} \tilde{b}_l P_l(2z) = \sum_{l = 0}^{2m+3} b_l z^l, \ \ z \in [-1/2,1/2].
\]
Then due to the orthogonality of the projected polynomial it is clear that the truncated polynomial satisfies
\[
\int_{-\frac{1}{2}}^{\frac{1}{2}} \left( \sum_{l = 0}^{m+1} \tilde{b}_l P_l(2z) \right)^2 dz \le \int_{-\frac{1}{2}}^{\frac{1}{2}} \left(p_{\rm proj.}(z) \right)^2 dz.
\]
The polynomial determined by interpolation does not satisfy a similar inequality. In fact the truncation can cause a significant increase in the $L_2$-energy. To investigate this we find
\[
{\{a_0^\ast,\ldots,a_{2m+3}^\ast\}} ={\rm argmax} \frac{\int_{-\frac{1}{2}}^{\frac{1}{2}} \left( \sum_{l = 0}^{m+1} a_l z^l \right)^2 dz}{\int_{-\frac{1}{2}}^{\frac{1}{2}} \left( \sum_{l = 0}^{2m+3} a_l z^l \right)^2 dz},
\]
for 10000 randomly selected initial data and for $m = 1,2,3$. The largest ratios between the squares of the $L_2$-norms of the truncated and un-truncated polynomials were 19, 657 and 3555 for $m=1$, $m=2$ and $m=3$ respectively. While this does not conclusively rule out the use of interpolation, it does indicate that a projection based approach is to be preferred. We stress that the cause of the problem is the combination of truncation and interpolation; it is therefore not obvious that there is any advantage to using projection rather than interpolation for methods that do not involve truncation (like finite difference methods).
}
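The experiment is straightforward to reproduce. A minimal sketch (our reconstruction; the random distribution of the coefficients is an assumption, and a random search merely bounds the maximal ratio from below):
\begin{verbatim}
import numpy as np

def energy(c):
    # Integral over [-1/2,1/2] of (sum_l c[l] z^l)^2 via the moment matrix.
    n = len(c)
    M = np.array([[(0.5**(i+j+1) - (-0.5)**(i+j+1)) / (i + j + 1)
                   for j in range(n)] for i in range(n)])
    return c @ M @ c

m, worst = 2, 0.0
rng = np.random.default_rng(0)
for _ in range(10000):
    a = rng.standard_normal(2 * m + 4)                 # degree 2m+3
    worst = max(worst, energy(a[:m + 2]) / energy(a))  # truncate to m+1
print(worst)
\end{verbatim}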
\section{Numerical experiments} \label{sec:experiment}
The hybrid H--DG method is empirically stable and accurate, and here we demonstrate this with numerical experiments. To test the stability of the method in one dimension we first define the amplification matrix and compute its spectral radius. To test the stability in two dimensions, where the amplification matrix would take too long to compute, we perform long time simulations and estimate the error growth for multiple refinements. Convergence tests in one and two dimensions are done for domains where the exact solution is known. In the second half of this section we apply the method to a domain with a complex curvilinear boundary in an experiment on wave scattering off a smooth pentagonal object. Finally, at the end of this section we use the method as the forward solver in an inverse problem of locating underground cavities.
\subsection{Numerical stability test}
Unlike the Hermite and DG methods, the stability of the hybrid H--DG method cannot easily be shown analytically. \bb{As a weaker alternative, the stability can be investigated numerically by looking at the spectrum of the amplification matrix associated with the method, \cite{Strik}.
To construct the amplification matrix we apply the method to initial data composed of the unit vectors. The vectors that are returned after one timestep are then placed as columns in a square matrix. If the spectral radius of the amplification matrix is smaller than $1$, or if the eigenvalues with magnitude one correspond to trivial Jordan blocks, then the amplification matrix is power-bounded.}
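Schematically, the amplification matrix is assembled as follows, where \texttt{step} is assumed to be a black-box function advancing the full vector of hybrid degrees of freedom by one Hermite timestep:
\begin{verbatim}
import numpy as np

def amplification_matrix(step, ndof):
    # Build H column by column by applying one timestep to each unit vector.
    H = np.zeros((ndof, ndof))
    for k in range(ndof):
        e = np.zeros(ndof); e[k] = 1.0
        H[:, k] = step(e)
    return H

# rho = max(abs(np.linalg.eigvals(H)))  # spectral radius
\end{verbatim}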
We consider the wave equation \eqref{eq1a}-\eqref{eq1b} on the unit interval $x \in [0,1]$ with homogeneous Dirichlet and Neumann boundary conditions
at $x = 0$ and $x = 1$ respectively.
We introduce two uniform Cartesian grids which overlap inside a small interval close to one of the boundaries.
Precisely, the grids are
\begin{align}
\Omega_a &= \{x^a_i = ih_a,\ \ i = 0, \dots, n_a\},\nonumber \\
\Omega_b &= \{x^b_i = 1 - (n_b-i)h_b, \ \ i = 0, \dots, n_b\}. \nonumber
\end{align}
The Hermite method is used on grid $a$ and the DG method is used on grid $b$.
The grids thus overlap inside the interval $[x^b_0, x^a_{n_a}]$. Here the ratio of the overlap size and the discretization width is $(x^a_{n_a} - x^b_0)/h_a$. This ratio is fixed for all values of $h_a$ and $h_b$. We also fix $n_b$ so that the amount of work done on grid $b$ is constant per timestep for all refinements. Fixing the ratio $(x^a_{n_a} - x^b_0)/h_a$ and $n_b$ ensures that the efficiency of the overall method is asymptotically determined by the efficiency of the Hermite method.
Let ${\mathbf w}^n$ be a vector holding the degrees of freedom of both methods at the $n$th timestep; then we may express the complete timestep evolution as ${\mathbf w}^{n+1} = {\cal H} \mathbf w^n$, where $\cal H$ incorporates timestepping and projection. $\mathcal{H}$ can be expressed as the matrix $H$ that can be computed column by column via
\begin{equation}
H_k = \mathcal{H} e_k,\label{constr}
\end{equation}
where $e_k$ is the $k$th unit vector. The equation
\begin{equation}
{\mathbf w}^n = H^n {\mathbf w}^0,
\label{e:5.1.1}
\end{equation}
is equivalent to $n$ timesteps of the hybrid H--DG method. \bb{Let $\lambda = \rho(H)$ be the spectral radius of $H$. If $\lambda<1$ then $\| H^n \|_2$ will tend to zero for large $n$. Of course, this only means that this particular discretization of this particular problem is stable and does not (in principle) tell us anything about other grid configurations.}
We consider the case $c=1$ and take the parameters to be
\[
n_a = 10, 20, \dots, 60,\ \ n_b = 5, \ \ \frac{h_b}{h_a} = 0.9.
\]
Other parameters are $q_u,q_v$ for the DG method and $n_{DG}$, the number of timesteps done by the DG method during one step of the Hermite method. The parameters $q_u$ and $q_v$ are set so the methods used have the same order of accuracy as the approximation of $v$ for the Hermite method
\[
q_u = 2m+2,\ q_v = 2m+1.
\]
To get an optimal $n_{DG}$, we take the largest possible timestep for the energy based DG method \rr{(empirically determined in \cite{Upwind2})}, so that
\[
\frac{\Delta t_b}{h_b} \le 0.15/q_u,
\]
and
\[
n_{\rm DG} = \frac {\Delta t_a}{\Delta t_b},
\]
is an integer. Equivalently, if the Hermite method CFL number is set, we get
\[
n_{\rm DG} = \frac{\Delta{t_a}}{\Delta{t_{b}}} = \left\lceil {\rm CFL }\frac{q_u}{0.15}\frac{h_a}{h_b} \right\rceil.
\]
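For example, a small helper (names ours) computing the number of DG steps per Hermite step:
\begin{verbatim}
from math import ceil

def n_dg(cfl, q_u, h_a, h_b):
    # Number of DG timesteps per Hermite timestep, cf. the formula above.
    return ceil(cfl * q_u / 0.15 * h_a / h_b)

print(n_dg(0.5, 8, 1.0, 0.9))  # e.g. m = 3: q_u = 2m+2 = 8
\end{verbatim}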
\begin{figure}[]
\begin{center}
\includegraphics[width=\textwidth]{HDG_2_1.pdf}
\caption{Spectrum of the amplification matrix $H$ for CFL numbers $\Delta t_a/h_a = 0.5, 0.8,$ orders of accuracy $3,5,7,$ and \mbox{$n_a = 40,\ n_b = 5$}. No eigenvalues are outside the unit circle.\label{spectr2}}
\end{center}
\end{figure}
Following the column-by-column construction process \eqref{constr} described \rr{above} we compute the amplification matrix $H$. The spectrum of $H$ is shown in Figure~\ref{spectr2} for $m=1,2,3$. Displayed results are for the cases $n_a = 40$ and $n_b = 5$. The CFL numbers set for the Hermite method are $\Delta t_a/ h_a = 0.5$ and $0.8$. The absolute values of the eigenvalues do not exceed $1$. We note that if interpolation is used some eigenvalues of the amplification matrix shift outside of the unit circle. Such unstable modes can possibly be stabilized by numerical dissipation / hyperviscosity but we do not pursue such stabilization here. Instead we observe that when projection is used all eigenvalues are inside the unit circle and the method is stable. Although we only display the results for one problem here, the same results were obtained for other grid sizes, various ratios of overlap size to grid spacing and different CFL numbers for the Hermite method. We stress that it is possible to make the method unstable if we take the CFL number close to one and $m$ larger than 3, and thus we only claim that the methods of orders of accuracy up to $7$ are stable.
\subsection{Convergence to an exact solution}
Using the same grid setup and boundary conditions as in the example above we test the method for the wave equation \eqref{eq1a}-\eqref{eq1b}, $c=1$ and initial conditions
\begin{align}
u(x,0) &= \sin\left(\frac{15\pi}2 x\right), \\
v(x,0) &= 0.
\end{align}
A solution to this problem is the standing wave
\begin{equation}
u(x,t) = \sin\left(\frac{15\pi}2x\right)\cos\left(\frac{15\pi}2t\right). \nonumber
\end{equation}
The errors for the solution on the grids are
\begin{equation}
\varepsilon_a(x,t) = p_{i+{\frac12}}(x,t) - u(x,t),\ \ x \in [x^a_i, x^a_{i+1}],\ i = 0, \dots, n_a-1,
\end{equation}
for the Hermite grid and
\begin{equation}
\varepsilon_b(x,t) = u^{h_b}(x,t) - u(x,t),
\end{equation}
for the DG grid. The maximum error for the total method is
\begin{equation}
\max \left( \max_{x \in [x^a_0, x^{a}_{n_a}]} |\varepsilon_a(x,t)|,\ \max_{x \in [x^b_0, x^b_{n_b}]} |\varepsilon_b(x,t)| \right). \nonumber
\end{equation}
In Figure~\ref{fe:1} we display the computed maximum errors as functions of time for the method with $m=3$ (i.e.\ the order of accuracy is 7). In the left subfigure the CFL number for the Hermite method is set to $0.5$ and in the right subfigure it is set to $0.75$. For all Hermite grid sizes the error growth is linear in time (dashed lines display a least squares fit of a linear function), indicating that the method is stable for long time computations.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.49\textwidth]{HDG_2_2.pdf}
\includegraphics[width=0.49\textwidth]{HDG_2_3.pdf}
\caption{Maximum error of the solution as a function of time. The curves correspond to different refinements for $m$ = 3. In the left subfigure CFL number for Hermite method is set to $0.5$. In the right subfigure CFL number for Hermite method is set to $0.75$. Dashed lines display lines $\alpha t$. \label{fe:1}}
\end{center}
\end{figure}
In the left subfigure of Figure~\ref{conv2} the numerical solution and the absolute error are shown for the $7$th order accurate method at time $t = 2$. As can be seen in the lower left subfigure in Figure~\ref{conv2} the error is rather smooth across the overlap indicating that the projection is highly accurate.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.49\textwidth]{HDG_2_4.pdf}
\includegraphics[width=0.47\textwidth]{HDG_2_5.pdf}
\caption{The upper left subfigure displays the solution at time $t=2$. The error at time $t=2$ is shown in the lower left subfigure. The number of grid points are $n_a = 200,\ n_b = 5$ and $m = 3$; the Hermite CFL number is set to $0.75$. Red curves indicate the solution and the error on the DG grid. Blue curves indicate the solution and the error on the Hermite grid. (The solution and the error were computed on a finer grid, with 10 grid points per cell/element.) In the right subfigure we display a convergence plot for $m=1,2,3$. Dashed lines show the least squares fit of $C_mh_{a}^q,\ q = 3,5,7$.}
\label{conv2}
\end{center}
\end{figure}
To the right in Figure~\ref{conv2} the error at the final time $t=2$ is shown as a function of $h = h_a$. The dashed lines show the least squares fit with polynomial functions of $h_a$ of order $3, 5$ and $7$ respectively. The results indicate that the orders of accuracy of the methods are $2m+1$, as expected. The parameters ($n_a$, $n_b$, $n_{DG}$, etc.) are the same as in the previous example.
\subsection{Analytical solution in a disk. Rates of convergence}
Consider the solution of \eqref{eq1a}-\eqref{eq1b} with $f(x,y,t) \equiv 0$ on the unit disk, $\{(x,y):\ x^2+y^2 \le 1\}$, with homogeneous Dirichlet boundary conditions. Then the analytical solution can be expressed in polar coordinates as a superposition of modes
\begin{equation}
u_{\mu \nu}(r,\theta,t) = J_\mu(r\kappa_{\mu \nu})\cos(\mu \theta)\cos \kappa_{\mu \nu}t. \label{e:sol}
\end{equation}
Here $J_\mu(z)$ is the Bessel function of the first kind of order $\mu$ and $\kappa_{\mu \nu}$ is the $\nu$th zero of $J_\mu$. In the following experiment we set $\mu=\nu=7$, $\kappa_{77} = 31.4227941922$. The initial condition $u_{77}(x,y,0)$ is displayed in the left subfigure of Figure~\ref{f:circ}.
\begin{figure}[hbt]
\includegraphics[width=0.49\textwidth]{HDG_2_6.png}
\includegraphics[width=0.5\textwidth]{HDG_2_7.pdf}
\caption{The left subfigure displays the initial condition.
In the right subfigure the max-error at time $t = 2\pi/\kappa_{77}$ is shown as a function of the grid spacing of the Hermite method. Solid curves correspond to the methods with $m=1,2,3$ and dashed lines display the expected convergence rates, i.e.\ ${\cal O}(h_a^{2m+1})$.}
\label{f:circ}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=0.49\columnwidth]{HDG_2_8.png}
\includegraphics[width=0.49\columnwidth]{HDG_2_9.png}
\caption{Overset grid set up for two different discretization widths. The Hermite grid is blue and the DG grid is red. The Hermite grid is truncated at radius $1-h_a$, i.e.\ one Hermite grid spacing smaller than the computational domain. This creates a staircase shaped interior boundary. The solution at that boundary is imposed by the projection described above. The curvilinear grid has 7 elements in the radial direction, thus the number of elements grows linearly with $n_a$. The number of grid points in the Hermite grid grows as $n_a^2$.}
\label{f:circ2}
\end{figure}
We set up the overset grids as displayed in Figure \ref{f:circ2}. Grid $a$ is a Cartesian grid discretizing a square domain with $2n_a+1$ grid points in each direction and grid spacing $h_a = 1/n_a$. Grid $b$ is a curvilinear grid discretizing a thin annulus with radial grid spacing $1.1h_a$. For all refinements grid $b$ has 7 elements in the radial direction, thus the number of elements (or equivalently the number of DOFs of the DG method) grows linearly with the reciprocal of the discretization size $h_a$. In contrast, the number of grid points in the Cartesian grid, where the Hermite method is used, grows quadratically with $1/h_a$.
To measure the error we evaluate the solution on a finer grid, oversampled with 20 grid points inside each Hermite cell and DG element. The convergence is displayed in the right subfigure of Figure~\ref{f:circ}. The errors at time $t=2\pi/\kappa_{77}$ as functions of $h_a$ for \mbox{$m = 1,2,3$} are displayed as solid lines. The dashed lines show polynomials in $h_a$ of order $2m+1$. We use $h_a=1/34,1/36,\dots,1/94$ in the computations. As can be seen, the expected orders of accuracy (3, 5 and 7) are observed.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.39\textwidth]{HDG_3_1.pdf}
\includegraphics[width=0.39\textwidth]{HDG_3_2.pdf}
\caption{In the left subfigure the error at fixed time $t = 2\pi/\kappa_{77}$ is shown as a function of CPU time. The right subfigure displays the error as a function of time $t$. Dashed lines are the linear functions $\alpha t$ formed by a least squares fit.}
\label{f:circ3}
\end{center}
\end{figure}
To test the stability of the method we evolve the solution until time $t = 60\pi/7$ which is roughly $130$ periods of the solution. We set $h_a = 1/54$ and test methods with orders of accuracy $3, 5$ and $7$.
The error growth appears to be linear in time as indicated by dashed lines in the right subfigure of Figure~\ref{f:circ3}.
To test the performance of the method we evolve the solution over one time period and measure the CPU time, see the left subfigure of Figure~\ref{f:circ3}. The red curve, displaying the error of the $3$rd order accurate method, only reaches an error of $10^{-6}$ in about 1000 seconds, while the $5$th and $7$th order accurate methods, using the same compute time, yield errors on the order of $10^{-8}$ and $10^{-10}$ respectively. Clearly the higher order methods are more efficient.
\rr{Table \ref{tab_speed1} displays a breakdown of the time spent in the various parts of the code. As can be seen from the timing results, the largest time is spent in the DG solver even for the finest grid. The time grows approximately quadratically for the Hermite solver and linearly for the DG solver, so that eventually the complexity of the Hermite solver will dominate, but this may not happen for practical refinements of this problem. The large computational cost of the DG method is, in part, due to the small timestep requirement but also due to our implementation.}
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& HERMITE & DG & DG per step & H$\to$ DG & DG $\to$ H \\
\hline
TIME & 0.34 & 70.33 & 1.56289 & 10.00 & 11.32 \\
\hline
DOF & 300448 & 222992 & 222992 & 15744 & 55748 \\
\hline
TIME / DOF & 1.13(-6) & 3.15(-4) & 7.00(-6)& 6.35(-4) & 2.03(-4) \\
\hline
\hline
TIME & 1.61 & 159.67& 3.39 & 22.32 & 23.77 \\
\hline
DOF & 1244760 & 451052 & 451052 & 32144 & 112763 \\
\hline
TIME / DOF & 1.29(-6) & 3.53(-4) & 7.51(-6) & 6.94(-4) & 2.10(-4) \\
\hline
\hline
TIME & 4.86 & 183.45 & 4.08 & 32.74 & 41.07 \\
\hline
DOF & 5066944 & 905724 & 905724 & 64944 & 226431 \\
\hline
TIME/DOF & 9.59(-7) & 2.02(-4) & 4.45(-6) & 5.04(-4) & 1.8(-4) \\
\hline
\end{tabular}
\caption{Timing of the 7th order accurate hybrid Hermite-DG method for the disk experiment. The table contains timings for three different numbers of degrees of freedom. TIME denotes the average time in seconds per Hermite timestep for the Hermite timestepping, DG timestepping and communication stages, with the exception of the fourth column, which displays the time per DG timestep for the DG method. The TIME/DOF row in each block displays the time per degree of freedom for the time evolution or communication. \label{tab_speed1}}
\end{center}
\end{table}
\subsection{A wave scattering of a smooth pentagon}
In this experiment we study scattering off a smooth pentagon in free space. In addition to the use of non-reflecting boundary conditions, this experiment demonstrates the hybrid Hermite--DG method for a solution that is propagated over many wavelengths.
The geometry of the pentagon is defined as the smooth closed parametric curve:
\begin{eqnarray}
x(s) &=& \frac 1{10}\left(1+\frac 1{10} \cos(10s)\right)\cos(s), \\
y(s) &=& \frac 1{10}\left(1+\frac 1{10} \cos(10s)\right)\sin(s), \ \ s \in [0,2\pi).
\end{eqnarray}
The pentagon is placed in a square domain $(x,y) \in [-2,2]^2$ discretized by a Cartesian grid with grid spacing $1/n$, $n = 40$.
The curvilinear grid has 10 elements in the radial direction and the outer boundary is a circle of radius $0.1+20/n$. The overlap width is at most 5 DG elements.
On the boundary of the body we set Dirichlet data
\begin{equation}
u(x,y,t) = \sin(\omega t),\ \ (x,y) \in \Gamma,\ \ t \ge 0,\ \ \omega = 250.
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width=0.39\textwidth]{HDG_3_3.png}
\includegraphics[width=0.4\textwidth]{HDG_3_4.png}
\caption{Left: Overset grid set up around the body. The overlapping DG grid and background Cartesian grid shown on the domain $[-0.5, 0.5]^2$.
Right: Snapshot of $u(x,y,10)$.\label{fig:pent}}
\end{center}
\end{figure}
The exterior boundary condition is modeled by truncating the domain using perfectly matched layers governed by the equations, (see \cite{appelo2012fourth} for derivation)
\begin{equation}
u_{tt} = \frac{\partial}{\partial x}\left(
u_x + \sigma^{(x)} \phi^{(1)} \right)
+ \frac{\partial}{\partial y}\left(
u_y + \sigma^{(y)} \phi^{(2)} \right)
+ \sigma^{(x)}\phi^{(3)} + \sigma^{(y)}\phi^{(4)}, \label{mod_pml}
\end{equation}
where the auxiliary variables satisfy the equations
\begin{equation}
\arraycolsep=1.4pt\def2.2{2.2}
\begin{array}{r@{}l}
&\phi_t^{(1)}+(\alpha + \sigma^{(x)})\phi^{(1)} = -u_x, \\
&\phi_t^{(2)}+(\alpha + \sigma^{(y)})\phi^{(2)} = -u_y, \\
&\phi_t^{(3)}+(\alpha + \sigma^{(x)})\phi^{(3)} = -u_{xx}-\frac{\partial}{\partial x}
\left(\sigma^{(x)}\phi^{(1)} \right), \\
&\phi_t^{(4)}+(\alpha + \sigma^{(y)})\phi^{(4)} = -u_{yy}-\frac{\partial}{\partial y}
\left(\sigma^{(y)}\phi^{(2)} \right).
\label{newvars}
\end{array}
\end{equation}
The damping profiles $\sigma^{(z)}$, $z = x,y$ are taken as
\begin{equation}
\sigma^{(z)}(z) = \sigma_s \left(\tanh\left(\frac{z-z_1}{0.7w_{\rm lay}}\right)
- \tanh \left(\frac{z-z_2}{0.7w_{\rm lay}}\right)\right). \nonumber
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{HDG_3_5.pdf}
\caption{The damping profile $\sigma^{(z)}$ for the PML with damping strength $\sigma_s$. The damping is zero at the center of the domain $\Omega$ and rapidly increases in the PML at both edges of the domain.} \label{sig}
\end{center}
\end{figure}
Here $\sigma_s$ is the damping strength, $w_{\rm lay}$ is the layer width and $z_1$ and $z_2$ control the location of the damping profile of the PML. The shape of $\sigma^{(z)}$ is displayed in Figure~\ref{sig}. In experiments involving PML we discretize the modified equations \eqref{mod_pml} with the Hermite method.
The order of accuracy of the methods is set to be 7, i.e.\ $q_u = q_v = 6$ and $m = 3$. The solution is evolved to $t=10$. A snapshot of the solution at the final time is displayed in the right subfigure of Figure~\ref{fig:pent}. The proposed algorithm is clearly able to accurately propagate waves in complex domains.
\begin{table}[]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& HERMITE & DG & DG per step & H$\to$ DG & DG $\to$ H \\
\hline
TIME & 23.87 & 14.15 & 0.25 & 1.98 & 0.27 \\
\hline
DOF & 1972100 & 87010 & 87010 & 3280 & 7910 \\
\hline
T/DOF & 0.12(-4) & 0.16(-3) &0.29(-5) & 0.60(-3) & 0.34(-4) \\
\hline
\hline
TIME & 40.10 & 17.52 & 0.37 & 2.52 & 0.42 \\
\hline
DOF & 4170520 & 125543 & 125543 & 4920 & 11413 \\
\hline
T/DOF & 0.96(-5) & 0.13(-3) &0.30(-5) &0.51(-3) &0.37(-4) \\
\hline
\hline
TIME & 76.18 & 27.38 & 0.60 & 3.94 & 0.70 \\
\hline
DOF & 11007352 & 203852 & 203852 & 8200 & 18532 \\
\hline
T/DOF & 0.69(-5) & 0.13(-3) & 0.29(-5) & 0.48(-3) &0.38(-4) \\
\hline
\hline
TIME & 162.28 & 43.08 & 0.87 & 7.30 & 1.38 \\
\hline
DOF & 34433112 & 287811 & 287811 & 15088 & 31979 \\
\hline
T/DOF & 0.47(-5) & 0.14(-3) &0.31(-5) &0.48(-3) &0.43(-4) \\
\hline
\end{tabular}
\caption{Timing of the 7th order accurate hybrid Hermite-DG method for the smooth pentagon experiment. The table contains timings for four different numbers of degrees of freedom. TIME denotes the average time in seconds per Hermite timestep for the Hermite timestepping, DG timestepping and communication stages, with the exception of the fourth column, which displays the time per DG timestep for the DG method. The T/DOF row in each block displays the time per degree of freedom for the time evolution or communication.}
\label{tab_speed2}
\end{center}
\end{table}
\rr{Table \ref{tab_speed2} displays a breakdown of the time spent in the various parts of the code. As can be seen from the timings, for this problem the largest time is now spent in the Hermite solver. Here, since the geometry is an interior object, the relative number of degrees of freedom in the DG solver is small and we see the asymptotic behavior more clearly than for the disk experiment.}
\subsection{Wave scattering of many cylinders in free space}
As another demonstration of the method we simulate a domain with multiple circular holes. Precisely, we consider the infinite domain $\Omega = (-\infty,\infty)\times(-\infty,1.33]$ with a homogeneous Neumann boundary condition at $y = 1.33$. The computational domain is the rectangle $[-1,1]\times[-1,1.33]$, extended by PML in the regions $|x| > 1$ and $y < -1$. Inside the computational domain there are 5 cylinders of radius $0.1$ with centers at $(x_k,y_k),\ k = 1,\dots,5$. We impose homogeneous Dirichlet boundary conditions on the boundaries of all cylinders except the first. On the first cylinder we impose the time dependent boundary condition
\begin{equation}
u(t,x,y) =
(t-0.1)\exp\left(-918(t-0.1)^2\right),\ (x,y) \in \{ (x-x_1)^2 + (y - y_1)^2 = 0.01\}.
\nonumber
\end{equation}
The initial solution is at rest.
The setup of the numerical method is similar to the previous experiment. The Cartesian grid covers the background domain and the PML. The 5 circular grids are placed around the bodies as shown in the upper left subfigure of Figure~\ref{fig:5b}. In this experiment we used the 7th order method. As in all solution plots provided in this paper, the solution is smooth across the overlap due to the high accuracy of the methods used and the projection used for communication.
\begin{figure}[]
\begin{center}
\includegraphics[width=.245\textwidth]{HDG_4_1.png}
\includegraphics[width=.245\textwidth]{HDG_4_2.png}
\includegraphics[width=.245\textwidth]{HDG_4_3.png}
\includegraphics[width=.245\textwidth]{HDG_4_4.png}
\includegraphics[width=.245\textwidth]{HDG_4_5.png}
\includegraphics[width=.245\textwidth]{HDG_4_6.png}
\includegraphics[width=.245\textwidth]{HDG_4_7.png}
\includegraphics[width=.245\textwidth]{HDG_4_8.png}
\caption{Overset grid setup and solution plots for 5 bodies in a free half space. In the upper left subfigure the grids are shown: DG gridlines are plotted in red, and the Cartesian grid lines inside the domain are plotted in blue. The remaining subfigures show the solution at increasing times.}
\label{fig:5b}
\end{center}
\end{figure}
\subsection{An inverse problem, locating a body in free space} \label{ss:body}
As a final experiment we solve the inverse problem of locating a cylindrical body in free space. An application of this problem could be locating a tunnel under the ground and determining its radius by sending waves from source devices buried at a relatively small distance from the surface and recording the solution near the surface. Waves propagate from a source, reflect from an underground cavity and travel back to the surface to be captured by the recording devices. The underground cavity can be located by minimizing a cost functional, i.e., a misfit function between the recorded data and the data obtained from a numerical simulation in each iteration of the optimization process.
Consider a region $\Omega = [-1,1]\times[-1,1.25]$ with 3 circular bodies of radius $r = 0.1$ with centers at $x_1 = -0.7$, $x_2 = 0$, $x_3 = 0.7$ and $y_1=y_2=y_3 = -0.7$. On the boundaries of the bodies we impose homogeneous Dirichlet boundary conditions. On the top boundary, which acts as a ``ground surface'', we impose homogeneous Neumann boundary conditions. The exterior boundary conditions at $x=\pm 1$ and $y = -1$ are imposed by truncating the domain using PML. We discretize the domain with a Cartesian grid. Around each of the cavities we place annular DG grids that are 5 cells wide. An example of a complete setup with $4$ receivers is shown in Figure~\ref{fig:inv}.
First we create synthetic data by recording the displacement $u$, up to time $T=2$, at the equidistant receiver locations \[(0,0.125),\ (0.25,0.125),\ (0.5,0.125),\ (0.75,0.125).\] There is another circular body, of radius $A_1$ with center at $(x,y) = (A_2,A_3)$, that we want to locate. In the right subfigure of Figure~\ref{fig:inv} the first source is active, i.e., the initial condition is a smooth Gaussian centered at $\hat x = -0.25$, $\hat y = 1$, with no initial velocity
\begin{equation}
u(0,x,y) = \exp\left(-40\left((x-\hat x)^2+(y-\hat y)^2\right)\right),\ \ v(0,x,y)\equiv 0. \label{in:gauss}
\end{equation}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=.39\textwidth]{HDG_5_1.png}
\includegraphics[width=.39\textwidth]{HDG_5_2.png}
\caption{The inverse problem setup. Receivers are marked as dots. The left subfigure displays the complete overset grid setup, with 4 DG grids around the bodies
and a Cartesian background grid. The blue and red grids discretize the physical subdomain; the black grid is the PML layer; the gray grids indicate the subdomains covered by the DG grids.
The right subfigure displays the initial condition, a smooth Gaussian centered at $(-0.25, 1)$, together with the receivers and the DG grids. \label{fig:inv}}
\end{center}
\end{figure}
The synthetic data are created for the exact parameters of the target, $A_1^\ast = 0.1$, $A_2^\ast = 0.25$ and $A_3^\ast = 0.12$; this gives us $u^\ast$. To locate the cavity we minimize a cost function that is a sum of time-integrated squared discrepancies between the output of the numerical simulation and the synthetic data $u^*(t,\check x_l, 0.125)$:
\begin{equation}
F(A_1, A_2, A_3) = \sum_{l = 1}^4 \int_0^T \left(u(t,\check x_l, 0.125) - u^*(t,\check x_l, 0.125) \right)^2 \, dt.
\end{equation}
During the minimization we impose the bounds $0.01 \le A_1 \le 0.2$, $|A_2| < 0.5$, $|A_3| < 0.2$. To recover $A^\ast_1$, $A^\ast_2$ \bb{and $A_3^\ast$} we use the L-BFGS-B algorithm (see \cite{byrd1995limited} for a description). Forward differences are used to compute the gradients, resulting in a total of $1+3$ simulations per iteration. Table~\ref{tab1} displays the convergence results in detail for initial values \bb{perturbed by $1\%$ from the exact solution, i.e., $0.101$, $0.2525$ and $0.1212$, respectively}. At the $6$th iteration the computed values were $0.1$, $0.25$ and $0.12$, accurate to the $8$th digit. \bb{For initial guesses with larger than $1\%$ deviation from the exact solution, it becomes harder to converge to the global minimum. The minimization process would become more robust if more data were recorded at the receivers, for example by increasing the number of receivers, recording longer data traces or adding simulations with different initial conditions}.
\bb{Although the grids have to be regenerated before each simulation during the minimization process, this is inexpensive since the grid generation is local. Precisely, in each new iteration the DG grid is adjusted by regenerating an annular grid based on the updated center location and radius.
}
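The outer optimization loop can be written compactly with an off-the-shelf bound-constrained quasi-Newton solver. The sketch below uses SciPy's L-BFGS-B with forward-difference gradients; the names \texttt{run\_forward\_solver}, \texttt{u\_star} and \texttt{dt} stand in for the Hermite-DG forward solve, the recorded synthetic traces and the trace sampling interval, and are illustrative assumptions rather than a description of released code:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def misfit(A):
    # A = (A1, A2, A3): radius and center of the trial cavity.
    # run_forward_solver is a placeholder for the Hermite-DG forward
    # solve; it returns the traces u(t, x_l, 0.125) at the 4 receivers.
    u = run_forward_solver(radius=A[0], center=(A[1], A[2]))
    return np.sum((u - u_star) ** 2) * dt  # time-discrete F

bounds = [(0.01, 0.2), (-0.5, 0.5), (-0.2, 0.2)]
A0 = np.array([0.101, 0.2525, 0.1212])  # initial guess, 1% off target
res = minimize(misfit, A0, method="L-BFGS-B", bounds=bounds,
               options={"eps": 1e-6})  # gradient by forward differences
\end{verbatim}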
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$N$ iter. & $A_1$ & $A_2$ & $A_3$ & $F$ & $\|\nabla F\|$ \\
\hline
$0$ & $0.10100$ & $0.25250$ & $0.12120$ &$7.83582(-6)$ & $4.41312(-3) $ \\
\hline
$1$ & $0.10117$ & $0.25077$ & $0.11944$ &$1.96547(-7)$ & $2.88478(-4) $ \\
\hline
$2$ & $0.10117$ & $0.25066$ & $0.11955$ &$1.38952(-7)$ & $2.32250(-4) $ \\
\hline
$3$ & $0.10105$ & $0.25012$ & $0.12000$ &$2.26171(-8)$ & $4.32507(-5) $ \\
\hline
$4$ & $0.10095$ & $0.25010$ & $0.12001$ &$1.87633(-8)$ & $3.97138(-5) $ \\
\hline
$5$ & $0.10002$ & $0.24999$ & $0.12001$ &$5.81672(-11)$ & $5.80355(-6) $ \\
\hline
$6$ & $0.10000$ & $0.25000$ & $0.12000$ &$1.00121(-15)$ & $4.21271(-8) $ \\
\hline
\end{tabular}
\caption{Convergence results of the L-BFGS-B algorithm for the inverse problem of locating a body in free space. At each iteration the cost function $F$ and its gradient $\nabla F$ are computed from the numerical solution of the wave equation. The forward solver is implemented using the $5$th order accurate hybrid Hermite-DG overset grid method.}
\label{tab1}
\end{center}
\end{table}
\section{Summary}
We have presented overset high order numerical methods for the numerical solution of the wave equation. The hybrid H--DG overset grid method combines the highly efficient Hermite method on Cartesian grids with a DG method to treat complex boundaries, with overset grids used to couple the two methods. The advantage of using overset grids for complex boundary problems is the low computational cost, which asymptotically approaches the cost of the Cartesian solver.
In this work we communicate solutions via $L_2$ projection, and this procedure combined with the dissipative nature of the methods was observed to be sufficient to guarantee stability without the need to add any artificial dissipation.
The stability, accuracy and efficiency of the method were tested numerically. To test the stability in one dimension, we examined the spectrum of the amplification matrix associated with the method. For CFL numbers below $0.75$ for the Hermite method, the overall method was stable in all tested settings, for all grid sizes and orders of accuracy $3$, $5$ and $7$. In one and two dimensions we also tested the stability by displaying the error growth as a function of time for long times.
Finally, three example applications of the method were presented. First, wave scattering off a pentagonal object in free space was shown, demonstrating the use of the method for a problem with a curvilinear boundary and free space boundary conditions. Second, a simulation with five round objects in free space was demonstrated. Finally, the method was used to solve the inverse problem of locating a cylindrical underground body.
A future extension could be to improve the efficiency of the DG method used on the curvilinear body-fitted grids by the use of an implicit timestepping method. This would allow the timesteps to be commensurate with those of the Hermite method at a relatively low cost, since the linear systems that need to be inverted would be essentially one dimensional. Another natural extension of this work would be to apply the techniques presented here to the elastic wave equation.
\begin{acknowledgements}
This work was supported in part by the National Science Foundation under Grant NSF-1913076. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
\end{acknowledgements}
\section*{Conflict of interest}
The authors declare that they have no conflict of interest.
\bibliographystyle{spmpsci}
The task of Natural Language Inference (NLI) involves reasoning across a premise and hypothesis, determining the relationship between the two sentences. Either the hypothesis is implied by the premise (\textit{entailment}), the hypothesis contradicts the premise (\textit{contradiction}), or the hypothesis is neutral to the premise (\textit{neutral}). NLI can be a highly challenging task, requiring lexical, syntactic and logical reasoning, in addition to sometimes requiring real world knowledge \cite{Dagan_RTE_first_paper}.
While neural NLI models perform well on in-distribution test sets, this does not necessarily mean they have a strong understanding of the underlying task. Instead, NLI models are known to learn from annotation artefacts (or \emph{biases}) in their training data \cite{gururangan-etal-2018-annotation, poliak-etal-2018-hypothesis}. Models can therefore be right for the wrong reasons \cite{mccoy-etal-2019-right}, with no guarantees about the reasons for each prediction, or when the predictions are based on a genuine understanding of the task. We address this issue with our logical reasoning framework, creating more interpretable NLI models that definitively show the specific logical atoms responsible for each model prediction.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/first_example.pdf}
\caption{Example of a hypothesis segmented into spans, with the four different hypothesis spans underlined. The four spans identified are: `A man', `poses in front', `of an ad' and `for beer.'. The model predicts the neutral class only for the `for beer.' span.} \label{example_spans}
\end{figure}
One challenge when applying a logical approach to NLI is determining the choice of logical atoms. When constructing examples from knowledge bases, the relationships between entities in the knowledge base can be used as logical atoms \cite{rocktaschel2017endtoend}. Whereas, in synthetic fact-based datasets, each fact can become a logical atom \cite{clark2020transformers, talmor2020leapofthought}. Neither approach would be suitable for SNLI \cite{bowman-etal-2015-large}, which requires reasoning over gaps in explicitly stated knowledge \cite{clark2020transformers}, with observations covering a range of topics and using different forms of reasoning.
Instead, our logical reasoning framework considers spans of the hypothesis as logical atoms. We determine the class of each premise-hypothesis pair based entirely on span-level decisions, identifying exactly which parts of a hypothesis are responsible for each model decision. By providing a level of assurance about the cause of each prediction, we can better understand the reasons for the correct predictions and highlight any mistakes or misconceptions being made. Using the e-SNLI dataset \cite{camburu2018esnli}, we assess the performance of our span-level predictions, ensuring that the decisions about each hypothesis span align with human explanations.
As no span labels are provided in the SNLI training data, we supervise our Span Logical Reasoning (SLR-NLI) model at a span level using logical rules, for example requiring a contradiction example to include at least one contradiction span. Inspired by previous work on error detection \cite{DBLP:conf/naacl/ReiS18, DBLP:conf/aaai/ReiS19, pislar-rei-2020-seeing, kamil}, we train models to detect both neutral and contradiction spans, jointly training on sentence labels while using auxiliary losses to influence the model behaviour at a span-level. Finally, we evaluate our model based on the span-level decisions for each logical atom.
To summarise our findings:
\begin{enumerate}
\item We introduce a logical reasoning framework (SLR-NLI) that almost fully retains performance on SNLI, while also performing well on the SICK dataset \cite{marelli-etal-2014-sick}. This contrasts with previous work, where the inclusion of logical frameworks results in substantially worse performance.
\item Our SLR-NLI model produces highly interpretable models, identifying exactly which parts of the hypothesis are responsible for each model prediction.
\item Evaluating the SLR-NLI predictions at a span level shows that the span-level decisions are consistent with human explanations, with further improvements if e-SNLI explanations are used during training.
\item SLR-NLI improves model robustness when training in a reduced data setting, improving performance on unseen, out-of-distribution NLI datasets.
\end{enumerate}
\section{Our logical framework}
\subsection{Span-level Approach}
We consider each hypothesis as a set of spans $\{s_{1}, s_{2}, s_{3}, \dots, s_{m}\}$, where each span is a consecutive string of words in the hypothesis. For example, in \cref{example_spans} the hypothesis `A man poses in front of an ad for beer' has the following spans: `A man', `poses in front', `of an ad', `for beer.'. Each span $s_{i}$ has a label of either entailment, contradiction or the neutral class. In practice, there are no predefined spans in NLI datasets, nor are there labels for any chosen spans. As a result, we propose a method of dividing hypotheses into spans, introducing a semi-supervised method to identify entailment relationships at this span level. In the example provided in \cref{example_spans}, $s_{4}=$`for beer.' has a neutral class, while every other span has an entailment class.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/logical_rules.pdf}
\caption{Logical reasoning rules} \label{logic_rules}
\end{figure}
We observe that a hypothesis has a contradiction class if any span present in that hypothesis has a label of contradiction. Similarly, if a hypothesis contains a span with a neutral class and no span with a contradiction class, then the hypothesis has a neutral class. Therefore, a hypothesis only has an entailment class if there are no spans present with span labels of either contradiction or neutral (see \cref{logic_rules}).
When evaluating a hypothesis-premise pair in the test data, our model makes discrete entailment decisions about each span in the hypothesis, deciding the sentence-level class based on the presence of any neutral or contradiction spans. This method highlights the exact parts of a hypothesis responsible for each entailment decision.
\subsection{Span Selection}
We identify spans based on the presence of noun chunks in the hypothesis. Initially, the hypothesis is segmented into spans containing each noun chunk and its preceding text.
However, the most appropriate segmentation of a hypothesis into spans may depend on the corresponding premise, and in some cases we may need to consider long range dependencies across the sentence. As a result, we also provide additional spans that are constructed from combinations of consecutive spans. We set the number of consecutive spans that we include as a hyper-parameter. For the example in \cref{example_spans}, this means also including spans such as `a man poses in front' and `of an ad for beer.'.
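A minimal sketch of this segmentation is shown below, using spaCy's noun chunker; the specific library and function names are illustrative assumptions rather than a description of our released code:
\begin{verbatim}
import spacy

nlp = spacy.load("en_core_web_sm")

def hypothesis_spans(hypothesis, max_merge=3):
    doc = nlp(hypothesis)
    base, start = [], 0
    for chunk in doc.noun_chunks:
        # each base span = a noun chunk plus its preceding text
        base.append(doc[start:chunk.end].text)
        start = chunk.end
    if start < len(doc):  # trailing tokens after the last chunk
        base.append(doc[start:].text)
    spans = list(base)
    # additional spans: combinations of consecutive base spans
    for width in range(2, max_merge + 1):
        spans += [" ".join(base[i:i + width])
                  for i in range(len(base) - width + 1)]
    return spans
\end{verbatim}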
\subsection{Modelling Approach}
A BERT model is used for encoding the NLI premise together with each specific hypothesis span, masking the parts of the hypothesis that are not included in the given span. The BERT model provides a [CLS] representation $h_i$ for each span $i$. A linear layer is applied to these representations to provide logits $L_{ni}$ and $L_{ci}$ for each span, representing the neutral and contradiction classes respectively.
A separate attention layer is created for both the neutral and contradiction classes that attend to the different span-level outputs. The neutral attention layer attends more to neutral spans, while the contradiction attention layer attends more to contradiction spans.
Both the neutral and contradiction attention layers consider the same [CLS] representation $h_i$.
The two attention layers use the same architecture, with details provided below for the neutral (\textit{n}) attention layer. Our span-level predictions will be based on the unnormalized attention weights $\widetilde{a}_{ni}$, which are calculated as:
\begin{equation}
\widetilde{a}_{ni} = \sigma{(W_{n2}(\tanh{(W_{n1} h_{i} + b_{n1})})+b_{n2})}
\end{equation}
where $W_{n1}$ and $W_{n2}$ are trainable parameters, along with their respective bias terms $b_{n1}$ and $b_{n2}$. Equation (1) uses a sigmoid so that the output lies between 0 and 1, as required for binary classification.
Upon normalisation, the attention weights $\widetilde{a}_{ni}$ define an attention distribution:
\begin{equation}
a_{ni} = \frac{\widetilde{a}_{ni}}{\sum_{k=1}^{m}\widetilde{a}_{nk}}
\end{equation}
These weights are used to create a new weighted logit, $L_{n}$:
\begin{equation}
L_{n} = \sum_{i=1}^{m}a_{ni}L_{ni}
\end{equation}
Using a binary label $y_{n}$, indicating if the example is neutral or not, we create a sentence-level loss to optimise using the sentence labels:
\begin{equation}
\mathcal{L}_{n}^{\text{Sent}} = (\sigma(W_{n3}\times L_{n} + b_{n3}) - y_{n})^2
\end{equation}
We combine this with an auxiliary span loss on the model attention weights, $\mathcal{L}_{n}^{\text{Span}}$:
\begin{equation}
\mathcal{L}_{n}^{\text{Span}} = (\max_{i}(\widetilde{a}_{ni}) - y_{n})^2
\end{equation}
The auxiliary span attention loss has the effect of encouraging the span-level unnormalized attention weights to be closer to zero for entailment examples. This supports our logical framework, which states that all entailment examples must only consist of entailment spans. As neutral predictions require at least one neutral span, by supervising the maximum unnormalized attention weight we encourage one of the spans to be predicted as neutral. The contradiction-detection attention layer behaves in a similar way, detecting the presence of contradiction spans.
We then combine together the auxiliary span attention loss with the sentence-level loss:
\begin{equation}
\mathcal{L}_{n}^{\text{Total}} = \mathcal{L}_{n}^{\text{Sent}} + \mathcal{L}_{n}^{\text{Span}}.
\end{equation}
While our model evaluation exclusively makes predictions from the unnormalized attention weights for each span, we find in practice that including a sentence level objective improves the span-level decisions. In particular, the sentence supervision influences the attention values directly in (3), in addition to supervising the representations $h_i$. The sentence-level supervision does not have access to the full hypothesis, separately considering each span representation $h_i$.
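A compact PyTorch sketch of one detection head (instantiated once for the neutral and once for the contradiction class) is given below; layer sizes and variable names are illustrative assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

class SpanDetectionHead(nn.Module):
    # One binary head over the span [CLS] vectors h_i.
    def __init__(self, dim=768, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(           # Eq. (1)
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid())
        self.span_logit = nn.Linear(dim, 1)  # per-span logits L_i
        self.sent = nn.Linear(1, 1)          # W_3, b_3 in Eq. (4)

    def forward(self, h, y):
        # h: (num_spans, dim); y: binary sentence label (0 or 1)
        a_tilde = self.attn(h).squeeze(-1)   # unnormalized weights
        a = a_tilde / a_tilde.sum()          # Eq. (2)
        L = (a * self.span_logit(h).squeeze(-1)).sum()  # Eq. (3)
        sent = (torch.sigmoid(self.sent(L.view(1, 1))) - y) ** 2
        span = (a_tilde.max() - y) ** 2      # Eq. (5)
        return a_tilde, sent.squeeze() + span  # Eq. (6)
\end{verbatim}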
\subsection{Training Process}
Our neutral and contradiction attention models have two class labels, with $y_{n}$ and $y_{c}$ taking values of 0 or 1. ${y}_{n}=0$ when there are no neutral spans present, while ${y}_{n}=1$ when there is at least one neutral span. ${y}_{c}$ follows the same approach for the contradiction detection label.
For neutral NLI examples, we train our neutral-detection model using a sentence-level label of $y_{n}=1$. Using our logical framework, we also know that a neutral example cannot contain a contradiction span, as any example with a contradiction span would have a contradiction label. We therefore train our contradiction-detection model using a sentence-level label of $y_{c}=0$ for these examples. For contradiction examples, we do not train our neutral-detection attention model, as there may or may not be neutral spans present in addition to the contradiction spans. For entailment examples, we train both neutral and contradiction detection models using the labels $y_{n}=0$ and $y_{c}=0$.
Therefore, for neutral or entailment examples we consider the total of both $\mathcal{L}_{n}^{\text{Total}}$ and $\mathcal{L}_{c}^{\text{Total}}$, whereas for the contradiction class we only consider $\mathcal{L}_{c}^{\text{Total}}$.
\subsection{Logic-based Evaluation}
We evaluate each NLI sentence based exclusively on our span-level decisions. Specifically, an NLI hypothesis is classified as the contradiction class if any of the unnormalized attention weights are classified as being contradiction ($\widetilde{a}_{ci} \ge 0.5$).
If there are no contradiction spans present, an NLI example is classified as neutral if there exists at least one neutral span ($\widetilde{a}_{ni} \ge 0.5)$. Otherwise, the NLI example is classified as entailment.
The sentence-level logits $L_{n}$ and $L_{c}$ are only used during training and discarded for evaluation -- they consider information across all the spans and therefore do not allow for a deterministic evaluation of which spans are responsible for the model predictions.
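This decision rule can be written in a few lines; a minimal sketch (names are ours):
\begin{verbatim}
def predict_sentence(a_tilde_c, a_tilde_n, thresh=0.5):
    # a_tilde_c / a_tilde_n: unnormalized attention weights per span
    if max(a_tilde_c) >= thresh:   # any contradiction span
        return "contradiction"
    if max(a_tilde_n) >= thresh:   # else any neutral span
        return "neutral"
    return "entailment"            # no neutral/contradiction spans
\end{verbatim}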
\subsection{Span-level Supervision with Human Explanations}
To provide our model with more information about individual spans, we can use the e-SNLI \cite{camburu2018esnli} human explanations, with rationales highlighting the most important words for a given hypothesis and premise. Training models with the e-SNLI explanations can improve both model performance \cite{zhao2020lirex, joe_and_marek_paper} and model robustness \cite{joe_and_marek_paper}, although not all prior work has found improvements \cite{kumar-talukdar-2020-nile, camburu2018esnli, esnli_work_just_after_us, hase2021models}. We assess whether the human explanations can help our model make better decisions at the span level, and also whether the explanations further improve performance of our SLR-NLI model.
To incorporate the human explanations during training, we consider the highlighted word rationales for each hypothesis. If any of our SLR-NLI hypothesis spans contain all of the highlighted words in the e-SNLI explanation, we assign the overall sentence label as the individual span label. If the hypothesis rationale is not a single consecutive span then we do not provide any supervision with the explanation, as we observe that only single-span rationales consistently align with the desired span-level labels.
Let $p_{i}$ be the value of 1 where the hypothesis rationale is fully contained within the $i$-th span, and 0 otherwise. Our neutral span loss $\mathcal{L}_{n}^{\text{e-SNLI}}$ is defined as:
\[\mathcal{L}_{n}^{\text{e-SNLI}} = \lambda^{\text{e-SNLI}}\sum_{i}{p_{i}(\widetilde{a}_{ni} - y_{n})^2}\]
$\mathcal{L}_{c}^{\text{e-SNLI}}$ is defined in
a similar way, using $\widetilde{a}_{ci}$ and $y_{c}$.
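The containment indicator $p_i$ can be computed directly on token indices; a minimal sketch, where token-index containment is our reading of ``fully contained'':
\begin{verbatim}
def rationale_mask(span_token_ids, rationale_token_ids):
    # p_i = 1 iff the single consecutive e-SNLI rationale lies
    # fully inside span i; 0 otherwise.
    r = set(rationale_token_ids)
    return [1 if r <= set(ids) else 0 for ids in span_token_ids]
\end{verbatim}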
\section{Related Work}
\subsection{NLI with Neural Theorem Provers}
Neural theorem provers can effectively solve a range of natural language tasks \cite{rocktaschel2017endtoend,DBLP:conf/acl/WeberMMLR19,minervini2020learning,DBLP:conf/icml/Minervini0SGR20}, many of which could be recast in a similar form to NLI. These datasets are often built from knowledge graphs \cite{sinha2019clutrr, Bouchard2015OnAR, Kok2007StatisticalPI}, for example identifying relationships between characters in short stories \cite{sinha2019clutrr}. Non-neural theorem provers have also shown promising results on the SICK dataset \citep{martinez-gomez-etal-2017-demand, Abzianidze20, Abzianidze17, yanaka-deduction-proofs}, although these methods cannot be easily translated to SNLI, which covers a wide range of topics and uses various forms of reasoning.
\subsection{Monotonic Reasoning with NLI}
Using monotonic reasoning involves matching components of the hypothesis and premise, and using external knowledge from resources including WordNet \cite{WordNet} to
determine the entailment relationships between corresponding parts of both sentences \cite{Hy-NLI, MonaLog, NeuralLog}. To improve performance, this logical approach can be combined with traditional neural models, learning which examples would benefit from a neural approach rather than using logical rules \cite{Hy-NLI}, or using neural models to decide the likelihood of examples where entailment and contradiction are not detected \cite{MonaLog}. A hybrid approach can improve performance, but at the expense of the interpretability benefits. Logic models using monotonic reasoning are mostly evaluated on SICK and other datasets with a small number of differences between the premise and hypothesis. While our logical framework is not specifically designed for these datasets, we show our performance on SICK still remains competitive with this prior work.
\subsection{Logical Reasoning with SNLI}
Previous work has applied logical reasoning techniques to SNLI, but with performance substantially below baseline levels. \citet{FengRecent} segment a hypothesis into spans, choosing one of seven logical relations for each hypothesis span. A logical relation is predicted for each span using a GPT-2 model \cite{radford2019language} which considers the premise, the given span and all prior hypothesis spans, with reinforcement learning training this span-level behaviour \citep{FengRecent}. Previous work also predicts the seven logical relations for individual words rather than for hypothesis spans \cite{FengOld}.
Closest to our work, \citet{wu2021weakly} label spans as entailment, neutral or contradiction, evaluating at a sentence level based on the presence of neutral or contradiction spans. Our substantial performance improvements compared to \citet{wu2021weakly} reflect our different approaches to supervising at a span level. \citet{wu2021weakly} provide each span model with information about the entire premise and hypothesis, in addition to a hypothesis span and a corresponding premise span. The span label is then predicted using a three class classifier.
In comparison, we create separate additional attention layers for neutral and contradiction span detection, combining together multiple different losses to supervise at both the sentence and span level. As we consider neutral and contradiction span detection as separate binary tasks, we also introduce logical rules during training which include not supervising our neutral detection model for contradiction examples, and how a neutral label means there are no contradiction spans present.
We directly compare our results to \citet{FengOld}, \citet{wu2021weakly} and \citet{FengRecent}.
\section{Experiments}
We experiment with training the SLR-NLI model on either SNLI \cite{bowman-etal-2015-large} or SICK \cite{marelli-etal-2014-sick}. SNLI is a larger corpus, with 570k observations compared to 10k in SICK. SNLI involves a diverse range of reasoning strategies between the premise and hypothesis, where annotators have been asked to create a hypothesis for each class for each premise, with the premises provided from image captions. In contrast, SICK initially uses sentence pairs from image captions and video descriptions, with additional sentence pairs generated by applying a series of rules, including replacing nouns with pronouns and simplifying verb phrases. As a result, entailment and contradiction examples in SICK are often the same except with one or two small changes. Previous work exploits this similarity, using logical reasoning to identify the contradiction and entailment examples \cite{NeuralLog, MonaLog}. Compared to this setting, SNLI provides a more challenging dataset for applying logical reasoning approaches.
We further experiment with training our model in a reduced data setting, motivated by the hypothesis that forcing our model to learn at a span-level will make better use of a smaller number of examples. We expect SLR-NLI to also be more robust in a reduced data setting, with existing models known to rely on dataset biases when overfitting to small datasets \cite{utama-etal-2020-towards}. For the reduced data setting, we train SLR-NLI with either 100 or 1,000 examples from SICK or SNLI, evaluating out-of-distribution performance on other unseen NLI datasets including SNLI-hard \cite{gururangan-etal-2018-annotation}, MNLI \cite{williams-etal-2018-broad} and HANS \cite{mccoy-etal-2019-right}. As no explanations are provided for SICK, we only use explanations when training on SNLI (described as SLR-NLI-esnli).
For SICK, when we consider out-of-distribution performance we evaluate on the corrected SICK dataset \cite{MonaLog}, with labels manually corrected by \citet{MonaLog, kalouli2017textual}. However, for a fair comparison to previous work, we use the original SICK dataset when evaluating in-distribution performance from SICK.
To validate that the model is making sensible decisions at a span level, we compare the span-level predictions to the e-SNLI human explanations. For each single-span hypothesis rationale in e-SNLI, we consider each model span containing this entire rationale. Each span that does contain the e-SNLI rationale is evaluated, with its span predictions compared to the sentence-level label. As the e-SNLI test data contains three human explanations for each observation, the hypothesis spans are compared to each of these three explanations.
In summary, we consider the following research questions:
\begin{enumerate}
\item Does our interpretable SLR-NLI model retain performance on SNLI?
\item Is SLR-NLI a flexible approach that can also work on SICK?
\item Does the SLR-NLI model improve performance in a reduced data setting?
\item In the reduced data setting, does SLR-NLI also improve robustness?
\item Does SLR-NLI make sensible decisions at a span level?
\end{enumerate}
\subsection{Hyper-parameter Settings}
We use a BERT-base \cite{devlin-etal-2019-bert} model, providing a direct comparison to previous work. We choose the best learning rate for the baseline, SLR-NLI and SLR-NLI-esnli from $\{ 10^{-6}, 2.5\times10^{-6}, 5\times10^{-6}, 7.5\times10^{-6}, 10^{-5} \}$. Each SNLI model is trained over 2 epochs, using a linear learning schedule with a warmup and warmdown period of a single epoch. For the SICK dataset, we train with 3 warmup and warmdown epochs to reach a baseline comparable with previous work. $\lambda^{\text{e-SNLI}}$ is set as 0.1.
We also consider spans that consist of up to 3 smaller, consecutive spans. Hyper-parameters are selected based on SNLI, with the same hyper-parameters applied to SICK. A separate hyper-parameter search is conducted for the reduced data setting, with models evaluated with early stopping across 10 epochs. Each hyper-parameter is tested across 5 random seeds, comparing the mean results.
\section{Results}
\begin{table}[!t]
\begin{center}
\begin{tabular}{rcccccccccc}
\toprule
{\bf Accuracy} & \textbf{SNLI} & {\bf $\Delta$} \\
\midrule
BERT (baseline) & 90.77 & \\
\midrule
\citet{FengOld} & 81.2 & \textit{-9.57} \\
\citet{wu2021weakly} & 84.53 & \textit{-6.24} \\
\citet{FengRecent} & 87.8 & \textit{-2.97} \\
\midrule
SLR-NLI & 90.33 & \textit{-0.44} \\
\textbf{SLR-NLI-esnli} & \textbf{90.49} & \textbf{\textit{-0.28}} \\
\bottomrule
\end{tabular}
\end{center}
\caption{Performance (accuracy) on the SNLI test set from our SLR-NLI model, with and without the additional e-SNLI supervision during training. Each condition is tested across 5 random seeds, including the baseline.}
\label{SNLI_results}
\end{table}
\begin{table}[!t]
\begin{center}
\begin{tabular}{rcccccccccc}
\toprule
{\bf Accuracy} & \textbf{SICK} & {\bf $\Delta$} \\
\midrule
BERT (baseline) & 85.52 & \\
\midrule
Hybrid systems \\
\midrule
\citet{MonaLog}+BERT & 85.4 & \textit{-0.1} \\
\citet{Hy-NLI} & 86.5 & \textit{+1.0} \\
\midrule
Logic-based systems \\
\midrule
\citet{MonaLog} & 77.2 & \textit{-8.3} \\
\citet{Abzianidze17} & 81.4 & \textit{-4.1} \\
\citet{martinez-gomez-etal-2017-demand} & 83.1 & \textit{-2.4} \\
\citet{yanaka-deduction-proofs} & 84.3 & \textit{-1.2} \\
\citet{Abzianidze20} & 84.4 & \textit{-1.1} \\
\citet{NeuralLog} & \textbf{90.3} & \textbf{\textit{+4.8}} \\
\midrule
SLR-NLI & 85.43 & \textit{-0.09} \\
\bottomrule
\end{tabular}
\end{center}
\caption{Performance (accuracy) of SLR-NLI compared to previous work. Results for SLR-NLI-esnli are not provided as the e-SNLI explanations are specific to SNLI. Results for SLR-NLI and the baseline are an average from across 5 random seeds.}
\label{SICK_results}
\end{table}
\begin{table*}[!t]
\begin{center}
\begin{tabular}{rcccccccccc}
\toprule
Training data: & \multicolumn{4}{c}{\textbf{SNLI}} & \multicolumn{4}{c}{\textbf{SICK}} \\
\cmidrule(lr){2-5} \cmidrule(lr){6-9}
Model: & SLR-e & Base. & SLR-e & Base. & SLR & Base. & SLR & Base. \\
Data-set size: & 1,000 & 1,000 & 100 & 100 & 1,000 & 1,000 & 100 & 100 \\
\midrule
\textit{\textbf{OOD datasets:}} \\
SNLI-dev & \textbf{74.22} & 73.98 & \textbf{61.45} & 51.50 & \textbf{46.96} & 38.50 & \textbf{39.86} & 33.58 \\
SNLI-test & \textbf{74.05} & 73.90 & \textbf{60.95} & 51.75 & \textbf{46.88 }& 38.17 & \textbf{39.69} & 33.41 \\
SNLI-hard & \textbf{59.51} & 59.25 & \textbf{50.46} & 45.34 & \textbf{44.58} & 38.34 & \textbf{39.35} & 34.07 \\
MNLI-mismat. & \textbf{57.05} & 49.17 & \textbf{43.70} & 34.64 & \textbf{47.85} & 40.90 & \textbf{39.99} & 35.84 \\
MNLI-mat. & \textbf{54.76} & 48.46 & \textbf{42.50} & 34.80 & \textbf{46.51} & 39.72 & \textbf{39.31} & 35.36 \\
SICK & \textbf{52.23} & 52.19 & \textbf{45.77} & 37.72 & \textbf{81.33} & 81.11 & \textbf{71.13} & 65.31 \\
HANS & 50.00 & \textbf{50.27} & 49.99 & \textbf{50.31} & 50.61 & \textbf{53.22} & \textbf{50.73} & 49.95 \\
\bottomrule
\end{tabular}
\end{center}
\caption{Performance of SLR-NLI compared to a BERT baseline in a reduced data setting. We use SLR-NLI-esnli (SLR-e) when training on SNLI, whereas for SICK we train SLR-NLI without any human explanations (SLR). Results in \textbf{bold} indicate which of the corresponding SLR and baseline results is better. When training on SICK, we use the original SICK dataset, whereas out-of-distribution evaluation on SICK uses the SICK-corrected dataset. All results are an average across 5 random seeds.}
\label{reduced_data}
\end{table*}
\begin{table*}[!t]
\begin{center}
\begin{tabular}{rcccccccccc}
\toprule
& {\bf Sent. acc.} & {\bf Span acc.} & {\bf F-macro} & {\bf F-ent} & {\bf F-neut} & {\bf F-cont} \\
\midrule
\textit{\textbf{Zero-shot:}} \\
SLR-NLI & 90.33 & 84.75 & 84.61 & 81.74 & 84.80 & 87.27 \\
SLR-NLI + dropout & 90.33 & 87.91 & 87.81 & 85.96 & 86.52 & 90.94 \\
\midrule
\textit{\textbf{Supervised:}} \\
SLR-NLI-esnli & 90.49 & 88.29 & 88.17 & 86.24 & 86.99 & 91.28 \\
\bottomrule
\end{tabular}
\end{center}
\caption{Span-level performance of our SLR-NLI-esnli model compared to SLR-NLI without supervision from the human explanations. A version of SLR-NLI with a dropout mechanism applied is also included. All results are an average across 5 random seeds.}
\label{esnli_performance}
\end{table*}
\subsection{Performance on SNLI and SICK}
On the SNLI test set, the SLR-NLI model achieves in-distribution results very close to the standard BERT model, with 90.33\% accuracy compared to the baseline of 90.77\% (\cref{SNLI_results}).
This result outperforms prior work on logical reasoning in SNLI, as the inclusion of logical frameworks has previously been accompanied by large performance drops \cite{FengRecent, wu2021weakly, FengOld}. We achieve this level of performance without ever training or evaluating on the full premise and hypothesis pairs.
When training with the e-SNLI explanations, we see an additional small improvement in accuracy (reaching 90.49\%).
SLR-NLI also compares favourably to prior logical reasoning work on the SICK dataset, despite the benchmarks being specifically designed for this dataset (\cref{SICK_results}).
In particular, \citet{NeuralLog} is specifically designed to bridge the differences between a hypothesis and premise, which is not possible for SNLI. \citet{Hy-NLI} also outperforms SLR-NLI, however this is a hybrid approach that uses logic for some examples and a standard neural network for other examples without the same interpretability benefits. The strong performance on both SNLI and SICK shows the flexibility of SLR-NLI, which can be applied to a range of different NLI datasets.
\subsection{Reduced data setting}
In a reduced data setting training with 1,000 SNLI observations, there is a small in-distribution improvement from SLR-NLI-esnli, with larger improvements observed out-of-distribution (see \cref{reduced_data}). When training with only 100 observations we also see substantial improvements in-distribution, with SLR-NLI-esnli outperforming the baseline by 18\% for SNLI-test, with improvements on SNLI-hard, MNLI-mismatched, MNLI-matched and SICK of 11\%, 26\%, 22\% and 21\% respectively. Training on a reduced SICK dataset shows the same findings, with in-distribution improvements accompanied by out-of-distribution improvements on the SNLI and MNLI datasets. The improved out-of-distribution performance in the reduced data setting suggests that both SLR-NLI and SLR-NLI-esnli are less reliant on dataset biases compared to the baseline model \cite{belinkov-etal-2019-dont, karimi-mahabadi-etal-2020-end, stacey-etal-2020-avoiding, conf_reg_paper, he-etal-2019-unlearn, clark-etal-2019-dont}.
The only dataset where SLR-NLI underperformed compared to the baseline was HANS. This dataset contains syntactic heuristics, with each hypothesis consisting of words that are also words in the corresponding premise. For example, the premise of `the doctor was paid by the actor' is accompanied by the hypothesis `the doctor paid the actor' \cite{mccoy-etal-2019-right}. In these examples, evaluating on the smaller spans provides no additional benefit.
\section{Analysis}
\subsection{Evaluating span performance}
\begin{figure*}[t!]
\includegraphics[width=\columnwidth*2-40pt]{figures/logical_examples.pdf}
\caption{Span and sentence level predictions for SNLI-test, SNLI-dev and MNLI-mismatched (out-of-distribution). Each of the four examples above are correctly predicted by SLR-NLI-esnli (the first two examples are neutral, while the second two examples are contradiction).} \label{multiple_example_spans}
\end{figure*}
Even without additional supervision from e-SNLI, SLR-NLI performs well at a span level, with 84.75\% accuracy in a zero-shot setting (\cref{esnli_performance}). This shows that human explanations are not required for the model to make sensible decisions at a span level. With the additional e-SNLI supervision, SLR-NLI-esnli reaches a span-level accuracy of 88.29\%. We observe that without the additional e-SNLI supervision, the model tends to rely more on the longer spans that consist of consecutive smaller spans. To mitigate this issue, we experiment with a dropout mechanism during training which randomly masks large spans consisting of consecutive smaller spans, encouraging the model to also make sensible decisions at the most granular span level. In 10\% of observations, all such large spans are masked, leaving only smaller spans as the model input.
This dropout mechanism improves span performance to 87.91\%, although the sentence level performance does not improve in tandem (\cref{esnli_performance}).
\subsection{Model Interpretability}
The main advantage of our span-level approach is the interpretability of the model predictions, allowing us to understand which specific parts of the hypothesis are responsible for each predicted label. We define explanation spans as the set of the smallest neutral and contradiction spans such that any longer span that is predicted as neutral or contradiction contains one of these spans. As a result, we only choose longer, multi-segment spans when there are no smaller spans that explain the model decisions. For contradiction predictions, we only include contradiction spans in our explanations.
As shown in \cref{multiple_example_spans} - Example 1, SLR-NLI-esnli consistently makes sensible span-level predictions for easier, shorter hypotheses. We therefore show the results of a longer, more challenging example (Example 2), along with two examples from the unseen, out-of-distribution MNLI-mismatched validation set (Examples 3 and 4). In each case, the model is making correct decisions in line with our human expectations. To provide an unbiased sample of the span-level explanations, we also show the first four neutral and contradiction examples from SNLI-test in the Appendix.
\section{Conclusion}
We introduce SLR-NLI as a logical framework that predicts the class of an NLI sentence pair based on span-level predictions. We achieve this span-level supervision by combining together a sentence-level loss with span losses based on logical rules. SLR-NLI almost fully retains performance on SNLI, outperforming previous logical methods, whilst also performing well on the SICK dataset. The model also outperforms the baseline in a reduced data setting, with substantially better model robustness. Finally, we create a highly interpretable model whose decisions can easily be understood, highlighting why these predictions are made and the reasons for any misclassifications.
\section{Baseline with and without projector}\label{sec:woprojector}
\input{resource/table/rn50/Main_projector}
Recall from \cref{ssec:feat} that our experiments in \cref{table:main} are produced with ResNet-18 plus a projector, to match the number of parameters among the baselines.
\cref{table:projector} is an expansion of \cref{table:main} that shows the difference between the original ResNet-18 and ResNet-18 with a projector.
In \cref{table:projector}, the results from the original ResNet-18 baseline are similar to the results from ResNet-18 with a projector.
We believe the remaining gap is due to the random initialization of the projector, whereas the original ResNet-18 training starts entirely from ImageNet pre-trained weights.
Although some original ResNet-18 experiments work slightly better than our modified ResNet-18 baselines, DeTT still outperforms the original ResNet-18 baseline.
\section{Feature Level Analysis of DeTT}\label{sec:tsne}
\input{resource/fig/tsne/tsne}
In this section, we analyze what the feature maps of the debiased model look like and how DeTT distills robustness to the student.
\cref{fig:tsne} shows feature maps from models trained on the Waterbirds dataset.
From \cref{sfig:t1} to \ref{sfig:tf}, feature maps from the debiased teacher are visualized through t-SNE plots.
\cref{sfig:tf}, \ref{sfig:erm}, \ref{sfig:simkd} and \ref{sfig:dett} show the feature distributions at the input of the teacher's classifier layer.
In the shallowest feature map in \cref{sfig:t1}, data are separated based on the background feature, which is the easy-to-learn feature.
The second-to-last feature map in \cref{sfig:t4} (while the pooled last feature map (\cref{sfig:tf}) is the input of the transplanted classifier) is still not aware of the core feature.
However, \cref{sfig:tf} shows a clear pattern separating $\mathtt{waterbird}$ and $\mathtt{landbird}$.
Groups with the same label (but different spurious annotation) share a region in the t-SNE map without overlapping groups with a different label.
Given this pattern, feature map distillation works well to distinguish the labels of data even when a dataset with spurious correlation is used during distillation.
With \cref{fig:tsne}, we can reason about the feature map distribution of a biased dataset.
Bias-conflict groups (i.e., the pale red and light blue data points in \cref{fig:tsne}) are sparser than the spuriously correlated groups in a biased dataset.
This means that the distillation of the bias-conflict groups' own distribution can be inaccurate,
producing unclear samples (i.e., samples near the decision boundary) in the student model, as observed in \cref{sfig:simkd}.
From this observation, we expect that distilling a fine-tuned feature extractor reduces the error for bias-conflict samples, because the spuriously correlated groups share the feature distribution of the bias-conflict groups in \cref{sfig:tf}, while they do not in \cref{sfig:erm}.
In other words, bias-conflict groups can transfer their feature map distribution through spuriously correlated samples when the feature map is fine-tuned.
Although each feature extractor with its own fine-tuned classifier can serve as a debiased model \citep{izmailov2022feature}, fine-tuning the feature extractor is recommended when feature distillation is considered.
\newpage
\section{DeTT with different teacher}\label{sec:jttteacher}
\input{resource/table/rn50/different_teacher}
To show how DeTT works for teachers with different performance, we run DeTT using a teacher trained with the JTT \citep{liu2021just} method.
We make two observations from the experiment in \cref{table:jttteacher}.
The first observation is that the performance of DeTT can drop if the feature extractor is overfitted to the training dataset.
In \cref{table:jttteacher}, DeTT shows the best performance among the knowledge distillation baselines (i.e., KD($\alpha = 1$) and SimKD) on both the Waterbirds and CelebA datasets.
However, the JTT baseline outperforms DeTT on the Waterbirds dataset, even though JTT does not require a teacher network.
We observe that the training accuracy saturates to 100\% for all groups in Waterbirds during DeTT training, as it does for the teacher network.
A student network that reaches 100\% training accuracy (without having seen any true labels) is overfitted at the feature level.
We believe this low distillation performance is due to overfitting at the feature level, which does not happen in the same experiment in \cref{table:main}.
The second observation in \cref{table:jttteacher} is that DeTT shows a performance gain over SimKD on the CelebA dataset, while \cref{table:ablaUpweight} shows a smaller performance boost after upweighting.
This supports our third belief in \cref{ssec:main}: if the student's performance does not saturate to that of the teacher, the performance can be boosted by upweighting data samples.
\section{Experimental details}\label{sec:expdetails}
In this section, we explain further experimental details.
The section has three parts: details about debiased training, details about knowledge distillation, and further details about the datasets.
\textbf{Debiased training.}
To train the debiased teacher and the baselines, we use the official code released by \citet{liu2021just} and the datasets introduced by \citet{sagawa2019distributionally} for our experiments.
The projector used to match the dimension of the classifier layer is from the official code released by \citet{chen2022knowledge}.
For NLP tasks, we use BERT and DistilBERT as implemented in the HuggingFace \citep{huggingface} library.
Both teacher and baseline training start from pretrained weights.\\
We search hyperparameters to train the debiased model based on the performance at early stopping.
The learning rate and the weight decay parameter are searched for training the debiased model.
The learning rate search space is divided into two types, with and without a scheduler.
We tuned the learning rate over $l \in \{ 1\mathrm{e}{-3},1\mathrm{e}{-4},2\mathrm{e}{-5},1\mathrm{e}{-5} \}$ for both the ResNet and BERT architectures without a scheduler.
In vision tasks (i.e., Waterbirds and CelebA), the weight decay is searched over $\mathtt{wd} \in \{1, 0.1\}$ for strong regularization.
With a scheduler, the learning rate starts from 0.1 and decays when the loss is on a plateau for 25 and 5 epochs in Waterbirds and CelebA, respectively.
In NLP tasks (i.e., CivilComments and MultiNLI), we do not use a learning rate scheduler due to the small number of epochs, and the weight decay is searched over $\mathtt{wd} \in \{0.1, 0.01, 0\}$.
The number of training epochs for the vision tasks is 300 and 50 for Waterbirds and CelebA, respectively.
BERT is trained for 5 epochs and DistilBERT is trained for 10 epochs.
\textbf{Distilling Knowledge to Student.}
The code for knowledge distillation (both vanilla KD and SimKD) is released by \citet{chen2022knowledge}.
The implementation of the student model is identical to that of the debiased model.\\
The knowledge distillation setting has two parts: end-to-end vanilla knowledge distillation, and feature map distillation with a transplanted layer.
For both distillation settings, we tune the learning rate as described for training the debiased model.
The weight decay is tuned over $\mathrm{wd} \in \{ 0.1, 0.01, 0.001\}$.
In addition, we tune the settings for upweighting data samples.
To select which samples to upweight, a vanilla teacher model (i.e., a model without a projector) trained with the ERM method is used as the identification model.
Because its role is distinguishing critical samples, we select the identification model from the model checkpoints at epochs 60 and 2 for Waterbirds and CelebA, respectively.
For NLP tasks, the BERT model from the checkpoint at epoch 3 is used as the identification model.
The upweighting parameter is tuned over $\lambda_{\mathrm{up}} \in \{ 5,20,50\}$.
The number of training epochs for knowledge distillation is the same as for training the debiased model.
When the upweight is 50, the number of training epochs in the vision tasks is reduced to 200 and 30 for Waterbirds and CelebA, respectively.
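A minimal PyTorch sketch of the sample-selection step follows; the function names are ours, and the identification model is simply the biased ERM model described above:
\begin{verbatim}
import torch

def upweight_factors(id_model, loader, lam_up=20.0):
    # Weight lam_up for samples the biased identification model
    # misclassifies, 1 otherwise; used to reweight the distillation
    # loss over the training set.
    id_model.eval()
    weights = []
    with torch.no_grad():
        for x, y in loader:
            wrong = (id_model(x).argmax(dim=-1) != y).float()
            weights.append(wrong * (lam_up - 1.0) + 1.0)
    return torch.cat(weights)
\end{verbatim}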
\textbf{Dataset.}
We describe the details of the datasets used: the type of bias present in each dataset and how strongly each dataset is biased.
\begin{itemize}[leftmargin=*,topsep=0pt,parsep=-1pt]
\item \textit{Waterbirds} dataset was first introduced in \citet{sagawa2019distributionally} as a benchmark dataset for the spurious correlation problem.
Waterbirds is artificially generated by pasting bird images onto specific backgrounds.
As a result, the bird type $\mathtt{\{waterbirds, landbirds\}}$ is used as the class label and the background feature $\mathtt{\{water, land\}}$ is used as the spurious attribute.
The bias in Waterbirds is symmetric in the training split (i.e., $95\%$ of $\mathtt{waterbirds}$ appear on a $\mathtt{water}$ background, and the same ratio holds for $\mathtt{landbirds}$).
In the test and validation splits, the proportion of the spuriously correlated feature is balanced given the label (i.e., half are on a $\mathtt{water}$ background and the other half on a $\mathtt{land}$ background).
This makes the average accuracy highly affected by the worst-group accuracy, as represented in \cref{table:ablaFD}.
\item \textit{CelebA} dataset is a face dataset introduced in \citet{liu2015deep}.
As a benchmark for the spurious correlation problem, the hair color attribute $\mathtt{\{ blond, nonblond\}}$ is used as the class label and gender $\mathtt{\{ female, male\}}$ is used as the spurious attribute.
This CelebA setup is used in \citet{sagawa2019distributionally}.
Unlike the Waterbirds dataset, its bias is unbalanced:
if the label is $\mathtt{nonblond}$, the proportion of $\mathtt{female}$ is $51.7\%$ (little or no bias), while the proportion of $\mathtt{female}$ given the $\mathtt{blond}$ label is $94.2\%$.
This means that only the $(\mathtt{blond, male})$ group is a victim of spurious correlation in this dataset.
\item \textit{CivilComments} dataset \citep{koh2021wilds} is a benchmark dataset used for the spurious correlation problem in language models.
The goal of this benchmark is to classify whether a given sentence is $\mathtt{toxic}$ or $\mathtt{nontoxic}$, while the existence of demographic identities $\mathtt{\{present, absent\}}$ (i.e., gender, LGBT, religion or race) works as the spuriously correlated attribute.
In this dataset, a $\mathtt{toxic}$ sentence is more likely to contain demographic identities ($60.9\%$) than a $\mathtt{nontoxic}$ sentence ($39.8\%$).
We observe that the CivilComments dataset is the least biased among all our benchmarks.
\item \textit{MultiNLI} dataset was introduced in \citet{williams2017broad} and used as a benchmark for the spurious correlation problem in \citet{sagawa2019distributionally}.
The task is to classify the logical relationship between two sentences, i.e., whether one sentence is $\mathtt{contradict}$, $\mathtt{entailed}$ or $\mathtt{neutral}$ with respect to the other.
The presence of negation $\mathtt{\{present, absent\}}$ works as the spuriously correlated attribute:
if negation appears in one sentence, the proportion of the label $\mathtt{contradict}$ is $76.2\%$, while only $30.0\%$ of sentence pairs are $\mathtt{contradict}$ when there is no negation.
In addition, it is a ternary classification task while the other benchmarks are binary classification tasks.
\end{itemize}
\section{Conclusion and discussion}
In this paper, we have discovered that knowledge distillation of a debiased teacher does not necessarily lead to a well-debiased student, whenever we train with a dataset with spurious correlations. To address this issue, we have proposed a last-layer-centric knowledge distillation algorithm, coined DeTT. Without relying on full annotations on the spurious correlated attributes, DeTT successfully debiases the student model, outperforming the baseline methods on four benchmark datasets that encompass vision and language domains; both upweighting and the last layer transplanting play key roles in boosting the worst-group performance. We also find that DeTT can also utilize out-of-domain datasets for distillation, such as ImageNet.
\textbf{Limitations and Future directions.} One of the most notable limitations of the presented DeTT is that it requires an additional \textit{projection layer} for transplanting the last layer of the student. Modern neural networks, especially the ones designed for on-device deployment, are highly optimized in terms of the size-to-performance tradeoff and the compute budget of the target device. In such cases, attaching additional layers may either undermine the peak performance or render the modified model not deployable to the target device. In this sense, developing a knowledge distillation algorithm that does not alter the network architecture of the student model is a worthy subject of study.
Another limitation of DeTT is that the method requires separately training a biased model to select the samples to be upweighted, introducing a nonnegligible overhead in terms of the training time and computing. We believe that such overhead can be circumvented whenever we have extra supervision on the spurious correlation; how we should best utilize the extra supervision is an important future direction.
\subsection{Experimental Setup} \label{ssec:setup}
We focus on evaluating the worst-group performance of the proposed DeTT for classification tasks.
\input{resource/table/rn50/MainTable}
\textbf{Datasets.} We use total four benchmark datasets, two from the vision domain and two from the language domain.
\begin{itemize}[leftmargin=*,topsep=0pt,parsep=-1pt]
\item \textit{Waterbirds} \citep{sagawa2019distributionally} is an artificial dataset consisting of images of birds; the images have been synthetically generated from the Caltech-UCSD Birds (CUB) dataset \citep{wah2011caltech}, by cropping an image segment of a bird and pasting it on the background image. The goal of this dataset is to classify the type of birds appearing in the image, with the label set $\{\mathtt{waterbirds}, \mathtt{landbirds}\}$. The spuriously correlated attribute is the binary set of background landscapes $\{\mathtt{water},\mathtt{land}\}$.
\item \textit{CelebA} \citep{liu2015deep} is a dataset consisting of images of the faces of celebrities, with annotations on $40$ different attributes of the faces. We use the blondness of the hair color $\{\mathtt{blond}, \mathtt{nonblond}\}$ as the class attribute (i.e., target of prediction). As the spuriously correlated attribute, we use the gender attribute $\{\mathtt{female},\mathtt{male}\}$.
\item \textit{CivilComments} \cite{koh2021wilds} is a text dataset consisting of various user-generated online comments, with annotations on the toxicity of the comment and demographic identities (e.g., gender, LGBT, ethnicity, etc.) appearing in the comment. The goal of the dataset is to classify whether a sentence is $\{\mathtt{toxic},\mathtt{nontoxic}\}$. As the spuriously correlated attribute, we use whether any of the words related to demographic identities appear in the comment $\{\mathtt{present},\mathtt{absent}\}$.
\item \textit{MultiNLI} \cite{williams2017broad} is a text dataset consisting of pairs of sentences. The task is to classify the logical relationship between two sentences, where the relationship can be $\{\mathtt{contradict},\mathtt{entail},\mathtt{neutral}\}$. The spuriously correlated attribute is the presence of the negation words (e.g., ``never''), i.e., $\{\mathtt{present},\mathtt{absent}\}$.
\end{itemize}
As previously mentioned, DeTT uses a small number of ``validation samples'' with annotations on the spuriously correlated attributes for tuning the hyperparameters. For this purpose, we use the same split as \citet{liu2021just}.
\textbf{Models.} For image datasets (Waterbirds and CelebA), we use ResNet-50 as our teacher models \citep{he2016deep}. The student models are the smaller version, ResNet-18, starting from the weights that have been pretrained on the ImageNet dataset. For the text datasets (CivilComments and MultiNLI), we use BERT as our teacher \citep{devlin19}. The student models are the pretrained version of DistilBERT \citep{sanh2019distilbert}. Unless noted otherwise, the teacher classifiers are trained with group DRO \citep{sagawa2019distributionally}, using the group annotations in the considered datasets. For the identification models (i.e., biased models for identifying samples to upweight), we use the same architecture as the student model, without any additional projectors.
\textbf{Projectors.} For the image classification models, we use a lightweight convolutional module consisting of three convolutional layers, where the first and last layers are $1\times 1$ convolutions and the middle layer is a $3\times 3$ depthwise convolution with 1024 input channels and 1024 output channels. For the text classification models, we do not need any projector layers, as the dimensions already match. For a fair comparison in model size, we also attach the projectors and a teacher-sized last layer to the baseline students that do not use any layer transplanting.
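For concreteness, a minimal PyTorch sketch of such a projector module is given below; the channel counts of the first and last $1\times 1$ layers are our assumption for illustration (mapping the student feature dimension, e.g., 512 for ResNet-18, up to the teacher dimension, e.g., 2048 for ResNet-50), as only the $1024$-channel depthwise middle layer is pinned down above.
\begin{verbatim}
import torch.nn as nn

def make_projector(d_student=512, d_teacher=2048, d_mid=1024):
    # 1x1 conv -> 3x3 depthwise conv -> 1x1 conv, as described above;
    # d_student and d_teacher are illustrative assumptions.
    return nn.Sequential(
        nn.Conv2d(d_student, d_mid, kernel_size=1),
        nn.Conv2d(d_mid, d_mid, kernel_size=3, padding=1, groups=d_mid),
        nn.Conv2d(d_mid, d_teacher, kernel_size=1),
    )
\end{verbatim}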
\textbf{Baselines.} We compare the performance of the DeTT mainly with two baseline distillation algorithms.
\begin{itemize}[leftmargin=*,topsep=0pt,parsep=-1pt]
\item \textit{Knowledge distillation} \citep{hinton2015distilling} is the standard distillation algorithm which regularizes the student training by matching the outputs of the teacher and student models; \citet{simonsays} observes that the algorithm gives a strong baseline for mitigating the class imbalance of student models, which is (weakly) associated with the spurious correlations. The method uses the loss
\begin{equation}
\mathcal{L} = (1-\alpha) \cdot \mathcal{L}_{\mathtt{CE}} + \alpha \cdot \mathcal{L}_{\mathtt{KD}}^{(\tau)},
\label{eq:distillation}
\end{equation} where $\mathcal{L}_{\mathtt{CE}}$ is the cross-entropy loss of the student, $\mathcal{L}_{\mathtt{KD}}^{(\tau)}$ is the KL divergence between the teacher and student softmax outputs with temperature-$\tau$ scaling, and $\alpha \in [0,1]$ is the hyperparameter balancing the two losses. For the temperature hyperparameter, we use $\tau = 4$, following a recent work by \citet{chen2022knowledge}. We write ``KD'' to denote this baseline method (a minimal sketch of this loss is given after this list).
\item \textit{Simple KD} \citep{chen2022knowledge} is a knowledge distillation algorithm which also transplants the last layer of the teacher model to the student and distills only the feature map. Compared with DeTT, the method lacks the upweighting step for debiasing the feature map. We write ``SimKD'' to denote this baseline.
\end{itemize}
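As a reference, the following is a minimal sketch of the loss in \cref{eq:distillation}; the $\tau^2$ rescaling of the KL term is the standard convention of \citet{hinton2015distilling} and is an assumption here, as it may or may not match the exact implementation being compared.
\begin{verbatim}
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, alpha, tau=4.0):
    # cross-entropy with the ground-truth labels
    ce = F.cross_entropy(student_logits, labels)
    # KL divergence between temperature-scaled softmax outputs
    kld = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits / tau, dim=1),
        reduction="batchmean",
    ) * tau ** 2  # standard rescaling, keeps gradients comparable in tau
    return (1 - alpha) * ce + alpha * kld
\end{verbatim}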
In addition, we compare with baselines that do not perform any knowledge distillation. In particular, we compare with two debiasing algorithms, JTT \citep{liu2021just} and group DRO \citep{sagawa2019distributionally}, and with vanilla ERM. We note that group DRO utilizes the annotations on the spuriously correlated attribute, and thus is not intended for a direct comparison with DeTT.
\textbf{Random seeds.} All figures reported in this paper are averaged over three independent trials.
\textbf{Other experimental details.} Other experimental details, such as the optimization schedule and the choice of hyperparameters, are given in \cref{sec:expdetails}.
\subsection{Main Result}\label{ssec:main}
The main experimental results are given in \cref{table:main}, and we make the following four observations:
\begin{itemize}[leftmargin=*,topsep=0pt,parsep=-1pt]
\item First, we observe that DeTT consistently achieves the best worst-group performance among all compared methods. DeTT even outperforms the student debiased using full annotations (i.e., group DRO), except on the CivilComments dataset.
\item Second, DeTT also tends to outperform the other knowledge distillation baselines in terms of average accuracy. In other words, the worst-group performance gain of DeTT does not come from trading off average accuracy, but from a better utilization of the teacher model.
\item Third, the upweighting procedure is essential for the performance. Comparing DeTT with SimKD, the performance boost is as large as $16.1\%\mathrm{p}$ (Waterbirds) or $6.1\%\mathrm{p}$ (MultiNLI). The gain is relatively small on CelebA and CivilComments, but we believe this is due to a saturation of performance; the DeTT performance is close to that of the teacher.
\item Fourth, the model trained only with the knowledge distillation loss (i.e., $\mathrm{KD}(\alpha=1)$) often achieves a competitive worst-group accuracy. In fact, vanilla KD achieves a higher performance than SimKD on the Waterbirds dataset, and the same performance on the CivilComments dataset.
\end{itemize}
\subsection{Ablations: Transplanting and Upweighting}
\label{ssec:feat}
\input{resource/table/rn50/FeatureDistillAbla}
\input{resource/table/rn50/UpweightAbla}
We now ablate the two components of DeTT: the transplanted teacher layer and the sample upweighting. In other words, we compare the performance of DeTT with (1) the vanilla KD using only the distillation loss, (2) the vanilla KD with upweighting, and (3) the SimKD baseline. The comparison on the vision datasets is given in \cref{table:ablaFD}. We observe that both components are essential for a good worst-group performance. When the upweighting is missing, the performance degrades dramatically on the Waterbirds dataset, to a level below that of the vanilla KD. When the transplanted teacher last layer is missing, the performance is similar to that of the baselines with projectors, even though the number of parameters is smaller. Detailed results for the baselines without projectors are reported in \cref{sec:woprojector}.
We perform an additional sensitivity analysis in \cref{table:ablaUpweight}. We observe, somewhat disappointingly, that upweighting does not always improve the worst-group performance over the non-upweighted version (SimKD). Instead, we still rely on the availability of a small number of validation samples with annotations on the spuriously correlated attributes to find the right value of the hyperparameter $\lambda_{\mathrm{up}}$.
\subsection{Groupwise Accuracy}\label{ssec:groupwise}
We take a closer look at the subgroup accuracy of DeTT (\cref{table:distillpatternup}). One thing we note is that the worst group for the student model tends to be identical to the worst group for the teacher model, rather than the subgroup with the smallest effective number of samples (where the effective number of samples takes the upweighting into account). In the Waterbirds dataset, we see that the worst-performing subgroup of DeTT is $(\mathtt{waterbirds},\mathtt{land})$, while the subgroup with the smallest effective number of samples is $(\mathtt{waterbirds},\mathtt{water})$.
\input{resource/table/rn50/distillpattern_up}
\input{resource/table/rn50/distillpattern}
We additionally look at the subgroup accuracy of SimKD (\cref{table:distillpattern}). We observe a similar pattern as in DeTT, where the worst subgroup of the student tends to be the same as the worst subgroup of the teacher model.
From these observations, we hypothesize that the transplanted last layer of the teacher to the student may allow the student to more faithfully follow the behavior of the teacher.
\subsection{DeTT with out-of-domain datasets}\label{ssec:ood}
We also test whether DeTT can utilize out-of-domain datasets for distilling the feature map of the teacher model. In particular, we use the ImageNet dataset to distill the teachers trained on Waterbirds and CelebA, respectively. The experimental results are given in \cref{table:distillood}.
From the table, we observe that the distilled student model achieves a moderately high worst-group accuracy. In particular, in the Waterbirds dataset, the model performs better than the debiasedly trained student-size models without teachers (Group DRO and JTT), and even SimKD. In the CelebA dataset, the model performs better than JTT and the KD baseline with $\alpha=0.5$.
Considering that this distillation utilizes no supervision on the bias whatsoever (not even validation samples) other than the debiased teacher, the worst-group accuracy of over $80\%$ provides a strong baseline.
\input{resource/table/rn50/distillood}
\subsection{Other Experiments}
We provide additional experimental results in the supplementary materials. Here, we briefly summarize these experiments and their takeaway messages:
\begin{itemize}[leftmargin=*,topsep=0pt,parsep=-1pt]
\item \cref{sec:woprojector}: Recall that we have also attached projector layers to the baseline student models for a fair comparison. We additionally compare the empirical performance of the baselines without projectors (except for SimKD and DeTT), and find that these versions often work better than their counterparts with projectors. Still, DeTT outperforms these newly added baselines as well.
\item \cref{sec:tsne}: We visualize the intermediate layer outputs of the debiased teacher model via t-SNE plots to understand whether transplanting the ``last layer'' is an optimal choice. We observe that the last hidden layer indeed provides the best separation according to the target label.
\item \cref{sec:jttteacher}: Instead of the teacher trained with group DRO, we try distilling a teacher trained with JTT---a debiased training algorithm that does not require extensive annotations on the spuriously correlated attributes. Again, we find that DeTT outperforms the other baselines that do not use spurious attribute annotations for training; the only exception is on Waterbirds, where JTT (without a teacher) works particularly well.
\end{itemize}
\section{Experiment}
In this section, we evaluate the empirical performance of the proposed method, DeTT, under four debiasing benchmarks in vision and language domains. Comparing with baseline knowledge distillation algorithms, we observe that DeTT brings a significant gain in worst-group performance.
\input{contents/experiment/1Setup}
\input{contents/experiment/2MainResult}
\input{contents/experiment/3FeatureDistill}
\section{Introduction}
While recent applications of deep learning have demonstrated its capability to capture the core knowledge in the training data, it has also been observed that neural networks are prone to learning undesirable correlations that are easier to learn than the desired ones \citep{geirhos2020}. In many training datasets, there exist attributes which are \textit{spuriously correlated}---i.e., misleadingly correlated but not causally related---to the label. For instance, a dataset of birds may contain numerous images of birds on diverse background landscapes. It is likely that the vast majority of the images of waterbirds (the label) have a water background (the spuriously correlated attribute), but not necessarily so, e.g., ``a duck on the grass.'' Given such training datasets, neural networks often learn to make predictions based on the backgrounds instead of the birds. This problem has gained much attention, and a wealth of \textit{debiasing} algorithms has been proposed \citep{sagawa2019distributionally,nam2020learning,liu2021just}. These algorithms address the spurious correlation by maximizing the \textit{worst-group accuracy} among all subgroups defined by the pairs of label and spuriously correlated attribute; if a model can classify a duck on the grass and on the water equally well, it may be viewed as not relying on the spurious correlation.
\input{resource/fig/method}
After debiased training, however, neural network models often need to go through several post-processing steps before being deployed to the end users \citep{huyen}. Such post-processing opens up an additional window of vulnerability to bias. One of the most widely used post-processing routines is \textit{model compression}, such as pruning or quantization \citep{han16}. It has been discovered in recent works that standard compression algorithms often disproportionately impact the performance of each subgroup, resulting in a highly biased model \citep{hooker2020characterising,lukasik2021teacher}. Following this observation, several bias-aware compression methods have been proposed to train debiased lightweight models \citep{simonsays,lin2022fairgrape}.
This paper focuses on identifying and resolving the interplay of the spurious correlation with \textit{knowledge distillation} (KD), i.e., regularizing the training of lightweight student models with larger-scale teacher models \citep{hinton2015distilling}. KD is a widely used post-processing technique, both as a standalone model compression method and as a post-compression subroutine applied after pruning or quantization, e.g., TernaryBERT \citep{ternarybert}. Under the related setup of learning with long-tailed data \citep{van2017devil}, prior works have shown that KD brings highly disproportionate performance gains (or sometimes even degradations) to different ImageNet classes \citep{lukasik2021teacher,wang2022robust}. None of these, however, studies the effect of KD on spuriously correlated datasets; the setup critically differs from the long-tail literature in the sense that it is concerned with shortcut learning, which does not necessarily originate from class imbalances. Furthermore, the target of classification is much more decoupled from the subgroup identity in the spurious correlation literature than in long-tail learning.
\textbf{Contribution.} In this paper, we consider the task of distilling the knowledge from a debiased teacher, and find that the vanilla KD often trains a \textit{biased student} whenever the training dataset contains a spurious correlation. For example, vanilla KD on the MultiNLI dataset \citep{williams2017broad} results in over $30\%\mathrm{p}$ degradation in the worst-group performance ($78.8\% \to 46.2\%$). Importantly, the worst-group performance of the student is \textit{worse} than the student debiasedly trained without any distillation, making it questionable whether the standard KD algorithm can effectively transfer the teacher knowledge on debiasing.
To address this problem, we propose an algorithm for distilling the knowledge from debiased teachers, coined DeTT (debiasing by teacher transplanting; \cref{fig:sub1}). In a nutshell, DeTT consists of two steps:
\begin{itemize}[leftmargin=*,topsep=0pt,parsep=-1pt]
\item \textit{Transplanting:} DeTT directly transplants the most bias-critical layer of the teacher to the student---the last layer. This design is inspired by recent observations suggesting that the last layer of the model plays an overwhelmingly important role in debiasing; the (oracle) last layer retraining of a vanilla ERM model achieves competitive performance to state-of-the-art debiasing methods \citep{kirichenko2022last}, while the retraining does not improve an already-debiased model \citep{izmailov2022feature}.
\item \textit{Feature Distillation:} We match the feature map outputs of the teacher and student models with an MSE loss. Importantly, we upweight the samples that are identified as bias-conflicting, where we use a separately trained biased model for the identification. We find that such upweighting significantly boosts the worst-group performance. This suggests that the feature map may also play a substantial role in debiasing a neural network, contrary to the prior belief.
\end{itemize}
We highlight that DeTT utilizes the labels only for identifying the samples to upweight, in order to debiasedly train the feature map. This property leads to two practical advantages of the method: (1) DeTT can be applied to scenarios where we only have a limited number of samples with annotations on the spuriously correlated attributes, as in \citet{nam2020learning,liu2021just}, during the distillation phase. (2) DeTT can alternatively utilize an external unlabeled dataset from other domains, which presumably contains less spurious correlation relevant to the task, for the feature distillation. Indeed, we find that DeTT using ImageNet for distillation outperforms several distillation algorithms, giving a strong baseline that does not use any type of bias supervision except for the teacher model itself.
Throughout our experiments on four vision and language debiasing benchmarks, DeTT consistently outperforms the baseline knowledge distillation and debiased training algorithms (summarized in \cref{fig:sub2}). Here, we observe that \textit{upweighting} is essential in the feature map distillation for a good debiasing performance; in Waterbirds dataset, it gives more than $16\%\mathrm{p}$ gain in the worst-group accuracy.
We believe that the proposed DeTT may serve as a strong baseline for future debiasing algorithms for distillation.
\section{Method: Debiased Teacher Transplant (DeTT)}
Our method distills the teacher's feature map and reuses the teacher's classifier, so that the most bias-sensitive module, the classifier layer, is never trained.
In this way, we can distill and reuse the teacher's knowledge with less bias and without group annotations.
We also analyze the characteristics of feature distillation on biased datasets through different hyperparameter settings and ablation studies.
The bias left over in the feature map is removed by a weak form of sample upweighting.
\subsection{Feature Distillation and Reusing the Teacher Module}
\label{subsec:simkd}
Vanilla knowledge distillation \citep{hinton2015distilling} uses the teacher model output $p_T = g_T(\phi_T (x))$ as the knowledge to be distilled.
The distillation loss to be minimized is:
\begin{equation}
\mathcal{L}_{\mathrm{KD}}=(1-\alpha)\mathcal{L}_{\mathrm{CE}}(y, p_S) + \alpha \mathcal{L}_{\mathrm{KL}}(p_T, p_S)
\label{eq:distill}
\end{equation}
$\mathcal{L}_{\mathrm{CE}}$ and $\mathcal{L}_{\mathrm{KL}}$ are the cross-entropy loss and the Kullback–Leibler divergence (KL) loss, respectively.
$\alpha$ is a hyperparameter controlling the distillation ratio.
Previous research \citep{izmailov2022feature} shows that the classifier layer is more sensitive to the dataset bias and to the debiasing algorithm than the feature map is.
We do not want to take the risk of introducing bias by training the classifier.
From this motivation, our method reuses $g_T(\cdot)$, with a projector $P(\cdot)$ to match dimensions (or no projector if the dimensions already match), and optimizes only the feature map as follows \citep{chen2022knowledge}:
\begin{equation}
\mathcal{L}_{\mathrm{feat}}=\mathcal{L}_{\mathrm{mse}}(\phi_T (x), P(\phi_S (x)))
\end{equation}
We use the mean squared error ($\mathcal{L}_{\mathrm{mse}}$) as the loss for simplicity.
However, we show in \cref{sec:AblaUpweight} that the resulting student is not fully debiased on some imbalanced datasets.
We conjecture that the frequencies with which feature activations appear differ across groups, and that this acts as a bias at the feature-map level.
\subsection{Upweighting Samples without Group Annotations}
\label{subsec:upweight}
To address the problem mentioned in the subsection above, our method upweights data samples to remove the leftover bias.
The samples to be upweighted are selected by an \textit{identification model} trained via empirical risk minimization (ERM), whose architecture is identical to that of the student network \citep{liu2021just}.
Training examples that the identification model fails to fit (i.e., examples expected to belong to minority groups) are upweighted.
The dataset used for knowledge distillation is the upweighted dataset $\mathcal{D}_{kd}$, constructed by concatenating the original dataset $\mathcal{D}$ and the upweighted samples $\mathcal{D}_{\lambda_{up}}$, where $\lambda_{up}$ is the upweighting hyperparameter.
In our work, this weak debiasing technique is sufficient for removing the bias at the feature-map level.
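A minimal sketch of this construction is given below, assuming (for illustration) an index-based dataset interface that returns $(x, y)$ pairs; the JTT convention of giving each misclassified sample a total multiplicity of $\lambda_{up}$ is also an assumption.
\begin{verbatim}
import torch

@torch.no_grad()
def build_kd_indices(id_model, dataset, lambda_up, device="cuda"):
    # Return indices realizing D_kd: every sample once, and samples
    # misclassified by the ERM identification model lambda_up times.
    id_model.eval()
    indices = []
    for i in range(len(dataset)):
        x, y = dataset[i]  # assumed (input, label) interface
        pred = id_model(x.unsqueeze(0).to(device)).argmax(dim=1).item()
        repeats = lambda_up if pred != y else 1
        indices.extend([i] * repeats)
    return indices  # e.g., torch.utils.data.Subset(dataset, indices)
\end{verbatim}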
\section{DeTT: Debiasing by Teacher Transplanting}
We now present DeTT (Debiasing by Teacher Transplanting), a simple yet effective algorithm for distilling the knowledge of a debiased teacher using a biased training dataset without annotations on the spuriously correlated attribute.
DeTT distills the teacher by dividing-and-conquering the last layer and the remaining layers. In other words, given a teacher model $f_T$ and the student model $f_S$, we decompose both models into two parts as
\begin{equation}
\begin{aligned}
f_T(x) &= g_T \circ \phi_T(x)\\
f_S(x) &= g_S \circ \phi_S(x),
\end{aligned}
\end{equation}
where $g_T, g_S$ denote the last fully connected layers (i.e., linear classifiers) of the teacher and student models, respectively, and $\phi_T, \phi_S$ denote the preceding layers (i.e., feature extractors) of the models. Given this decomposition, we apply separate strategies to distill each component. Roughly, DeTT performs the following two operations.
\begin{itemize}[leftmargin=*,topsep=0pt,parsep=-1pt]
\item \textbf{Last layer.} We directly \textit{transplant} the last fully connected layer of the teacher model to the student model and keep the layer parameters frozen (i.e., $g_S = g_T$). If the input dimension of $g_T$ and the output dimension of $\phi_S$ do not match, we add an additional projection layer in between (see \cref{ssec:lastlayer} for more details).
\item \textbf{Feature extractor.} We train the student model feature extractor $\phi_S$ by distilling the feature map outputs from the teacher feature extractor $\phi_T$, using the biased training data. Importantly, we upweight the data samples that have been misclassified by another separately trained \textit{identification model} which has the same network architecture as the student model; we find this procedure essential for boosting the worst-group performance (see \cref{ssec:restlayers} for details).
\end{itemize}
\textbf{Intuition.} This separate processing of the last layer and the preceding layers is inspired by a recent series of observations suggesting that the last layer may play a far more important role in model debiasing than the rest of the layers; \citet{kirichenko2022last} observe that one can effectively debias a classifier trained by the vanilla ERM (empirical risk minimization) procedure by simply retraining its last layer, while \citet{izmailov2022feature} report that last layer retraining does not improve the worst-group performance of a model trained by group DRO. The latter observation, in particular, suggests that the last layer of a group DRO model (or potentially any debiasedly trained model) may contain rich information for a debiased classification. This motivates us to design a \textit{last-layer-centric} distillation algorithm, DeTT.
We note that the ideas of \textit{transplanting the last layer} and \textit{feature distillation} of the teacher model are not entirely new. Feature distillation was first proposed by \citet{romero2014fitnets} and has been revisited recently by \citet{yang21}. The effectiveness of transplanting the last layer with distilled features has been recently discovered by \citet{chen2022knowledge}, which is the work most closely related to ours. Our work contributes to this line of work by (1) identifying that such a method gives a reasonable starting point for distilling debiased classifiers, and (2) adopting sample upweighting for resolving the feature-level bias, which gives a significant boost to the worst-group performance of the student.
We also note that, in DeTT, the labels in the training dataset are used only for identifying the samples to upweight, by which we mitigate the dataset bias. In other words, if we have access to an external unlabeled training dataset (e.g., unlabeled ImageNet) that does not contain the spurious correlation, it may be possible to debiasedly distill the feature map without any upweighting. We test this idea in \cref{ssec:ood}, where we find that such unsupervised distillation indeed leads to a competitive worst-group performance of the student.
In the remainder of this section, we describe each distillation procedure with more technical details.
\subsection{Last Layer: Transplanting the Teacher Classifier}\label{ssec:lastlayer}
For a $k$-class classification problem, let $g_S: \mathbb{R}^{d_S} \to \mathbb{R}^k$ be the last fully connected layer of the student model, where $d_S$ is the number of last hidden layer neurons. To generate $g_S$, DeTT directly transplants the teacher linear classifier $g_T: \mathbb{R}^{d_T} \to \mathbb{R}^k$, where $d_T$ is the number of last hidden layer neurons in the teacher model.
Whenever there is no mismatch in the input dimensionality between the linear classifiers of the teacher and the student (i.e., $d_S = d_T$), we simply replace $g_S$ by $g_T$. That is, DeTT changes the student model to be
\begin{equation}
f_S(x) = g_T \circ \phi_S(x),\label{eq:transplant_student}
\end{equation}
where $\phi_S$ is the feature map of the student.
In many cases, however, we have $d_S \ne d_T$. Indeed, as we are mainly considering student models that are smaller in size than the teacher model (so that the student can be run on a limited-resource device, whereas the teacher cannot), it is likely that the student model has a smaller hidden layer dimension than the teacher. In such a case, we attach a lightweight trainable projector $\Pi: \mathbb{R}^{d_S} \to \mathbb{R}^{d_T}$ in front of the transplanted $g_T$. In other words, the student is
\begin{equation}
f_S(x) = g_T \circ \Pi \circ \phi_S(x).
\end{equation}
Given this student model with the transplanted last layer, we train the feature map $\Pi \circ \phi_S(x)$ with a distillation loss, which we describe in more detail in \cref{ssec:restlayers}.
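A minimal sketch of the transplanting step is given below, assuming torchvision-style models whose final classifier is exposed as an \texttt{fc} attribute; for brevity the sketch uses a linear projector on the pooled features, whereas our experiments use the convolutional projector module applied before pooling.
\begin{verbatim}
import copy
import torch.nn as nn

def transplant(student, teacher, d_s=512, d_t=2048):
    g_t = copy.deepcopy(teacher.fc)        # teacher classifier g_T
    for p in g_t.parameters():
        p.requires_grad = False            # keep the transplant frozen
    proj = nn.Linear(d_s, d_t) if d_s != d_t else nn.Identity()
    student.fc = nn.Sequential(proj, g_t)  # f_S = g_T . Pi . phi_S
    return student
\end{verbatim}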
\subsection{Feature Map: Distillation with Sample Upweighting}\label{ssec:restlayers}
Unlike the last layer which is directly transplanted from the teacher module to the student, the feature map of the student model is distilled from the teacher using the (biased) training data. We use the MSE loss for the distillation; in the most basic form (without any sample upweighting), DeTT trains the student feature map $\phi_S$ to solve
\begin{equation}
\min_{\phi_S} \mathbb{E}\left\|\phi_S(x) - \phi_T(x) \right\|^2,\label{eq:mseloss}
\end{equation}
where $\phi_T$ is the feature map of the teacher model and the expectation is taken with respect to the training data. Whenever there is a mismatch between the output dimensionalities of $\phi_S$ and $\phi_T$, we replace $\phi_S$ by $\Pi \circ \phi_S$ and jointly train the combined feature map, where $\Pi$ is the trainable lightweight projector (described in \cref{ssec:lastlayer}).
\textbf{Sample Upweighting.} Empirically, we observe that a na\"{i}ve distillation of the feature map (via \cref{eq:mseloss}) results in a suboptimal worst-group accuracy. Instead, one can further debias the student feature map by upweighting the samples that have been misclassified by a biasedly trained model, i.e., an \textit{identification model} \citep{nam2020learning,liu2021just}. In particular, following \citet{liu2021just}, DeTT trains a vanilla ERM model of the same size as the student as the identification model, and upweights the samples misclassified by this model by a factor of $\lambda_{\mathrm{up}}$. For tuning the hyperparameter $\lambda_{\mathrm{up}}$, we use a small number of \textit{validation samples} with bias annotations, as in standard debiasing works \citep{nam2020learning,liu2021just}.
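For illustration, the objective of \cref{eq:mseloss} together with a per-sample weighting (equivalent in expectation to replicating the upweighted samples in the dataset, as we actually do) can be sketched as follows.
\begin{verbatim}
def feature_distill_loss(phi_s, phi_t, weights=None):
    # per-sample MSE between (projected) student and teacher features;
    # weights = None recovers the unweighted loss of Eq. (eq:mseloss)
    per_sample = ((phi_s - phi_t.detach()) ** 2).flatten(1).mean(dim=1)
    if weights is None:
        return per_sample.mean()
    return (weights * per_sample).sum() / weights.sum()
\end{verbatim}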
\section{Related Work}
In this section, we briefly overview existing work on debiasing and knowledge distillation.
\textbf{Debiasing.} The potential harms of spurious correlations residing in the training dataset have received a lot of attention recently. The literature is distinguished from the field of ``learning from class-imbalanced data,'' in the sense that its main cause is attributed to the easy-to-learn nature of spuriously correlated attributes, instead of the imbalance in subgroup population \citep{geirhos2020}.
Many \textit{debiasing} strategies to prevent fitting the spurious correlation have been proposed. \citet{sagawa2019distributionally} formulates debiasing as a worst-group loss minimization, leveraging the fact that if one achieves a good performance on the subgroup of bias-conflicting samples, then the model can be viewed as not utilizing the spurious correlation. Subsequent works mostly focus on a more challenging setup of debiasing without annotations on the spuriously correlated attribute, inspired by the fact that the bias in a dataset is difficult to detect \citep{nam2020learning,sohoni20,liu2021just}. A popular line of works proposes debiasing algorithms based on sample reweighting \citep{nam2020learning,liu2021just,idrissi2022simple}, while a more recent group of works focuses on the role that the last layer plays in debiasing \citep{kirichenko2022last,izmailov2022feature}. Our algorithm is inspired by both lines of work, utilizing resampling and paying special attention to the last layer.
\textbf{Knowledge distillation (KD).}
Knowledge distillation \citep{hinton2015distilling} is a model compression technique that uses the predictions of a large-scale teacher model to enhance the training of a small student model. KD also plays a significant role outside model compression, showing its power to help the training of a similar-sized model \citep{furlanello2018born}. Apart from using the teacher predictions, several other frameworks have been proposed, including feature distillation \citep{romero2014fitnets} and distilling the inter-sample relations \citep{park19}.
Recently, \citet{chen2022knowledge} distills the feature map with the mean squared error loss and reuses part of the teacher to closely mimic the teacher's behavior; this is the work most closely related to our method, which also reuses the last layer and distills the feature map.
\textbf{Knowledge distillation from imbalanced dataset.}
Several prior works study knowledge distillation under the presence of class imbalance, a field closely related to debiasing.
\citet{lukasik2021teacher} shows that KD algorithms can distill not only the `good knowledge' of the teacher, but also the bad knowledge; student networks are likely to boost their performance by reproducing the teacher's behavior in an exaggerated way. This gives rise to a student network that is more correct on the samples the teacher gets correct, and more wrong on the samples the teacher gets wrong. \citet{simonsays} shows that KD can help resolve the inter-class performance discrepancy resulting from pruning the model. \citet{wang2022robust} proposes a distillation algorithm that is specialized for distilling on long-tailed datasets. These works focus primarily on the long-tail problem, which is distinct from the debiasing that we consider. To the best of our knowledge, our work is the first to consider knowledge distillation under the presence of spurious correlation in the training (or distillation) dataset.
\section{Problem Setup}
Roughly stated, our goal is to train a small-sized \textit{debiased} classifier (student) by distilling the knowledge from a large debiased classifier (teacher), using the training dataset which may contain spurious correlations.
More concretely, we consider the following setup: Let $x \in \mathcal{X}$ be an input feature from which we want to predict the corresponding label $y(x) \in \mathcal{Y}$. In addition to the label, we assume that there exists another attribute $a(x) \in \mathcal{A}$ which may be \textit{spuriously correlated} (i.e., misleadingly correlated but not causally related) with the label. For example, the majority of the images labeled as $\mathtt{waterbird}$ may contain the background attribute $\mathtt{water}$, while there can also be images where waterbirds appear in other backgrounds (e.g., ``a duck on the grass''). We say that a dataset is \textit{biased} when such spurious correlation is present in the dataset.
Given a (potentially) biased dataset, our goal is to train a small-sized student model $f_S: \mathcal{X} \to \mathcal{Y}$ which avoids making predictions based on the spuriously correlated attribute. A standard way to formalize this goal is by formulating it as a group distributionally robust optimization (group DRO) problem \citep{sagawa2019distributionally}. The group DRO framework aims to minimize the expected loss for the worst-case subgroup, where the subgroup is defined as all samples sharing the same label and the spuriously correlated attribute pair. In other words, group DRO solves
\begin{equation}
\min_{f} \max_{(y,a) \in \mathcal{Y} \times \mathcal{A}} \mathbb{E}[\ell(y,f(x))~|~y(x) = y, a(x) = a], \label{eq:gdro}
\end{equation}
where $\ell(\cdot,\cdot)$ denotes the loss function. By solving the group DRO (\cref{eq:gdro}), one can avoid learning a predictor which performs too poorly on the samples that conflict with the spurious correlation, e.g., a duck on the grass. Whenever a model is trained to (approximately) achieve this goal, we will say that the model is \textit{debiased}.
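At evaluation time, the metric corresponding to \cref{eq:gdro} is the minimum per-subgroup accuracy; a minimal sketch reads:
\begin{verbatim}
import numpy as np

def worst_group_accuracy(preds, labels, attrs):
    # minimum accuracy over the subgroups (y, a) in Y x A
    accs = []
    for y in np.unique(labels):
        for a in np.unique(attrs):
            mask = (labels == y) & (attrs == a)
            if mask.any():
                accs.append((preds[mask] == labels[mask]).mean())
    return min(accs)
\end{verbatim}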
For training a debiased student model $f_S$, we assume that the learner has two ingredients at hand:
\begin{itemize}[leftmargin=*,topsep=0pt,parsep=-1pt]
\item First, we have access to a \textit{biased training dataset} containing the spurious correlation. Importantly, we assume that there is \textit{no annotation} on the spuriously correlated attribute, except for a small number of samples for validation purposes \citep{nam2020learning}. This assumption comes from the difficulty of collecting bias annotations, especially when (1) the bias is conceptually difficult to be characterized and communicated to the human labelers, or when (2) the bias attribute itself carries sensitive or private information, such as ethnicity.
\item Second, we can access a \textit{debiased teacher model} $f_T$. We assume that the teacher is large in size, so that the teacher cannot be directly used for the desired purpose (e.g., deploying on a device with limited computing resources). The teacher may have been trained either by (1) utilizing another bias-annotated dataset that is not available at the distillation stage, e.g., due to privacy reasons, or (2) using the same dataset but different debiasing algorithms which are less dependent on the availability of the bias annotations \citep{nam2020learning,liu2021just}.
\end{itemize}
As we will see in \cref{table:main}, the student distilled using a biased dataset with vanilla knowledge distillation method \citep{hinton2015distilling} suffers from a limited worst-group performance; the student often performs worse than a debiasedly trained model without any teacher.
Despite the tendency of social actors to become more similar
to each other through their local interactions \cite{Latane_81,Moscovici_85}, global polarization, i.e., the existence of
stable multicultural regimes, is a ubiquitous phenomenon in human society \cite{Boyd_85,Nowak_90}.
This issue has been addressed by somewhat idealized agent-based simulations of human behavior producing
highly nontrivial insights on the nature of the collective behavior that results from the homogenizing local
interactions (see \cite{RMP} for a recent review).
In this line, a particularly successful model is Axelrod's model for the dissemination of culture or social influence
\cite{Axelrod_97}.
In Axelrod's model, the agents are placed at the sites of a square lattice of size $L \times L$ and can interact with their nearest neighbors only.
The culture of an agent
is represented by a string of $F$ cultural features, where each feature can adopt a certain number $q$ of distinct traits.
The interaction between any two neighboring agents takes place with probability proportional to the number of
traits they have in common. Although the result of such interaction is the increase of the similarity between
the two agents, as one of them modifies a previously distinct trait to match that of its partner, the model exhibits global
polarization \cite{Axelrod_97}. The key ingredient for the existence of stable globally polarized
states in Axelrod's model is the rule that prohibits the interaction between agents which
do not have any cultural trait in common. Relaxation of this rule so as to permit interactions
regardless of the similarity between agents leads to one of the $q^F$ distinct
absorbing homogeneous configurations \cite{Kennedy_98}. In addition, introduction of external noise so that
traits can change at random with some small probability \cite{Klemm_03c}, as well as the increase of the connectivity of the agents
\cite{Greig_02,Klemm_03b,Klemm_03a}, also destabilizes the polarized state.
Although Axelrod's model enjoyed great popularity among the statistical physics community due mainly to
the existence of a non-equilibrium phase transition \cite{Marro_99} that separates the globally homogeneous from the globally polarized regimes
\cite{Castellano_00,Barbosa_09}, the vulnerability of the polarized absorbing configurations
was considered a major drawback to explaining robust collective social behavior. In this vein,
Parisi et al. \cite{Parisi_03} have proposed a lattice version of the classic frequency bias mechanism
for cultural or opinion change \cite{Boyd_85,Nowak_90}, which
assumes that the number of people holding an opinion is the key factor for an agent to adopt that opinion, i.e., people
have a tendency to espouse cultural traits that are more common in their social environment. Actually, almost any model of cultural
transmission admits that the probability that an individual acquires a cultural variant depends on the frequency of that
variant in the population. The frequency-dependent bias mechanism requires, however, that the individual be {\it disproportionately}
likely to acquire the more common variant \cite{Boyd_85}.
More to the point, in the model of Parisi et al.
the culture of an agent is specified by a binary string of length $F$ (so $q=2$) and each bit of that
string takes the value which is more common among its neighbors \cite{Parisi_03}. Since the model can be immediately recognized as $F$ independent majority-vote models \cite{Liggett_85, Gray_85}, we found the claim by those authors that such a model exhibits a polarized regime most intriguing.
In order to check whether the multicultural absorbing configurations reported by Parisi et al. were not artifacts of the small lattice size
used in their study ($L =20$), in this paper we present a detailed analysis of the effects of the finite size of the lattice by considering
square lattices of linear size up to $L = 4000$. In addition, we set $F=1$ to stress the identification with the well-known majority vote
model of statistical physics. Hence the agents or sites of the lattices are modeled by Ising spins.
Our findings indicate that the polarized regime
is indeed stable in the thermodynamic limit $L \to \infty$ and that the element accountable for this stability is the procedure used to calculate the majority, which includes the
site to be updated (target site) in addition to its nearest neighbors.
For sites in the bulk of the lattice, this procedure essentially specifies the criterion of update in case of a tie, i.e.,
in case there is no majority among the neighbors. In this case, the majority-vote variant used by Parisi et al. leaves the state of
the agent untouched, whereas the most common variant used in the literature sets that state at random with probability $1/2$ \cite{Tome_91,deOliveira_92}. It should be noted, however, that the criterion used by Parisi et al. is actually the original definition of the majority-vote model as introduced in Refs.\ \cite{Liggett_85, Gray_85}.
It is surprising that such an (apparently) minor implementation detail produces nontrivial consequences in the thermodynamic limit, which can actually be confirmed analytically using a mean-field approach that takes into account the correlation between nearest neighbors -- the pair approximation \cite{Marro_99}. However,
the addition of a thermal-like noise such that the individuals may take a stance opposed to the majority opinion with some small probability
destabilizes the globally polarized regime leading to one of the two low-temperature homogeneous steady states of the Ising model \cite{Glauber_63}.
In that sense we disagree with the claim by Parisi et al. that their variant of the majority-vote model is more robust than Axelrod's model to the effect of this type of noise.
The rest of this paper is organized as follows. In Sect.~\ref{sec:model} we describe the variant of the majority-vote model proposed by Parisi et al.,
which henceforth we refer to as the extended majority-vote model
since its key ingredient is the stretching of the neighborhood to include the target site \cite{Parisi_03}.
In Sect.~\ref{sec:MF} we present the mean-field approach to this model using the single-site and pair approximations
\cite{Tome_91,Marro_99}.
In Sect.~\ref{sec:res} we use extensive Monte Carlo simulations to investigate several properties of the absorbing configurations such as
the average number of opinion domains or clusters,
the size of the largest cluster and the distribution of cluster sizes, giving emphasis to their dependences on the lattice size. In that section also
we discuss the vulnerability of the heterogeneous opinion regime against a thermal-like noise, which allows the individuals to disagree
with the majority opinion.
Finally, in Sect.~\ref{sec:conc} we present our concluding remarks.
\section{Model}\label{sec:model}
The agents are fixed at the sites of a square lattice of size $L \times L$ with open boundary conditions
(i.e., agents at the corners interact with two neighbors, agents at the sides with three,
and agents in the bulk with four nearest neighbors).
The initial configuration is random: the opinion of each agent is set
by a random digit $0$ or $1$ with equal probability. At each time step we pick a target agent at random and then verify
which is the more frequent opinion ($1$ or $0$) among its extended neighborhood, which includes the target agent itself. The opinion of the target agent is then changed
to match the corresponding majority value. We note that there are no ties for agents in the bulk or at the corners of the square
lattice since in these cases the extended neighborhood comprises $5$ and $3$ sites, respectively. However, agents at the
sides of the lattice have an extended neighborhood of $4$ sites (i.e., $3$ neighbors plus the target agent) and so in case of a tie,
the opinion of the target agent remains unchanged. Of course, in the limit of large lattices the contribution of these
boundary sites will be negligible. This procedure is repeated until
the system is frozen in an absorbing configuration.
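For clarity, a minimal Python sketch of this asynchronous update rule reads as follows; a lattice can be initialized, e.g., with \texttt{rng = np.random.default\_rng(0)} and \texttt{lattice = rng.integers(0, 2, size=(L, L))}.
\begin{verbatim}
import numpy as np

def majority(lattice, i, j):
    # majority opinion in the extended (self-inclusive) neighborhood;
    # a tie (possible only on the lattice sides) returns the current state
    L = lattice.shape[0]
    ones, total = int(lattice[i, j]), 1
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < L and 0 <= nj < L:  # open boundary conditions
            ones += int(lattice[ni, nj])
            total += 1
    if 2 * ones == total:
        return int(lattice[i, j])
    return int(2 * ones > total)

def update(lattice, rng):
    # one elementary step: a random target adopts the extended majority
    L = lattice.shape[0]
    i, j = rng.integers(L), rng.integers(L)
    lattice[i, j] = majority(lattice, i, j)
\end{verbatim}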
Although the majority-vote rule or, more generally, the frequency bias mechanism for
opinion change \cite{Boyd_85} is a homogenizing assumption by which the agents become more similar
to each other, the above-described model does seem to exhibit global polarization, i.e., heterogeneous absorbing configurations
\cite{Parisi_03}. Since the study of Ref.\ \cite{Parisi_03} was based on
a small lattice of linear size $L=20$ a more careful analysis is necessary to confirm whether this
conclusion holds in the thermodynamic limit as well. It is interesting to note that for
the more popular variant of the majority-vote model, in which the state of the target site is not included in
the majority reckoning, and ties are decided by choosing the opinion of the target agent at
random with probability $1/2$, the only absorbing states in the thermodynamic limit are the two homogeneous configurations.
For finite lattices, however, we find heterogeneous absorbing configurations characterized by stripes that
sweep the entire lattice. In Sect.~\ref{sec:res} we carry out a finite-size scaling analysis of the extended
majority-vote rule aiming at understanding the nature of the opinion domains that fragment
the absorbing configurations.
\section{Mean-field analysis}\label{sec:MF}
In this section we offer an analytical approximation to the extended majority-vote model introduced by Parisi et al.\ \cite{Parisi_03}.
The state of the agent at site $i$ of the square lattice is
represented by the binary variable $\eta_i = 0,1$ and so the configuration of the entire lattice comprising $N=L^2$ sites is
denoted by $\eta \equiv \left ( \eta_1, \eta_2, \ldots, \eta_N \right )$. The master equation that governs the time evolution
of the probability distribution $P \left ( \eta, t \right )$ is given by
\begin{equation}\label{Master1}
\frac{d}{dt} P \left ( \eta, t \right ) = \sum_i \left [ W_i \left ( \tilde{\eta}^i \right ) P \left (\tilde{\eta}^i, t \right )
- W_i \left ( \eta \right ) P \left (\eta, t \right ) \right ]
\end{equation}
where $\tilde{\eta}^i = \left ( \eta_1, \ldots, 1- \eta_i, \ldots, \eta_N \right )$ and $W_i \left ( \eta \right )$ is
the transition rate between configurations $\eta$ and $\tilde{\eta}^i$ \cite{Tome_91,deOliveira_92}.
For the extended majority-vote model we have
\begin{equation}\label{W1}
W_i \left ( \eta \right ) = \left | \Theta \left [ \sum_\delta \eta_{i+\delta} + \eta_i- 3 \right ] -
\eta_i \right |
\end{equation}
where the notation $\sum_\delta \left ( \ldots \right )$ stands for the sum over the 4 nearest neighbors of site $i$
and $\Theta \left ( x \right ) =1 $ if $x \geq 0$ and $0$ otherwise.
We are interested in determining the fraction of agents holding opinion $1$ or, equivalently, the fraction of sites in state $1$, which we denote by $\rho$. Since all sites are
equivalent we have $\rho \equiv \sum_i \eta_i/N = \left \langle \eta_i \right \rangle $ and this mean value is given by the equation
\begin{equation}\label{R0}
\frac{d}{dt} \left \langle \eta_i \right \rangle = \left \langle \left ( 1 - 2 \eta_i \right ) W_i \left ( \eta \right )
\right \rangle
\end{equation}
with $ \left \langle \left ( \ldots \right ) \right \rangle \equiv \sum_\eta \left ( \ldots \right ) P \left (\eta, t \right ) $
as usual. To carry out this average we need to make approximations. In what follows we study in detail two such approximation schemes,
namely, the single-site approximation and the pair approximation.
\subsection{The single-site approximation}
This is the simplest mean-field scheme, which assumes that the sites are independent random variables, so that
Eq. (\ref{R0}) becomes
\begin{eqnarray}\label{R1a}
\frac{d \rho }{dt} & = & -\rho \sum_{n=0}^4 B_n^4 \left ( \rho \right ) \left | \Theta \left [ n - 2 \right ] - 1 \right |
\nonumber \\
& & + \left ( 1 - \rho \right ) \sum_{n=0}^4 B_n^4 \left ( \rho \right ) \Theta \left [ n - 3 \right ]
\end{eqnarray}
where $B_n^m$ is the Binomial distribution
\begin{equation}\label{Bin}
B_n^m \left ( \rho \right ) = \left ( \begin{array}{c} m \\ n \end{array} \right ) \rho^n \left ( 1- \rho \right )^{m-n} .
\end{equation}
Carrying out the sums explicitly and rearranging the terms yield
\begin{equation}\label{R1b}
\frac{d \rho }{dt} = - \rho \left ( 1 - \rho \right ) \left ( 2 \rho - 1 \right ) \left ( 3 \rho^2 - 3 \rho -1 \right ) .
\end{equation}
This equation, which is invariant to the change $\rho \leftrightarrow 1- \rho$, has three fixed points,
namely, $\rho^* = 0$, $ \rho^* = 1 $ and $ \rho^* = 1/2$. The first two fixed points are stable and the
third one is unstable. This means that in the single-site approximation the only stable configurations are the homogeneous ones.
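This stability assignment can be verified by linearizing Eq.\ (\ref{R1b}) about each fixed point: writing $d\rho/dt = f \left ( \rho \right )$ with $f$ given by the right-hand side of Eq.\ (\ref{R1b}), a direct calculation yields
\begin{equation}
f' \left ( 0 \right ) = f' \left ( 1 \right ) = -1 , \qquad f' \left ( 1/2 \right ) = 7/8 ,
\end{equation}
so small perturbations about the homogeneous fixed points decay exponentially, whereas those about $\rho^* = 1/2$ grow.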
Finally, we note that $\rho$ contains the same information as the single-site probability
distribution $p_1 \left ( \eta_i \right ) $. In fact, $p_1 \left ( \eta_i = 1 \right ) = \rho $ and $p_1 \left ( \eta_i = 0 \right ) = 1 -\rho $.
\subsection{The pair approximation}
In this scheme we assume that nearest neighbors sites are statistically dependent random variables so that, in addition to the
single-site probability distribution $p_1 \left ( \eta_i \right ) $, we need to compute the pair probability distribution $
p_2 \left ( \eta_i, \eta_{i+\delta} \right )$ as well. Of particular importance for the subsequent calculations is the conditional
probability distribution $ p_{1|1} \left ( \eta_{i+\delta} \mid \eta_i \right )$ which is given simply by the ratio between the pair
and the single-site probability distributions.
For the sake of concreteness, let us denote the states of the 4 neighbors of site $i$ by $\eta_1, \eta_2, \eta_3$ and $\eta_4$. These are
independent random variables in the pair approximation since
they are not nearest neighbors in the square lattice, so the sum $n = \eta_1 + \eta_2 + \eta_3 + \eta_4 $
that appears in the argument of the Theta functions is a sum of independent variables. With these notations we can rewrite Eq.\ (\ref{R0}) as
\begin{eqnarray}\label{R2a}
\frac{d \rho }{dt} & = & -\rho \sum_{\eta_1,\ldots,\eta_4} p_{4 \mid 1} \left ( \eta_1,\eta_2,\eta_3 ,\eta_4 \mid 1 \right ) \left |
\Theta \left ( n - 2 \right ) -1 \right | \nonumber \\
& & + \left ( 1 - \rho \right ) \sum_{\eta_1,\ldots,\eta_4} p_{4 \mid 1} \left ( \eta_1,\eta_2,\eta_3 ,\eta_4 \mid 0 \right )
\Theta \left ( n - 3 \right ) \nonumber \\
\end{eqnarray}
where we have carried out the sum over $\eta_i =0,1$ explicitly. The assumption that $\eta_1,\dots,\eta_4$ are statistically
independent random variables allows us to write
\begin{equation}
p_{4 \mid 1} \left ( \eta_1,\eta_2,\eta_3 ,\eta_4 \mid \eta_i \right ) = p_{1 \mid 1} \left ( \eta_1 \mid \eta_i \right ) \times \ldots \times
p_{1 \mid 1} \left ( \eta_4 \mid \eta_i \right )
\end{equation}
and finally obtain
\begin{eqnarray}\label{R2b}
\frac{d \rho }{dt} & = & -\rho \sum_{n=0}^4 B_n^4 \left [ p_{1 \mid 1} \left ( 1 \mid 1 \right ) \right ] \left |
\Theta \left [ n - 2 \right ] - 1 \right | \nonumber \\
& & + \left ( 1 - \rho \right ) \sum_{n=0}^4 B_n^4 \left [ p_{1 \mid 1} \left ( 1 \mid 0 \right ) \right ] \Theta \left [ n - 3 \right ]
\end{eqnarray}
which is identical to Eq. (\ref{R1a}) except for the arguments of the binomial distributions. Using the notation
$\phi \equiv p_2 \left ( 1, 1 \right )$ we write
\begin{eqnarray}
p_{1 \mid 1} \left ( 1 \mid 1 \right ) & = & \frac{\phi}{\rho} \\
p_{1 \mid 1} \left ( 1 \mid 0 \right ) & = & \frac{\rho - \phi}{1-\rho}
\end{eqnarray}
so that Eq.\ (\ref{R2b}) involves only two unknowns, $\rho$ and $\phi$. Carrying out the summations explicitly so as to eliminate
the Theta functions yields
\begin{eqnarray}\label{R2c}
\frac{d \rho }{dt} & = & -\rho \left [ B_0^4 \left ( \frac{\phi}{\rho} \right ) + B_1^4 \left ( \frac{\phi}{\rho}
\right ) \right ] \nonumber \\
& & +
\left ( 1 - \rho \right ) \left [ B_3^4 \left ( \frac{\rho-\phi}{1-\rho} \right ) + B_4^4 \left ( \frac{\rho - \phi}{1-\rho} \right ) \right ] .
\end{eqnarray}
We note that this equation reduces to Eq.\ (\ref{R1b}) in the case the neighboring sites are assumed independent, i.e., $\phi = \rho^2$.
\begin{figure}
\centerline{\epsfig{width=0.52\textwidth,file=fig1.eps}}
\caption{Time evolution of the fraction of sites in state $1$, $\rho$, in the pair approximation obtained by solving Eqs.\ (\ref{R2c})
and (\ref{R2e}) using Euler's method with step-size $0.01$ for different initial conditions (bottom to top) $\rho_0 = 0.1, \ldots, 0.9$.
\label{fig:1} }
\end{figure}
\begin{figure}[ht]
\centerline{\epsfig{width=0.52\textwidth,file=fig2.eps}}
\caption{The fraction of sites in state $1$ at equilibrium, $\rho^* $ (represented by $\circ$), and
the probability that two neighbors are in state $1$ at equilibrium, $\phi^*$ (represented by $\triangle$)
as functions of the initial fraction of sites in state $1$, $\rho_0$.
The solid line is the result of the pair approximation for which $\phi^* \to \rho^*$.
The initial condition is $\rho = \rho_0$ and $\phi = \rho_0^2$. The symbols
show the results of the Monte Carlo simulations for a square lattice of linear size $L=200$.
\label{fig:2} }
\end{figure}
Next, our task is to determine an equation for $\phi = \left \langle \eta_i \eta_j \right \rangle $ where $j$ labels one of the four
nearest neighboring sites of $i$. We have
\begin{equation}
\frac{d}{dt} \left \langle \eta_i \eta_j \right \rangle = 2 \left \langle \eta_j \left ( 1 - 2 \eta_i \right ) W_i \left ( \eta \right ) \right \rangle .
\end{equation}
Carrying out the average over $\eta_i$ and $\eta_j$ (say $j=1$) explicitly yields
\begin{eqnarray}\label{R2d}
\frac{d \phi }{dt} & = & -2 \rho p_{1 \mid 1}\left ( 1 \mid 1 \right )
\sum_{m=0}^3 B_m^3 \left [ p_{1 \mid 1} \left ( 1 \mid 1 \right ) \right ] \left | \Theta \left [ m - 1 \right ] - 1 \right | \nonumber \\
& & + 2 \left ( 1 - \rho \right ) p_{1 \mid 1} \left ( 1 \mid 0 \right )
\sum_{m=0}^3 B_m^3 \left [ p_{1 \mid 1} \left ( 1 \mid 0 \right ) \right ] \Theta \left [ m - 2 \right ] \nonumber \\
\end{eqnarray}
which can be written more compactly as
\begin{eqnarray}\label{R2e}
\frac{d \phi }{dt} & = & - 2 \phi B_0^3 \left ( \frac{\phi}{\rho} \right ) \nonumber \\
& & + 2 \left ( \rho - \phi \right ) \left [ B_2^3 \left ( \frac{\rho-\phi}{1-\rho} \right )
+ B_3^3 \left ( \frac{\rho-\phi}{1-\rho} \right ) \right ] . \nonumber \\
\end{eqnarray}
Equations (\ref{R2c}) and (\ref{R2e}) determine completely the time evolution of $\rho$ and $\phi$ and so they are our final equations
for the pair approximation of the extended majority-vote model. Figure \ref{fig:1}, which shows
the time evolution of the density of sites in state $1$, confirms that Eq. (\ref{R2e}) is indeed invariant to the change $ 0 \leftrightarrow 1$.
Most surprisingly, this figure uncovers an unexpected dependence on the choice of the (random) initial condition $\rho_0$ and $\phi_0 = \rho_0^2$: indeed in the range $\rho_0 \in \left ( \rho_{m}, 1 - \rho_m \right )$, where $\rho_m \approx 0.25$, the equilibrium
solution $\rho^*$ is a smooth function of $\rho_0$, i.e., there is a continuum of fixed points. Accordingly,
Fig. \ \ref{fig:2} shows the fixed point
solutions of Eqs.\ (\ref{R2c}) and (\ref{R2e}) for the usual situation in which the states of the sites of the initial configuration are set $1$ with probability $\rho_0$ and
$0$ with probability $1 - \rho_0$. For this setting we have $\phi_0 = \rho_0^2$.
The mathematical explanation for the odd dynamical behavior exhibited in Figs. \ref{fig:1} and \ref{fig:2} is that the dynamic evolution is
such that $\rho \to \phi$ for $t \to \infty$, a condition that solves the two equations
(\ref{R2c}) and (\ref{R2e}) at equilibrium (i.e., $d\rho/dt = d\phi/dt = 0$) simultaneously
and so leaves one of the unknowns free to take any arbitrary value set by the dynamics or the initial conditions.
The physical reason is that there are in fact many absorbing configurations very close to the random initial configurations and a few
lattice updates (typically 10) are sufficient to freeze the dynamics (see Fig.\ \ref{fig:3}).
In fact, the comparison with the results of the Monte Carlo simulations exhibited in Fig.\ \ref{fig:2} shows a good qualitative agreement between the pair
approximation and the simulation results.
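For reproducibility, a minimal sketch of the Euler integration of Eqs.\ (\ref{R2c}) and (\ref{R2e}) used to produce Fig.\ \ref{fig:1} reads as follows (valid for trajectories that stay away from the boundaries $\rho = 0, 1$):
\begin{verbatim}
from scipy.special import comb

def B(n, m, p):                        # binomial distribution B_n^m(p)
    return comb(m, n) * p**n * (1 - p)**(m - n)

def pair_approximation(rho0, dt=0.01, steps=10000):
    rho, phi = rho0, rho0**2           # random initial condition
    for _ in range(steps):
        p11 = phi / rho                # p_{1|1}(1|1)
        p10 = (rho - phi) / (1 - rho)  # p_{1|1}(1|0)
        drho = (-rho * (B(0, 4, p11) + B(1, 4, p11))
                + (1 - rho) * (B(3, 4, p10) + B(4, 4, p10)))
        dphi = (-2 * phi * B(0, 3, p11)
                + 2 * (rho - phi) * (B(2, 3, p10) + B(3, 3, p10)))
        rho, phi = rho + dt * drho, phi + dt * dphi
    return rho, phi                    # rho* ~ phi* at the fixed point
\end{verbatim}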
Finally, we note that the pair approximation for the variant of the majority-vote model in which ties are decided by
flipping the state of the target site with probability $1/2$ yields the anticipated result that the only stable fixed points are $\rho=\phi = 0$ and $\rho=\phi = 1$, whose basins of attraction depend on whether $\rho_0$ is less or greater than $1/2$.
\section{Monte Carlo simulations}\label{sec:res}
In this section we first demonstrate that the heterogeneous absorbing configurations of the extended majority-vote model do persist in the
thermodynamic limit and then we proceed with the characterization of the opinion domains (clusters) that fragment those configurations.
Figure \ref{fig:3} illustrates one such a configuration for $L=300$.
We recall that a cluster is simply a connected, bounded lattice region wherein the agents share the same opinion.
We focus on a few relevant statistical quantities to characterize the cluster organization,
namely, the average number of clusters
$\langle M \rangle $, the average size of the largest
cluster $\langle S_m \rangle$, and the distribution of cluster sizes $P_S$. We must evaluate these quantities
for different lattice sizes (typically we use $10^4$ samples for each $L$) and then apply an appropriate extrapolation procedure
to infinite lattices ($L \to \infty $).
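In practice these quantities can be extracted from the absorbing configurations with standard connected-component labeling; a minimal sketch using 4-connectivity (matching the nearest-neighbor interactions) reads:
\begin{verbatim}
import numpy as np
from scipy.ndimage import label

def cluster_statistics(lattice):
    # number of clusters, size of the largest one, and all sizes;
    # scipy's default structuring element gives 4-connectivity
    sizes = []
    for opinion in (0, 1):
        labeled, n_clusters = label(lattice == opinion)
        if n_clusters > 0:
            sizes.extend(np.bincount(labeled.ravel())[1:])
    sizes = np.asarray(sizes)
    return len(sizes), int(sizes.max()), sizes
\end{verbatim}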
\begin{figure}
\centerline{\epsfig{width=0.52\textwidth,file=fig3.eps}}
\par
\caption{A typical absorbing configuration of the extended majority-vote model for a lattice of linear size $L=300$.
Agents with opinion $1$ are painted black and agents
with opinion $0$ are painted white. There are a total of $M= 779$ clusters and the largest one comprises
$S_m= 4581$ agents.
\label{fig:3} }
\end{figure}
To simulate efficiently the extended majority-vote model for large lattices we first make a list of the active agents. An active agent is an agent
whose opinion differs from the most frequent opinion among its extended neighborhood.
Clearly, only active agents can change their opinions and so it is more efficient to select the
target agent randomly from the list of active agents rather than from the entire lattice. In the case that
the opinion of the target agent is modified by the majority-vote rule, we
need to re-examine the active/inactive status of the target agent as well as of all its neighbors so as
to update the list of active agents. The dynamics is frozen when the list of active agents is empty. The implementation of
a similar procedure allowed the simulation of Axelrod's model for very large lattices and the clarification of
several issues regarding the stability of the heterogeneous absorbing configurations of that model \cite{Barbosa_09,Peres_10}.
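A sketch of this bookkeeping, reusing the \texttt{majority} helper from the sketch in Sect.~\ref{sec:model}, is given below; for brevity the active agents are stored in a Python set from which an element is drawn uniformly at each step (a production implementation would use an indexed list to avoid the linear-time draw).
\begin{verbatim}
def run_to_absorption(lattice, rng):
    # evolve the lattice until the list of active agents is empty
    L = lattice.shape[0]
    sites = [(i, j) for i in range(L) for j in range(L)]
    active = {s for s in sites if majority(lattice, *s) != lattice[s]}
    while active:
        i, j = tuple(active)[rng.integers(len(active))]  # uniform draw
        lattice[i, j] = majority(lattice, i, j)
        # only the target and its neighbors can change active status
        for di, dj in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L:
                if majority(lattice, ni, nj) != lattice[ni, nj]:
                    active.add((ni, nj))
                else:
                    active.discard((ni, nj))
    return lattice
\end{verbatim}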
\subsection{Statistics of the number of clusters}
We begin our analysis by presenting the dependence of the average number of clusters $\langle M \rangle$
on the lattice area $L^2$ in Fig.\ \ref{fig:4}. The increase of $\langle M \rangle$
with $L^2$ confirms that the extended majority-vote model
exhibits heterogeneous absorbing configurations in the thermodynamic limit, $L \to \infty$. More to the point,
we find
\begin{equation}\label{a_F1}
\lim_{L \to \infty} \frac{\langle M \rangle}{L^2} = 0.00742 \pm 0.00001 .
\end{equation}
For the purpose of comparison, for frozen random configurations in which the $L^2$ sites are set to $1$ or $0$
with the same probability,
this ratio yields $0.1342 \pm 0.0001$ (and so the average number of clusters scales linearly with the
lattice area too). We note that the reciprocal of the ratio given in Eq.\ (\ref{a_F1}) is the average size of the clusters:
$\langle S \rangle \approx 134.8$ for the extended majority-vote model and $\langle S \rangle \approx 7.45$ for random configurations.
\begin{figure}[t]
\centerline{\epsfig{width=0.52\textwidth,file=fig4.eps}}
\caption{Logarithmic plot of the average number of clusters $\langle M \rangle $ as a function of the
lattice area $L^2$. The solid straight line is the fit
$ \langle M \rangle = 0.00742 L^2 $ for large $L$. Each symbol represents the average over
$10^4$ samples, so the error bars are smaller than the symbol sizes.
\label{fig:4} }
\end{figure}
\subsection{Statistics of the largest cluster}
Although the mean size of a typical cluster is finite, there may be clusters
whose sizes can become arbitrarily large as the lattice size increases. To investigate this point we show in
Fig.\ \ref{fig:5} the average size of the largest cluster $\langle S_m \rangle$ as a function of the lattice area.
As in the case of $\langle M \rangle $, the asymptotic scaling is revealed only for large lattices ($L > 500$) and
it indicates that $\langle S_m \rangle$ increases with $\ln L^2 $ for large $L$.
\begin{figure}
\centerline{\epsfig{width=0.52\textwidth,file=fig5.eps}}
\caption{The average size of the largest cluster $ \langle S_m \rangle$ as a function of the
lattice area $L^2$. The solid line is the fit $ \langle S_m \rangle = 1933 \ln L^2 - 36363 $.
Note the logarithmic scale in the $x$-axis.
\label{fig:5} }
\end{figure}
In the thermodynamic limit the relevant quantity is then
\begin{equation}\label{b_F1}
\lim_{L \to \infty} \frac{\langle S_m \rangle}{\ln L^2} = 1933 \pm 10 .
\end{equation}
We found that $\langle S_m \rangle$ scales with $\ln L^2 $ for random configurations also, though the ratio
$\lim_{L \to \infty} \langle S_m \rangle /\ln L^2 = 53.0 \pm 0.2$ is considerably smaller than the ratio
given in Eq.\ (\ref{b_F1}).
It is interesting to note that the standard deviation $\left [ \left \langle S^2 \right \rangle
- \left \langle S \right \rangle^2 \right ]^{1/2} $ tends to the constant value $695.5 \pm 0.5$ in the
thermodynamic limit. This amounts to a large but finite variance and so in order to satisfy the
Chebyshev inequality the set of clusters whose size grows like $\ln L^2$ must have measure zero
in that limit.
Since the probability that a randomly chosen site belongs to one such cluster vanishes like
$ \left ( \ln L \right )/L^2$, the measure of the set of diverging clusters will be vanishingly small if
its cardinality is finite, i.e., if only a finite number of clusters have sizes scaling with $ \ln L^2$.
\subsection{Distribution of cluster sizes}
Up to now we have found no qualitative differences between the statistical properties of the clusters produced by the extended majority-vote rule and those obtained by
randomly assigning the digits $1$ and $0$ to the sites of the square lattice. In fact,
for both types of configurations the quantities $\langle M \rangle$ and $ \langle S_m \rangle $ exhibit the same scaling behavior
with the lattice area $L^2$. However, a more detailed view of the cluster organization is given by the distribution $P_S$ of cluster sizes $S$, which is
shown in Fig.\ \ref{fig:6} for different lattice sizes. The data seem to be remarkably well fitted
by the power-law distribution $P_S \sim S^{-1.5}$ over more than three decades.
In addition, the figure indicates that the region where the power-law fitting holds increases with increasing $L$.
This power-law distribution, however, is not compatible with our previous findings of finite
values for the mean and variance of $P_S$, since the distribution $P_S \sim S^{-1.5}$ has infinite mean and variance.
To resolve this conundrum, in Fig.\ \ref{fig:7} we plot the same data using a semi-logarithmic scale instead of the log-log scale of
Fig.\ \ref{fig:6}.
The results show that for large lattice sizes the distribution $P_S$ is in fact exponential, the power-law behavior
being valid only in an intermediate range of cluster sizes. The results for the largest lattice $L=1000$ (though not the results for
the smaller lattices $L=50$ and $L=100$) show that the exponential distribution is
not a mere cut-off inherent to the finite size of the lattices used in the simulations, as manifested by the straight-line appearance of
$P_S$ revealed by the semi-log graph in the regime of large clusters. For the sake of comparison, in Fig.\ \ref{fig:8} we show the distribution of cluster sizes for frozen random configurations which can be
well-described by the exponential distribution
$P_S \approx 0.00085 \exp \left ( - 0.02 S \right ) $ for large $S$. The puzzling
power-law behavior found for intermediate values of $S$ in the extended majority-vote model is missing
in this case.
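The crossover between the two regimes is easiest to see with logarithmically binned histograms of the pooled cluster sizes. A minimal sketch follows (the number of bins and the fitting window are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np

def size_distribution(sizes, n_bins=40):
    # normalized density P_S from a pooled sample of cluster sizes,
    # with logarithmic bins to resolve the power-law regime
    sizes = np.asarray(sizes, dtype=float)
    edges = np.logspace(0.0, np.log10(sizes.max()), n_bins + 1)
    hist, edges = np.histogram(sizes, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])
    P = hist / (np.diff(edges) * hist.sum())
    keep = hist > 0
    return centers[keep], P[keep]

def local_slope(centers, P, smin=10.0, smax=1.0e3):
    # log-log slope in an intermediate window; it should come out
    # near -1.5 for the extended majority-vote model
    sel = (centers >= smin) & (centers <= smax)
    slope, _ = np.polyfit(np.log(centers[sel]), np.log(P[sel]), 1)
    return slope
\end{verbatim}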
Finally, it is interesting to note that, for any absorbing configuration of the extended majority-vote model, clusters of size $1$ are prohibited, clusters of sizes $2$ and $3$ are allowed at the sides of the square lattice only, and the only cluster of size $4$ allowed in the bulk of the lattice is that in which the four sites surrounding a unit cell are in the same state. In fact, this elementary square cluster is the only structure capable of propagating through the lattice, i.e., whenever a stable cluster is formed, one such elementary square is formed.
\begin{figure}
\centerline{\epsfig{width=0.52\textwidth,file=fig6.eps}}
\caption{Logarithmic plot of the distribution of cluster sizes $S$ for (solid lines from left to right at
$P_S = 10^{-6}$) $L=50, 100, 200$ and
$1000$. The dashed straight line is the fit $P_S = 0.79 S^{-1.5}$. These distributions are obtained by
sampling $10^7$ absorbing configurations of the extended majority-vote model.
\label{fig:6} }
\end{figure}
\begin{figure}
\centerline{\epsfig{width=0.52\textwidth,file=fig7.eps}}
\caption{The same data shown in Fig.\ \ref{fig:6}, plotted using a semi-logarithmic scale.
\label{fig:7} }
\end{figure}
\begin{figure}
\centerline{\epsfig{width=0.52\textwidth,file=fig8.eps}}
\caption{Semi-logarithmic plot of the distribution of cluster sizes $S$ for random configurations with (solid lines from bottom to top) $L=100$ and
$300$. These distributions are obtained through
the sampling of $10^8$ random configurations.
\label{fig:8} }
\end{figure}
\subsection{Vulnerability to noise}
A word is in order about the robustness of the heterogeneous absorbing configurations to the effect of noise, which allows the
agents to oppose the opinion of the majority with some small probability. More precisely, after applying the
update rule to each agent, which may or may not result in the change of its opinion, we
flip its opinion with probability $p \in \left [ 0,1/2 \right ] $ \cite{deOliveira_92}.
Since in this case there are no absorbing configurations, the notion of active sites is useless and so we
implement the dynamics by picking lattice sites at random with the same probability. Then the unit of
time (a Monte Carlo step) corresponds to the update of
$L^2$ randomly chosen sites. For $L =50$ we present in Fig.\ \ref{fig:9} the time evolution of
the fraction of sites in state $1$, which we denote by $\rho$ as in Sect. \ref{sec:MF},
for different values of the
noise parameter $p$ but for the same initial configuration. This figure, which exhibits the evolution of
a single sample, shows the vulnerability of the heterogeneous absorbing configurations to a vanishingly small amount of thermal
noise, provided enough time is allowed for thermalization. For the small values of $p$ considered, the steady-state is close to
one of the two homogeneous configurations, with all individuals exhibiting opinion $0$ or opinion $1$. The specific outcome is a stochastic
process which depends on the choice of the initial configuration as well as on the sequence of
random numbers used in the update rule.
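The noisy dynamics can be sketched as follows (Python; majority_state and neighbors are the helpers from the earlier listing, and the number of Monte Carlo steps is an arbitrary illustrative choice):
\begin{verbatim}
import numpy as np

def noisy_step(s, p, rng, L):
    # one Monte Carlo step: L*L randomly chosen sites are updated by
    # the majority-vote rule and then flipped with probability p
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        s[i, j] = majority_state(s, i, j, L)
        if rng.random() < p:
            s[i, j] = 1 - s[i, j]
    return s

L, p = 50, 1.0e-4
rng = np.random.default_rng(1)
s = rng.integers(0, 2, size=(L, L))
rho = [s.mean()]                  # fraction of sites in state 1
for tau in range(10000):
    s = noisy_step(s, p, rng, L)
    rho.append(s.mean())
\end{verbatim}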
\begin{figure}
\centerline{\epsfig{width=0.52\textwidth,file=fig9.eps}}
\par
\caption{Fraction of sites in state $1$ as a function of the number of Monte Carlo steps $\tau$ for the thermal noise intensity
(solid lines from left to right)
$p = 10^{-3},10^{-4} $ and $10^{-5}$ and for a lattice of size $L=50$. The dashed line is the evolution for the noiseless case, which rapidly gets stuck in
an absorbing configuration. The initial configuration is the same for the four trajectories.
\label{fig:9} }
\end{figure}
\section{Conclusion}\label{sec:conc}
This paper is motivated by the claim that the extended majority-vote model, for which the target site counts in the reckoning of the majority, exhibits a nontrivial multicultural regime, i.e., heterogeneous absorbing configurations \cite{Parisi_03}. From the perspective of statistical physics, we find this claim
most intriguing, as we expect the model to behave similarly to the
majority-vote models commonly considered in the physics literature, which are basically Ising models with
zero-temperature Glauber kinetics \cite{Glauber_63}, and so exhibit only two homogeneous absorbing configurations in the thermodynamic limit. Our first impression was that the heterogeneous absorbing configurations were
artifacts of the small lattice size ($L=20$) used or of some `non-physical' elements of the
original model such as the nearest and next-nearest neighbors interactions (Moore neighborhood) and the
parallel update of the sites \cite{Parisi_03}.
Accordingly, we have modified the original model proposed by Parisi et al. by introducing the usual
nearest neighbors interactions (von Neumann neighborhood) and the random sequential update of sites. A
careful finite-size scaling analysis of a few relevant measures that characterize the absorbing configurations shows
that the heterogeneous absorbing configurations not only persist in the thermodynamic limit but
also produce a nontrivial distribution of cluster sizes $P_S$ that decays as a power law $P_S \sim S^{-1.5}$ for clusters of intermediate size $S$ and as an exponential for very large cluster sizes (see Figs.\ \ref{fig:6} and
\ref{fig:7}). Essentially, the reason for this somewhat unexpected outcome is the update criterion in the case of a tie for a site in the bulk
of the lattice:
the site remains unchanged
rather than flipping to another state with probability $1/2$ as usually done in statistical mechanics models \cite{Tome_91,deOliveira_92}.
What is remarkable is that the existence of these heterogeneous absorbing configurations for the extended majority-vote model,
as well as their absence in the more usual variants of that model, can actually be predicted analytically using the pair approximation: the multiple-clusters regime is signaled by the presence of an infinity of attractive fixed points of the mean-field equations.
We found that the statistical properties (e.g., average number of clusters, mean size of the largest cluster and the asymptotic distribution of cluster sizes) of the clusters that break up the absorbing configurations (see Fig.\ \ref{fig:3}) are qualitatively identical to those of random configurations. This could also be inferred by the very short convergence times (see dashed line in Fig.\ \ref{fig:9}) which indicate that the absorbing configurations are very close to the initial random configurations, since a few lattice updates suffice to reach them. We are not aware of any similar study of the statistical properties of the heterogeneous configurations of Axelrod's model, so we cannot compare the cluster organization of these two models.
Our findings, which corroborate the results of Parisi et al. \cite{Parisi_03}, show that
the extended majority-vote model does exhibit a `multicultural' regime, despite the fact that the update rule
biases the agents to become more similar to each other. A similar conclusion holds for Axelrod's model
as well \cite{Axelrod_97,Castellano_00,Barbosa_09}, except that in Axelrod's model similarity is a prerequisite for interactions: the `birds of a feather flock together'
hypothesis, which states that individuals who are similar to each other are more likely to interact
and then become even more similar \cite{Moscovici_85}. (A similar assumption has been used to model the
interspecies interactions in a spin-glass-like model ecosystem \cite{deOliviera_02}.)
The majority-vote model is considerably simpler and converges to the absorbing configurations much faster
than Axelrod's. However, the (short) inventory of advantages stops here: in disagreement with the claim of Parisi et al. \cite{Parisi_03} we found
that the absorbing configurations of the extended majority-vote model are
vulnerable to noise that flips the opinions of the agents with some small probability.
It is well-known that this type of noise destabilizes the heterogeneous configurations of Axelrod's model too \cite{Klemm_03c}.
Of course, the extended majority-vote model lacks the main appealing feature of Axelrod's model, namely,
the existence of a non-equilibrium phase transition
that separates the homogeneous and the polarized regimes in the thermodynamic limit. In that sense it would be
interesting to find out how a similar transition could be induced in the zero-temperature majority-vote model.
\begin{acknowledgments}
This research was supported by The Southern Office of Aerospace Research and Development (SOARD), grant FA9550-10-1-0006,
and Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq).
\end{acknowledgments}
\section{Introduction}
The measurements of the frequencies of p-mode
oscillations provide insight into the internal structure of stars and are nowadays
the most powerful constraint on the theory of stellar evolution.
The five-minute oscillations in the Sun have led to a wealth of information
about the solar interior. These results stimulated various attempts to detect
a similar signal on other solar-like stars by photometric or equivalent
width measurements, with little success due to the extreme weakness of the
expected amplitude. These past years, the stabilized spectrographs developed
for extra-solar planet detection achieved accuracies needed for solar-like
oscillation detection by means of radial velocity measurements
(Carrier et al. \cite{carrier1}, Bouchy \& Carrier \cite{bouchy1}).
A primary target for the search for p-mode oscillations is the well-studied bright G0 IV subgiant
\object{$\eta$ Bootis} (HR5235), and several attempts have been made to detect solar-like oscillations in
this star. The first result was obtained by Kjeldsen et al. (\cite{kjel1})
with observations conducted with the 2.5-m Nordic Optical Telescope (NOT) on La Palma.
In contrast to all other detections which were based on velocity measurements
obtained using high-dispersion spectrographs with stable references, they monitored changes
in the equivalent widths (EW) of temperature-sensitive spectral lines. This enabled them to determine
a large separation of 40.3\,$\mu$Hz. Meanwhile, a search for velocity oscillations in \object{$\eta$ Boo}
with the AFOE spectrograph
by Brown et al. (\cite{brown}) failed to detect a stellar signal.
The analysis of NOT data was refined by Kjeldsen et al. (\cite{kjel2})
using all existing complementary data: new EW measurements obtained with the NOT, new
radial velocities (RV) measured at Lick Observatory and RVs from Brown with the AFOE spectrograph.
They found a large separation of $\Delta\nu$\,=\,40.4\,$\mu$Hz and identified 21 oscillation
frequencies.
In this paper, we report Doppler observations of \object{$\eta$ Boo} made with the \textsc{Coralie} and
the \textsc{Elodie} spectrographs in a multi-site configuration. These new measurements confirm the detection of p-modes
and enable the identification of twenty-two individual mode frequencies, which are compared with those
independently identified by Kjeldsen et al. (\cite{kjel2}). We also present new models of \object{$\eta$ Boo} based on our seismological
constraints. The observations and data
reduction are presented in Sect.~\ref{datareduc}, the acoustic spectrum analysis and the mode identification
in Sect.~\ref{asa}, the calibration and modeling of \object{$\eta$ Boo} in Sect.~\ref{mod}, and the conclusion is given in Sect.~\ref{conc}.
\section{Observations and data reduction}
\label{datareduc}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig1.eps}}
\caption{Radial-velocity measurements of \object{$\eta$ Boo}. The dispersion
reaches 3.8\,m\,s$^{-1}$. \textsc{Coralie} and \textsc{Elodie} data are represented with full circles and open
triangles respectively. The upper panel represents the photon noise uncertainties.}
\label{fig:vr}
\end{figure}
\object{$\eta$ Boo} was observed in May 2002 simultaneously with the spectrographs
\textsc{Coralie} at La Silla Observatory (Chile) and \textsc{Elodie}
at the Observatoire de Haute
Provence (France) in order to improve the window function and to make the mode identification easier (see Sect.~\ref{asa}).
\subsection{\textsc{Coralie} measurements}
\object{$\eta$ Boo} was observed over fourteen nights (April 23 -- May 07 2002) with \textsc{Coralie}, the high-resolution (50\,000)
echelle spectrograph mounted on the 1.2-m Swiss telescope at La Silla, known for the p-mode identification in
the $\alpha$~Cen system (Bouchy \& Carrier \cite{bouchy3}, Carrier \& Bourban \cite{carrier3}).
During the stellar exposures, the spectrum of a
thorium lamp carried by a second fiber is simultaneously recorded in order to
monitor the spectrograph's stability and thus to obtain high-precision
velocity measurements. A description of the spectrograph and the data reduction process is
presented in Carrier et al. (\cite{carrier2}) and Bouchy et al. (\cite{bouchy2}).
Exposure times of 180\,s, thus cycles of 295\,s with a dead-time of 115\,s, allowed us to obtain 1055 spectra,
with a typical signal-to-noise ratio ($S$/$N$) in the range of 115-260 at 550\,nm.
For each night, radial velocities were computed relative to the
highest signal--to--noise ratio optical spectrum obtained in the middle of the
night. The mean for each night was then subtracted.
The radial velocity measurements are shown in Fig.~\ref{fig:vr} and their distribution and
dispersion are listed in Table~\ref{tab:vr}. The dispersion of these measurements reaches 3.6\,m\,s$^{-1}$.
\subsection{\textsc{Elodie} measurements}
Due to bad weather, only 184 spectra were collected over 3 nights with the high resolution (42\,000)
spectrograph \textsc{Elodie}
(Baranne et al. \cite{baranne}) mounted on the 1.93-m telescope at Haute-Provence Observatory (France).
The observations were also obtained in the simultaneous-thorium mode. The wavelength coverage of the spectra
is 3890--6815\,\AA, recorded on 67 orders. The dead-time is about 120\,s as the exposure time varies between
150 and 240\,s depending on the seeing, the extinction and the airmass in order to reach a
$S$/$N$ in the range 150--300 at 550\,nm. The radial velocity determination method is the same as for
\textsc{Coralie} data. The radial velocity measurements are shown in Fig.~\ref{fig:vr} and their distribution and
dispersion are listed in Table~\ref{tab:vr}. The dispersion of these measurements reaches 4.3\,m\,s$^{-1}$.
\begin{table}
\caption{Distribution and dispersion of Doppler measurements. Indications for \textsc{Elodie} measurements
are in brackets.}
\begin{center}
\begin{tabular}{clll}
\hline
\hline
Date & No.\ of spectra & No.\ of hours & $\sigma$ (m\,s$^{-1}$) \\ \hline
2002/04/23 & 96 & 7.83 & 3.69 \\
2002/04/24 & 98 & 8.00 & 3.87 \\
2002/04/25 & 86 & 7.96 & 3.17 \\
2002/04/26 & 93 & 7.54 & 3.60 \\
2002/04/27 & 64 & 5.23 & 3.34 \\
2002/04/28 & 51 & 4.35 & 3.61 \\
2002/04/29&46 (62)&11.81& 4.52 (4.81) \\
2002/04/30 & 93 & 7.59 & 2.97 \\
2002/05/01 & 99 & 8.02 & 3.86 \\
2002/05/02 & 99 & 7.83 & 3.56 \\
2002/05/03 & -- & -- & -- \\
2002/05/04&68 (61)&12.61& 3.98 (4.21) \\
2002/05/05&71 (61)&12.26& 3.79 (4.01) \\
2002/05/06 & 91 &7.77 & 4.04 \\ \hline
\label{tab:vr}
\end{tabular}
\end{center}
\end{table}
\section{Power spectrum analysis}
\label{asa}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig2.eps}}
\caption{Power spectrum of the radial velocity measurements of \object{$\eta$ Boo}.
The window function is shown in the inset. The noise is represented by the white line.}
\label{fig:tf}
\end{figure}
In order to compute the power spectrum of the velocity time series, we use
the Lomb-Scargle modified algorithm (Lomb \cite{lomb}; Scargle \cite{scargle})
with a weight assigned to each
point according to its uncertainty estimate. The time span of the observations gives a formal resolution
of 0.87\,$\mu$Hz. The resulting periodogram, shown in Fig.~\ref{fig:tf},
exhibits a series of peaks near 0.8\,mHz, exactly where
the solar-like oscillations for this star are expected.
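For reference, such a weighted periodogram can be computed with publicly available tools. The sketch below uses the astropy implementation (this is not the code used in our analysis, and t, v and sigma_v are placeholder arrays for the times, velocities and photon-noise uncertainties):
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# t [days], v [m/s], sigma_v [m/s]: placeholder arrays
f_muHz = np.arange(5.0, 1600.0, 0.1)     # frequency grid in muHz
f_cpd = f_muHz * 1.0e-6 * 86400.0        # convert to cycles/day

ls = LombScargle(t, v, dy=sigma_v)       # dy sets per-point weights
power = ls.power(f_cpd)
# converting the default (dimensionless) normalization to
# m^2 s^-2, and hence to amplitudes, depends on the convention
\end{verbatim}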
Typically for such a power spectrum, the noise has two components:
\begin{itemize}
\item At high frequencies it is flat, indicative
of the Poisson statistics of photon noise. The mean white noise level $\sigma_{\mathrm{pow}}$ calculated between 1.2 and 1.6~mHz
is 0.0179\,m$^2$\,s$^{-2}$, namely 11.9\,cm\,s$^{-1}$ in amplitude. With 1239 measurements, this high-frequency noise
corresponds to $\sigma_{RV}\,=\,\sqrt{N \sigma_{\mathrm{pow}} /4 }\,=\,2.35$~m\,s$^{-1}$ (see the short numerical sketch after this list). This radial velocity uncertainty,
larger than for other stars observed with the spectrograph \textsc{Coralie} (see e.g. Eggenberger et al. \cite{eggen1} or
Bouchy \& Carrier \cite{bouchy3}), can be mainly explained by the high rotational velocity of \object{$\eta$ Boo} ($v \sin i = 12.8$\,km\,s$^{-1}$, determined
from \textsc{Coralie} spectra).
\item Towards the lowest frequencies, the power should scale inversely with frequency squared, as expected for instrumental instabilities.
However, the computation of the radial velocities introduces a high-pass filter. Indeed, the radial velocities were computed relative to one
reference for each night and the average radial velocity of each night was fixed to zero (see Sect.~\ref{datareduc}). This results in an
attenuation of the very low frequencies, which can be seen in Fig.~\ref{fig:tf}.
\end{itemize}
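For bookkeeping, the two noise figures quoted in the first item follow from $\sigma_{\mathrm{pow}}$ as in the short numerical sketch below (the $\sqrt{\pi}/2$ factor assumes exponentially distributed noise power, which is our assumption about the convention used):
\begin{verbatim}
import numpy as np

N = 1239           # number of radial-velocity measurements
sigma_pow = 0.0179 # m^2 s^-2, mean power between 1.2 and 1.6 mHz

# mean noise in the amplitude spectrum:
# <sqrt(P)> = sqrt(pi * sigma_pow) / 2 for exponential noise power
print(np.sqrt(np.pi * sigma_pow) / 2.0)  # ~0.119 m/s = 11.9 cm/s

# per-point uncertainty implied by the white-noise plateau
print(np.sqrt(N * sigma_pow / 4.0))      # ~2.35 m/s
\end{verbatim}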
The power spectrum presents an excess in the range 0.4--1.1~mHz.
The combined noise decreases from 0.027 to 0.019\,m$^2$\,s$^{-2}$ (14.5 to 12.2\,cm\,s$^{-1}$) over this
interval (see Fig.~\ref{fig:tf}).
The noise has been determined by fitting a function of the type $1/\nu^2$.
Note that the filtering induced by the radial-velocity computation does not influence the frequency of the peaks in the
range 0.4--1.1~mHz, but could slightly change their amplitudes.
The amplitude of the strongest peaks reaches 79~cm\,s$^{-1}$, corresponding to a signal-to-noise ratio of 6 (in the amplitude spectrum).
This amplitude is estimated as the height of the peaks in the power spectrum with a quadratic subtraction of the mean noise level.
The analysis of the 184 \textsc{Elodie} spectra in addition to the \textsc{Coralie} spectra allows us to diminish the
first and second daily aliases (11.57 and 23.14\,$\mu$Hz) by only 9\,\%. Note that a full coverage of \textsc{Elodie} data would
have allowed us to diminish daily aliases by 33\,\%.
\subsection{Search for a comb-like pattern}
\label{clp}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig3.eps}}
\caption{Autocorrelation of the power spectrum undersampled with a resolution of
1.75\,$\mu$Hz and a threshold of 0.2\,m$^2$\,s$^{-2}$. The large spacing is estimated
to be about 40\,$\mu$Hz.}
\label{fig:ac}
\end{figure}
In solar-like stars, p-mode oscillations of low-degree are expected to produce a
characteristic comb-like structure in the power spectrum with mode
frequencies
$\nu_{n,l}$ reasonably well approximated by the asymptotic
relation (Tassoul \cite{tassoul80}):
\begin{eqnarray}
\label{eq1}
\nu_{n,l} & \approx &
\Delta\nu(n+\frac{l}{2}+\epsilon)-l(l+1) D_{0}\;.
\end{eqnarray}
Here, $D_0$, which is equal to $\frac{1}{6} \delta\nu_{02}$ if the
asymptotic relation holds exactly, and $\epsilon$ are sensitive to the sound speed near the core and to
the surface layers respectively.
The quantum numbers $n$ and $l$ correspond to the radial
order and the angular degree of the modes, and $\Delta\nu$ and
$\delta\nu_{02}$
to the large and small separations.
To search for periodicity in the power spectrum, an autocorrelation
is calculated and presented in Fig.~\ref{fig:ac}. As the large spacing is not
strictly constant, we use an undersampled power spectrum with a resolution of 1.75\,$\mu$Hz.
We are thus less sensitive to small variations of the large spacing versus the frequency.
Each peak of the autocorrelation corresponds to a structure present in the power spectrum.
The two strong peaks at low frequency close to 11.5 and 23\,$\mu$Hz
correspond to the daily aliases. The most probable large spacing
is situated near 40\,$\mu$Hz for \object{$\eta$ Boo}.
The result of the autocorrelation confirms the large spacing of 40.4\,$\mu$Hz deduced
by Kjeldsen et al. (\cite{kjel2}).
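A minimal sketch of this calculation is given below (the grid spacing and threshold match the values quoted in the caption of Fig.~\ref{fig:ac}; the maximum lag is an arbitrary choice, and this is not the code used for the figure):
\begin{verbatim}
import numpy as np

def spectrum_autocorrelation(power, df=1.75, thresh=0.2,
                             max_lag=120.0):
    # autocorrelation of an undersampled power spectrum; `power`
    # is sampled every df muHz, and values below `thresh` are
    # zeroed so the strong peaks dominate the correlation
    p = np.where(power > thresh, power, 0.0)
    p = p - p.mean()
    n = int(max_lag / df)
    ac = np.array([np.sum(p[: len(p) - k] * p[k:])
                   for k in range(n)])
    return np.arange(n) * df, ac / ac[0]
\end{verbatim}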
\subsection{Mode identification}
\begin{table}
\caption[]{Identification of extracted frequencies. Some frequencies can be either $\ell=1$ modes or due to noise.}
\begin{center}
\begin{tabular}{rcc}
\hline
\hline
\multicolumn{1}{c}{Frequency} & Mode ID & S/N \\
\multicolumn{1}{c}{$[\mu$Hz$]$} & & \\
\hline
512.2 & $\ell = 1$ & 3.2 \\
$544.6 - 11.6 = 533.0$ & $\ell = 0$ & 3.0 \\
550.3 & $\ell = 1$ & 3.0 \\
589.9 & $\ell = 1$ & 4.6 \\
$622.2 - 11.6 = 610.6$ & $\ell = 0$ & 3.6 \\
$614.1 + 11.6 = 625.7$ & $\ell = 1$ & 3.3 \\
$653.8 + 11.6 = 665.4$ & $\ell = 1$ & 3.3 \\
669.9 & $\ell = 1$ & 3.9 \\
691.3 & $\ell = 0$ & 4.4 \\
724.5 & $\ell = 2$ & 5.2 \\
728.3 & noise & 3.4 \\
729.5 & $\ell = 0$ & 4.6 \\
748.5 & $\ell = 1$ & 6.5 \\
$777.2 - 11.6 = 765.6$ & $\ell = 2$ & 5.0 \\
$781.0 - 11.6 = 769.4$ & $\ell = 0$ & 4.1 \\
$775.8 + 11.6 = 787.4$ & $\ell = 1$ & 3.3 \\
805.1 & $\ell = 2$ & 3.0 \\
809.2 & $\ell = 0$ & 3.7 \\
$834.5 + 11.6 = 846.1$ & $\ell = 2$ & 4.1 \\
888.7 & $\ell = 2$ & 4.0 \\
891.6 & $\ell = 0$ & 3.2 \\
947.6 & $\ell = 1$ & 3.1 \\
$960.3 + 11.6 = 971.9$ & $\ell = 0$ & 3.7 \\
\hline
\end{tabular}
\end{center}
\label{tab:identif}
\end{table}
\begin{figure}[thb]
\resizebox{\hsize}{!}{\includegraphics{fig4.eps}}
\caption[]{{\bf Top:} Original power spectrum of \object{$\eta$ Boo}. {\bf Bottom:} Cleaned power spectrum: all peaks listed
in Table~\ref{tab:identif} have been removed. The dot-dashed, dotted and dashed lines indicate an amplitude of 5\,$\sigma$,
4\,$\sigma$ and 3\,$\sigma$, respectively. Numerous peaks are still present below 3\,$\sigma$,
since no peaks have been cleaned below this threshold. These peaks can be due to p--mode oscillations and noise, or have been
artificially
added by the extraction algorithm owing to the finite lifetimes of the modes.}
\label{clean}
\end{figure}
\begin{figure*}[thb]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{fig5.eps}}
\caption[]{Power spectrum of \object{$\eta$ Boo} with the twenty-two extracted frequencies indicated by dashed lines.
The identification of each extracted frequency is given in Table~\ref{tab:freq}.}
\label{tfiden}
\end{center}
\end{figure*}
The frequencies were extracted using an iterative algorithm, which identifies the highest peak between
400 and 1100\,$\mu$Hz and subtracts
it from the time series.
First, we iterated this process until all peaks with an amplitude higher than $3\sigma$ were removed
(see Fig.\ \ref{clean}). $\sigma$ represents the noise in the amplitude spectrum, decreasing from 14.5 to 12.2\,cm\,s$^{-1}$
in the above-mentioned interval (see Sect.~\ref{asa}). Peaks with amplitudes below the 3$\sigma$ threshold were
not considered since they
were too strongly influenced by noise and by interactions between noise and daily aliases.
This threshold, which ensures that the selected peaks have
only a small
chance to be due to noise, gave a total of twenty-three frequencies (see Table~\ref{tab:identif}).
Because of the daily alias of 11.57 $\mu$Hz,
we cannot know {\it a priori} whether the frequency
selected by the algorithm is the right one or an alias. We thus considered
that the frequencies could be shifted by $\pm 11.57$ $\mu$Hz, and made echelle diagrams
for different large spacings near 40\,$\mu$Hz until every frequency could be identified as
an $\ell=0$, $\ell=1$ or $\ell=2$ mode.
In this way, we found an average large spacing of 39.9 $\mu$Hz.
It is difficult to identify $\ell=1$ modes as they appear to be mixed modes. Some identified $\ell=1$ modes
could thus rather be due to noise, e.g. the peaks at 625.7 and 665.4\,$\mu$Hz.
The peak at 728.3\,$\mu$Hz was attributed to noise; however, it could be related to the $\ell=0$ mode at 729.5\,$\mu$Hz,
``split'' owing to its finite lifetime.
\begin{table}
\caption{Oscillation frequencies (in $\mu$Hz). The frequency resolution
of the time series is 0.87\,$\mu$Hz.}
\begin{center}
\begin{tabular}{lccc}
\hline
\hline
& $\ell$ = 0 & $\ell$ = 1 & $\ell$ = 2 \\
\hline
n = 11 & & 512.2 & \\
n = 12 & 533.0 & 550.3 & \\
n = 13 & & 589.9 & \\
n = 14 & 610.6 & 625.7 & \\
n = 15 & & 665.4 / 669.9 & \\
n = 16 & 691.3 & &724.5 \\
n = 17 & 729.5 &748.5 & 765.6 \\
n = 18 & 769.4 & 787.4 &805.1 \\
n = 19 & 809.2 & & 846.1 \\
n = 20 & & & 888.7 \\
n = 21 &891.6 & & \\
n = 22 & & 947.6 & \\
n = 23 &971.9 & & \\
\hline
$\Delta\nu_{\ell}$ & 39.9 (0.1) & 39.7 (0.2)& 40.9 (0.2)\\
\hline
\end{tabular}\\
\end{center}
\label{tab:freq}
\end{table}
The echelle diagram with the twenty-two identified modes is presented in Fig.~\ref{fig:ed}.
The frequencies of the modes are shown in Fig.~\ref{tfiden} and are given in Table~\ref{tab:freq},
with the radial order of each oscillation mode deduced from the asymptotic relation
(see Eq.~(\ref{eq1})), assuming that the parameter $\epsilon$ is near the solar value
($\epsilon_{\odot} \sim 1.5$).
We can see that the oscillation modes do not strictly follow the asymptotic relation, due to mixed $\ell=1$ modes
and a large curvature of the other modes in the echelle diagram.
The average small spacing has a value of $\delta\nu_{02} = 3.95 \pm 0.90$\,$\mu$Hz.
The large spacing is separately determined for each value of $\ell$ and is
given in the last line of Table~\ref{tab:freq}. The weighted average of these three
$\Delta\nu_{\ell}$ yields the value of $\Delta\nu = 39.9 \pm 0.1$\,$\mu$Hz.
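Constructing the echelle diagram itself amounts to plotting each frequency against its residue modulo the large spacing; a minimal sketch using the $\ell=0$ frequencies of Table~\ref{tab:freq}:
\begin{verbatim}
import numpy as np

def echelle(freqs_muHz, dnu=39.9):
    # x: frequency modulo the large spacing; y: the frequency itself
    nu = np.asarray(freqs_muHz)
    return nu % dnu, nu

l0 = [533.0, 610.6, 691.3, 729.5, 769.4, 809.2, 891.6, 971.9]
x, y = echelle(l0)   # the l = 0 ridge of the echelle diagram
\end{verbatim}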
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig6.eps}}
\caption{Echelle diagram of identified modes (in black) with a
large separation of 39.9\,$\mu$Hz. The
modes $\ell$=2 ($\blacksquare$), $\ell$=0 ({\Large $\bullet$}),
and $\ell$=1 ($\blacktriangle$)
are represented with a size proportional to
their amplitude. Open symbols correspond to modes determined
by Kjeldsen et al. (\cite{kjel2}). Both identifications are
in good agreement at low frequency. However, at high frequency
their $\ell=2$ mode frequencies ($\square$) seem to be identified in this paper
as $\ell=0$ modes
({\Large $\bullet$}), and their
$\ell=0$ modes ({\tiny $\bigcirc$}) are not present in our data.}
\label{fig:ed}
\end{figure}
The twenty-two identified modes are compared to previously identified ones by Kjeldsen et al. (\cite{kjel2}) in Fig.~\ref{fig:ed}.
Both identifications are in rather good agreement at low frequency but present major discrepancies at high frequency.
Although the present data have a higher $S/N$, additional measurements are needed to resolve
these ambiguities.
\subsection{Oscillation amplitudes}
\label{sec amp}
Concerning the amplitudes of the modes, theoretical computations predict oscillation
amplitudes between 1 and 1.5\,m\,s$^{-1}$ for a 1.6 $M_{\odot}$ star
like \object{$\eta$ Boo}, with mode lifetimes of the order of a few days
(Houdek et al. \cite{ho99}). The amplitudes of the strongest modes,
in the range 55--80\,cm\,s$^{-1}$, are thus lower than expected. The observations indicate that oscillation amplitudes
are typically 2.5--3.5 times solar.
This disagreement can be partly explained by the lifetimes of the modes.
Indeed, the oscillation modes have finite lifetimes,
because they are continuously damped. Thus, if the star is observed for a time longer than
the lifetimes of the modes, the signal is weakened due to
the damping of the modes and to their re--excitation with a random phase.
\section{Comparison with models}
\label{mod}
In order to compare the asteroseismic observations with theoretical predictions,
stellar models were computed using the Geneva evolution code including shellular rotation
and atomic diffusion (see Eggenberger et al. \cite{eg04c} for more details).
We used the OPAL opacities, the NACRE nuclear reaction rates
(Angulo et al. \cite{an99}) and the standard mixing--length formalism
for convection.
\subsection{Non--asteroseismic observational constraints}
\label{obs}
Following Di Mauro et al. (\cite{di03}) (hereafter DM03),
the luminosity of \object{$\eta$ Boo} was deduced from the Hipparcos parallax:
$L/L_{\odot}=9.02 \pm 0.22$.
Concerning the effective temperature,
we added recent results to the references used by DM03 to determine
a new average value (see Table~\ref{tab:temp}). As a result, the effective temperature
$T_{\mathrm{eff}}=6030 \pm 90$\,K was adopted. This value is in perfect agreement with the
effective temperature of $6028 \pm 45$\,K used by DM03, with a larger error which seems
to us more realistic in view of the different values found in the literature.
The box in the HR diagram for \object{$\eta$ Boo}
delimited by the observed effective temperature and luminosity is shown in Fig.~\ref{dhr}.
The metallicity of \object{$\eta$ Boo} adopted by DM03 is [Fe/H$]=0.305 \pm 0.051$.
Compared to other observed metallicities, this value seems to be quite large.
We thus decided to adopt
a lower average value of [Fe/H$]=0.23 \pm 0.07$, which is determined from the recent measurements
listed in Table~\ref{tab:temp}.
Finally, we used the observed surface velocity of \object{$\eta$ Boo} to constrain the rotational velocity of our models.
From \textsc{Coralie} spectra, we determined a rotational velocity $v \sin i=12.8$\,km\,s$^{-1}$. Since the value of the
angle $i$ is unknown, we assumed that it is close to 90\,$\degr$. Thus our models of \object{$\eta$ Boo} have to reproduce a
surface velocity of about 13\,km\,s$^{-1}$.
\begin{table}
\caption{Metallicity and temperature determination for \object{$\eta$ Boo} (since 1990).
The errors on the selected parameters are
chosen to encompass all acceptable values (last line).}
\begin{center}
\begin{tabular}{ll|l}
\hline
\hline
$T_{\rm eff}$ [K] & [Fe/H] & References \\
\hline
6000 & 0.16 & McWilliam \cite{mcw}\\
6219 & 0.30/0.37& Balachandran \cite{bal}\\
6068 & 0.19 & Edvardsson et al. \cite{edv}\\
5943 & 0.20 & Gratton et al. \cite{gra}\\
-- & 0.23 & Mishenina \cite{mis}\\
-- & 0.28 & Fuhrmann (Cayrel de Strobel \cite{cay})\\
6003 & 0.25 & Cenarro et al. \cite{cen}\\
-- & 0.16 & Buzzoni et al. \cite{buz}\\
6000 & 0.25 & Feltzing \& Gonzalez \cite{fel}\\
6120 & 0.16& Gray \cite{gray}\\
5964 & 0.19& Mallik et al. \cite{mal}\\
5957 & 0.15 & Nordstrom et al. \cite{nord} \\
5942 & 0.18 & Allende Prieto et al. \cite{all}\\
\hline
6030\,$\pm$\,90 & 0.23\,$\pm$\,0.07 & \\
\hline
\end{tabular}\\
\end{center}
\label{tab:temp}
\end{table}
\subsection{Computational method}
Basically, the computation of a stellar model for a given star consists in finding the set of stellar modeling parameters which best reproduces
all observational
data available for this star.
The characteristics of a stellar model including the effects of rotation
depend on six modeling parameters: the mass $M$ of the star, its age ($t$ hereafter),
the mixing--length parameter $\alpha \equiv l/H_{\mathrm{p}}$ for convection, the initial surface velocity $V_{\mathrm{i}}$
and two parameters describing the initial chemical composition of the star. For these two parameters,
we chose the initial hydrogen abundance $X_{\mathrm{i}}$ and the initial ratio between the mass fraction of heavy elements and hydrogen
$(Z/X)_{\mathrm{i}}$.
Assuming that this ratio is proportional to the abundance ratio [Fe/H], we can directly relate $(Z/X)$ to [Fe/H]
by using the solar value $(Z/X)_{\odot}=0.0230$ given by Grevesse \& Sauval (\cite{gr98}).
Moreover, we fixed the mixing--length parameter to its solar calibrated value ($\alpha_{\odot}=1.75$) and we assumed the
initial hydrogen abundance $X_{\mathrm{i}}$ to be $X_{\mathrm{i}}=0.7$.
As a result, any characteristic $A$ of a given stellar model has the following formal dependences with respect to modeling parameters:
$A=A(M,t,V_{\mathrm{i}},(Z/X)_{\mathrm{i}})$.\\
The determination of the set of modeling parameters $(M,t,V_{\mathrm{i}},(Z/X)_{\mathrm{i}})$ leading to the best agreement
with the observational constraints is made in two steps. First, we constructed a grid of models with position in the HR diagram
in agreement with the observational values of the luminosity and effective temperature (see Fig.~\ref{dhr}).
Note that the initial ratio between the mass fraction of heavy elements and hydrogen
$(Z/X)_{\mathrm{i}}$ is directly constrained by the observed surface metallicity [Fe/H], while the initial velocity
$V_{\mathrm{i}}$ is directly constrained by the observed rotational velocity.
For each stellar model of this grid, low-$\ell$ p--mode frequencies were then calculated using the Aarhus adiabatic
pulsations package written by J. Christensen-Dalsgaard (\cite{cd97}). Following our observations, modes $\ell \leq 2$
with frequencies between 0.4 and 1.1\,mHz were computed and the mean large ($\Delta \nu$) and small spacings ($\delta \nu_{02}$)
were determined. The mean large spacing was computed by considering only the radial modes.
Once the asteroseismic characteristics of all models of the grid were determined, we performed a $\chi^2$ minimization
as in Eggenberger et al. (\cite{eg04b}). Thus, two functionals are defined: $\chi^2_{\mathrm{tot}}$ and $\chi^2_{\mathrm{astero}}$.
The $\chi^2_{\mathrm{tot}}$ functional is defined as follows:
\begin{eqnarray}
\label{eq11}
\chi^2_{\mathrm{tot}} \equiv \sum_{i=1}^{5} \left( \frac{C_i^{\mathrm{theo}}-C_i^{\mathrm{obs}}}{\sigma C_i^{\mathrm{obs}}} \right)^2 \; ,
\end{eqnarray}
where the vector $\mathbf{C}$ contains the following observables for the star:
\begin{eqnarray}
\nonumber
\mathbf{C} \equiv (L/L_{\odot},T_{\mathrm{eff}},
[\mathrm{Fe/H}],\Delta \nu,\delta \nu_{02}) \; .
\end{eqnarray}
The vector $\mathbf{C}^{\mathrm{theo}}$ contains the theoretical values of these observables for the model to be tested, while
the values of $\mathbf{C}^{\mathrm{obs}}$ are those
listed above. The vector $\mathbf{\sigma C}$ contains the errors on these observations.
Note that the observed rotational velocity is not included in this minimization, because of its large uncertainty resulting
from the unknown inclination angle $i$.
The $\chi^2_{\mathrm{astero}}$ functional is defined as follows:
\begin{eqnarray}
\label{eq2}
\chi^2_{\mathrm{astero}} & \equiv & \frac{1}{N} \sum_{i=1}^{N} \left(
\frac{\nu_i^{\mathrm{theo}}-\nu_i^{\mathrm{obs}} - \langle D_{\nu}\rangle}{\sigma} \right)^2 \, ,
\end{eqnarray}
where $\sigma=0.87$\,$\mu$Hz is the error on the observed frequencies estimated as the frequency resolution, $N=22$
is the number of observed frequencies, and $\langle D_{\nu}\rangle$ is the mean value of the differences between
the theoretical and observed frequencies:
\begin{eqnarray}
\nonumber
\langle D_{\nu}\rangle \equiv \frac{1}{N} \sum_{i=1}^N (\nu_i^{\mathrm{theo}}-\nu_i^{\mathrm{obs}}) \; .
\end{eqnarray}
The determination of the best set of parameters
was based on the minimization of the functional defined in equation~(\ref{eq11}) which includes three non--asteroseismic
and two asteroseismic observational constraints.
Once the model with the smallest $\chi^2_{\mathrm{tot}}$ was determined, we refined the grid in the vicinity of this
preliminary solution in order to find the best solution, which simultaneously minimizes $\chi^2_{\mathrm{tot}}$ and $\chi^2_{\mathrm{astero}}$.
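For concreteness, the two functionals can be evaluated as in the following minimal sketch (not our calibration pipeline):
\begin{verbatim}
import numpy as np

def chi2_tot(C_theo, C_obs, sigma_C):
    # global fit of the five observables
    # (L/Lsun, Teff, [Fe/H], large and small spacings)
    d = (np.asarray(C_theo) - np.asarray(C_obs)) / np.asarray(sigma_C)
    return np.sum(d ** 2)

def chi2_astero(nu_theo, nu_obs, sigma=0.87):
    # frequency-by-frequency comparison after removing the mean
    # offset <D_nu> between theoretical and observed frequencies
    d = np.asarray(nu_theo) - np.asarray(nu_obs)
    return np.mean(((d - d.mean()) / sigma) ** 2)
\end{verbatim}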
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{dhr.eps}}
\caption{Evolutionary tracks in the HR diagram for two models of \object{$\eta$ Boo}.
The dot and the square indicate the location of the M1 and M2 model respectively.
The error box in continuous line indicates the observational constraints of $L$ and $T_{\mathrm{eff}}$ used in our
analysis, while the dotted box corresponds to the constraints used by Di Mauro et al. (\cite{di03}).}
\label{dhr}
\end{figure}
\subsection{Results}
Using the observational constraints listed in Sect.~\ref{obs} with the observed frequencies listed in Table~\ref{tab:freq},
we performed the $\chi^2$ minimization described above. We found the solution $M=1.57 \pm 0.07$\,$M_{\odot}$,
$t=2.67 \pm 0.10$\,Gyr, $V_{\mathrm{i}}\cong 90$\,km\,s$^{-1}$ and $(Z/X)_{\mathrm{i}}=0.0391 \pm 0.0070$.
The position of this model in the HR
diagram (denoted model M1 in the following) is indicated by a dot in Fig.~\ref{dhr}.
The characteristics of this model are given in Table~\ref{tab:res}.
The confidence limits of each modeling parameter given in Table~\ref{tab:res} are estimated as the maximum/minimum values which
fit the observational constraints when the other calibration parameters are fixed to their central values.
Note that the radius deduced for \object{$\eta$ Boo}, $R\,=\,2.72\,R_{\odot}$, is in good agreement with the
interferometric radii of 2.766$\pm$0.039\,$R_{\odot}$ and 2.682$\pm$0.043\,$R_{\odot}$ determined respectively
with the Mark III optical interferometer (Mozurkewich et al. \cite{mozur}) and
with the VLTI (Kervella et al. \cite{kerv}).
Theoretical p--mode frequencies of the M1 model are compared to the observed frequencies by plotting the echelle diagram (see Fig.~\ref{figdiagech}).
Note that in this figure the systematic difference $\langle D_{\nu}\rangle$ between theoretical and observed frequencies
has been taken into account. Following Christensen-Dalsgaard et al. (\cite{cd95}), the theoretical oscillation amplitudes are estimated by
\begin{equation}
\frac{A_{nl}}{A_0(\nu_{nl})} \cong \sqrt{\frac{\epsilon_{0}(\nu_{nl})}{\epsilon_{nl}}} \quad ,
\end{equation}
where $A$ is the surface amplitude and $\epsilon$ the normalized energy of the mode. $A_0(\nu)$ and $\epsilon_0(\nu)$ are
obtained by interpolating to frequency $\nu$ in the results of radial modes.
The model shows that $\ell=1$ modes deviate from the asymptotic relation for p--modes.
This is a consequence of the avoided crossings (Christensen--Dalsgaard et al. \cite{cd95}; Guenther \& Demarque \cite{gu96};
DM03; Guenther \cite{gu04}, hereafter GU04).
This results in frequencies which are shifted relative to the frequencies expected for pure p--modes.
This is particularly true for the $\ell=1$ modes at low frequency which strongly deviate from the asymptotic relation.
The observed frequencies for $\ell=1$ modes seem to deviate from the asymptotic relation, which is in accordance with
theoretical predictions. However, Fig. \ref{figdiagech} shows that the model results are not able to precisely reproduce
the individual frequencies of these modes at low frequency.
Figure~\ref{figdiagech} also shows that
the agreement between observations and theoretical predictions for modes with $\ell=0$ and $\ell=2$
is good, except for the two $\ell=2$ modes with
the smallest and the largest frequency ($\nu=724.5$ and $888.7$\,$\mu$Hz).
Note that the $\ell=2$ modes are also influenced by the avoided crossings.
However, the effects of coupling
become much weaker for these modes than for modes with $\ell=1$, since p--modes with $\ell=2$ penetrate less deeply into the stellar interior.
\begin{table}
\caption[]{Models for \object{$\eta$ Boo}. The M1 and M2 models include rotation and atomic diffusion,
while the M3 model is a standard model computed with an overshooting parameter $\alpha_{\mathrm{ov}}=0.2$.
The upper part of the table gives the observational constraints used for the
calibration. The middle part of the table presents the modeling parameters with their confidence limits, while the bottom
part presents the global parameters of the star.}
\begin{center}
\label{tab:res}
\begin{tabular}{c|ccc}
\hline
\hline
& Model M1 & Model M2 & Model M3\\ \hline
$L/L_{\odot}$ & $9.02 \pm 0.22$ & $9.02 \pm 0.22$ & $9.02 \pm 0.22$ \\
$T_{\mathrm{eff}}$ [K]& $6030 \pm 90$ & $6028 \pm 45$ & $6030 \pm 90$ \\
$(Z/X)_{\mathrm{s}}$ & $0.039 \pm 0.007$ & $0.057 \pm 0.007$ & $0.039 \pm 0.007$ \\
$V$ [km\,s$^{-1}$] & $\sim 12.8$ & $\sim 12.8$ & -\\
$\Delta \nu$ [$\mu$Hz] & $39.9 \pm 0.1$ & $39.9 \pm 0.1$ & $39.9 \pm 0.1$ \\
$\delta \nu_{02}$ [$\mu$Hz] & $3.95 \pm 0.90 $ & $3.95 \pm 0.90$ & $3.95 \pm 0.90 $\\
\hline
$M$ $[M_{\odot}]$ & $1.57 \pm 0.07$ & $1.69 \pm 0.05$ & $1.70 \pm 0.05$\\
$t$ [Gyr] & $2.67 \pm 0.10$ & $2.65 \pm 0.10$ & $2.14 \pm 0.10$ \\
$V_{\mathrm{i}}$ [km\,s$^{-1}$] & $\sim 90 $& $\sim 80$ & -\\
$(Z/X)_{\mathrm{i}}$ & $0.0391 \pm 0.0070$ & $0.0580 \pm 0.0070$ & $0.0390 \pm 0.0070$ \\
\hline
$L/L_{\odot}$ & $8.89$ & $9.10$ & $9.11$\\
$T_{\mathrm{eff}}$ [K]& $6053$ & $6013$ & $6005$\\
$R/R_{\odot}$ & $2.72$ & $2.78$ & $2.79$\\
$V$ [km\,s$^{-1}$] & $12$ & $15$ & -\\
$(Z/X)_{\mathrm{s}}$ & $0.0388$ & $0.0570$ & $0.0390$\\
$\Delta \nu$ [$\mu$Hz] & $39.9$ & $39.9$ & $39.9$\\
$\delta \nu_{02}$ [$\mu$Hz] & $3.79$ & $3.67$ & $3.45$\\
\hline
\end{tabular}
\end{center}
\end{table}
The variations of the large and small spacing with the frequency are given in Fig.~\ref{gdpt}.
Large spacings for $\ell=1$ modes are not plotted in Fig.~\ref{gdpt}, since these modes deviate
too strongly from the asymptotic behaviour of pure p--modes.
Table~\ref{tab:res} and Fig.~\ref{gdpt} show that the mean large spacing of the M1 model is in perfect
agreement with the observed value. The observed variation of the large spacing with the frequency is also correctly reproduced
by the model, except for the large value of the $\ell=2$ point close to 900\,$\mu$Hz.
Table~\ref{tab:res} and Fig.~\ref{gdpt} also show that the observed small spacings are compatible with theoretical predictions.
The observed mean small spacing is however slightly larger than the theoretical one; this is mainly due to the large value
of the observed small spacing at $724.5$\,$\mu$Hz.
We conclude that the observed frequencies listed in Table~\ref{tab:freq} are compatible with theoretical predictions.
Although the general agreement is satisfactory,
we also note some discrepancies between observed and predicted frequencies, especially for the $\ell=1$ modes
at low frequency.
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{diagech_M157.eps}}
\caption{Echelle diagram for the M1 model with a large spacing $\Delta \nu=39.9$ $\mu$Hz. Open symbols refer to theoretical
frequencies, while the filled circles correspond to the observed frequencies
listed in Table~\ref{tab:freq}. Open circles are used for modes with $\ell=0$, triangles for $\ell=1$, squares for $\ell=2$ and pentagons for $\ell=3$. The
size of the open symbols is proportional to the relative surface amplitude of the mode (see text). Modes with too small
surface amplitudes (e.g. g modes) are not shown on this diagram.}
\label{figdiagech}
\end{figure}
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{gde_M1.eps}}
\resizebox{\hsize}{!}{\includegraphics{pte_M1.eps}}
\caption{Large and small spacings versus frequency for the M1 model.
Dots indicate the observed values of the large ($\ell=0$) and small spacings, while squares correspond
to the large spacing determined with $\ell=2$ modes.}
\label{gdpt}
\end{figure}
\subsection{Discussion of the results and comparison with previous studies}
Detailed studies of \object{$\eta$ Boo} based on the asteroseismic observations of
Kjeldsen et al. (\cite{kjel2}) have already been
performed by DM03 and GU04.
Compared to these studies, we notice that our M1 model has a smaller mass. Indeed, DM03 found that the mass of \object{$\eta$ Boo}
is limited to the range $M=(1.64-1.75)$\,$M_{\odot}$, while GU04 proposed two different solutions: a model with a mass
of 1.706\,$M_{\odot}$ which has exhausted its hydrogen core,
and another model with a mass of 1.884\,$M_{\odot}$ which is still on the main--sequence, but is approaching hydrogen exhaustion.
These two authors used the same non--asteroseismic
constraints (see Sect.~\ref{obs}). However, contrary to the analysis by DM03 and contrary to the present work, GU04 used
a calibration method (the QDG method) which is not limited to models with position in the HR diagram
in agreement with the observational values of the luminosity and effective temperature.
This explains why GU04 found another solution with a mass of 1.884\,$M_{\odot}$, while DM03 determined a mass
between 1.64 and 1.75\,$M_{\odot}$.
Recently, Di Mauro et al. (\cite{di04}) (hereafter DM04) showed that main--sequence models
provide a match to the observed location of \object{$\eta$ Boo} in the
HR diagram when overshooting from the convective core ($\alpha_{\mathrm{ov}} \geq 0.1$)
is included in the computation.
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{difrot.eps}}
\caption{Helium abundance profile in the external layers of the star
at different ages during its evolution on the main--sequence. The top model only includes
atomic diffusion, while the bottom model includes atomic diffusion and shellular rotation.
Apart from the inclusion of rotation, the two models have been computed with the same initial
parameters corresponding to the M1 model.
}
\label{proy}
\end{figure}
The fact that our M1 model is less massive than the solution of about 1.7\,$M_{\odot}$
found by DM03 and GU04 can either be due to the different observational constraints used or to the different
input physics of the evolution codes.
Indeed, our models include shellular rotation and atomic diffusion contrary to the models calculated
by DM03 and GU04. For stars more massive than about 1.4\,$M_{\odot}$,
it is necessary to introduce another transport mechanism,
like the rotationally induced mixing, in order to counteract the effect of atomic
diffusion in the external layers. When only atomic diffusion is included in
a star with a thin convective envelope, helium and heavy elements are drained out
of the envelope, resulting in too low surface abundances which are incompatible with
observation. This is illustrated on Fig.~\ref{proy} which shows the helium profile in the external
layers at different age during the evolution on the main--sequence for a model
including only atomic diffusion and for the M1 model which includes shellular rotation and atomic
diffusion. Fig.~\ref{proy} shows that rotationally induced mixing prevents the helium from being
drained out of the convective envelope. Indeed, the decrease of the surface helium abundance during the
main--sequence evolution is found to be very small for models including rotation and atomic diffusion.
\subsubsection{Effect of the metallicity}
To investigate the effects of non--asteroseismic observational constraints on
the solution, we decided to redo the whole calibration
using the non--asteroseismic constraints adopted by DM03 and GU04.
The metallicity was increased to $Z=0.04$
and a temperature of $T_{\mathrm{eff}}=6028 \pm 45$\,K was adopted. Note that we still used our asteroseismic constraints for this calibration.
In this way, we found the solution $M=1.69 \pm 0.05$\,$M_{\odot}$,
$t=2.65 \pm 0.10$\,Gyr, $V_{\mathrm{i}}\cong 80$\,km\,s$^{-1}$ and $(Z/X)_{\mathrm{i}}=0.0580 \pm 0.0070$.
The position of this model in the HR
diagram (denoted model M2 in the following) is indicated by a square in Fig.~\ref{dhr}.
The characteristics of the M2 model are given in Table~\ref{tab:res}.
We conclude that the difference in the adopted value of the metallicity explains the difference in the derived mass. Indeed, Fig.~\ref{dhr}
shows that the fact that we used $T_{\mathrm{eff}}=6030 \pm 90$\,K instead of $T_{\mathrm{eff}}=6028 \pm 45$\,K
has no significant influence on the solution, since the M1 model is also included in the smaller observational box determined by DM03.
The higher metallicity used by DM03 and GU04 results of course from the larger value of the observed [Fe/H]: they adopted
[Fe/H$]=0.305 \pm 0.051$, while we fixed [Fe/H] to $0.23 \pm 0.07$ for the M1 calibration. However, we notice that it also results from the way
one relates the observed [Fe/H] to the mass fractions $Z$ and $X$ used in the models. Indeed, we directly related $(Z/X)$ to [Fe/H]
by using the solar value $(Z/X)_{\odot}=0.0230$ given by Grevesse \& Sauval (\cite{gr98}), while DM03 related
$Z$, and not $(Z/X)$, to [Fe/H]. As a result, for the same value of [Fe/H$]=0.305$, we determined $Z=0.032$ while DM03 obtained
a higher value of $Z=0.040$ (for $X=0.7$).
We conclude that the derived mass of \object{$\eta$ Boo} is very sensitive to the choice of the observed metallicity. When
a metallicity of [Fe/H$]=0.23 \pm 0.07$ is adopted, a mass of $1.57 \pm 0.07$\,$M_{\odot}$ is found. When the higher metallicity determined by
DM03 is used, we obtain a mass of $1.69 \pm 0.05$\,$M_{\odot}$, in perfect agreement with the results of
DM03 and GU04.
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{core_2.eps}}
\caption{Ratio of the mass of the convective core to the total mass of the star ($q_{\mathrm{cc}}$)
as a function of the central hydrogen abundance ($X_{\mathrm{c}}$), for a standard model without overshooting,
for a standard model with $\alpha_{\mathrm{ov}}=0.2$ and for a model with rotation and atomic diffusion.
Apart from the inclusion of rotation and overshooting, the three models have been computed with the same initial
parameters corresponding to the M2 model.
}
\label{ccore}
\end{figure}
\subsubsection{Effect of the input physics}
Contrary to the masses, the ages of the M1 and M2 models are very similar (2.67 and 2.65\,Gyr respectively) and are therefore not very
sensitive to a change of metallicity. However, this age is larger than the age of $2.393$\,Gyr obtained by GU04 for the solution with a mass of
$1.706$\,$M_{\odot}$. DM03 pointed out that the age of the models depends on the inclusion of overshooting:
the age is about 2.3--2.4\,Gyr without overshooting, and between 2.4 and 2.7\,Gyr in the presence of overshooting. The age of \object{$\eta$ Boo}
seems therefore to be sensitive to the input physics used.
To investigate these effects on the solution, we decided to calculate models without rotation
and atomic diffusion using the same observational constraints as the M2 model. In this way, we find the solution
$M=1.70 \pm 0.05$\,$M_{\odot}$ and $t=2.39 \pm 0.10$\,Gyr, in perfect agreement with the results of GU04 and
DM03. Rotating models predict a larger age for \object{$\eta$ Boo} than non--rotating ones.
This illustrates the fact that, for small initial velocities, rotational effects are found to mimic the effects due
to an overshoot of the convective core into the radiative zone.
Indeed, the lifetimes of rotating models are enhanced with respect to those of standard models, because
the mixing feeds the core with fresh hydrogen fuel. As a result, the exhaustion of hydrogen in the central
region is delayed and the time spent on the main-sequence increases. This can be seen in Fig.~\ref{ccore},
which shows the ratio of the mass of the convective core to the total mass of the star ($q_{\mathrm{cc}}$)
as a function of the central hydrogen abundance ($X_{\mathrm{c}}$).
We see that the rotating model exhibits a larger convective core for a given $X_{\mathrm{c}}$, i.e. for
a given evolutionary stage on the main-sequence, than the standard model without overshooting. In the same
way, the non-rotating model with $\alpha_{\mathrm{ov}}=0.2$ also exhibits a larger convective core on the main-sequence
than standard models without overshooting. This explains why the inclusion of rotation or overshooting increases
the lifetimes of the model on the main-sequence and hence the deduced age for \object{$\eta$ Boo}.
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{dhr_M3_2.eps}}
\caption{Evolutionary tracks in the HR diagram for two models of 1.7\,$M_{\odot}$
computed with a different amount of overshooting.
The dot indicates the location of the M3 model.
The error box in continuous line corresponds to the observational constraints of $L$ and $T_{\mathrm{eff}}$ used in our
analysis, while the dotted box corresponds to the constraints used by DM03.}
\label{dhr_ov}
\end{figure}
Finally, we investigated the solution of a model which is still on the main-sequence. As found by
DM04, these models do not provide a match to the observed $T_{\mathrm{eff}}$ and $L$ of \object{$\eta$ Boo}
unless overshooting is included. Thus, our analysis using the input physics described above leads
to only one solution which is in accordance with asteroseismic and non-asteroseismic
observables: the M1 model which is in the post-main-sequence phase of evolution.
Using the observational constraints listed in Sect.~\ref{obs}, we tried to determine a model of \object{$\eta$ Boo}
which is still on the main-sequence by computing non-rotating stellar models including overshooting.
In this way, we found that a model computed with an overshooting parameter $\alpha_{\mathrm{ov}}=0.2$
and a mass of 1.7\,$M_{\mathrm{\odot}}$ allows us to match the location of \object{$\eta$ Boo} in the HR
diagram (see Fig.~\ref{dhr_ov}). As discussed above, the mass of this model (denoted model M3 in the following)
is lower than the mass of 1.884\,$M_{\odot}$ determined by GU04 and the
values of $M=(1.75-1.90)$\,$M_{\odot}$ found by DM04, because of the smaller metallicity used in our
analysis. The characteristics of the M3 model are given in Table~\ref{tab:res}.
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{diaech_M17_ov02.eps}}
\caption{Echelle diagram for the M3 model with a large spacing $\Delta \nu=39.9$ $\mu$Hz. Open symbols refer to theoretical
frequencies, while the filled circles correspond to the observed frequencies
listed in Table~\ref{tab:freq}. Open circles are used for modes with $\ell=0$, triangles for $\ell=1$, squares for $\ell=2$ and pentagons for $\ell=3$.}
\label{diagech_M3}
\end{figure}
\begin{figure}[htb!]
\resizebox{\hsize}{!}{\includegraphics{pte_M3.eps}}
\caption{Small separations versus frequency for the M3 model.
Dots indicate the observed values of the small separations.}
\label{pt_M3}
\end{figure}
As already pointed out by GU04 and DM04, the fundamental seismic difference between
post-main-sequence (M1) and main-sequence (M3) models
concerns the avoided crossings.
Models in the post-main-sequence phase show $\ell=1$ modes that deviate
from the asymptotic relation, while models still on the main-sequence show
no occurrence of avoided crossing. Indeed, for these models only modes with a radial order lower than
the observed ones are mixed (see Fig.~\ref{figdiagech} and \ref{diagech_M3}).
Since observations show $\ell=1$ modes that deviate
from the asymptotic relation, we conclude that models in the
post-main-sequence phase of evolution are in better agreement with the asteroseismic measurements than the
main-sequence models. Moreover, the small separation of the M3 model is smaller than that of the M1 model and is
therefore in slightly worse agreement with the observed frequencies (see Fig.~\ref{pt_M3}).
Note that DM04 also found that post-main-sequence models are characterized by larger small separations than
main-sequence models. Although the M1 model constitutes the solution
which best reproduces all
observational constraints, the current precision of the observed frequencies
does not enable us to definitively reject the solution on the main-sequence.
\section{Conclusions}
\label{conc}
Our observations of \object{$\eta$ Boo} yield a clear detection of p-mode oscillations.
Several identifiable modes appear in the power spectrum between 0.4 and 1.1\,mHz
with average large and small spacings of 39.9 and 3.95\,$\mu$Hz respectively and
a maximum amplitude of 79\,cm\,s$^{-1}$. The global results are in rather good agreement
with the recent work by Kjeldsen et al. (\cite{kjel2}) who found large and small spacings
of 40.4 and 3.00\,$\mu$Hz. However, the comparison of the individual frequencies leads to
some discrepancies. Although the present data have a higher $S/N$, additional Doppler measurements
using spectrographs like \textsc{Harps} (ESO) and forthcoming \textsc{Most} data with a clean window function
might help resolve these ambiguities.
We identified 22 mode frequencies which have been compared to theoretical models.
The combination of non--asteroseismic observations now available for \object{$\eta$ Boo} with the observed p--mode
frequencies listed in Table~\ref{tab:freq} leads to the following solution:
a model in the post-main-sequence phase of evolution, with a mass of $1.57 \pm 0.07$\,$M_{\odot}$, an age $t=2.67 \pm 0.10$\,Gyr and an initial metallicity $(Z/X)_{\mathrm{i}}=0.0391 \pm 0.0070$.
We also show that the mass of \object{$\eta$ Boo} is very sensitive to the choice of the observed metallicity and that
its age depends on the inclusion of rotation and atomic diffusion. Indeed, non--rotating models
without overshooting predict a smaller age of $2.39 \pm 0.10$\,Gyr.
\begin{acknowledgements}
We would like to thank J. Christensen--Dalsgaard for providing us with the Aarhus adiabatic pulsation code.
Part of this work was
supported financially by the Swiss National Science Foundation.
\end{acknowledgements}
Several studies have estimated the total energy consumption of all Bitcoin miners, starting at least in 2014~\cite{odwyer14:_bitcoin_minin_energ_footprint,mccook14:_bitcoin}, with later followups, updates and extensions. To some extent, the topic attracted the interest of the media in 2018, with some studies that compared Bitcoin mining with other activities, such as gold mining~\cite{Krause2018QuantificationOE}. Several observatories have been established that claim to track the energy consumption of Bitcoin, such as the Cambridge Bitcoin Electricity Consumption Index (CBECI)\footnote{Cambridge Bitcoin Electricity Consumption Index: \url{https://cbeci.org/}} and the Bitcoin Energy Consumption Index (BECI)\footnote{Bitcoin Energy Consumption Index: \url{https://digiconomist.net/bitcoin-energy-consumption}}.
When I was checking those studies and the methodology used in those observatories, I found that most of them used some variation of the same general method. As data sources, they use an estimation of the total hashpower of all Bitcoin miners, and some assumption about the equipment used by miners (in terms of hashpower and energy consumption). To compute the total energy consumption, they compute how much equipment would be needed to produce the total hashpower, and then add up the energy consumption of all that equipment.
In this paper, I'm using a method which is not novel, but is less used in those studies (it is used for example in the BECI). It starts with an economic assumption: miners would only mine Bitcoin if it makes economic sense for them, that is, when they find economic benefit in it. In other words, the total costs faced by miners would be lower than their expected income. Therefore, since energy cost is one among other costs that miners face, energy costs for all miners together would not be higher than the expected income of all miners together. In fact, if other costs (such as amortization of equipment) could be estimated, upper bounds on energy consumption could be more precisely estimated as well. Fortunately, in the case of Bitcoin the expected income is a relatively simple function, as will be shown below, which makes the estimation of this upper bound feasible.
Exploring this assumption was useful for me to better understand the factors that determine Bitcoin energy consumption, and how they interact. This exploration led me to some interesting consequences, including:
\begin{itemize}
\item The aggregated hashpower of all Bitcoin miners together has no impact on the upper limit for their aggregated energy consumption. In fact, aggregated hashpower is the consequence of other factors.
\item The exchange rate of Bitcoin to traditional currencies has a linear impact on the upper bound for the aggregated energy consumption: the higher the rate, the higher the upper bound for energy consumption.
\item Amortization costs have a negative impact on the upper bound of energy consumption: the higher the cost of the equipment for mining, the lower the upper bound.
\item Energy price has a negative impact on the upper bound of energy consumption: the higher the energy price, the lower the upper bound.
\item A single number (the Amortization Factor) can be defined to characterize mining equipment. This single number drives the split in costs for the miners between amortization of equipment and energy consumption.
\item There are some actions that could lead to reducing the aggregate energy consumption by Bitcoin miners without affecting the main properties of Bitcoin.
\end{itemize}
Some of those consequences are found only if some other assumptions hold, which is something that could be the subject of further research. Those assumptions will be presented in detail later in this paper. On the other hand, some of these consequences may also seem obvious once you think about them. For example, the fact that, if the energy price increases and all other factors remain stable, miners will adapt by consuming less energy seems straightforward from a profit-maximization point of view. However, showing the relationship with equations helps to explore it in detail.
The method is based on three main factors (price of energy, exchange rate of Bitcoin to traditional currencies, and amortization cost for equipment), and to some extent on a fourth one (amount of fees in Bitcoin transactions), and maybe a fifth one (minted coins per block, which only changes every four years). Using this method, different strategies for lowering the upper bound of energy consumption can be explored, or the impact on energy consumption of the exchange rate reaching some value can be estimated.
The rest of this paper is structured as follows. First, we present the Bitcoin protocol (how Bitcoin works, and how Bitcoin ``miners'' mine blocks), and some definitions, in the next section. Then, Section~\ref{sec:income-cost} presents the analysis of how miners get their income, and how they spend it to mine Bitcoin. Section~\ref{sec:energy-consumed} shows how the energy consumed by all miners can be computed based on the analysis of income and expenses, and proposes an equation showing which factors affect energy consumption. Section~\ref{sec:current-energy} computes the maximum energy consumed in some scenarios (some of them maximalist or minimalist, and a medium scenario roughly similar to the situation in June 2021). Then, Section~\ref{sec:impact-factors} explores in some detail the impact of the different factors on energy consumption. The paper ends with some discussion on the results, including some proposals for actions that could impact energy consumption (Section~\ref{sec:discussion}), a brief discussion of related work (Section~\ref{sec:related-work}), and some conclusions (Section~\ref{sec:conclusions}).
Please, while reading through the paper, remember that I'm making no claim that the approach is novel, nor even that it is correct. Any feedback about prior work, about errors in the approach, or about anything else, will be more than welcome. From that point of view, please consider this paper as a request for comments.
\section{The Bitcoin protocol}
There are many descriptions of how Bitcoin works, both from a technical and an economic point of view. From the point of view of the knowledge useful for following this paper, the reader can refer to~\cite{bohme15:_bitcoin}, in case of not being familiar with the concepts presented here. Of course, the original paper by Satoshi Nakamoto will also be helpful~\cite{nakamoto09:_bitcoin}. A detailed discussion on Bitcoin characteristics, mainly from an economic point of view, but also detailing how its properties emerge from the Bitcoin protocol, can be found in~\cite{chen20:blockchain_economics}.
This paper is exclusively focused on Bitcoin, and although the approach and some of the conclusions could be extended to other blockchain-based currencies, there is no exploration of that in it.
For the purposes of introducing the main characteristics of Bitcoin relevant for this paper, we can assume it works as follows. First of all, Bitcoin is implemented as a persistent, distributed ledger. The unit of information in that ledger is a transaction, and all transactions are ordered in the ledger. Each transaction codifies how a number of Bitcoins change hands (or, to be more precise, change from one or more public keys to one or more public keys, with all public keys being associated with Bitcoin wallets).
Transactions are grouped in blocks, and a block is added to the ledger approximately every 10 minutes, forming the Bitcoin blockchain. Adding a block to the blockchain is a key procedure. At any given moment, a lot of computers are competing for adding a block, with a certain collection of transactions. The Bitcoin protocol guarantees that only one of them will eventually win, thus maintaining a single blockchain. Computers compete by performing computations, in what is known as ``proof of work''. When a computer ``wins'', and adds a block to the blockchain, it gets in that block a transaction with some new Bitcoin for a wallet of its preference, in a process which is named ``minting''. This minting process is how Bitcoin are created. The number of Bitcoin that are obtained when a block is added to the blockchain is halved approximately every 4 years, so that the total number of Bitcoin that can ever be minted is fixed and known in advance.
Transactions can also specify a fee. The fee is decided by whomever does the transaction, and when a computer builds a block to try to add it to the blockchain, it decides which transactions to include in it, and usually tries to maximize profit by including transactions with a higher fee. The reason is that the block includes a transaction collecting fees, so the computer whose block is chained decides to which wallet the fees for the transactions in it are sent. This usually means that the larger the fee in a transaction, the more likely it is to be quickly included in the blockchain.
In the rest of this paper, we refer to ``miner'' as a computer trying to include blocks in the blockchain, a process which is usually named ``mining''. A ``miner's wallet'' will be a wallet controlled by whomever is running the mining operations on that miner, which can be used to collect minted Bitcoin or transaction fees. We refer to ``block'' as any of the elements of the Bitcoin blockchain, including a set of transactions. When the block is inserted in the blockchain, those transactions are inserted into it too, and after a few more blocks are inserted, are considered to be immutable. We will refer to the operation of inserting a new block in the blockchain as ``mining a block''.
We will refer to ``hashpower'' (in some cases ``hash rate'') as the number of hashes per unit of time that can be computed by some miner (or by all miners aggregated). The number of hashes captures how fast the operations to mine a new block are performed, and when referred to the whole set of miners, reflects how ``difficult'' it is to mine a block: the more hashpower, the more difficulty for mining a block.
Even though these definitions are not rigorous, we consider them clear enough for the context in which we use them in this paper.
\section{Income and costs for all miners}
\label{sec:income-cost}
The basis of the analysis is the balance of income and expenses of the whole set of miners mining blocks for Bitcoin. Therefore, we will start by analyzing income and expenses for the set of all Bitcoin miners. Instead of analyzing income and expenses for each miner and then adding them all together, we will analyze the whole set of miners from the beginning.
\subsection{Income}
Miners have two economic incentives for mining, both received in Bitcoin, both received when they chain a new block to the Bitcoin blockchain (they ``mine'' a block). So, both incentives are income per block. They were briefly explained above, when the functioning of Bitcoin was introduced, and they are:
\begin{itemize}
\item Minting. Each time a miner mines a new block some Bitcoins are allocated to the miner's wallet. The number of Bitcoins that are minted for each block is reduced by half every 210,000 blocks (or about four years), being currently (2021) 6.25 Bitcoins per block.
\item Fees. Each transaction can propose a fee, and it usually does. The miner that mines a block with that transaction can transfer the fee to a wallet under its control. Therefore, the miner that mines a block ``collects'' the fees in all the transactions in that block. Each transaction can include a different fee, and fees usually vary depending on several factors, the number of pending transactions being one of the most important.
\end{itemize}
Therefore, if we consider $Miners$ as the set of all the miners working over a certain time period ($T$), the total income collected by $Miners$ during $T$ is:
\begin{equation}
{Income}_T = ({Blocks}_T \times {Mint}) + {Fees}_T
\label{eq:income}
\end{equation}
with the following definitions:
\begin{itemize}
\item $Income_T$: Total income by $Miners$ during the period $T$, in Bitcoin.
\item $Blocks_T$: Number of blocks mined by $Miners$ during the period $T$. Since $Miners$ includes all miners working during the period, this is equal to the total number of blocks mined during that period.
\item $Mint$: Income due to minting for a single block, in Bitcoin, during the period $T$. For simplicity, this is assumed to be constant during the period $T$.
\item $Fees_T$: Sum of all fees collected from all the transactions in blocks mined by $Miners$ during the period $T$. Again, since $Miners$ includes all miners mining blocks, and therefore all miners collecting fees, this is equal to all fees collected during $T$, and to all fees in all transactions in blocks mined during $T$.
\end{itemize}
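As a simple illustration of Equation~\ref{eq:income}, the following Python sketch computes the aggregated income for one day. All input values are illustrative assumptions, not measurements:
\begin{verbatim}
# Aggregated miner income over one day, in Bitcoin
# (a direct transcription of the income equation in the text).
# All input values are illustrative assumptions.
blocks_per_day = 6 * 24   # about one block every 10 minutes
mint = 6.25               # Bitcoin minted per block (2021)
fees_day = 100.0          # assumed fees collected per day, in Bitcoin

income_day = blocks_per_day * mint + fees_day
print(income_day, "BTC/day")   # 1000.0 BTC/day
\end{verbatim}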
It is important to notice that total income, in Bitcoin, is not related to the total hashpower of all the miners, is predictably related to the period considered, and is sensitive to variations in fees.
Total income by all miners is not related to the total hashpower because, whatever it is, the number of Bitcoin collected by all miners is the same: it will only be more or less difficult to obtain in aggregate for all of them. This is due to how the mining difficulty is adjusted, approximately every two weeks, to keep the block rate approximately constant at one block about every ten minutes.
Income is predictably related to the period considered ($T$), because ${Blocks}_T \times {Mint}$ is known in advance, given that $Blocks_T$ will be approximately equal to the number of 10-minute periods in $T$.
Income is sensitive to variations in fees, since fees are a sizable part of the income, and they are known to vary over time. In fact, the fraction of income due to minting will decrease over time: $Mint$ is halved every four years, which means that ${Blocks}_T \times {Mint}$ will be much smaller in some years.
\subsection{Expenses for all miners}
Expenses needed to keep miners in production can be split in:
\begin{itemize}
\item Expenses due to the hardware being used to mine, and all the related investment to keep it housed and in good condition. Although these expenses may be in a large part a capital expenditure, we will assume its amortization over a period $T$ can be calculated.
\item Expenses due to operation expenditure. A good part of current literature assumes these costs are driven almost exclusively by power consumption.
\end{itemize}
For the rest of this paper, we will assume that all costs (expenses) of $Miners$ for a certain period of time $T$ are:
\begin{equation}
Costs_T = Amort_T + (Energy_T \times EnergyPrice)
\end{equation}
with the following definitions:
\begin{itemize}
\item $Costs_T$. All costs (expenses) for $Miners$ over the period $T$, in Bitcoin.
\item $Amort_T$. All amortization costs for $Miners$, calculated for the period $T$, in Bitcoin. In practice, we include in this term all costs that are not due to energy consumption for operation, and assume it can be computed for a given period.
\item $Energy_T$. All energy consumed by $Miners$ for operation during the period $T$, in KWh (KiloWatt-hour).
\item $EnergyPrice$. Price of energy (in Bitcoin per KWh) during period $T$. For simplicity, it is assumed that the price is constant during $T$, and equal for all miners. In practice, $EnergyPrice$ could be calculated as the weighted average of the price of energy for all miners (weighted by the amount of energy consumed).
\end{itemize}
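To illustrate the cost side in the same way, here is a minimal Python sketch, again with purely hypothetical values. Note that the assumed energy price, in Bitcoin per KWh, would correspond to 0.03 Euro/KWh at an exchange rate of 30,000 Euro per Bitcoin:
\begin{verbatim}
# Aggregated miner costs over one day, in Bitcoin.
# All input values are hypothetical, for illustration only.
amort_day = 450.0        # assumed amortization per day, in Bitcoin
energy_day = 3.0e8       # assumed energy consumed per day, in KWh
energy_price = 1.0e-6    # assumed energy price, in Bitcoin per KWh

costs_day = amort_day + energy_day * energy_price
print(costs_day, "BTC/day")   # 750.0 BTC/day
\end{verbatim}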
It is important to notice that costs are not related to the total hashpower of $Miners$. Of course, if more hashpower is used to produce blocks, more energy will be needed, for a given kind of hardware. But in any case, this is due to the number and quality of miners joining $Miners$, not to the hashpower needed to mine a block, which will be adjusted up or down as the aggregated hashpower of $Miners$ grows or shrinks.
\section{Energy that miners can consume for mining}
\label{sec:energy-consumed}
Assuming miners act as rational economic actors, they will operate when their mining costs are below their expectations of income. To understand how much they can allocate to energy cost, we already analyzed income and costs. Therefore, miners will operate during a period $T$ trying to make the following inequality hold:
\begin{equation}
Costs_T \leq Income_T
\end{equation}
or
\begin{equation}
Amort_T + (Energy_T \times EnergyPrice) \leq
(Blocks_T \times Mint) + Fees_T
\label{eq:cost-income-equilibrium}
\end{equation}
Therefore, we can calculate an upper limit on the energy consumed by $Miners$, which is the energy consumed by all Bitcoin miners, as:
\begin{equation}
Energy_T \leq \frac{(Blocks_T \times Mint) + Fees_T - Amort_T}{EnergyPrice}
\end{equation}
This means that, for a given period $T$, the maximum energy consumed by all Bitcoin miners (if they don't operate under expectation of losses), $EnergyMax_T$ is:
\begin{equation}
EnergyMax_T = \frac{(Blocks_T \times Mint) + Fees_T - Amort_T}{EnergyPrice}
\end{equation}
Up to this point, income and cost have been computed in Bitcoin. However, those costs are usually converted to US Dollar, Euro, or some other traditional currency. Since the price of Bitcoin with respect to those currencies has a high volatility, it is interesting to include that conversion ratio in the equation:
\begin{equation}
EnergyMax_T = \frac{
(Blocks_T \times \frac{MintEur}{EurBTC}) +
\frac{FeesEur_T}{EurBTC} -
\frac{AmortEur_T}{EurBTC}}
{\frac{EnergyPriceEur}{EurBTC}}
\end{equation}
with the following definitions:
\begin{itemize}
\item $EurBTC$: exchange rate in Euros per Bitcoin.
\item $MintEur = Mint \times EurBTC$
\item $FeesEur_T = Fees_T \times EurBTC$
\item $AmortEur_T = Amort_T \times EurBTC$
\item $EnergyPriceEur = EnergyPrice \times EurBTC$
\end{itemize}
This equation can be transformed easily in the following one, which we call the ``Bitcoin Energy Factors Formula'':
\begin{equation}
{EnergyMax}_T =
\frac{(Blocks_T \times Mint \times EurBTC)} {EnergyPriceEur} +
\frac{Fees_T \times EurBTC} {EnergyPriceEur} -
\frac{AmortEur_T} {EnergyPriceEur}
\label{eq:energy-factors}
\end{equation}
In this last equation, we have tried to keep each cost or income in the most usual currency (Bitcoin or Euro), according to how people tend to interact with it. For example, when people buy hardware or pay for housing costs ($AmortEur$), or pay for electricity ($EnergyPriceEur$), usually they pay for it in Euro (being Euro just a proxy for all ``traditional'' currencies). However, when they get minted coins ($Mint$), they get Bitcoin. For fees ($Fees$), we have also used Bitcoin as the unit, since people usually express fees in that currency when specifying Bitcoin transactions. To make the equation fit, we have used the exchange rate of Bitcoin to Euro ($EurBTC$) whenever convenient. In any case, all of this is only to make the following discussion easier to follow: results should be the same if you use other currencies for each factor.
This last equation shows that the maximum amount of energy that all Bitcoin miners can consume during a period $T$, while still having some expectation of profit, depends on $Blocks_T$, $Mint$, $Fees_T$, $AmortEur_T$, $EurBTC$ and $EnergyPriceEur$. And nothing else.
The number of blocks per period of time ($Blocks_T$) is (approximately) known in advance, and does not change. The Bitcoin protocol adjusts the mining difficulty approximately every two weeks to produce a block about every 10 minutes. Therefore, except for (unforeseen for now) changes in the core Bitcoin protocol, there is no influence of this factor on energy consumption, since it remains invariable. $Mint$ is invariable during long periods of time (about four years). Therefore, the only factors that influence the maximum energy consumption for all miners are fees collected from transactions, amortization costs, the exchange rate of Bitcoin, and the price of energy.
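The formula is straightforward to compute. The following Python sketch implements it as a function (a direct transcription of Equation~\ref{eq:energy-factors}, with all quantities referring to the same period):
\begin{verbatim}
def energy_max(blocks, mint, fees_btc, amort_eur,
               eur_btc, energy_price_eur):
    """Upper bound (in KWh) on the energy all Bitcoin miners
    can consume during a period while still expecting a profit."""
    income_eur = (blocks * mint + fees_btc) * eur_btc
    return (income_eur - amort_eur) / energy_price_eur
\end{verbatim}
We will use this kind of computation below to evaluate some scenarios.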
\section{Maximum energy consumed in some scenarios}
\label{sec:current-energy}
\begin{table}
\begin{tabular}{|r||r|r|r|r|}
\hline
Name & $Fees_{day}$ & $EurBTC$ & $AmortEur_{year}$ & $EnergyPriceEur$ \\
& (BTC) & (EUR) & (MEUR) & (EUR) \\ \hline\hline
\ExpandableInput{notebooks/scenarios1_table}
\hline
\end{tabular}
\caption[Scenarios]{Some scenarios for illustrating the different factors influencing maximum energy consumption by all miners. Low (high) income scenarios assume low (high) fees per transaction and low (high) Bitcoin exchange rate. Low (high) cost scenarios assume low (high) amortization costs, and low (high) energy price. All of them compared with the current situation, as of mid-June 2021 (which is similar to the medium scenario). For all scenarios, the parameters are: transaction fees per day, collected by all miners; exchange rate for Bitcoin to Euro; amortization costs for all miners, per year; and energy price (weighted average).}
\label{table:scenarios1}
\end{table}
Let's define some scenarios in Table~\ref{table:scenarios1}, which we will use to illustrate the results of applying the previous equations. They present four extreme situations, and one which could be close to the real situation at the time of writing this paper. But all of them are only illustrative. Each scenario is defined by fixing the values for some of the variables in Equation~\ref{eq:energy-factors}. Other factors will remain constant:
\begin{description}
\item{$Blocks_T$:} 52,560 blocks
($6\,\mathrm{blocks/hour} \times 8{,}760\,\mathrm{hours/year}$, for a year of 365 days)
\item{$Mint$:} 6.25 Bitcoin/block (halving in late 2024)
\end{description}
Some rationale for parameters used in these scenarios:
\begin{description}
\item{$Fees$} For the medium scenario, we used 100 Bitcoin/day, which is close to the values during June 2021, obtained from BTC.com\footnote{\url{https://btc.com/stats/fee}}. This value was close to 500 Bitcoin/day in December 2017, but has remained below 130 Bitcoin/day since February 2018. For a low income scenario, we used 30 Bitcoin/day, way below fees during the last years, and for high income, 300 Bitcoin/day, lower than historical fees, but higher than fees during the last years.
\item{$EurBTC$} We consider a range of 10,000 (low income) to 200,000 (high income) Euro to Bitcoin exchange rate, with a medium scenario of 30,000, which is close to the current exchange rate. Of course, these values are arbitrary, but try to reflect extremely low and extremely high exchange rates, by today's expectations.
\item{$AmortEur_{year}$} For aggregated amortization we have considered an extreme scenario of 0 Euro for low income, to show the case where miners don't care about amortization, either because they consider their equipment already amortized, or because they consider its cost a sunk cost. For the medium scenario, we have considered 5,000 MEuro per year, close to the actual values calculated for June 2021. High cost scenarios use twice this value.
To estimate the aggregated amortization costs of all miners we have computed the cost of the number of devices needed to produce the aggregated hashpower of all of them. We assume that devices used for mining are similar to some of the most profitable device models available in June 2021 (see~Table~\ref{table:devices}). For a period $T$, we can compute amortization costs for all miners ($AmortEur_T$) as:
\begin{equation}
AmortEur_T = \frac{DeviceCostEur}{AmortPeriod} \times \frac{HashPower}{HashPowerDevice} \times T
\end{equation}
being $DeviceCostEur$ the average cost of mining devices, $AmortPeriod$ the full amortization period of the devices (in the same time units as $T$), $HashPowerDevice$ the average hashing power of mining devices, and $HashPower$ the aggregated hashing power of all active Bitcoin miners (in both cases, in THash/sec).
\begin{table}
\begin{tabular}{|r||r|r|r|r|}
\hline
Name & $DeviceHashPower$ & $DevicePower$ & $DeviceEur$ & $AmortEur_{year}$ \\
& (TeraHash/sec) & (KW) & (EUR) & (MEUR) \\ \hline\hline
\ExpandableInput{notebooks/devices_table}
\hline
\end{tabular}
\caption[Hashing devices]{Hashing devices. List of some of the most profitable hashing devices available for purchase on June 13th 2021. Performance, consumed power, price, and estimated amortization cost for all miners, should all of them use this equipment. For estimating amortization cost, we use a total hash power of all of the miners of 136.316 ExaHash/sec. ($136.316 \times 10^6$ TeraHash/sec), which is the estimated total hash power of all miners as of June 13th 2021, according to Blockchain.com\protect\footnotemark. We consider a full amortization period of 2 years.}
\label{table:devices}
\end{table}
\footnotetext{Blockchain.com (total hash rate)): \url{https://www.blockchain.com/charts/hash-rate}}
\item{$EnergyPriceEur$} Energy price is difficult to estimate, since it should be the weighted average value of energy for all miners (weighted by their energy consumption). After looking at several references, we have used for the medium scenario the value of 0.03 Euro/KWh, based on the average price estimated by the World Bank in 2019\footnote{World Bank DB16-20 methodology, prices in US cents per KWh \\ \url{https://govdata360.worldbank.org/indicators/h6779690b}} for some countries like China (12.80 US cents per KWh), Iceland (12.20 US cents per KWh), Argentina (10.80 US cents per KWh), and Saudi Arabia (7.40 US cents per KWh), and considering that miners usually tend to get electricity rates lower than the average in their countries. For the high cost scenario we used 0.10 Euro/KWh, which is probably higher than any price actually paid by miners in any country. For low cost, we used 0.01 Euro/KWh, which is likely lower than the real cost available to most miners.
\end{description}
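As a hedged illustration of the amortization estimate described above, the following sketch uses hypothetical average figures for device price and hashpower (not the specific models in Table~\ref{table:devices}):
\begin{verbatim}
# Rough estimate of the yearly amortization cost of all miners.
# Device price and hashpower are hypothetical averages.
device_cost = 3000.0     # EUR per device (assumed)
device_hash = 100.0      # TeraHash/sec per device (assumed)
amort_years = 2.0        # full amortization period
total_hash = 136.316e6   # aggregated TeraHash/sec (June 2021)

devices = total_hash / device_hash
amort_meur_year = devices * device_cost / amort_years / 1e6
print(round(amort_meur_year), "MEUR/year")   # ~2045 MEUR/year
\end{verbatim}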
\begin{table}
\begin{tabular}{|r||r|r|r|r|r|}
\hline
Name & $Fees_{day}$ & $EurBTC$ & $AmortEur_{year}$ & $EnergyPriceEur$ & $EnergyMax_{year}$ \\
& (BTC) & (EUR) & (MEUR) & (EUR) & (TWh) \\ \hline\hline
\ExpandableInput{notebooks/scenarios1_energy_table}
\hline
\end{tabular}
\caption[Maximum energy consumption for scenarios]{Maximum energy consumption for scenarios. Scenarios described in Table~\ref{table:scenarios1}, with maximum energy consumed by all miners ($EnergyMax_{year}$), according to Equation~\ref{eq:energy-factors}. Values of maximum energy below 0 mean the corresponding scenario is not profitable.}
\label{table:scenarios1-energy}
\end{table}
Using Equation~\ref{eq:energy-factors} we can estimate the maximum energy consumption of all Bitcoin miners per year for the scenarios defined in Table~\ref{table:scenarios1}. The result is shown in Table~\ref{table:scenarios1-energy}.
It is interesting to notice that the value for the medium scenario should be close to the maximum energy consumption around May/June 2021, since the parameters of that scenario are close to those of that period.
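As a worked example of that medium scenario (the exact figures in Table~\ref{table:scenarios1-energy} are produced by the accompanying notebooks, so they may differ slightly from this back-of-the-envelope computation):
\begin{verbatim}
# Medium scenario, per year (values approximate June 2021).
blocks = 52560            # 6 blocks/hour * 8,760 hours
mint = 6.25               # BTC per block
fees_btc = 100 * 365      # 100 BTC/day in fees
eur_btc = 30000.0         # EUR per BTC
amort_eur = 5000e6        # 5,000 MEUR of amortization
price = 0.03              # EUR per KWh

income_eur = (blocks * mint + fees_btc) * eur_btc  # ~10,950 MEUR
energy_twh = (income_eur - amort_eur) / price / 1e9
print(round(energy_twh), "TWh/year")               # ~198 TWh/year
\end{verbatim}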
\section{Impact of the different factors}
\label{sec:impact-factors}
Let's analyze the impact of these factors in some detail. As a part of that analysis, we will use data from the scenarios described in Table~\ref{table:scenarios1}.
\subsection{The impact of the price of energy}
Since we have $EnergyPriceEur$ in the denominator of all the fractions, it is clear that the lower the price of energy in traditional money (the way it is usually paid for), the higher the maximum energy that can be consumed by miners, still expecting a profit.
This fact could be expected, since it just means: ``all other factors equal, the cheaper the energy, the more energy miners will consume''.
For certain values of $Fees_T$, $AmortEur_T$ and $EurBTC$, we can define $NonEnergyBalanceEur_T$ (the balance of income and expenditure for non-energy factors, in Euro) as:
\begin{equation}
NonEnergyBalanceEur_T = Blocks_T \times Mint \times EurBTC +
Fees_T \times EurBTC - AmortEur_T
\end{equation}
And substituting in Equation~\ref{eq:energy-factors},
\begin{equation}
EnergyMax_T = \frac{NonEnergyBalanceEur_T} {EnergyPriceEur}
\end{equation}
In other words, for a certain scenario of factors not directly dependent on energy consumption, the maximum energy consumed by all miners will be inversely proportional to the price of energy. The lower the price of the energy they can consume, the higher the total energy consumed.
Factors contributing to $NonEnergyBalanceEur_T$ are either constant, with the assumptions mentioned earlier in this paper, or independent factors: $Blocks_T$ and $Mint$ are constant. $Fees_T$ will depend on the number of transactions during $T$, $AmortEur_T$ will depend on the amortization cost of equipment, and $EurBTC$ will depend on the conditions in the exchange market. Therefore, for some given non-energy factors, the lower the energy price miners can get, the higher their aggregated energy consumption.
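A short numeric sweep makes the inverse proportionality explicit (the balance below is the medium-scenario value computed in the previous section; all figures are illustrative):
\begin{verbatim}
# Inverse proportionality of the upper bound to the energy price.
balance_eur = 5.95e9   # medium-scenario NonEnergyBalanceEur, EUR
for price in (0.01, 0.03, 0.10):          # EUR per KWh
    print(price, round(balance_eur / price / 1e9), "TWh/year")
# 0.01 -> 595, 0.03 -> 198, 0.10 -> 60 TWh/year
\end{verbatim}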
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{energy-energypriceeur}
\caption[Impact of price of energy]{Impact of price of energy. Values of maximum energy consumed per year (in TWh) depending on the price of energy (in Euro per KWh), for Bitcoin exchange rate, amortization costs, and collected transaction fees of scenarios described in Table~\ref{table:scenarios1}. Values of maximum energy below 0 mean those prices of energy are not profitable for the corresponding scenario. The price for the vertical line corresponds to USD 0.05 per KWh, used by CBECI~\cite{cbeci_methodology:2021} as the average electricity price for miners.}
\label{fig:energy-energypriceeur}
\end{figure}
Figure~\ref{fig:energy-energypriceeur} illustrates how the maximum energy consumed by Bitcoin depends on the price of electricity for scenarios defined in Table~\ref{table:scenarios1}.
\subsection{The impact of the exchange rate of Bitcoin}
The exchange rate of Bitcoin ($EurBTC$) is in the numerator of both fractions contributing to income. The income obtained by miners in Bitcoin will therefore be proportional (in traditional money) to the exchange rate, which is easy to understand, since all income comes in Bitcoin. Therefore, an increase in the exchange rate for Bitcoin means, other things equal, more money to spend in electricity.
Defining the ratio of income in Bitcoin to energy price ($IncomeEnergyPriceRatio_T$) as
\begin{equation}
IncomeEnergyPriceRatio_T =
\frac{Income_T}{EnergyPriceEur} =
\frac{(Blocks_T \times Mint) + Fees_T} {EnergyPriceEur}
\end{equation}
maximum energy consumption is
\begin{equation}
EnergyMax_T =
(IncomeEnergyPriceRatio_T \times EurBTC) -
\frac{AmortEur_T}{EnergyPriceEur}
\end{equation}
From this equation, the impact of the exchange rate is more apparent: for a given income in Bitcoin (minted plus fees), a given amortization cost, and a given energy price, the relation between energy consumption and the Bitcoin exchange rate is linear, with slope $IncomeEnergyPriceRatio_T$.
Therefore, the higher this ratio, the quicker $EnergyMax_T$ will grow when $EurBTC$ grows.
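For instance, with the medium-scenario income and energy price (purely illustrative figures), the slope can be computed as:
\begin{verbatim}
# Slope of the linear relation EnergyMax vs. exchange rate.
income_btc = 52560 * 6.25 + 100 * 365   # 365,000 BTC/year
price = 0.03                            # EUR per KWh
slope = income_btc / price              # KWh/year per (EUR/BTC)
print(round(slope / 1e6, 1), "GWh/year per EUR/BTC")   # 12.2
\end{verbatim}
That is, with these assumptions, each additional Euro in the exchange rate raises the upper bound by about 12\,GWh per year.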
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{energy-eurbtc}
\caption[Impact of exchange rate]{Impact of exchange rate. Values of maximum energy consumed per year (in TWh) depending on the exchange rate of Bitcoin to Euro, for the price of energy, amortization costs, and and collected transaction fees of scenarios defined in Table~\ref{table:scenarios1}. Values of maximum energy below 0 mean those exchange rates are not profitable for the corresponding scenario. The exchange rate for the vertical line (30,000 Euro per Bitcoin) is close to the actual exchange rate for early June 2021.}
\label{fig:energy-eurbtc}
\end{figure}
Figure~\ref{fig:energy-eurbtc} illustrates how the maximum energy consumed per year depends on the Bitcoin exchange rate.
\subsection{The impact of minting}
The amount of minted Bitcoin per block ($Mint$) is known in advance, and halves about every four years. This means that the number of minted Bitcoin that will be obtained by all the miners during a period $T$ (${Blocks}_T \times Mint$) also follows the same pattern. Therefore, in the short or medium term this factor has no impact on energy consumption, but over time, since it will be decreasing, it will tend to drive energy consumption down.
Since minting is one of the two sources of income miners get, there is also a relationship with the other one, fees, which is discussed in the next subsection. For now, it is enough to say that, to maintain a certain income that balances a certain expenditure for all miners, when minting is halved, fees need to increase by the same quantity. If that does not happen, income will decrease, which will mean less expenditure, which, other things being equal, will mean less energy consumption.
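A quick computation illustrates this coupling (all values follow directly from the protocol constants):
\begin{verbatim}
# Extra daily fees needed to keep income constant after a halving.
blocks_day = 144                       # about 6 blocks/hour * 24
mint_before, mint_after = 6.25, 3.125  # BTC per block
extra_fees = blocks_day * (mint_before - mint_after)
print(extra_fees, "BTC/day")           # 450.0 BTC/day
\end{verbatim}
In other words, after the next halving, fees would need to grow by about 450 Bitcoin per day for the aggregated income (in Bitcoin) to stay constant.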
\subsection{The impact of fees}
Fees ($Fees_T$) are in the numerator of an income fraction in Equation~\ref{eq:energy-factors}. Therefore, the higher the fees collected by miners over a period $T$, the more energy they can consume during that period, all other things equal.
Fees collected over a period are difficult to predict, but they are related to the number of Bitcoin transactions during the period. That means that, even when the relationship is not linear, the more transactions during the period, the more energy can be consumed by miners, all other things equal.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{energy-fees}
\caption[Impact of fees]{Impact of fees. Values of maximum energy consumed per year (in TWh) depending on fees collected from transactions, for the price of energy, amortization costs, and exchange rate of scenarios defined in Table~\ref{table:scenarios1}. Values of maximum energy below 0 mean those fees are not profitable for the corresponding scenario. Fees for the vertical line (50 BTC) are close to the weekly average of fees collected per day during early June 2021, according to Blockchain.com\protect\footnotemark.}
\label{fig:energy-fees}
\end{figure}
\footnotetext{Blockchain.com (transaction fees): \url{https://www.blockchain.com/charts/transaction-fees}}
Figure~\ref{fig:energy-fees} illustrates the relationship between the maximum energy consumed and the transaction fees collected by miners, for the five basic scenarios we are considering.
An important consequence of the relationship between the maximum energy consumed and the amount of fees is that it shows a control factor for the former. If fees can be lowered, the maximum amount of energy that miners can consume at profit is lowered too. Since fees tend to increase when the number of pending transactions for each block increases, lowering the number of transactions per unit of time for all Bitcoin would lower maximum energy consumption. Current approaches such as the Lightning Network~\cite{poon2016:lighting_network} propose a second layer, on top of Bitcoin, that allows a large number of transactions to be mapped onto a single Bitcoin transaction. This way, the capacity of Bitcoin, in terms of transactions per unit of time, could increase without increasing the number of transactions in the Bitcoin blockchain, thus helping to keep fees and maximum energy consumption lower.
With the current ratio of fees to minted coins per block, this effect of lowering fees is relatively small. But in the future, as minting is reduced by halving, keeping fees under control will have a more noticeable impact on the maximum energy consumed by miners.
\subsection{The impact of amortization}
We have included in amortization costs ($AmortEur$) all the other costs (except for energy) that miners face to maintain their activity. According to the literature, the vast majority of it is amortization of the devices used for mining.
Since $AmortEur$ is in the numerator of the only negative addend in Equation~\ref{eq:energy-factors}, the relationship with the maximum energy consumption is linear, with a negative slope (the larger the amortization costs, the smaller the maximum energy consumption).
In Figure~\ref{fig:energy-amort} we have depicted the relationship between amortization costs and maximum consumed energy for the scenarios we defined for this paper. In this figure we include a reasonable value for yearly amortization costs for all miners (vertical line, 4,500 MEUR/year), based on the values computed in Table~\ref{table:devices}.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{energy-amort}
\caption[Impact of amortization]{Impact of amortization. Values of maximum energy consumed per year (in TWh) depending on amortization costs (in million Euro), for the price of energy, collected fees, and exchange rate of scenarios defined in Table~\ref{table:scenarios1}. Values of maximum energy below 0 mean those amortization costs are not profitable for the corresponding scenario. Amortization for the vertical line (4,500 MEUR) is estimated assuming miners are amortizing with prices and performance similar to that of the most efficient recent equipment (see Table~\ref{table:devices}).}
\label{fig:energy-amort}
\end{figure}
The figure shows how the more miners invest in equipment (amortization), the less energy they consume. This may seem counterintuitive, because in the end more equipment will consume more energy. But in fact, more equipment only means miners could consume more energy, if they buy it. Miners will decide, based on the rest of the factors, whether it makes sense to buy the equipment or not, if they want to stay profitable. If other factors (for example, the price of energy or the exchange rate of Bitcoin) make buying more equipment non-profitable, they will not buy it.
\subsection{The impact of the Amortization Factor}
The role of amortization in controlling energy consumption can be explored further if we realize that there is a relationship between amortization costs and energy consumed. That relationship is fixed by the price and energy consumption of mining devices. We can define the Amortization Factor, $AmF$, to capture this relationship, as the ratio between the amortization cost of a miner or a set of miners during a certain period $T$ and the energy consumed by them during that same period:
\begin{equation}
AmF = \frac{AmortEur_T}{Energy_T}
\end{equation}
Using this factor, we can compute amortization costs for a certain period $T$, for a certain set of miners with $AmF$ as their Amortization Factor, consuming $Energy_T$, as:
\begin{equation}
AmortEur_T = Energy_T \times AmF
\end{equation}
With this, we can rewrite Equation~\ref{eq:cost-income-equilibrium}, expressing both sides in Euro, as:
\begin{equation}
(Energy_T \times AmF) + (Energy_T \times EnergyPriceEur) \leq
((Blocks_T \times Mint) + Fees_T) \times EurBTC
\label{eq:cost-income-amf-equilibrium}
\end{equation}
And finally, we get the maximum aggregated energy consumption of all Bitcoin miners, by applying the inequality to all Bitcoin miners in the expenditure-income equilibrium:
\begin{equation}
Energy_T =
\frac{((Blocks_T \times Mint) + Fees_T) \times EurBTC}{AmF + EnergyPriceEur}
\end{equation}
The numerator in this fraction is the aggregated income, in Euro ($IncomeEur_T$). So, we can simplify the equation to help understand its consequences, obtaining the ``Bitcoin Energy Amortization Factor Formula'' for the aggregated energy consumption of Bitcoin miners:
\begin{equation}
Energy_T =
\frac{IncomeEur_T}{AmF + EnergyPriceEur}
\label{eq:energy-income-amf}
\end{equation}
This shows clearly how there are two factors that control energy consumption, given a certain income in Euro (which is controlled by the Bitcoin obtained per block, and by the Bitcoin exchange rate). These two factors are the price of energy in Euro, which we already explored, and the Amortization Factor. It is interesting to notice that, when $AmF$ is much lower than $EnergyPriceEur$, energy consumption is almost completely controlled by $EnergyPriceEur$. But if the Amortization Factor is large enough, it is necessary to take it into account. In fact, if it becomes much larger than $EnergyPriceEur$, it will almost single-handedly control the total energy consumed by Bitcoin miners.
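The effect can be illustrated with a small sweep over possible values of $AmF$ (income and energy price are the medium-scenario values used before; the $AmF$ values themselves are just sample points):
\begin{verbatim}
# Aggregated energy from the Amortization Factor formula.
income_eur = 1.095e10   # medium-scenario yearly income, EUR
price = 0.03            # EUR per KWh
for amf in (0.0, 0.05, 0.15, 0.50):   # EUR of amortization per KWh
    print(amf, round(income_eur / (amf + price) / 1e9), "TWh/year")
# 0.0 -> 365, 0.05 -> 137, 0.15 -> 61, 0.50 -> 21 TWh/year
\end{verbatim}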
\begin{table}
\begin{tabular}{|r||r|r|r|}
\hline
Name & $AmortEur_T$ & $Energy_T$ & $AmF$ \\
& (EUR) & (KWh) & (EUR/KWh) \\ \hline\hline
\ExpandableInput{notebooks/devices_amf_table}
\hline
\end{tabular}
\caption[Amortization factors]{Amortization factors of devices in Table~\ref{table:devices}. We consider devices are fully amortized in two years, and use a period for the calculation equal to that amortization period ($T = 2 \times 365 \times 24$, in hours).}
\label{table:devices-amf}
\end{table}
Table~\ref{table:devices-amf} shows the values of the Amortization Factor for the devices presented in Table~\ref{table:devices}. It is clear that, for all devices considered, $AmF$ is comparable to the cost of energy in all our scenarios. In fact, for the most energy-efficient equipment, for low prices of energy (for example, 0.01 Euro/KWh) the impact of $AmF$ on the aggregated energy consumed by all miners is more than 10 times that of the price of energy.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{energy-amf}
\caption[Impact of Amortization Factor]{Impact of Amortization Factor. Values of maximum energy consumed per year (in TWh) depending on the value of Amortization Factor for the price of energy, collected fees, and exchange rate of scenarios defined in Table~\ref{table:scenarios1}. The vertical line corresponds to the Amortization Factor of the most recent device in Table~\ref{table:devices} (0.15)}
\label{fig:energy-amf}
\end{figure}
Figure~\ref{fig:energy-amf} shows how the total energy consumption varies with the Amortization Factor, for the different scenarios defined earlier. One of the most interesting takeaways of this chart is how, above a certain threshold, all the lines become close to flat, and total energy consumed is relatively low.
\subsection{The impact of aggregated hashpower}
\label{sec:impact-hashpower}
The aggregated hashpower of all miners has a direct influence on the energy that miners consume:
\begin{equation}
Energy_T = DevicePower \times \frac{HashPower}{HashPowerDevice} \times T
\end{equation}
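For illustration, the following sketch applies this relation with a hypothetical average device profile (the figures below are assumptions, not those of any specific model):
\begin{verbatim}
# Energy implied by a given aggregated hashpower, bottom-up.
# Device figures are hypothetical averages.
device_power = 3.25      # KW per device (assumed)
device_hash = 100.0      # TeraHash/sec per device (assumed)
total_hash = 136.316e6   # aggregated TeraHash/sec (June 2021)
hours = 8760             # one year

energy_twh = device_power * (total_hash / device_hash) * hours / 1e9
print(round(energy_twh), "TWh/year")   # ~39 TWh/year
\end{verbatim}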
However, the aggregated hashpower is only a consequence of the income miners get. This is the reason why the aggregated hashpower does not appear in Equation~\ref{eq:energy-factors}. In fact, the difficulty of mining a block is adjusted (approximately) every two weeks by the Bitcoin protocol. If other factors allow miners to add hashpower by putting new devices into operation, the difficulty of mining a block will be increased within at most two weeks, rendering the increase void with respect to collecting (collectively) more income: they can only collect the aggregated income produced.
In the equilibrium, miners will tune hashpower, by switching devices on or off, according to how much money they can spend to balance their income. We can define the cost per hash in Euro ($CostHashEur$), which is characteristic of each device, given a certain price for energy: it will depend on its hashpower and its energy consumption. The aggregated hashpower can then be computed as:
\begin{equation}
HashPower = \frac{IncomeEur_T}{CostHashEur \times T}
\end{equation}
This leads to the interesting result that, even when individual miners compete in hashpower (since the larger the fraction of hashpower they have, the larger their income), for all of them together this is a zero-sum game: their total income is independent of the aggregated hashpower of all miners. In fact, it is worse than zero-sum, because the more hashpower, the more costs (in the form of energy and amortization costs), which means that they are competing for the same income with increased costs. If miners could control the total hashpower, it would be more profitable for them to keep it as low as possible, since that is a cost for all of them (both in terms of energy and amortization). This would mean that all miners together would stay way below their maximum energy consumption, to minimize their aggregate costs. But as long as joining the set of miners remains a competitive action, the total hashpower will tend to be as high as possible, since as long as there is profit, every single miner will tend to invest as much as possible in both energy and amortization, or new miners will.
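As an illustration of the cost-per-hash relation above, the following sketch computes the equilibrium hashpower for the medium-scenario income, assuming hypothetical device figures. With such optimistic economics the result comes out well above the hashpower observed in June 2021, which would only mean that real fleets are, on average, less efficient, or that miners still had a profit margin:
\begin{verbatim}
# Equilibrium aggregated hashpower from income and cost per hash.
# Device figures (price, power, hashpower) are hypothetical.
income_h = 1.095e10 / 8760           # medium scenario, EUR/hour
amort_h = 3000.0 / (2 * 8760)        # device amortization, EUR/hour
energy_h = 3.25 * 0.03               # device energy cost, EUR/hour
cost_ths_h = (amort_h + energy_h) / 100.0  # EUR per TH/s per hour

hashpower = income_h / cost_ths_h    # aggregated TH/s
print(round(hashpower / 1e6), "EH/s")      # ~465 EH/s
\end{verbatim}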
We can also see how, in the end, the aggregated hashpower will tend to follow the exchange rate of Bitcoin since, from Equation~\ref{eq:income}, we can replace $IncomeEur_T$ by its factors:
\begin{equation}
HashPower = \frac{(Blocks_T \times Mint) + Fees_T}{CostHashEur \times T} \times EurBTC
\label{eq:hashpower-income-cost}
\end{equation}
This means that, for some given block minting and transaction fees, the aggregated hashpower will be proportional to the exchange rate of Bitcoin, which is consistent with the discussion we already had on the impact of the exchange rate on energy consumption.
\section{Discussion and threats to validity}
\label{sec:discussion}
After exploring the different factors that affect the aggregated energy consumption of all miners, let's now discuss these factors, which of them could be used to control, to some extent, energy consumption, and the validity of the main assumptions on which our exploration is based.
\subsection{Factors controlling energy consumption}
Of the many factors controlling energy consumption of Bitcoin miners, there are two that historically have been more volatile, and difficult to predict:
\begin{description}
\item[Exchange rate] Bitcoin miners receive all their income in Bitcoin. Therefore, while their expenditure is mainly in traditional currency, the exchange rate of Bitcoin to traditional currency is a major driver of their aggregated income. Since this exchange rate has been very volatile in the past, their income is also very volatile. In particular, in periods of a rising Bitcoin, miners' income increases proportionally, and as we saw, their energy consumption too (other factors being constant). This causes problems for miners in periods of a declining exchange rate, since their income decreases, and they can no longer afford amortization and energy costs. Since they can only really act on energy costs (they need to face amortization costs whatever happens), it would seem rational for them to reduce energy consumption more than the equilibrium equations suggest. However, the entry of new miners not subject to amortization costs of past equipment could mitigate this mechanism.
In any case, this factor alone may explain the great increase in energy consumption during the past years, since the exchange rate of Bitcoin has increased enormously during this period. The volatility of the exchange rate also means that miners cannot act based only on current or past exchange rates: they need to predict the exchange rate during the next months, or years, to adequately plan their purchases and mining operations. Therefore, it is likely that it is the expected exchange rate, more than the current exchange rate, that drives their decisions. In any case, in the long term, the equilibrium presented in this paper will hold. If miners overspend, buying more equipment and consuming more energy than their income allows, sooner or later they will go bankrupt, and be substituted by others who can adjust their costs better.
Since the exchange rate of Bitcoin is fixed in the market, based on its perceived utility and other factors, it can be considered to be out of control of any agent, and therefore cannot be used to control aggregated energy consumption.
\item[Transaction fees] Transaction fees are the volatile part of miners' income in Bitcoin. Minted coins also change, but they change only every four years, and in a predictable way. Income via transaction fees depends on the fees users of Bitcoin are ready to pay, which in turn are related to the size of the queue of pending transactions, and the urgency of users to have their transactions included in the blockchain.
With time, as minted coins per block get halved every four years, fees are expected to be the major source of income for miners. Therefore, they are already an important driver of energy consumption, and will be even more important with time. If pending transactions, and their urgency to enter the blockchain, are reduced, transaction fees will likely be reduced too.
\end{description}
Other variable, but more stable, factors affecting energy consumption are:
\begin{description}
\item[Minting] This mechanism is entirely predictable: minted coins per block will be halved approximately every four years. This means that income for miners will also be reduced every four years, and that fact should be reflected in the energy they can consume (other factors being equal).
\item[Energy price] The price of energy is given by the technologies available to produce the energy that miners consume, and other factors such as taxation. The current evolution of the cost of energy production is clearly directed towards decreasing costs. Other factors are more difficult to predict. However, given that miners can relocate with relative ease, it is expected that they move as much as possible to places with access to cheap energy, and in general their energy costs will be close to the lowest costs that technology and other factors allow.
\item[Amortization cost] The cost of equipment, and its duration, are the main factors that affect amortization cost. Cost per hash is in general declining, as technology improves. But as we showed, the real impact on energy consumption is in the relationship between equipment cost and duration (amortization) and energy consumed. During the last years, it seems that this ratio (the Amortization Factor) is increasing, with a higher amortization cost per unit of energy consumed. This evolution is reducing aggregated energy consumption, other factors being equal. Likely the reason for this trend is the increasing specialization of mining equipment, from general purpose CPUs to GPUs to ASICs.
\end{description}
\subsection{Actions that could decrease energy consumption}
Some specific actions that influence some of the factors could lead to reducing energy consumption by Bitcoin miners, without impacting the chances of controlling the aggregated hashpower (which could lead to majority attacks on the blockchain), and without interfering with the fundamentals of the proof-of-work mechanism. The two actions that are most remarkable in this respect are tuning the Amortization Factor of mining devices, and reducing transaction fees:
\begin{description}
\item[Tuning the Amortization Factor] The Amortization Factor shows possible scenarios in which more expensive machines, with less energy consumption, do not impact the aggregated hashpower neither the costs that miners have to face to mine blocks, but have a lower aggregated energy consumption. In some sense, the Amortization Factor, and the relationship of income and expenditure for all miners together could lead to consider ``proof of cost'' as an extended version of the traditional ``proof of work''. In the end, to produce its benefits, avoiding concentration and minimizing majority attacks, Bitcoin needs that a high cost is distributed among a large number of independent miners. Consuming energy is in fact a sizable part of this cost. But when the Amortization Factor of equipment used to mine is high, amortization is a much larger cost than energy consumption. Thus, ``proof of cost'', tailored to minimize energy consumption, may have all the benefits of ``proof of work'', but minimizing the problems of energy consumption. To which extent Amortization Factor will increase in the future will depend on the technological evolution of mining devices, but also could be influenced by tuning of the specific algorithms used to produce hashes.
This is just a consequence of Equation~\ref{eq:energy-factors}, where it is clear that, on the expenses side, every euro spent on equipment is a euro not spent on energy. The Amortization Factor, characteristic of each device, gives a tool to better understand this effect, as shown in Equation~\ref{eq:energy-income-amf} and illustrated by the sketch after this list.
\item[Reducing transaction fees] Transaction fees depend on how many transactions are performed, and on how quickly people want them to be included in the blockchain. Therefore, any mechanism that influences the number of transactions, or the need to have them quickly included in the blockchain, has the potential of reducing transaction fees. Mechanisms that build transaction layers on top of Bitcoin (off-chain transactions) usually help in both directions. Because of their nature, they tend to aggregate many higher-layer transactions into a few Bitcoin transactions, therefore making it possible to have many transactions between users with a much lower number of ``real'' Bitcoin transactions. Usually, since the upper layer is good enough for most uses, ``real'' Bitcoin transactions that reflect groups of upper-layer transactions can also wait for longer periods until they are included in the blockchain. Therefore, these mechanisms tend to reduce transaction fees at the Bitcoin level (even if they may include other fees at higher levels), which lowers miners' income, which in turn reduces the energy they may consume in the profit equilibrium.
\item[Accelerating the halving] ``Halving'' is the process of reducing the coins minted per block. Currently, Bitcoin is halving minted coins approximately every four years. If this process were accelerated, say to halving every two years, income from minting would be dramatically reduced in a few years, with the consequent impact on energy consumption. This acceleration would not have a significant impact on the amount of Bitcoin in circulation (most coins were already minted), nor on the total number of Bitcoin that will ever be produced (it will be the same, only reached earlier). Nor would it have an impact on the concentration of miners.
\item[Increasing energy price] Given that miners relocate relatively easily, regulating the price of energy for them is not easy, unless worldwide regulations are enforced. However, if the needed agreements could be reached, the price of energy for miners could be increased, for example by taxation, thus reducing the aggregated consumption. Moreover, since usually the problem is not exactly energy consumption itself, but energy consumption with an impact on the environment, access of miners to non-clean energy could be controlled or made more expensive, thus acting not on the total energy consumed, but on the non-clean energy consumed, which could be appropriate in some cases.
\end{description}
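As announced above, the following is a minimal Python sketch of how the Amortization Factor shapes the equilibrium. It is an illustration, not the exact formulation of Equation~\ref{eq:energy-income-amf}: it assumes that the Amortization Factor can be expressed as amortization cost per unit of energy consumed (euro/kWh), so that at the zero-profit point income equals energy cost plus amortization cost, and all the numbers used are hypothetical.
\begin{verbatim}
def max_energy_kwh(income_eur, price_eur_kwh, af_eur_kwh):
    """Maximum energy miners can consume at the zero-profit point,
    assuming expenses = energy cost + amortization, with the
    Amortization Factor (af) expressed in euro per kWh consumed."""
    return income_eur / (price_eur_kwh + af_eur_kwh)

daily_income = 30_000_000  # hypothetical aggregated income (euro/day)
for af in (0.0, 0.05, 0.20):  # higher af: more specialized devices
    e = max_energy_kwh(daily_income, price_eur_kwh=0.04, af_eur_kwh=af)
    print(f"AF = {af:.2f} euro/kWh -> {e / 1e6:.0f} GWh/day")
\end{verbatim}
With the same aggregated income and energy price, a higher Amortization Factor shifts expenses from energy to equipment, lowering the maximum energy consumed while leaving the total cost faced by miners unchanged.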
\subsection{Validity of the main assumptions}
Some assumptions that influence the results presented in this paper could be wrong, therefore affecting some parts of the analysis:
\begin{description}
\item[Miners are operating at income / expenses equilibrium]
Our exploration of the factors that determine the maximum energy consumption by Bitcoin miners is based on the analysis of the equilibrium between their income and their expenditures. However, the assumption that miners operate in this equilibrium is just a convenient artifact. Usually, miners will tend to work for profit, which means that they will try to cut expenditures as much as possible, and their benefit (the difference between their income and their expenses) should also be accounted for.
However, since miners are competitive agents, the moment one of them finds ways to reduce expenses while obtaining the same income, others will try to do the same, nullifying the advantage and tending to an aggregated zero profit. This will be reflected, in general, in the cost of a hash being equal to the income for producing a hash: miners that can reduce the cost per hash will tend to add more hashpower, to maximize their expected income and benefits. That will expel from the set of miners those who are less efficient and, with a higher cost per hash, cannot afford to compete. Usually, they will be replaced by other, more efficient miners, and benefits will again come close to zero. In the past, we have seen this time and again, with the substitution of miners using traditional CPUs by miners using GPUs, and then by miners using ASICs. Or with miners finding increasingly cheaper sources of energy.
Therefore, although in the study we have been interested in the maximum energy consumed, the numbers found should be, if competition is really working, quite close to the actual numbers for energy consumption. In other words, Bitcoin miners are likely working very close, as a whole, to the economic equilibrium of profits and expenses. Of course, any deviation from this assumption will mean deviations in the actual energy consumed.
\item[Miners are competing with other miners]
In general, miners are considered to be competing agents, trying to get as much benefit as possible, with no agreements with other miners beyond those imposed by the functioning of Bitcoin. However, if miners could collude, and somehow find ways to reduce the aggregated hashpower below what Equation~\ref{eq:hashpower-income-cost} mandates, they could collectively have more benefits, and consume less energy. In other words, when competing among themselves, miners are forced to operate as much hashpower as permitted by the equilibrium equation, spending all their income on equipment and energy. But if they could agree on an upper threshold for hashpower, somehow impeding new miners from filling the gap up to the theoretical maximum, they could devote a part of their income to profit, instead of expenses.
It is very unlikely that, given how easy it is to become a miner, and how distributed the population of miners is, these kinds of thresholds can be imposed, so in general we assume this assumption holds.
\item[Energy consumption corresponds to energy consumed by mining devices]
In real mining facilities, energy is not needed only to power mining devices. It is also used for cooling, maintenance, and other tasks. When we analyze the aggregated balance of expenses for miners, this is not a problem, since ``energy consumed'' is just a cost, and can refer to any kind of energy needed by miners. However, in the analysis of the Amortization Factor, the nominal energy consumption of devices is used. To be more realistic, the analysis should also include the other kinds of energy needed to mine. In general, they could be computed as a multiplier of the nominal energy consumption, to get more realistic values for the Amortization Factor of a miner.
\item[Amortization cost is derived only from the cost of devices]
Miners will usually face other amortization costs, not derived from the cost of purchasing mining equipment. They will have installation costs, infrastructure costs, management costs, etc. When we analyze the aggregated balance of expenses for miners, this is not a problem, since ``amortization'' is just a cost, reflecting all costs that are not energy. However, the analysis of the Amortization Factor should also include these other non-energy costs. As in the previous case, they could be computed as a multiplier of the amortization cost.
\item[The period of amortization is known in advance]
Our analysis assumes that the period of full amortization (the period from the moment some mining device is put in operation to the moment it is shut down because it is no longer profitable) is known in advance, so that miners can take it into account when making decisions. However, this is usually not the case. Equipment breakdown, and variations in the Bitcoin exchange rate and energy cost, may suddenly make some equipment profitable or unprofitable during certain periods. These events are not predictable, and therefore miners can only work with assumptions about the future.
This does not affect the general analysis, since whatever their assumptions, either these are close to reality and miners succeed, or they are not and miners go bankrupt and are replaced by others. However, for the specific analysis of a given situation, and therefore the aggregated energy consumption in it, those assumptions matter. For example, if the current exchange rate of Bitcoin is 20,000 Euro/BTC, but miners expect it to be 100,000 Euro/BTC in six months, they will try to acquire equipment to be ready for that new exchange rate.
Therefore, although energy consumption computed for long periods should be close to reality, for short periods it could be quite different, especially if miners have expectations of significant changes of scenario in the near future.
\item[Miners operate with the most profitable equipment]
Again, for the general analysis this is not relevant. But for estimating the energy consumption in a given scenario, our analysis assumes that miners are operating with the most profitable equipment available. In general, this will not be true: miners will operate with the equipment they already have. But in general, the assumption will hold for the new equipment they purchase, since they want to maximize their income. Therefore, as time passes, the installed base of devices gets renewed as old devices become unprofitable, and the new equipment fits the assumption.
Therefore, even when the assumption does not hold exactly, we assume that reality is close enough to it, thanks to this renewal mechanism, for our estimates of energy consumption to be useful.
\item[Fees are known in advance]
Once again, for the general analysis this assumption is not relevant. But for miners to make their decisions, income is as important as expenditure. Income from minting is completely predictable (in Bitcoin), but fees are not. They depend on the queue of transactions waiting to enter the blockchain, and on the expectations of Bitcoin users.
\end{description}
\section{Related work}
\label{sec:related-work}
There have been many studies estimating the energy consumption of Bitcoin miners. In general, they start by estimating the aggregated hashpower of all miners, and then, based on the hashpower of the most common hashing (mining) devices in use, estimate the total number of hashing devices (miners). With this, they can just multiply by the energy consumption of those devices to get the aggregated consumption for all miners. A good example of this approach is described in~\cite{bevand17:_elect_bitcoin}, which summarizes its method as follows:
\begin{quote}
``I drew a chart juxtaposing the Bitcoin hash rate with the market availability of mining ASICs and their energy efficiency. Using pessimistic and optimistic assumptions (miners using either the least or the most efficient ASICs) we can calculate the upper and lower bounds for the global electricity consumption of miners''
\end{quote}
This method is used by the Cambridge Bitcoin Electricity Consumption Index (CBECI)\footnote{CBECI: \url{https://cbeci.org/}}, which, as of June 15th, 2021, estimates an aggregated energy consumption of all miners of 90.10~TWh, with a range between 33.66~TWh and 226.24~TWh.
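For illustration, the bounds method described above can be sketched in a few lines of Python (this is not CBECI's actual code, and the hash rate and efficiency figures are hypothetical placeholders):
\begin{verbatim}
HOURS_PER_YEAR = 24 * 365

def annual_twh(hashrate_th_s, efficiency_j_per_th):
    """Annual consumption (TWh) for a network hash rate (TH/s)
    and a device efficiency (J/TH); J/s equals W."""
    watts = hashrate_th_s * efficiency_j_per_th
    return watts * HOURS_PER_YEAR / 1e12  # Wh -> TWh

hashrate = 150e6           # hypothetical network hash rate, in TH/s
best, worst = 30.0, 100.0  # hypothetical device efficiencies, in J/TH
print("lower bound:", round(annual_twh(hashrate, best), 1), "TWh/year")
print("upper bound:", round(annual_twh(hashrate, worst), 1), "TWh/year")
\end{verbatim}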
Even though this approach is quite helpful for estimating total energy consumption, based on a detailed model of the equipment miners are using and its individual energy consumption, it is not very useful for reasoning about the influence of other factors (such as the Bitcoin exchange rate, or the price of electricity) on the quantity and quality of the equipment actually in use. In other words, although this strategy is very useful to estimate the current aggregate consumption, it is not that useful for reasoning about the factors which ultimately drive that consumption.
In this paper we have followed another approach, based on assuming that miners will in general operate at profit, and studying the factors that influence that profit. To our knowledge, this approach was first explored in \cite{RePEc:new:wpaper:1505}, which focused on the estimation of the cost of ``production'' of Bitcoin as a predictor of its exchange rate:
\begin{quote}
``Bitcoin production seems to resemble a competitive
market, so in theory miners will produce until their marginal
costs equal their marginal product. Break-even points are
modeled for market price, energy cost, efficiency and difficulty to
produce. The cost of production price may represent a
theoretical value around which market prices tend to gravitate.''
\end{quote}
In the process of estimating this ``production cost'', energy consumption is estimated based on the equilibrium between the cost of energy consumed (expenditure) and the exchange value of Bitcoin produced by minting (income) per day, for a given hashpower of a miner. Then the cost that a miner faces for minting a Bitcoin is considered a lower bound for the exchange rate of Bitcoin, as it would be the ``production cost'' of that Bitcoin. Although the paper does not provide a specific equation or estimate for the aggregated energy consumption of all miners, it is straightforward to extrapolate it from the equations shown in the paper. For the estimation of the cost, the paper does not consider amortization of equipment or any cost other than energy consumption, and for income it does not consider transaction fees.
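For instance, under that equilibrium (daily energy cost equal to the exchange value of the coins minted per day, ignoring fees and amortization), the aggregated consumption extrapolates to
\[
E \cdot P_E = B \cdot X
\quad\Longrightarrow\quad
E = \frac{B \cdot X}{P_E},
\]
where $E$ is the energy consumed per day, $P_E$ the price of energy, $B$ the number of Bitcoin minted per day, and $X$ the exchange rate. The notation here is ours, not the cited paper's.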
The paper also includes an interesting discussion on how, below a certain exchange rate, miners would shut down equipment which is not profitable for them because it consumes too much energy. But the analysis does not consider what happens if for some reason the exchange rate stays at that value: the aggregated hashpower will drop because of the retiring miners, the difficulty of the Bitcoin protocol will drop, to adjust the number of Bitcoin minted per period of time, and the remaining miners will be profitable again because the Bitcoin minted per hash will therefore increase. This would lead to a new stable situation, with a lower aggregated hashpower, and therefore a lower energy consumption, without altering the exchange rate of Bitcoin. In other words, as we showed, the aggregated hashpower will adjust to the other factors actually driving energy consumption.
This approach is further explored in~\cite{vries2017:bitcoin_elect_consum}\footnote{A peer-reviewed version has been published as \cite{vries21:_bitcoin}, but I couldn't access it.}, where it is specifically used to estimate an upper bound for aggregated energy consumption. However, that paper does not take transaction fees into account, and takes amortization costs into account only in a very limited way. It is also more focused on finding upper and lower bounds for energy consumption given a certain scenario than on exploring the interrelationship between the different parameters influencing energy consumption.
This approach is also the basis of the Bitcoin Energy Consumption Index (BECI)\footnote{BECI: \url{https://digiconomist.net/bitcoin-energy-consumption/}}, which, as of June 15th, 2021, estimates a minimum consumption per year of 42.22~TWh, and an estimated consumption per year of 121.29~TWh.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have explored the factors determining the maximum energy consumption by Bitcoin miners, based on an analysis of the equilibrium between income and expenditures for miners. Some of the main results of this exploration are:
\begin{itemize}
\item A method, based on the analysis of the income and expenses of all miners, to find the economic equilibrium in which they should operate if they are on the verge of being profitable while at the same time competing.
\item The Bitcoin Energy Factors Formula, Equation~\ref{eq:energy-factors}, which relates energy consumption to Bitcoin minting, transaction fees, energy price, amortization cost, and Bitcoin exchange rate. This equation shows how all these factors impact the aggregated energy consumption of all miners.
\item The Bitcoin Energy Amortization Factor Formula, Equation~\ref{eq:energy-income-amf}, which relates energy consumption to the aggregated income of all miners, the Amortization Factor (a ratio characterizing mining equipment), and the price of energy.
\item A detailed explanation of how individual factors affect aggregated energy consumption.
\item An analysis of some scenarios, characterized by the values of the different factors, computing the aggregated energy consumed in each of them.
\item A proposal of some mechanisms that could be used to control the aggregated energy consumption.
\end{itemize}
Not necessarily all of these contributions are new, but they helped me to make up my mind, and to reason about how the energy consumption of Bitcoin miners actually works, beyond just estimating how big it is and inferring how it will evolve based on current trends. Instead, by looking at the fundamental factors that affect it, I think we can better reason about its future trends, and even start discussions about how it could be controlled, while at the same time maintaining the main characteristics of the current Bitcoin protocol.
\textbf{Reproduction package} There is a GitLab repository\footnote{GitLab repository for this paper: \url{https://gitlab.com/jgbarah/bitcoin-paper}} with the source for this paper, including a Jupyter Python notebook and some CSV files with the data for the scenarios. The notebook can also be consulted interactively in Binder\footnote{Notebook in Binder: \url{https://mybinder.org/v2/gl/jgbarah\%2Fbitcoin-paper/HEAD?filepath=notebooks\%2Fanalysis.ipynb}}.
\textbf{Feedback about this paper} Please provide feedback about this paper through whatever channel you think will reach me. This said, opening an issue in GitLab\footnote{Feedback: \url{https://gitlab.com/jgbarah/bitcoin-paper/-/issues/new}} is the preferred feedback method.
\printbibliography
\end{document}
\section{Approach}
\label{sect:approach}
In what follows, we denote with $\Probab{\cdot}$ and
$\Pcond{\cdot}{\cdot}$ the observed marginal and conditional
probability of an event, whose complement is denoted with the
diacritical mark $\bar{\,\cdot\,}$ (macron).
\paragraph{\bfseries A probabilistic model of selective advantage.}
Central to
CAPRI's score function is Suppes' notion of
\emph{probabilistic causation} (cfr., \cite{Suppes70}),
which can be stated in
the following terms: a selectivity relation\footnote{Suppes presents the relation in terms of causality; however, we avoid Suppes' terminology as we build on just two of his many axioms, which only give rise to the notion of {\em prima-facie\/} causality.}
holds among two observables $i$ and $j$ if
(1) $i$ occurs earlier than $j$ -- \emph{temporal priority} (\textsc{tp}) --
and (2) if the probability of observing $i$ raises the probability of
observing $j$, i.e., $\Pcond{j}{i} > \Pcond{j}{\bar{i}}$ --
\emph{probability raising} (\textsc{pr}).
The definition of probability raising subsumes positive
statistical dependency and mutuality (see, e.g., \cite{Loohuis:2014im}).
Note that the resulting relation (also called prima facie causality) is purely
observational and remains agnostic to the possible mechanistic
cause-effect relation involving $i$ and $j$.
While Suppes' definition of probabilistic causation has known limitations in the context of general
causality theory (see discussions in, e.g.,
\cite{probabilistic-causation,kleinberg-thesis}),
in the context of cancer evolution, this relation appropriately describes various features of
\emph{selective advantage} in somatic alterations that accumulate
as a tumor progresses.
Thus, in our framework, we implement the temporal priority among
events -- condition (1) -- as $\Probab{i} > \Probab{j}$, because it is
intuitively sound to assume that the (cumulative) genomic events
occurring earlier are the ones present in higher frequency in a
dataset.
In addition, condition (2) is implemented as is\Newrev{, that is, by requiring that for each pair of
observables $i$ and $j$ directly connected, $\Pcond{j}{i} >
\Pcond{j}{\bar{i}}$ is verified}.
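As a minimal illustration (ours, not CAPRI's actual implementation, which adds the filtering, likelihood-fit and bootstrap steps described below), both conditions can be checked directly on a binary matrix of samples $\times$ events:
\begin{verbatim}
import numpy as np

# Check Suppes' conditions for "i has a selective influence on j"
# on a binary matrix D (rows: samples, columns: events).
def prima_facie(D, i, j):
    p_i, p_j = D[:, i].mean(), D[:, j].mean()
    p_j_given_i = D[D[:, i] == 1, j].mean()
    p_j_given_not_i = D[D[:, i] == 0, j].mean()
    tp = p_i > p_j                      # temporal priority
    pr = p_j_given_i > p_j_given_not_i  # probability raising
    return tp and pr

# Toy dataset: event 0 tends to select for event 1.
D = np.array([[1, 1], [1, 1], [1, 0], [0, 0], [1, 0], [0, 0]])
print(prima_facie(D, 0, 1))  # True
\end{verbatim}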
Taken together, these conditions
give rise to a natural ordering relation among events, written ``$i
\rhd j$'' and read as ``$i$ has a selective influence on $j$.''
This relation is a \emph{necessary} but \emph{not
sufficient} condition to capture the notion of selective advantage, and additional
constraints need to be imposed to filter spurious relations.
Spurious correlations are both intrinsic to the
definition (e.g., if $i \rhd j \rhd w$ then also $i \rhd w$,
which could be spurious) and to the model we aim to infer, because
the data is finite as well as corrupted by noise.
Building on this framework, we
devise inference algorithms that capture the essential aspects of heterogeneous
cancer progressions: \emph{branching}, \emph{independence} and
\emph{convergence} -- all combining in a progression model.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.35\textwidth]{SingleCause}
\end{center}
\begin{center}
\tiny
\begin{tabular}{ l || l }
{\ } & \emph{temporal priority} \\ \hline
path & $\Probab{a} > \Probab{b} > \Probab{c}$ \\
branch & $\Probab{a} > \Probab{b} > \Probab{d}$
and $\Probab{a} > \Probab{c}$
\end{tabular}
\end{center}
$\;$ \\
\begin{center}
\includegraphics[width=0.35\textwidth]{ConjunctiveCause}
\end{center}
\begin{center}
\tiny
\begin{tabular}{ l || l }
{\ } & \emph{temporal priority} \\ \hline
co-occurrence & $\Probab{a} > \Probab{d}$
and $\Probab{b} > \Probab{d}$
and $\Probab{c} > \Probab{d}$
\end{tabular}
\end{center}
\caption[Singleton and co-occurrence selectivity patterns.]{\textbf{Singleton and Co-occurrence Selectivity Patterns.}
Examples of patterns that CAPRI can automatically extract without prior hypotheses.
\emph{(Top):} A linear path and branching model (left) and
corresponding singleton selectivity patterns with infinite sample
size (right). All the genuine connections are shown (red and black, directed
by the temporal priority), as well as edges
(purple, undirected) which might be suggested by the topology (or
observations, if data were finite).
\emph{(Bottom):} Example of conjunctive model ($a$ \emph{and} $b$
\emph{and} $c$). The co-occurrence selectivity pattern is shown, with
all true patterns and infinite sample size. The topology is augmented
by logical connectives; green arrows are spurious patterns emerging from
the structure of the true pattern $a \wedge b \wedge c \rhd d$.
}
\label{fig:single-cause}
\end{figure}
\paragraph{\textbf{Progression patterns.}}
The complexity of cancer requires modeling multiple
non-trivial \emph{patterns} of its
progression: for a specific event, a pattern is defined as a specific
combination of the closest upstream events that confers a selective
advantage.
As an example, imagine a clonal subpopulation becoming fit -- thus
enjoying expansion and selection -- once it acquires a mutation of
gene $c$, provided it also has previously acquired a
mutation in a gene in the upstream
$a$/$b$ pathway. In terms of progression, we would like to capture the
trajectories: $\{a,\neg b\}, \{\neg a, b\}$ and $\{a,b\}$, each preceding
$c$ (where $\neg$ denotes the absence of an event in the gene).
To establish this analysis formally, we augment
our
model of selection in a tumor with a language built from simple
propositional logic formulas using the usual Boolean connectives: namely,
``and'' ($\wedge$), ``or'' ($\vee$) and ``xor''
($\oplus$). These patterns can be
described by formul\ae\ in a propositional logical language, which can be
rendered in \emph{Conjunctive Normal Form} (CNF). A CNF formula
$\varphi$ has the following syntax: $\varphi = \boldsymbol{c}_1 \wedge \ldots
\wedge \boldsymbol{c}_n$, where each $\boldsymbol{c}_i$ is a \emph{disjunctive clause}
$\boldsymbol{c}_i = c_{i,1} \vee \ldots \vee c_{i,k}$ over a set of literals,
each literal representing an event or its negation.
Given this (rather obvious) pattern representation,
we write the conditions for
\emph{selectivity with patterns} as
\begin{equation}
\varphi \rhd e \iff \Probab{\varphi} > \Probab{e} \text{ and } \big[\Pcond{e}{\varphi}
> \Pcond{e}{\bar{\varphi}} \big]\, ;
\end{equation}
with respect to the example above, patterns\footnote{Note that
the conjunction $\wedge$ in our setting is interpreted
differently from the classical notion (and the one adopted in e.g., \cite{gerstung2009quantifying}) since $a \wedge b \rhd
c$ implies $a \rhd c$ and $b \rhd c$ in our framework. See also
\cite{Beerenwinkel:2014eb}.
Moreover, note that the scope of
this study is intentionally kept limited from further generalization
of formul\ae\; i.e., we will not consider statements of the form
$\varphi_i \rhd \varphi_j$, where the rightmost argument is a
formula too.}
could be $a \vee b \rhd
c$ and $a \oplus b \rhd c$.
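As a further illustration (again ours, not CAPRI's code), testing such a pattern amounts to evaluating the formula sample-by-sample and then applying the same two conditions used for single events:
\begin{verbatim}
import numpy as np

# Test a selectivity pattern "phi selects for e", with phi = a xor b.
def pattern_selects(phi, e):
    tp = phi.mean() > e.mean()                    # temporal priority
    pr = e[phi == 1].mean() > e[phi == 0].mean()  # probability raising
    return tp and pr

a = np.array([1, 1, 0, 0, 1, 0, 0, 0])
b = np.array([0, 0, 1, 1, 0, 0, 0, 0])
c = np.array([1, 0, 1, 0, 1, 0, 0, 0])
phi = np.logical_xor(a, b).astype(int)  # hard exclusivity pattern
print(pattern_selects(phi, c))  # True
\end{verbatim}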
In our framework the problem of reconstructing a probabilistic
graphical model of progression reduces to the following:
for each input event $e$, assess a \emph{set of selectivity patterns}
$\{\varphi_1 \rhd e, \ldots, \varphi_k \rhd e\}$,
filter the spurious ones,
and combine the rest in a \emph{direct acyclic graph}
\Newrev{(DAG)}\footnote{\Newrev{A DAG is formed by a set of nodes
and oriented edges connecting one node to another, such that
there are no directed loops among them. See SI Section 1 for a
technical definition.}},
augmented with logical symbols.
Notice that while we broke down the progression extraction into a series of
sub-tasks, the problem remains complex:
patterns are unknown, potentially spurious, and exponential in formula
size; data is noisy; patterns must allow for
``imperfect regularities'', rather than being strict\footnote{This
statement implies that there could be samples -- i.e., patients -- contradicting
a pattern which still remains valid at a population level.
For this reason a
pattern $x \wedge y \rhd z$ is sometimes called a ``noisy
and''.}.
To summarize, in our setting we can model complex progression trajectories
with branches (i.e., events involved in various patterns), independent
progressions (i.e., events without common ancestors) and convergence
(via CNF formulas). The framework we introduce here is highly versatile, and
to the best of
our knowledge, it
infers and checks more complex claims than any cancer progression
algorithm described thus far
(cfr.,\cite{desper_1999,gerstung2009quantifying,Loohuis:2014im}).
\section{Introduction}
\input{Approach}
\input{Methods}
\input{Discussion}
\input{Conclusions}
\section*{Acknowledgements}
This research was funded by the NSF
grants CCF-0836649 and CCF-0926166 and by Regione Lombardia (Italy)
under the research projects RetroNet through the ASTIL Program
[12-4-5148000-40]; U.A 053 and Network Enabled Drug Design project
[ID14546A Rif SAL-7] Fondo Accordi Istituzionali 2009.
We also thank Francesca Ciccarelli, King's College
London, UK, and others for suggesting the ``selectivity advantage''
terminology.
We would also like to thank all the participants of the Workshop and
School on \emph{Cancer, Systems and Complexity} held on Lake Como,
Italy for many fruitful discussions there
(\texttt{csac.lakecomoschool.org}).
Finally, we are also indebted to Rocco Piazza, Universit\`{a} degli Studi di
Milano Bicocca, Italy, for all the data, insights and
patience in explaining to us the biology of aCML.
\bibliographystyle{ieeetr}
\section{Conclusions}
\label{sect:conclusions}
The \emph{reconstruction of cancer progression models} is a pressing problem,
as it promises to highlight important clues about the evolutionary dynamics of
tumors and to help in better targeting therapy to the
tumor (see e.g.,~\cite{olde2014cancer}).
In the absence of large longitudinal datasets, progression extraction
algorithms rely primarily on \emph{cross-sectional} input data,
thus complicating the statistical inference problem.
In this paper we presented CAPRI, a new algorithm (and part of the TRONCO
package) that attacks the progression model reconstruction problem by
inferring \emph{selectivity relationships} among
``genetic events'' and organizing them in a graphical model. The
reconstruction algorithm draws its power from a combination of a
scoring function (using Suppes' conditions) and subsequent filtering and refining procedures, maximum-likelihood estimates
and bootstrap iterations. We have shown that CAPRI outperforms a wide
variety of state-of-the-art algorithms.
We note that CAPRI performs especially well in
the presence of noise in the data, and with limited sample size.
Moreover we note that, unlike other approaches, CAPRI can
reconstruct different types of confluent trajectories unaffected by the
irregularities in the data -- the only limitation being our ability to
hypothesize these patterns in advance. We also note that CAPRI's overall
algorithmic complexity and convergence properties
do offer several tradeoffs to
the user.
\Newrev{
Successful cancer progression extraction
is complicated by tumor heterogeneity: many tumor types have molecular
subtypes following different progression patterns.
For this reason, it can be advantageous to cluster patient samples
by their genetic subtype prior to applying CAPRI. Several tools have
been developed that address this clustering problem (e.g.,
Network-based stratification \cite{hofree2013network} or COMET from \cite{comet}).
A related problem is the classification of
mutations into functional categories.
In this paper, we have used genes with deleterious mutations
as driving events. However, depending on other criteria, such as the level of
homogeneity of the sample, the states of the progression can represent
any set of discrete states at varying levels of abstraction.
Examples include high-level hallmarks of cancer
proposed by \cite{hanahan_hallmarks_2000, Hanahan_Weinberg_2011}, a set of
affected pathways, a selection of driving genes, or a set of specific
genomic aberrations such as genetic mutations at a more mechanistic
level.
}
We are currently using CAPRI to conduct a number of studies on
publicly available datasets (mostly from TCGA, \cite{tcga}) in
collaboration with colleagues from various institutions.
\Newrev{In this work we have shown the results of the reconstruction on the \textsc{aCML}{} dataset published by \cite{piazza_nat_gen}, and in SI Section 4 we include a further example application on ovarian cancer (\cite{knutsen2005interactive}), as well as a comparative study against the competing techniques.
Furthermore, we are currently extending our pipeline in order to include
pre-processing functionalities, such as patient
clustering and categorization of mutations/genes into pathways
(using databases such as the KEGG database, see \cite{kanehisa2000kegg}, and
functionalities from tools like Network-based clustering, due to \cite{hofree2013network}).
}
Encouraged by CAPRI's ability to infer interesting relationships in a
complex disease such as aCML, we expect
\Newrev{that in the future} CAPRI will
help uncover relationships to aid our understanding of cancer and
eventually improve targeted therapy design.
\section{Results and Discussion}
\label{sect:discussion}
\begin{figure*}
\begin{center}
\includegraphics[width=0.80\textwidth]{composizione_v_alt}
\end{center}
\vspace*{.05in}
\caption[Comparative Study]{\textbf{Comparative Study}.
Performance and accuracy of CAPRI{}
(unsupervised execution) and other algorithms, IAMB, PC, BIC, BDE,
CBN and CAPRESE{}, were compared using synthetic datasets sampled by a
large number of randomly parametrized progression models -- \emph{trees},
\emph{forests}, \emph{connected} and \emph{disconnected DAGs}, which
capture different aspects of confluent, branched and heterogenous
cancer progressions. For each of those, $100$ models with $n=10$
events were created and $10$ distinct datasets were sampled by each
model. Datasets vary by number of samples $(m)$ and level of noise
in the data $(\nu)$ -- see the Supplementary Information file for details.
(\emph{Red box}) Average \emph{Hamming distance} (HD) -- with $1000$
runs -- between the reconstructed and the generative model, as a
function of dataset size $(m \in \{50, 100, 150, 200, 500, 1000
\})$, when data contain no noise $(\nu = 0)$. The lower the HD,
the smaller is the total rate of mis-inferred selectivity relations
among events. (\emph{Blue box}) The same is shown for a fixed sample
set size $m = 100$ as a function of noise level in the data $(\nu
\in \{0, 0.025, 0.05, \cdots, 0.2\})$ so as to account for input
\emph{false positives} and \emph{negatives}. See SI Section 3
for more extensive results on precision and recall scores and also including additional
combinations of noise and samples as well as experimental settings.}
\label{fig:performance}
\end{figure*}
To determine CAPRI's relative accuracy (true-positives and
false-negatives) and performance compared to the state-of-the-art
techniques for \emph{network inference}, we performed extensive
\emph{simulation experiments}. From a list of potential
competitors of CAPRI, we selected: \emph{Incremental Association
Markov Blanket} (IAMB, \cite{tsamardinos2003algorithms}), the
\emph{PC algorithm} (see \cite{spirtes2000causation}), \emph{Bayesian
Information Criterion} (BIC, \cite{bic_1978}), \emph{Bayesian
Dirichlet with likelihood equivalence} (BDE, \cite{bde_1995})
\emph{Conjunctive Bayesian Networks} (CBN,
\cite{gerstung2009quantifying}) and \emph{Cancer Progression Inference with Single Edges} (CAPRESE, \cite{Loohuis:2014im}). These
algorithms constitute a rich landscape of structural methods
(IAMB and PC), likelihood scores (BIC and BDE) and hybrid approaches
(CBN and CAPRESE).
\Newrev{Also, we applied CAPRI to the analysis of an atypical Chronic Myeloid Leukemia dataset
of somatic mutations, with data based on
\cite{piazza_nat_gen}.}
\subsection{Synthetic data}
\label{sect:comparison}
We performed extensive tests on a large number of \emph{synthetic
datasets} generated by randomly parametrized progression models with
distinct key features, such as the presence/absence of: $(1)$
\emph{branches}, $(2)$ \emph{confluences with patterns of
co-occurrence}, $(3)$ \emph{independent progressions} (i.e.,
composed of disjoint sub-models involving distinct sets of events).
Accordingly, we distinguish four classes of generative models with
increasing complexity and the following features:
\begin{center} \footnotesize
\begin{tabular}{c|c|c|c|c}
& \emph{trees} & \emph{forests} & \emph{connected DAGs} & \emph{disconnected DAGs} \\
\hline
(1) & \color{green}{\ding{51}} & \color{green}{\ding{51}} & \color{green}{\ding{51}} & \color{green}{\ding{51}} \\
\hline
(2) & \color{red}{\ding{55}} & \color{red}{\ding{55}} & \color{green}{\ding{51}} & \color{green}{\ding{51}} \\
\hline
(3) & \color{red}{\ding{55}} & \color{green}{\ding{51}} & \color{red}{\ding{55}} & \color{green}{\ding{51}} \\
\end{tabular}
\end{center}
The choice of these different types of topologies is not a mere
technical exercise, but rather is motivated, in our application of
primary interest, by the \emph{heterogeneity of cancer cell types} and
the \emph{possibility of multiple cells of origin}.
To account for \emph{biological noise} and \emph{experimental errors}
in the data, we introduce a parameter $\nu \in (0,1)$ which represents
the probability of each entry of $D$ being random, thus yielding
a \emph{false positive} rate ($\epsilon_+$) and a \emph{false negative}
rate ($\epsilon_-$): $\epsilon_+ =\epsilon_- = {\nu}/{2}\,$. The
noise level complicates the inference problem, since samples generated
from such topologies will likely contain sets of mutations that are
correlated but causally irrelevant.
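For concreteness, this noise model can be sketched as follows (an illustration consistent with the definition above; the actual generation procedure is described in SI Section 3):
\begin{verbatim}
import numpy as np

# With probability nu, replace an entry of the binary dataset D by a
# uniformly random bit; this yields false positive and false negative
# rates of nu/2 each.
def add_noise(D, nu, seed=0):
    rng = np.random.default_rng(seed)
    randomize = rng.random(D.shape) < nu
    random_bits = rng.integers(0, 2, size=D.shape)
    return np.where(randomize, random_bits, D)
\end{verbatim}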
To have reliable statistics in all the tests, $100$ distinct
progression models per topology are generated and, for each model, for
every chosen combination of sample set size $m$ and noise rate $\nu$,
$10$ different datasets are sampled (see SI Section 3
for our synthetic data generation methods).
Algorithmic performance was evaluated using the metrics \emph{Hamming
distance} (HD), \emph{precision} and
\emph{recall}, as a function of dataset size, $\epsilon_+$ and
$\epsilon_-$. HD measures the \emph{structural similarity} among the
reconstructed progression and the generative model in terms of the
minimum-cost sequence of node edit operations (inclusion and exclusion)
that transforms the reconstructed topology into the generative
one\footnote{This measure corresponds to the sum of false positives
and false negatives and, for a set of $n$ events, is bounded above by
$n(n-1)$ when the reconstructed topology contains all the false
negatives and positives.}. Precision and recall are defined as
follows: \emph{precision} = TP/({TP} + {FP}) and \emph{recall} =
TP/({TP} + {FN}), where {TP} are the \emph{true positives} (number of
correctly inferred true patterns), {FP} are the \emph{false positives}
(number of spurious patterns inferred) and {FN} are the \emph{false
negatives} (number of true patterns that are \emph{not} inferred).
The closer both precision and recall are to $1$, the
better.
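Treating a model as a set of directed edges, these metrics can be computed as in the following sketch (a direct illustration of the definitions above):
\begin{verbatim}
# Compare a reconstructed model with the generative one, both given
# as sets of directed edges.
def metrics(true_edges, inferred_edges):
    tp = len(true_edges & inferred_edges)  # true positives
    fp = len(inferred_edges - true_edges)  # false positives
    fn = len(true_edges - inferred_edges)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    hamming = fp + fn  # sum of false positives and false negatives
    return precision, recall, hamming

true_edges = {("a", "b"), ("b", "c"), ("a", "d")}
inferred = {("a", "b"), ("a", "d"), ("c", "d")}
print(metrics(true_edges, inferred))  # (0.67, 0.67, 2), approximately
\end{verbatim}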
In Figure \ref{fig:performance} we show the performance of CAPRI and
of the competing techniques, in terms of Hamming distance, on datasets
generated from models with $10$ events and all the four different
topologies. In particular, we show the performance: $(i)$ in the case
of noise-free datasets, i.e., $\nu = 0$ and different values of the
sample set size $m$ and $(ii)$ in the case of a fixed sample set size,
$m = 100$ (size that is likely to be found in currently available
cancer databases, such as TCGA (cfr., \cite{tcga})) and different
values of the noise rate $\nu$. As is evident from Figure \ref{fig:performance}, CAPRI{}
outperforms all the competing techniques with respect to all the
topologies and all the possible combinations of noise rate and sample
set size, in terms of average Hamming distance (with the only
exception of CAPRESE in the case of trees and forests, which displays a
behavior closer to CAPRI's). The analyses on precision and
recall
display consistent results (SI Section 3). In other words,
we demonstrate on the basis of extensive synthetic tests that CAPRI requires
a much lower number of samples than the other techniques in order to
converge to the real generative model and also that it is much more
robust even in the presence of a significant amount of noise in the data, irrespective of the
underlying topology.
See SI Section 3
for a more complete description
of the performance evaluation for all the
analyzed combinations of parameters. There, we have shown that CAPRI{} is highly
effective when the co-occurrence constraint on confluences is
relaxed to \emph{disjunctive} patterns,
\emph{even if no input hypotheses are provided}, i.e.,
$\Phi=\emptyset$. This result hints at CAPRI's robustness to infer patterns
with imperfect regularities. Finally, we also show that CAPRI{} is
effective in inferring synthetic lethality relations, in this case using the
operator $\oplus$ introduced in
\Newrev{Section~\ref{sect:approach}, Approach}: a
combination of mutations in two or more genes leads to cell death,
while, separately, the mutations are viable. In this case, candidate
relations are directly input as $\Phi$.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.46\textwidth]{aCML-dataset}
\includegraphics[width=0.53\textwidth]{aCML-CAPRI}
\end{center}
\vspace*{.05in}
\caption[Atypical Chronic Myeloid Leukemia ]{\Newrev{\textbf{Atypical Chronic Myeloid Leukemia}.
{\em(left)} Mutational profiles of $n=64$ \textsc{aCML}{} patients - exome sequencing in \cite{piazza_nat_gen} - with alterations in $|G|=9$ genes with either mutation frequency $>5\%$ or
belonging to a hypothesis input to CAPRI (SI Section 4). Mutation types are classified as \emph{nonsense point}, \emph{missense point} and \emph{insertion/deletions}, yielding $m=16$ input events. Purple annotations report the frequency of mutations per sample. {\em(right)} Progression model inferred by CAPRI{} in supervised mode. Node size is proportional to the marginal probability of each event, edge thickness to the confidence estimated with $1000$ non-parametric bootstrap iterations (numbers shown leftmost of every edge). The p-value of the hypergeometric test is displayed too. Hard exclusivity patterns input to CAPRI are indicated as red squares. Events without inward/outward edges are not shown.}}
\label{fig:aCML_model}
\end{figure*}
\subsection{Atypical Chronic Myeloid Leukemia (aCML)}
\label{sect:aCML}
\Newrev{
As a case study, we applied CAPRI to the mutational profiles
of $64$ \textsc{aCML}{} patients described in \cite{piazza_nat_gen}. Through exome
sequencing, the authors identify a recurring \emph{missense
point mutation} in the \emph{SET-binding protein 1} (\textsc{setbp1}{}) gene as
a novel \textsc{aCML}{} marker.
Among all the genes present in the dataset by Piazza {\em et al.},
we selected those either $(i)$ mutated - considering any mutation type
- in at least $5\%$ of the input samples ($3$ patients), or $(ii)$
hypothesised to be part of a functional \textsc{aCML}{} progression pattern in
the literature \footnote{Two \emph{hard exclusivity} patterns - i.e.,
mutual exclusivity with ``xor'' - were tested, involving the
mutations of: $(i)$ genes \textsc{asxl1}{} and \textsc{sf3b1}{} (see
\cite{lin2014sf3b1}), which is present in the inferred progression
model in Figure \ref{fig:aCML_model}, and $(ii)$ genes \textsc{tet2}{} and
\textsc{idh2}{} (see \cite{figueroa2010leukemic}). The syntax in which the
patterns are expressed is in the SI, Section 4.}. The input dataset
with selected events is shown in Figure \ref{fig:aCML_model}; notice
that somatic mutations are categorised as \emph{indel},
\emph{missense point} and \emph{nonsense point} as in \cite{piazza_nat_gen}. In Figure \ref{fig:aCML_model} we show the
model reconstructed by CAPRI (supervised mode, execution time
$\approx 5$ seconds) on this dataset, with confidence
assessed via $1000$ non-parametric bootstrap iterations. The model highlights several non-trivial selectivity relations involving genomic events relevant to \textsc{aCML}{} development.
First, CAPRI predicts a progression involving mutations in \textsc{setbp1}{},
\textsc{asxl1}{} and \textsc{cbl}{}, consistent with the recent study by \cite{meggendorfer2013setbp1}, in which these genes were shown to be highly correlated and possibly functioning in a synergistic manner for \textsc{aCML}{} progression.
Specifically, CAPRI predicts a selective advantage relation between
missense point mutations in \textsc{setbp1}{} and nonsense point mutations in
\textsc{asxl1}{}. This is in line with recent evidence from
\cite{inoue2014setbp1} suggesting that \textsc{setbp1}{} mutations are enriched
among \textsc{asxl1}{}-mutated \emph{myelodysplastic syndrome} (MDS) patients,
and \emph{in-vivo} experiments point to a driver role of \textsc{setbp1}{} for
that leukemic progression.
Interestingly, our model seems also to suggest a different role of \textsc{asxl1}{}
\emph{missense} and \emph{nonsense} mutation types in the
progression, yet more extensive studies (e.g., prospective studies or a systems biology explanation) are needed to corroborate this hypothesis.
Among the hypotheses given as input to CAPRI, the algorithm seems to suggest that the exclusivity pattern among \textsc{asxl1}{} and \textsc{sf3b1}{} mutations selects for \textsc{cbl}{} missense point mutations. The role of the \textsc{asxl1}{}/\textsc{sf3b1}{} exclusivity pattern is consistent with the study of \cite{lin2014sf3b1} which shows that, on a cohort of 479 MDS patients, mutations in \textsc{sf3b1} are inversely related to \textsc{asxl1}{} mutations.
Also, in \cite{abdel2012asxl1} it was recently shown that \textsc{asxl1}{} mutations, in patients with MDS, \emph{myeloproliferative neoplasms} (MPN) and \emph{acute myeloid leukemia}, most commonly occur as nonsense and insertion/deletion in a clustered region adjacent to the highly conserved \emph{PHD} domain (see \cite{gelsi2009mutations}) and that mutations of any type eventually result in a loss of \textsc{asxl1}{} expression. This observation is consistent with the exclusivity pattern among \textsc{asxl1}{} mutations in the reconstructed model, possibly suggesting alternative trajectories of somatic evolution for \textsc{aCML}{} (involving either \textsc{asxl1}{} nonsense or indel mutations).
Finally, CAPRI predicts selective advantage relations among \textsc{tet2}{}
and \textsc{ezh2}{} missense point and indel mutations. Even though the limited sample size does not allow us to draw definitive conclusions on the
ordering of such alterations, we can hypothesize that they may play a synergistic role in \textsc{aCML}{} progression. Indeed, \cite{muto2013concurrent} suggests that the concurrent loss of \textsc{ezh2}{} and \textsc{tet2}{} might cooperate in the pathogenesis of myelodysplastic disorders, by accelerating the overall tumor development, with respect to both MDSs and \emph{overlap disorders} (MDS/MPN).
}
\section{Introduction}
\label{sect:introduction}
Analysis and interpretation of the fast-growing biological data sets
that are currently being curated from laboratories all over the world
require sophisticated computational and statistical methods.
Motivated by the availability of genetic patient data,
we focus on the problem of
\emph{reconstructing progression models} of cancer. In particular, we
aim to infer
the plausible sequences of \emph{genomic alterations} that, by a
process of \emph{accumulation}, selectively make a tumor fitter to
survive, expand and diffuse (i.e., metastasize).
Along the trajectories of progression, a tumor (monotonically) acquires or
``activates'' mutations in the genome, which, in turn, produce progressively more
``viable'' clonal subpopulations over the so-called
\emph{cancer evolutionary landscape}
(cfr., \cite{Merlo:2006em,huang_2009,Vogelstein29032013}).
Knowledge of such
progression models is very important for drug development and in
therapeutic decisions. For example, it is known that for the same cancer type, patients
in different stages of different progressions respond differently to different
treatments.
Several datasets are currently available that aggregate
diverse cancer-patient data and report in-depth mutational profiles,
including e.g., structural changes (e.g., inversions, translocations, copy-number variations) or somatic mutations
(e.g., point mutations, insertions, deletions, etc.). An example of
such a dataset is \emph{The Cancer Genome Atlas} (TCGA)
(cfr., \cite{tcga}). These data, by their
very nature,
only give a snapshot of a given tumor sample,
mostly from biopsies of untreated tumor samples at the time of diagnosis. It still remains
impractical to track the tumor progression in any single patient over
time, thus limiting most analysis methods to work with
\emph{cross-sectional} data\footnote{Unlike longitudinal
studies, these cross-sectional data are derived from samples that are collected at
unknown time points, and can be considered as ``static''.}.
To rephrase, we focus on the problem of \emph{cancer
progression model reconstruction from cross-sectional
data}. The problem is not new and, to the best of our knowledge,
two threads of research starting in the
late 90's have addressed it.
The first category of works examined mostly gene-expression
data to reconstruct the temporal ordering of samples
(cfr., \cite{magwene03:_reconstructing,gupta_bar_joseph08:_extracting}).
The second category of works
looked at inferring cancer
progression models of increasing model-complexity, starting from the simplest tree models
(cfr. \cite{desper_1999}) to more complex graph models (cfr.,
\cite{gerstung2009quantifying}); see the next subsection for an overview of the state of
the art.
Building on our previous work described in \cite{Loohuis:2014im} we present a novel
and comprehensive algorithm of the second category that addresses this
problem.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.80\textwidth]{overview}
\end{center}
\vspace*{.05in}
\caption[Selectivity Relation in Tumor Evolution]{\textbf{Selectivity Relation in Tumor Evolution.}
The \emph{CAncer PRogression Inference} (CAPRI) algorithm
examines cancer patients' genomic cross-sectional data to determine
relationships among genomic alterations (e.g., somatic mutations,
copy-number variations, etc.) that modulate the somatic evolution
of a tumor. When CAPRI concludes that aberration $a$ (say, an
\textsc{egfr} mutation) ``selects for'' aberration $b$ (say, a
\textsc{cdk} mutation),
such relations can be rigorously expressed using Suppes'
conditions, which postulates that if $a$ selects $b$,
then $a$ occurs before $b$ (\emph{temporal priority}) and occurrences of
$a$ raises the probability of emergence of $b$ (\emph{probability
raising}). Moreover, CAPRI is capable of reconstructing relations
among more complex boolean combination of events, as shown in the
bottom panel and discussed in the Approach section.
}
\label{fig:overview}
\end{figure*}
The new algorithm proposed here is called \emph{CAncer PRogression
Inference} (CAPRI) and is part of the \emph{TRanslational ONCOlogy}
(TRONCO) package (cfr., \cite{tronco}). Starting from cross-sectional
genomic data, CAPRI reconstructs a probabilistic progression model
by inferring ``selectivity relations'', where a mutation in a gene
\textit{A} ``selects'' for a later mutation in a gene \textit{B}.
These relations are depicted in a combinatorial graph and resemble the way a mutation exploits its
``\emph{selective advantage}''
to allow its host cells to expand
clonally. Among other things, a selectivity relation implies a putatively invariant temporal structure among the
genomic alterations (i.e., \emph{events}) in a specific cancer type.
In addition, these relations are expected to also imply ``probability raising'' for a pair of events in the following sense:
a selectivity relation between a pair of events here signifies that
the presence of the earlier genomic alteration (i.e., the
\emph{upstream event}) that is advantageous in a
Darwinian competition scenario
increases the probability with which a subsequent advantageous
genomic alteration (i.e., the \emph{downstream event})
appears in the clonal evolution of the tumor. Thus the selectivity relation captures the effects of the
evolutionary processes, and not just correlations among the events and imputed clocks associated with them. As an example, we show in Figure~\ref{fig:overview} the selectivity relation connecting a mutation of \textsc{egfr} to the mutation of \textsc{cdk}.
Consequently, an inferred selectivity relation suggests
mutational profiles in which certain samples (early-stage patients)
display specific alterations only (e.g., the alteration
characterizing the beginning of the progression), while certain other
samples (e.g., late-stage patients) display a superset subsuming the early
mutations (as well as alterations that occur subsequently in the progression).
Various kinds of genomic aberrations are suitable as input data, and
include somatic point/indel mutations, copy-number alterations, etc., provided that
they are \emph{persistent}, i.e., once an alteration is acquired
no other genomic event can restore the cell to the non-mutated
(i.e., \emph{wild type}) condition\footnote{For instance,
epigenetic alterations such as methylation and alterations in gene
expression are not directly usable as input data for the
algorithm. Notice that the selection of the relevant events is
beyond the scope of this work and requires a further upstream
pipeline, such as that provided, for instance,
in \cite{Tamborero:2013nx,Vogelstein29032013}.}.
The selectivity \Newrev{relations} that CAPRI
reconstructs are ranked and subsequently further refined by means of a
hybrid algorithm, \Newrev{which reasons over time, mechanism and chance, as follows.}
CAPRI's overall scoring methods combine topological constraints
grounded on Patrick Suppes' conditions of probabilistic causation (see
e.g., \cite{Suppes70}), with a \emph{maximum likelihood-fit} procedure
(cfr., \cite{bayesian_learning}) and derives much of its statistical
power from the application of \emph{bootstrap} procedures (see e.g.,
\cite{efron_1982}).
CAPRI returns a \emph{graphical model} of a complex selectivity
relation among events which captures the essential aspects of cancer
evolution: branches, confluences and independent progressions. In the
specific case of confluences, CAPRI's ability to infer them is related
to the complexity of the ``patterns'' they exhibit, expressed in a
logical fashion. As pointed out by other approaches (cfr.,
\cite{beerenwinkel_2007}), this strategy requires trading off
complexity for expressivity of the inferred models, and results in two
execution modes for the algorithm: supervised and unsupervised, which
we discuss in detail in Sections~\ref{sect:approach}
and~\ref{sect:methods}.
In Section~\ref{sect:methods} (Methods) we show that CAPRI enjoys a
set of attractive properties in terms of its complexity, soundness and
expressivity, even in the presence of uniform \emph{noise} in the input
data -- e.g., due to ~\emph{genetic heterogeneity} and experimental errors.
Although many other approaches enjoy similar asymptotic properties, we
show that CAPRI can compute accurate results with surprisingly
small sample sizes (cfr., Section~\ref{sect:discussion}). Moreover, to the best of
our knowledge, based on extensive synthetic data simulations, CAPRI
outperforms all the competing procedures with respect to all desirable
performance metrics. We conclude by showing an application of CAPRI to
reconstruct a progression model for \emph{atypical Chronic Myeloid
Leukemia} (aCML) using a recent exome sequencing dataset, first presented in
\cite{piazza_nat_gen}.
\subsection{State of the Art}
\label{sect:state-of-the-art}
For an extensive review on \emph{cancer progression model
reconstruction} we refer to the recent survey by \cite{Beerenwinkel:2014eb}.
In brief, progression models for cancer have been studied starting with the
seminal work of \cite{vogelstein1988genetic} where, for the first
time, cancer progression was described in terms of a directed path by assuming
the existence of a unique and most likely temporal order of genetic mutations.
\cite{vogelstein1988genetic} manually created a (colorectal) cancer
progression from a genetic and clinical point of view. More rigorous and complex
algorithmic and statistical automated approaches have appeared subsequently.
As stated already, the earliest thread of research simply sought more generic
progression models that could assume tree-like structures.
The \emph{oncogenetic tree model} captured evolutionary branches of
mutations (cfr., \cite{desper_1999,szabo}) by optimizing a
\emph{correlation}-based score. Another popular approach to reconstruct
tree structures appears in \cite{desper_2000}. Other general Markov chain models such as, e.g., \cite{hjelm2006new} reconstruct more flexible probabilistic networks, despite a computationally expensive parameter estimation.
In \cite{Loohuis:2014im},
we introduced an algorithm called \emph{CAncer
PRogression Extraction with Single Edges} (CAPRESE), which, based on its extensive empirical analysis, may be deemed as the
current state-of-the-art algorithm for the inference of tree models of
cancer progression. It is based on a shrinkage-like statistical estimation, grounded
in a general theoretical framework, which we extend further in this paper. Other
results that extend tree representations of cancer evolution
exploit mixture tree models, i.e., multiple oncogenetic trees, each of
which can independently result in cancer development (cfr.,
\cite{beerenwinkel_2005}).
In general, all these methods are capable of modeling diverging
temporal orderings of events in terms of branches, although the
possibility of converging evolutionary paths is precluded.
To overcome this limitation, the most recent approaches tend to adopt
Bayesian graphical models, i.e., Bayesian Networks (BN). In the
literature, there have been two initial families of methods aimed at
inferring the structure of a BN from data (cfr.,
\cite{bayesian_learning}). The first class of models seeks to explicitly capture
all the conditional independence relations encoded in the edges and
will be referred to as \emph{structural approaches}; the methods in
this family are inspired by the work on causal theories by Judea Pearl
(cfr.,
\cite{pearl1988probabilistic,pearl,spirtes2000causation,tsamardinos2003algorithms}).
The second class -- \emph{likelihood approaches} -- seeks a
model that maximizes the likelihood of the data (cfr.,
\cite{bic_1978,bde_1995,carvalho2009scoring}).
A more recent \emph{hybrid approach} learns a BN
by combining the two families above: $(i)$ it constrains the
search space of the valid solutions and, then, $(ii)$ fits the
model with likelihood maximization (see \cite{beerenwinkel_2007,gerstung2009quantifying,misra2014inferring}).
A further technique to reconstruct progression models from cross-sectional data was introduced in \cite{attolini2010mathematical}, in which the transition probabilities between genotypes are inferred by defining a Moran process that describes the evolutionary dynamics of mutation accumulation. In \cite{cheng2012mathematical} this methodology was extended to account for pathway-based phenotypic alterations.
\section{Methods}
\label{sect:methods}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.80\textwidth]{pipe}
\end{center}
\vspace*{.05in}
\caption{\Newrev{\textbf{Data processing pipeline for cancer progression inference.} We sketch a pipeline to best exploit CAPRI{}'s ability to extract cancer progression models from cross-sectional data. Initially, one collects \emph{experimental data} (which could be accessible through publicly available repositories such as TCGA) and performs \emph{genomic analyses} to derive profiles of, e.g., somatic mutations or Copy-Number Variations for each patient. Then, statistical analysis and biological priors are used to select events relevant to the progression and imputable by CAPRI{}, e.g., {\em driver mutations}. To exploit CAPRI{}'s supervised execution mode (see Methods) one can use further statistics and priors to generate \emph{patterns of selective advantage}, e.g., hypotheses of mutual exclusivity. CAPRI{} can extract a progression model from these data and assess various \emph{confidence} measures on its constituting relations, e.g., (non-)parametric bootstrap and hypergeometric testing. \emph{Experimental validation} concludes the pipeline.}}
\label{fig:pipeline}
\end{figure*}
Building on the framework described in the previous section, we now describe the implementation of CAPRI's
building blocks.
\Newrev{Notice that, in general, the inference of cancer progression models requires a complex \emph{data processing pipeline}, as summarized in Figure \ref{fig:pipeline}; its architecture optimally exploits CAPRI's efficiency.}
\paragraph{\textbf{Assumptions.}}
CAPRI relies on the following assumptions:
$i)$ Every pattern is expressible as a propositional CNF formula;
$ii)$ All events are persistent, i.e., an acquired mutation
cannot disappear;
$iii)$ All relevant events in tumor progression are observable, with
the observations describing the progressive
phenomenon in an essential manner (i.e., \emph{closed world} assumption,
in which all events `driving' the progression are detectable);
$iv)$ All the events have non-degenerate observed probability in
$(0, 1)$;
$v)$ All events are distinguishable,
\Newrev{in the following sense: input alterations produce different profiles across input samples}.
Assumptions $i)$-$ii)$ relate to the framework derived in the previous
section, while $iii)$ imposes an onerous burden on the
experimentalists, who must select the \emph{relevant}
genomic events to model\footnote{\Newrev{Theoretically, this assumption
- common to other Bayesian learning problems - is necessary to prove CAPRI's ability to extract
the exact model in the optimal case of infinite samples. Practically,
as all \emph{relevant} events are hardly
selectable a priori and sample size is finite, further statistics can be used to select the most relevant driver alterations -- see
also Section~\ref{sect:discussion}, Results and Discussion. Nonetheless, CAPRI can provide significant results even if this assumption
is not or cannot be verified.}}. \Newrev{Assumption $v)$ relates instead to the statistical
distinguishability of the input events (see the next section on CAPRI's Data Input).}
\paragraph{\textbf{Trading Complexity for Expressivity.}}
To automatically extract
the patterns that underlie a progression model, one may try to adopt a brute-force
method of enumerating and testing all possibilities. This strategy is computationally intractable, however, since the number of (distinct)
(sub)formul\ae\ grows exponentially with the number of events included
in the model.
Therefore, we need to
exploit certain properties of the $\rhd$ relation whenever possible, and trade
expressivity for complexity in other cases, as explained below.
Note that \emph{singleton} and \emph{co-occurrence} ($\wedge$)
types of patterns are amenable to \emph{compositional reasoning}: if
$i_1 \wedge \ldots \wedge i_k \rhd j$ then, for any $p = 1,\ldots, k$,
$i_p \rhd j$. This observation leads to the following
straightforward strategy of evaluating every
conjunctive (and hence singleton) relation using a pairwise-test
for the selectivity relation (see Figure~\ref{fig:single-cause}).
Unfortunately, it is easy to see that this reasoning fails to generalize for CNF patterns: e.g., when the
pattern contains disjunctive operators ($\vee$).
As an example,
consider the pattern $a \vee b
\rhd c$, in a cancer where
the progression from $\{a, \neg b\}$ to $c$ is more prevalent
than from $\{\neg a, b\}$ and $\{ a, b\}$. In this case, considering
only sub-formulas we might find
$ a \rhd c$ but miss
$b \rhd c$ because the probability of
mutated $b$ is smaller than that of $c$,
thus invalidating condition $(1)$ of relation $\rhd$. Notice that in extreme
situations, when the data is very noisy, the algorithm may even ``invert'' the selectivity
relation to $ c \rhd b$.
This difficulty is not a peculiarity of our framework, but rather
intrinsic to the problem of extracting complex ``causal networks''
(cfr., \cite{pearl1988probabilistic,pearl,kleinberg-thesis}). To handle this situation, CAPRI adopts a strategy that trades
complexity for expressivity:
\Newrev{the resulting inference procedure},
Algorithm~\ref{alg:inference}, can be executed in two modes:
unsupervised and supervised. In the former, inferred
patterns of confluent progressions are constrained to co-occurrence
types of relations; in the latter, CAPRI can test more complex patterns,
i.e., disjunctive or \Newrev{``mutual exclusive''} ones, provided they are given as prior
hypotheses. In both cases, CAPRI's complexity --
studied in the next sections -- is quadratic both in the number of events
and hypotheses.
\paragraph{\textbf{Data Input (Step 1).}}
CAPRI{} (cfr., Algorithm~\ref{alg:inference}) requires an input set $G$
of $n$ events, i.e., genomic alterations, and $m$ cross-sectional
samples, represented as a dataset in an $m \times n$ binary matrix $D$,
in which an entry $D_{i,j} = 1$ if the event $j$ was observed in
sample $i$, and $0$ otherwise. \Newrev{Assumption $v)$ is satisfied
when all columns in $D$ differ - i.e., the alteration profiles yield
different observations.}
Optionally, a set of $k$ input hypotheses $\Phi= \{ \varphi_1 \rhd
e_1, \ldots, \varphi_k \rhd e_k\}$, where each $\varphi_i$ is a
well-formed\footnote{Formally, we require that $\varphi_i \not
\sqsubseteq e_i$, where $\sqsubseteq$ represents the usual \emph{syntactical}
ordering relation among atomic{} events and formulas, and disallows for
example
$a \vee b \rhd a$.} CNF
formula.
Note that we advise that the algorithm be used in the following regime~\footnote{
\Newrev{In the current biomedical setting, the number of samples ($m$) is usually in the hundreds, while the number of possible mutations ($n$) and hypotheses ($k$), absent any pre-processing, could be large, thus violating the assumption; in these cases, we rely on various commonly used pre-processing filters to limit $n$ to driver mutations, and $k$ to simple hypotheses involving the driver mutations. However, in the future, as the number of samples increases, we envision a more agnostic application}.
}: $k+n \ll m$.
\paragraph{\textbf{Data Preprocessing (Lifting, step 2).}}
When input hypotheses are provided (e.g., by a domain expert), CAPRI{} first performs a
\emph{lifting operation}
over
$D$ to permit direct inference of complex selectivity relations over a
joint representation involving input events as well as the hypotheses. The lifting
operation evaluates each input CNF formula -- for all input
hypotheses in $\Phi$ -- and outputs a lifted matrix $D(\Phi)$ to be processed further as in step 1.
As an example, consider the hypothesis
$a \oplus b \rhd c$;
the corresponding lifted input matrix is:
\begin{scriptsize}
\[
D(\Phi) = \left[
\begin{array}{ccc|c}
a & b & c & a \oplus b \rhd c\\ \hline
1 & 1 & 1 & 1 \oplus 1 = \mathbf{0}\\
1 & 0 & 1 & 1 \oplus 0 = \mathbf{1}\\
0 & 1 & 0 & 0 \oplus 1 = \mathbf{1}\\
1 & 0 & 1 & 1 \oplus 0 = \mathbf{1}\\
\end{array}
\right]\, .
\]
\end{scriptsize}
\noindent
Note that the first row (profile $\{a, b, c\}$) contradicts the hypothesis,
while all other rows support it.
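For concreteness, the following Python sketch (ours, not part of the reference CAPRI implementation; the names \texttt{evaluate\_cnf} and \texttt{lift} are hypothetical) shows how such lifted columns can be computed, encoding a CNF formula as a list of clauses, each clause being a list of (event index, negated?) literals.
\begin{verbatim}
# A minimal sketch of the lifting operation (step 2), assuming D is an
# m x n list of 0/1 rows and a CNF formula is a list of clauses, each
# clause being a list of (event_index, is_negated) pairs.

def evaluate_cnf(cnf, row):
    # A CNF formula holds iff every clause contains a satisfied literal.
    return int(all(
        any(row[i] != neg for (i, neg) in clause)
        for clause in cnf
    ))

def lift(D, hypotheses):
    # Append one column per hypothesis, evaluated row by row,
    # producing the augmented matrix D(Phi).
    return [row + [evaluate_cnf(cnf, row) for cnf in hypotheses]
            for row in D]

# Example: a XOR b, written in CNF as (a or b) and (not a or not b),
# over events indexed a=0, b=1, c=2.
xor_ab = [[(0, 0), (1, 0)], [(0, 1), (1, 1)]]
D = [[1, 1, 1], [1, 0, 1], [0, 1, 0], [1, 0, 1]]
print(lift(D, [xor_ab]))  # last column: 0, 1, 1, 1
\end{verbatim}
Running it on the matrix above reproduces the lifted column $(0,1,1,1)$ of the example.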
\paragraph{\textbf{Selectivity Topology (steps 3, 4, 5).}}
We exploit a compositional approach to
test CNF hypotheses as follows: the disjunctive
relations are grouped, and treated as if they were individual objects
in $G$. For example, when a formula $\varphi \rhd d$ where $\varphi= (a \vee b) \wedge c$
is considered, we assess $\varphi \rhd d$ by checking whether $(a \vee b)
\rhd d$ and $c \rhd d$ hold -- with the proviso that we treat
$(a \vee b)$ as an individual event. Formally, with $\conj{\varphi}$
we denote the disjunctive clauses in a CNF formula.
Nodes in the reconstruction are
all input events together with all the disjunctive
clauses of each input formula $\varphi$.
Edges in the reconstructed DAG are patterns that satisfy
both conditions (1) and (2) of the selectivity relation
$\rhd$. Formally, CAPRI includes an edge
between two nodes $\varphi$ and $j$
only if both $\Gamma_{\varphi,j} = \Probab{\varphi} - \Probab{j}$ and $\Lambda_{\varphi,j} =
\Pcond{j}{\varphi} - \Pcond{j}{\~ {\varphi}}$ are strictly positive. Note
that $\varphi$ can be both a disjunctive clause as well as a
singleton event.
A function $\pi(\cdot)$ assigns a parent to
each node that is not an input formula.
Note that this approach works efficiently by nature of the lifted
representation of $D$.
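In code, conditions (1) and (2) amount to simple frequency comparisons over the columns of the lifted matrix. The following Python sketch (ours; \texttt{prima\_facie} and the helper names are hypothetical) evaluates them for a candidate pair of nodes:
\begin{verbatim}
def marginal(DPhi, j):
    return sum(row[j] for row in DPhi) / len(DPhi)

def conditional(DPhi, j, i, value):
    # P(j = 1 | i = value); None if the conditioning event never occurs.
    rows = [row for row in DPhi if row[i] == value]
    return sum(row[j] for row in rows) / len(rows) if rows else None

def prima_facie(DPhi, i, j):
    # Condition (1): Gamma_{i,j} = P(i) - P(j) > 0 (temporal priority);
    # condition (2): Lambda_{i,j} = P(j|i) - P(j|~i) > 0 (prob. raising).
    gamma = marginal(DPhi, i) - marginal(DPhi, j)
    pj_i = conditional(DPhi, j, i, 1)
    pj_ni = conditional(DPhi, j, i, 0)
    if pj_i is None or pj_ni is None:
        return False   # degenerate column, excluded by assumption iv)
    return gamma > 0 and pj_i - pj_ni > 0

# Prima facie parents of event j (step 4), over all columns of D(Phi):
# [i for i in range(ncols) if i != j and prima_facie(DPhi, i, j)]
\end{verbatim}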
The reconstructed DAG contains all
the true positive patterns, with respect to $\rhd$, plus spurious
instances of $\rhd$ which CAPRI subsequently removes in step~6
\Newrev{(cfr., the Supplementary Material for a proof of this statement)}.
Note that $\mathcal{D}$ can be readily interpreted as a
probabilistic graphical model, once it is augmented with a labeling
function $\alpha: N \to [0,1]$\Newrev{, where $N$ is the set of nodes
-- i.e., the genetic alterations --} such that $\alpha(i)$ is the
\emph{independent probability} of observing mutation $i$ in a sample,
whenever \emph{all of its parent} mutations (i.e., $\pi(i)$) are
observed (if any). Thus $\mathcal{D}$ induces a \emph{distribution}
of observing a subset of events in a set of samples (i.e., a
probability of observing a certain \emph{mutational profile} in a
patient).
\paragraph{\textbf{Maximum Likelihood Fit (step 6).}}
As the selectivity relation provides only a \emph{necessary}
condition, we must filter out all of its \emph{spurious instances}
that might have been included in $\mathcal{D}$ (i.e., the possible
\emph{false positives}).
For any selectivity structure, spurious claims contribute to
a reduction in the \emph{likelihood-fit} relative to true patterns.
Thus, a
standard maximum-likelihood fit can be used to select and prune the
selectivity DAG (including a \emph{regularization term} to avoid
over-fitting\footnote{\Newrev{In principle other regularisation strategies common
to Bayesian learning could be used, e.g., Akaike information criterion (see
\cite{carvalho2009scoring} and references therein).
In this paper, we prefer to work with BIC which, in general,
trades model complexity to reduce false positives rate.}}).
Here, we adopt the \emph{Bayesian
Information Criterion} (BIC), which implements \emph{Occam's razor}
by combining log-likelihood fit with a \emph{penalty criterion}
proportional to the $\log$ of the DAG size via \emph{Schwarz
Information Criterion} (see \cite{bic_1978}). The BIC score is defined
as follows.
\begin{equation}\label{eq:bic}
\textsc{bic}\left(\mathcal{D},\, D(\Phi)\right)
= \mathcal{LL}\left(\mathcal{D},\, D(\Phi)\right) - \dfrac{\log m}{2}\, \mathrm{dim}(\mathcal{D}).
\end{equation}
Here, $D(\Phi)$ is the lifted input matrix, $m$ denotes the number of
samples and $\text{dim}(\mathcal{D})$ is the number of parameters in
the model $\mathcal{D}$. Because, in general, $\text{dim}(\cdot)$
depends on the number of parents each node has, it is a good metric
for model complexity. Moreover, since each edge added to $\mathcal{D}$
increases model complexity, the regularization term based on
$\text{dim}(\cdot)$ favors graphs with fewer edges and, more
specifically, fewer parents for each node.
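The following Python sketch (ours) makes Equation~\eqref{eq:bic} concrete under the additional assumption, standard for binary Bayesian Networks, that each node contributes one Bernoulli parameter per configuration of its parents, so that $\mathrm{dim}(\mathcal{D}) = \sum_j 2^{|\pi(j)|}$; add-one smoothing is our device to keep every estimated probability strictly inside $(0,1)$.
\begin{verbatim}
import math
from collections import defaultdict

# parents maps each node index j to the tuple pi(j) of its parents.

def mle_params(DPhi, parents):
    # MLE of P(j = 1 | parent configuration), with add-one smoothing.
    counts = {j: defaultdict(lambda: [1, 2]) for j in parents}
    for row in DPhi:
        for j, pa in parents.items():
            key = tuple(row[i] for i in pa)
            counts[j][key][0] += row[j]
            counts[j][key][1] += 1
    return {j: {k: s / n for k, (s, n) in cfg.items()}
            for j, cfg in counts.items()}

def bic_score(DPhi, parents):
    theta, m = mle_params(DPhi, parents), len(DPhi)
    ll = 0.0
    for row in DPhi:
        for j, pa in parents.items():
            p = theta[j][tuple(row[i] for i in pa)]
            ll += math.log(p if row[j] else 1.0 - p)
    dim = sum(2 ** len(pa) for pa in parents.values())
    return ll - (math.log(m) / 2) * dim
\end{verbatim}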
At the end of this step, $\mathcal{D}$ and the labeling function are
modified accordingly, based on the result of BIC regularization. By
collecting all the incoming edges of a node, it is possible to extract
the patterns that have been selected by CAPRI as the positive ones.
\begin{algorithm}[t]
\caption{\emph{\underline{CA}ncer \underline{PR}ogression
\underline{I}nference} (CAPRI{})}
\label{alg:inference}
\begin{algorithmic}[1]
\begin{scriptsize}
\STATE \textbf{Input:} {A set of events $G=\{g_1,\ldots,g_n\}$,
a matrix $D \in \{0,1\}^{m \times n}$} and $k$ CNF causal
claims $\Phi=\{\varphi_1 \rhd e_1, \ldots, \varphi_k \rhd
e_k\}$ where, for any $i$, $e_i \not \sqsubseteq \varphi_i$ and $e_i \in G$;
\STATE [\emph{Lifting}] Define the \emph{lifting of $D$ to
$D(\Phi)$} as the augmented matrix
\begin{align*}\label{eq:lifting}
D(\Phi) =
\left[ \begin{array}{ccc|ccc}
D_{1,1} & \ldots & D_{1,n} & \varphi_{1}(D_{1,\cdot}) & \ldots &\varphi_{k}(D_{1,\cdot}) \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
D_{m,1} & \ldots & D_{m,n} & \varphi_{1}(D_{m,\cdot}) & \ldots &\varphi_{k}(D_{m,\cdot}) \\
\end{array}
\right].
\end{align*}
by adding a column for each $\varphi_i \rhd c_i \in \Phi$,
with $\varphi_i$ evaluated row-by-row. Define then the coefficients
$\Gamma_{i,j} = \Probab{i} - \Probab{j}$ and $\Lambda_{i,j} =
\Pcond{j}{i} - \Pcond{j}{\~ {i}}$ pairwise over $D(\Phi)$;
\STATE[\emph{DAG nodes}] Define the set of nodes $N = G \cup
\left(\bigcup_{\varphi_i} \conj{\varphi_i}\right)$ which
contains both input events and the disjunctive clauses in every
input formula of $\Phi$.
\STATE[\emph{DAG edges}] Define a parent function $\pi$ where
$\pi(j \not \in G) = \emptyset$ -- avoid edges incoming in a
formula
\footnotemark -- and
\begin{align}
\pi(j \in G) & =
\left\{ i \in G \mid \Gamma_{i,j}, \Lambda_{i,j} >0 \right\} \nonumber\\
& \cup \left\{ \conj{\varphi}
\mid \Gamma_{\varphi,j}, \Lambda_{\varphi,j} > 0, \;
\varphi \rhd j \in \Phi \right\}.
\end{align}
Set the DAG to $\mathcal{D} = (N,\pi)$.
\STATE[\emph{DAG labeling}] Define the labeling $\alpha$ as follows
\[
\alpha(j) = \begin{cases}
\Probab{j}, & \text{ if } \pi(j) =\emptyset \text{ and } j \in G; \\
\Pcond{j}{ i_1 \wedge \ldots \wedge i_n}, & \text{ if } \pi(j) = \{i_1, \ldots, i_n\}.
\end{cases}
\]
\STATE[\emph{Likelihood fit}] Filter out all spurious causes
from $\mathcal{D}$ by likelihood fit with the regularization BIC
score and set $\alpha(j)=0$ for each removed edge.
\STATE \textbf{Output:} the DAG $\mathcal{D}$ and $\alpha$;
\end{scriptsize}
\end{algorithmic}
\end{algorithm}
\footnotetext{Although CAPRI{} is equipped with bootstrap testing, it is still possible to encounter various
degenerate situations. In particular, for some pairs of events it could be that temporal priority cannot be satisfactorily resolved, i.e., there is no significant $p$-value for any edge orientation. Thus, loops might be present in the inferred prima facie topology. Nonetheless, some of these could still be disentangled by probability raising, while some might remain, albeit rarely. To remove such edges we suggest proceeding as follows: $(i)$ sort these edges according to their $p$-value (considering both temporal priority and probability raising), $(ii)$ scan the sorted list in decreasing order of confidence, $(iii)$ remove an edge if it forms a loop.}
\paragraph{\textbf{Inference Confidence: Bootstrap and Statistical
Testing.}}
To infer confidence intervals of the selectivity relations $\rhd$,
CAPRI employs \emph{bootstrap with rejection resampling}\Newrev{, estimating} a distribution of the marginal and joint probabilities, as follows. For
each event, $(i)$ CAPRI samples with repetitions rows from the input
matrix $D$ (bootstrapped dataset), $(ii)$ CAPRI next estimates the
distributions from the observed probabilities, and finally, $(iii)$ CAPRI
rejects values which do not satisfy $0 < \Probab{i} < 1$ and $\Pcond{i}{j}
< 1 \vee \Pcond{j}{i} < 1$, and iterates restarting from $(i)$. We stop
when we have, for each distribution, at least $K$ values (in our case
$K = 100$). Any
inequality (i.e., checking temporal priority and probability
raising) is estimated using the non-parametric Mann-Whitney U test\footnote{The Mann-Whitney U test is a rank-based non-parametric statistical
hypothesis test that can be used as an alternative to the Student's
t-test and is particularly useful if data are not normally distributed.} with the significance level
set to $0.05$.
We compute
confidence $p$-values for both temporal priority and probability
raising using this test, which need not assume Gaussian distributions
for the populations.
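The sketch below (ours; the resampling details are our reading of the procedure, not the reference code, and for brevity we enforce only the marginal constraints in the rejection step, the conditional ones being analogous) collects the bootstrap distributions for a candidate pair $(i,j)$ and tests the two inequalities with SciPy's Mann-Whitney U test:
\begin{verbatim}
import random
from scipy.stats import mannwhitneyu

def bootstrap_distributions(D, i, j, K=100):
    # K bootstrap estimates of P(i), P(j), P(j|i), P(j|not i),
    # rejecting degenerate resamples.
    P_i, P_j, P_ji, P_jni = [], [], [], []
    m = len(D)
    while len(P_i) < K:
        s = [random.choice(D) for _ in range(m)]  # resample with repetition
        ni, nj = sum(r[i] for r in s), sum(r[j] for r in s)
        if ni in (0, m) or nj in (0, m):
            continue                              # reject: 0 < P(.) < 1 fails
        P_i.append(ni / m)
        P_j.append(nj / m)
        P_ji.append(sum(r[j] for r in s if r[i]) / ni)
        P_jni.append(sum(r[j] for r in s if not r[i]) / (m - ni))
    return P_i, P_j, P_ji, P_jni

def selective_advantage(D, i, j, alpha=0.05):
    P_i, P_j, P_ji, P_jni = bootstrap_distributions(D, i, j)
    tp = mannwhitneyu(P_i, P_j, alternative='greater').pvalue
    pr = mannwhitneyu(P_ji, P_jni, alternative='greater').pvalue
    return tp < alpha and pr < alpha
\end{verbatim}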
Once a DAG $\mathcal{D}$ is inferred both \emph{parametric and
non-parametric bootstrapping methods} can be used to assign a
confidence level to its respective pattern and to the overall model.
Essentially, these tests consist of using the reconstructed model (in
the parametric case), or the probabilities observed in the dataset (in
the non-parametric case) to generate new synthetic datasets, which are
then reused to reconstruct the progressions (see,
e.g., \cite{efron_2013} for an overview of these methods). The
confidence is estimated by the number of times the DAG or any instance of
$\rhd$ is reconstructed from the generated data.
\paragraph{\textbf{Complexity, Correctness and Expressivity.}}
CAPRI has the following asymptotic complexity (Theorem 1, SI Section 2):
$(i)$ Without input hypotheses the execution is self-contained and
polynomial in the size of $D$.
$(ii)$ In addition to the above cost, CAPRI{} tests input hypotheses of
$\Phi$ at a polynomial cost in the size of $|\Phi|$. In this case,
however, its complexity may range over many orders of magnitude
depending on the structural complexity of the input set $\Phi$
\Newrev{consisting of hypotheses}.
\Newrev{An empirical analysis of the execution time of CAPRI and the competing techniques on synthetic datasets is provided in the SI, Section 3.5}.
CAPRI{} is a \emph{sound and complete} algorithm, and its
expressivity in terms of the inferred patterns is proportional to the
hypothesis set $\Phi$ which, in turn, determines the complexity of the
algorithm. With a proper set of input hypotheses, CAPRI{} can infer
all (and only) the true patterns from the data, filtering out all the
spurious ones (Theorem 2, SI Section 2).
Without
hypotheses, besides singleton and co-occurrence, no other patterns can
be inferred (see Figure \ref{fig:single-cause}). Also, some of these
claims might be spurious in general for more complex (and unverified) CNF
formulas (Theorem 3, SI Section 2).
Siegel~\cite{Siegel1929} proved that any equation $y^2=f(x)$ with irreducible polynomial $f\in\mathbb{Z}[x]$
of degree at least 3 has finitely many integral points. With the method of Baker~\cite{Baker1966,Baker1967a,Baker1967b}, it became possible to
bound the solutions and perform an exhaustive search. For third-degree curves, Baker's method was the subject of many practical improvements,
and now there exists a number of software implementations for finding integral points on elliptic curves~\cite{MAGMA,eclib,SAGE}.
These procedures are based on a method developed by Stroeker and Tzanakis \cite{StTz} and independently by Gebel, Peth\H{o} and Zimmer \cite{GPZell}.
Thue equations of the form $g(x,y)=d$, where $g\in\mathbb{Z}[x,y]$ is a homogeneous irreducible polynomial of degree at least 3 and $d\in\mathbb{Z}$,
were first studied by Thue~\cite{Thue1909}, who proved that they have only a finite number of integer solutions.
In the computer era, Thue equations became the subject of computational developments,
resulting in at least two implementations: in the computer algebra systems MAGMA~\cite{MAGMA} and PARI/GP~\cite{PARI}.
For our practical computations, we chose SAGE~\cite{SAGE}, which adopts PARI/GP Thue equations solver based on Bilu and Hanrot's improvement~\cite{Bilu1996}
of Tzanakis and de Weger's method~\cite{Tzanakis1989}.
In the current work, we show how to reduce a search for integral points on a biquadratic curve
$$y^2 = ax^4 + bx^2 + c$$
with integer (or, more generally, rational) coefficients $a$, $b$, $c$ with $ac(b^2-4ac)\ne 0$, first to a Diophantine equation
$k^2 = \tfrac{f(m,n)}{g(m,n)}$
in coprime integers $m$, $n$ with homogeneous quadratic polynomials $f$ and $g$ (Theorem~\ref{TBiqCur}),
and then to a finite number of quartic Thue equations (Theorem~\ref{TFracSq}).\footnote{In the course of this reduction,
we may also encounter Thue-like equations $g(x,y)=d$ with $g$ being a \emph{reducible} homogeneous polynomial of degree 4.
Such equations, however, are easily solvable~\cite[Theorem~3]{Alekseyev2011}.}
While the possibility of a reduction to Thue equations was described by Mordell~\cite{Mordell1969} based on arguments from algebraic number theory,
to the best of our knowledge, there is no published algorithm applicable for the general case.
Furthermore, in contrast to the traditional treatment of this kind of problem with algebraic number theory~\cite{Mordell1969,Steiner1991,Weger1995,Petho1998},
our reduction method is elementary.
It may be viewed as a generalization of the method of Steiner and Tzanakis~\cite{Steiner1991}
who reduced Ljunggren equation $y^2 = 2x^4 - 1$ to two Thue equations (in particular, we obtain the same Thue equations).
There are other methods which can be used in certain cases to determine all integral solutions of the equation $y^2 = ax^4 + bx^2 + c$.
Poulakis~\cite{RungeP} provided an elementary algorithm to solve Diophantine equations of the form $y^2=f(x)$,
where $f(x)$ is a monic quartic polynomial with integer coefficients,
based on Runge's method~\cite{RungeR, RungeS, RungeW}. Here it is crucial
that the leading coefficient of $f(x)$ is 1 (the idea also works if the leading coefficient is a perfect square).
Using the theory of Pell equations, Kedlaya~\cite{Kedlaya-Pell} described a method to solve the system of equations
$$
\begin{cases}
x^2-a_1y^2 = b_1,\\
P(x,y) = z^2,
\end{cases}
$$
where $P$ is a given integer polynomial, and implemented his algorithm in Mathematica.\footnote{Kedlaya's implementation is available from \url{http://math.ucsd.edu/~kedlaya/papers/pell.tar}.}
If we set $P(x,y)=c_1x+d_1,$ then we obtain a quartic equation of the form
$(a_1c_1y)^2=a_1z^4-2a_1d_1z^2-a_1b_1c_1^2.$
There is also a simple reduction of the equation $y^2 = ax^4 + bx^2 + c$ to an elliptic equation: after multiplying the equation by $a^2x^2$, one
obtains
$$
(axy)^2=(ax^2)^3+b(ax^2)^2+ac(ax^2),
$$
which can be further written as
$$
Y^2=X^3+bX^2+acX.
$$
As we noted earlier, to determine all integral points on a given elliptic curve one can follow a method developed
by Stroeker and Tzanakis \cite{StTz} and independently by Gebel, Peth\H{o} and Zimmer \cite{GPZell}.
The disadvantage of this approach is that there is no known algorithm to determine the rank of the so-called Mordell-Weil group
of an elliptic curve, which is necessary to determine all integral points on the curve.
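As an illustration, here is a minimal Sage sketch of this route (ours, not our published implementation; it assumes $a>0$ and inherits the limitation just mentioned, since \texttt{integral\_points} needs the Mordell-Weil generators):
\begin{verbatim}
def biquadratic_via_elliptic(a, b, c):
    # Y^2 = X^3 + b*X^2 + a*c*X, with X = a*x^2 and Y = a*x*y.
    E = EllipticCurve([0, b, 0, a*c, 0])
    solutions = set()
    for P in E.integral_points(both_signs=True):
        X = ZZ(P[0])
        if X >= 0 and X % a == 0 and (X // a).is_square():
            x = (X // a).isqrt()
            y2 = a*x**4 + b*x**2 + c
            if y2 >= 0 and y2.is_square():
                y = y2.isqrt()
                solutions.update({(x, y), (x, -y), (-x, y), (-x, -y)})
    return sorted(solutions)

# Example: biquadratic_via_elliptic(2, 0, -1) should return the points
# (+-1, +-1) and (+-13, +-239) of the Ljunggren equation treated below.
\end{verbatim}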
For efficient computation of integral points on biquadratic curves,
we implemented in SAGE a combination of the elliptic curve and reduction to Thue equations methods.\footnote{Our implementation
is available from \url{http://www.math.unideb.hu/~tengely/biquadratic.sage}.}
By default we employ the elliptic curve method and if it fails, we fall back to our reduction to Thue equations.
Our approach also allows one to efficiently compute solutions to a system of Diophantine equations (Theorem~\ref{Tzz2}):
$$
\begin{cases}
a_1 x^2 + c_1 z = d_1,\\
b_2 y^2 + c_2 z^2 = d_2.
\end{cases}
$$
From this perspective, it continues earlier work~\cite{Alekseyev2011}, where the first author described an algorithm for computing solutions to a system of Diophantine equations:
$$
\begin{cases}
a_1 x^2 + b_1 y^2 + c_1 z^2 = d_1,\\
a_2 x^2 + b_2 y^2 + c_2 z^2 = d_2,
\end{cases}
$$
and demonstrated applications for finding common terms of distinct Lucas sequences of the form $U(P,\pm 1)$ or $V(P,\pm 1)$,
which include Fibonacci, Lucas, Pell, and Lucas-Pell numbers.
The current method also has applications for such Lucas sequences, allowing one to find all terms
of the form $a\cdot m^2 + b$ for any fixed integers $a$, $b$.
While the question of finding multiples of squares (i.e., with $b=0$) in Lucas sequences has been widely studied, starting with the works of
Cohn~\cite{Cohn1964} and Wyler~\cite{Wyler1964}
(we refer to Bremner and Tzanakis~\cite{Bremner2007a} for an extensive review of the literature),
finding \emph{near}-multiples of squares (i.e., with $b\ne 0$) has so far received only limited attention~\cite{Finkelstein1973,Finkelstein1975,Robbins1981,WalshNear}.
In the current work, we present a unified computational approach for solving this problem.
As an example, we establish
that among Fibonacci numbers
only $2$ and $34$ are of the form $2m^2+2$; only $1$, $13$, and $1597$ are of the form $m^2-3$; and so on.
The paper is organized as follows. In Section~\ref{SecHomPol},
we develop our machinery for homogeneous quadratic polynomials with integer coefficients.
In Section~\ref{SecBiQ}, we prove our method for finding integral points on biquadratic curves
and illustrate its workflow on Ljunggren equation.
In Section~\ref{SecInLucas}, we further demonstrate how our method can be used for finding near-multiples of squares in Lucas sequences and
list some results of this kind.
\section{Homogeneous quadratic polynomials}\label{SecHomPol}
We start with studying properties of quadratic homogeneous polynomials with integer coefficients in two and three variables.
We do not distinguish homogeneous polynomials in two variables from their univariate counterparts
(i.e., $f(x,y)=ax^2 + bxy + cy^2$ and $\tilde{f}(z)=az^2 + bz + c$), which allows us to define the resultant ($\mathop{\mathrm{Res}}$) and discriminant ($\mathop{\mathrm{Disc}}$) on them.
\begin{theorem}[Theorem~5 in \cite{Alekseyev2011}\footnote{This theorem corrects an error in \cite[Corollary~6.3.8]{Cohen07}.}]
\label{ThABC}
Let $A, B, C$ be non-zero integers and let $(x_0, y_0, z_0)$ with $z_0 \ne 0$ be a particular non-trivial integer solution
to the Diophantine equation $Ax^2+By^2+Cz^2=0$. Then its general integer solution is given by
\begin{equation}\label{SolSys}
(x, y, z) = \frac{p}{q}\; ( P_x(m, n),\; P_y(m, n),\; P_z(m, n))
\end{equation}
where $m, n$ as well as $p, q$ are coprime integers with $q>0$ dividing $2 \mathop{\mathrm{lcm}}(A,B) Cz_0^2$, and
\begin{equation}\label{Pxyz}
\begin{array}{lcl}
P_x(m,n) & = & x_0 A m^2 + 2y_0 Bmn - x_0 B n^2,\\
P_y(m,n) & = & -y_0 A m^2 + 2x_0 A mn + y_0 B n^2,\\
P_z(m,n) & = & z_0 A m^2 + z_0 B n^2.
\end{array}
\end{equation}
\end{theorem}
We refer to \cite{Cohen07,Cremona03} for general methods of finding a particular solution to a quadratic homogeneous equation
in three variables.\footnote{In PARI/GP, a particular solution can be computed with the function \emph{bnfisnorm}.}
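To make the parametrization concrete, the following Python sketch (ours) sweeps coprime pairs $(m,n)$ in a box and lists the primitive solutions produced by \eqref{SolSys}--\eqref{Pxyz}; the $p/q$ scaling of the theorem is accounted for by removing the common factor of the triple, and duplicates up to sign are not filtered.
\begin{verbatim}
from math import gcd

def primitive_solutions(A, B, C, x0, y0, z0, bound=10):
    # (x0, y0, z0) is a particular solution of A x^2 + B y^2 + C z^2 = 0.
    seen = set()
    for m in range(-bound, bound + 1):
        for n in range(-bound, bound + 1):
            if gcd(m, n) != 1:
                continue
            Px = x0 * A * m * m + 2 * y0 * B * m * n - x0 * B * n * n
            Py = -y0 * A * m * m + 2 * x0 * A * m * n + y0 * B * n * n
            Pz = z0 * A * m * m + z0 * B * n * n
            g = gcd(gcd(Px, Py), Pz)
            if g:
                t = (Px // g, Py // g, Pz // g)
                assert A * t[0]**2 + B * t[1]**2 + C * t[2]**2 == 0
                seen.add(t)
    return seen
\end{verbatim}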
\begin{theorem}\label{TPolGCD} Let $P_1(x,y)$ and $P_2(x,y)$ be homogeneous quadratic polynomials with integer coefficients
and $R=\mathop{\mathrm{Res}}(P_1,P_2)\ne 0$.
Let $G$ be the largest element in the Smith normal form of the resultant matrix of $P_1$ and $P_2$.\footnote{While $G=R$ would also satisfy the theorem statement,
we want $G$ as small as possible. The resultant $R$ is often much larger than $G$ defined in the theorem.}
Then for any coprime integers $m,n$, $\gcd(P_1(m,n),P_2(m,n))$ divides $G$.
\end{theorem}
\begin{proof}
Let $P_1(x,y)=a_1 x^2+b_1 x y + c_1 y^2$ and $P_2(x,y) = a_2 x^2+b_2 x y + c_2 y^2$ where $a_1,b_1,c_1,a_2,b_2,c_2$ are integer.
Consider polynomials $x\cdot P_1(x,y)$, $y\cdot P_1(x,y)$, $x\cdot P_2(x,y)$, $y\cdot P_2(x,y)$ as linear combinations with integer coefficients of
the basis terms $x^3$, $x^2y$, $xy^2$, $y^3$.
Our goal is to find two linear combinations of these polynomials:
one equal to an integer multiple of $x^3$ and the other equal to an integer multiple of $y^3$.
This corresponds to the following two systems of linear equations with the same resultant matrix:
\begin{equation}\label{EqTwoSys}
\begin{matrix}
x^3:\\
x^2y:\\
xy^2:\\
y^3:
\end{matrix}
\qquad
\begin{bmatrix}
a_1 & 0 & a_2 & 0 \\
b_1 & a_1 & b_2 & a_2 \\
c_1 & b_1 & c_2 & b_2 \\
0 & c_1 & 0 & c_2
\end{bmatrix}
\cdot
\begin{bmatrix}
t_1\\
t_2\\
t_3\\
t_4
\end{bmatrix}
\quad
=
\quad
\begin{bmatrix}
1\\
0\\
0\\
0
\end{bmatrix}
\quad
\text{or}
\quad
\begin{bmatrix}
0\\
0\\
0\\
1
\end{bmatrix}
\end{equation}
with respect to rational numbers $t_1,t_2,t_3,t_4$.
Since the matrix determinant equals the resultant $R\ne 0$,
the systems have unique solutions of the form:
$$(t_1,t_2,t_3,t_4) = \frac{1}{R} \left( \Delta_1, \Delta_2, \Delta_3, \Delta_4 \right)
= \frac{1}{G} \left( \frac{\Delta_1}{d_3}, \frac{\Delta_2}{d_3}, \frac{\Delta_3}{d_3}, \frac{\Delta_4}{d_3} \right)$$
where each $\Delta_i$ is the determinant of a certain $3\times 3$ minor of the resultant matrix
and $d_3$ is its third determinant divisor. Here we used the fact that $\tfrac{R}{d_3}=G$ is the fourth elementary divisor of the resultant matrix
(and the largest element of its Smith normal form).
It is important to notice that all vector components $\tfrac{\Delta_i}{d_3}$ are integers.
So we have two linear combinations of $x\cdot P_1(x,y)$, $y\cdot P_1(x,y)$, $x\cdot P_2(x,y)$, $y\cdot P_2(x,y)$ with integer coefficients
$\tfrac{\Delta_1}{d_3}$, $\tfrac{\Delta_2}{d_3}$, $\tfrac{\Delta_3}{d_3}$, $\tfrac{\Delta_4}{d_3}$ equal $G\cdot x^3$ and $G\cdot y^3$, respectively.
For $(x,y)=(m,n)$ where $m,n$ are coprime integers, these linear combinations imply that $\gcd(P_1(m,n),P_2(m,n))$
divides both $G\cdot m^3$ and $G\cdot n^3$. Therefore, $\gcd(P_1(m,n),P_2(m,n))$ divides $\gcd(G\cdot m^3,G\cdot n^3) = G\cdot \gcd(m^3,n^3) = G$.
\end{proof}
\begin{remark} To compute $G$ in practice, we do not need to compute the Smith normal form of the resultant matrix.
Instead, we simply solve the two linear systems \eqref{EqTwoSys} and define $G$ as the least common multiple of all denominators in both solutions $(t_1,t_2,t_3,t_4)$.
\end{remark}
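This remark translates directly into code. In the Python sketch below (ours; the function names are hypothetical), exact rational arithmetic is used to solve the two systems \eqref{EqTwoSys}, and $G$ is the least common multiple of all denominators; on the Ljunggren data of Section~\ref{SecBiQ} it returns $G=4$, as computed there.
\begin{verbatim}
from fractions import Fraction
from math import lcm

def solve_exact(M, b):
    # Gaussian elimination over the rationals.
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [u - A[r][col] * v for u, v in zip(A[r], A[col])]
    return [A[i][n] for i in range(n)]

def G_bound(a1, b1, c1, a2, b2, c2):
    # Resultant matrix of P1 = a1 x^2 + b1 xy + c1 y^2 and P2.
    M = [[a1, 0, a2, 0],
         [b1, a1, b2, a2],
         [c1, b1, c2, b2],
         [0, c1, 0, c2]]
    dens = [t.denominator
            for rhs in ([1, 0, 0, 0], [0, 0, 0, 1])
            for t in solve_exact(M, rhs)]
    return lcm(*dens)

print(G_bound(-2, 2, -1, 2, 0, -1))   # 4, the Ljunggren example
\end{verbatim}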
\begin{theorem}\label{TPol2Sq} Any homogeneous quadratic polynomial with integer coefficients and non-zero discriminant can be represented
as a linear combination with non-zero rational coefficients of squares of two homogeneous linear polynomials.
Moreover, these polynomials are linearly independent.
\end{theorem}
\begin{proof} Let $a x^2+b x y + c y^2$ be a homogeneous quadratic polynomial with integer coefficients and non-zero discriminant, i.e., $b^2-4ac\ne 0$.
If $b=0$ then $ac\ne 0$ and the statement is trivial.
Suppose that $b\ne 0$. If $a\ne 0$, then we have
$$a x^2+b x y + c y^2 = \frac{1}{4a}\cdot (2a x + by)^2 + \frac{4ac - b^2}{4a}\cdot y^2.$$
Similarly, if $c\ne 0$, then we have
$$a x^2+b x y + c y^2 = \frac{4ac - b^2}{4c}\cdot x^2 + \frac{1}{4c}\cdot (bx + 2cy)^2.$$
Finally, if $a=c=0$, then
$$bxy = \frac{b}{4}\cdot (x+y)^2 - \frac{b}{4}\cdot (x-y)^2.$$
It is easy to see that in all cases, the linear polynomials are linearly independent.
\end{proof}
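The constructive proof of Theorem~\ref{TPol2Sq} is summarized by the following Python sketch (ours), which returns the two rational coefficients and the two linear forms $ux + vy$ as coefficient pairs:
\begin{verbatim}
from fractions import Fraction

def two_squares(a, b, c):
    # Returns (alpha, (u1, v1), beta, (u2, v2)) with
    # a x^2 + b x y + c y^2 = alpha (u1 x + v1 y)^2 + beta (u2 x + v2 y)^2.
    assert b * b - 4 * a * c != 0
    if a != 0:
        return (Fraction(1, 4 * a), (2 * a, b),
                Fraction(4 * a * c - b * b, 4 * a), (0, 1))
    if c != 0:
        return (Fraction(4 * a * c - b * b, 4 * c), (1, 0),
                Fraction(1, 4 * c), (b, 2 * c))
    return (Fraction(b, 4), (1, 1), Fraction(-b, 4), (1, -1))  # a = c = 0

# two_squares(-2, 2, -1) gives -1/8 (-4x + 2y)^2 - 1/2 y^2, matching the
# decomposition used in the Ljunggren example of the next section.
\end{verbatim}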
\begin{theorem}\label{TFracSq} Let $P_1(x,y)$ and $P_2(x,y)$ be homogeneous quadratic polynomials
with integer coefficients such that $\mathop{\mathrm{Disc}}(P_1)\ne 0$, $\mathop{\mathrm{Disc}}(P_2)\ne 0$, and $\mathop{\mathrm{Res}}(P_1,P_2)\ne 0$.
Then the equation
\begin{equation}
\label{EqSqFrac}
z^2 = \frac{P_1(x,y)}{P_2(x,y)}
\end{equation}
has a finite number of integer solutions $(x,y,z)=(m,n,k)$ with $\gcd(m,n)=1$.
\end{theorem}
\begin{proof}
Suppose that $(x,y,z)=(m,n,k)$ with $\gcd(m,n)=1$ satisfies the equation \eqref{EqSqFrac}.
Since $P_2(m,n)$ divides $P_1(m,n)$, we have $\gcd(P_1(m,n),P_2(m,n)) = P_2(m,n)$, which by Theorem~\ref{TPolGCD} must divide a certain integer $G$.
Then for some divisor\footnote{Unless specified otherwise, the divisors of an integer include both positive and negative divisors.}
$g$ of $G$, we have $P_2(m,n)=g$ and $P_1(m,n) = gk^2$.
So $(x,y,z)=(m,n,k)$ represents a solution to the following system of equations:
\begin{equation}\label{SysSplit}
\begin{cases}
P_1(x,y) = g\cdot z^2, \\
P_2(x,y) = g.
\end{cases}
\end{equation}
Therefore, to find all solutions to \eqref{EqSqFrac}, we need to solve the systems \eqref{SysSplit} for $g$ ranging over the divisors of $G$.
Each such system is solved as follows.
We start with
using Theorem~\ref{TPol2Sq} to represent $P_1(x,y)$
as a linear combination with rational coefficients of squares of two linearly independent polynomials, say, $P_1(x,y) = a\cdot Q_1(x,y)^2 + b\cdot Q_2(x,y)^2$.
Substituting this representation into the first equation of \eqref{SysSplit}, we get the following equation:
\begin{equation}\label{EqQ1Q2}
a\cdot Q_1(x,y)^2 + b\cdot Q_2(x,y)^2 - g\cdot z^2 = 0.
\end{equation}
We solve this equation using Theorem~\ref{ThABC} to obtain $Q_1(x,y) = \tfrac{p}{q}\cdot R_1(m,n)$
and
$Q_2(x,y) = \tfrac{p}{q}\cdot R_2(m,n)$,
where $q$ ranges over the positive divisors of a certain integer and integer $p$ is coprime to $q$.
We solve this system of linear equations with respect to $x,y$ to obtain
$$(x,y) = \frac{p}{q}\cdot \left(S_x(m,n),\; S_y(m,n) \right),$$
where $S_x(m,n)$ and $S_y(m,n)$ are linear homogeneous polynomials with rational coefficients (which depend only on $g$ but not $p,q$).
Plugging this into the second equation of \eqref{SysSplit}, we get the following quartic equations with integer coefficients w.r.t. $m,n$:
\begin{equation}\label{EP2S}
\ell_g\cdot P_2(S_x(m,n),S_y(m,n)) = q^2\cdot \frac{g\cdot \ell_g}{p^2},
\end{equation}
where $\ell_g$ is the least common multiple of the denominators of the coefficients of $P_2(S_x(m,n),S_y(m,n))$ (notice that $\ell_g$ depends only on $g$ but not $p,q$).
Here $p^2$ must divide $g\ell_g$ and thus there is a finite number of such equations.
By \cite[Theorem~3]{Alekseyev2011}, each such equation has a finite number of solutions,
unless $P_2(S_x(m,n),S_y(m,n)) = c\cdot T(m,n)^2$ for some polynomial $T(x,y)$, which is not the case since $\mathop{\mathrm{Disc}}(P_2)\ne 0$.
\end{proof}
\begin{remark} Different choices of values for $g,p,q$ may result in the same equation \eqref{EP2S}.
In particular, if $g'=g\cdot d^2$ for some integer $d$, then we can represent equation \eqref{EqQ1Q2} for $g'$
in the form $a\cdot Q_1(x,y)^2 + b\cdot Q_2(x,y)^2 - g\cdot (d\cdot z)^2 = 0$ so that it
has the same solutions w.r.t. $Q_1(x,y)$ and $Q_2(x,y)$. Then the equation \eqref{EP2S} for $g'$ has the same left hand side as the one for $g$
(with $\ell_{g'}=\ell_g$), while the former has an extra factor $d^2$ in the right hand side.
Therefore, to reduce the number of equations in practice, we can restrict $g$ to the square-free divisors of $G$: for each such $g$, we
compute the left hand side of \eqref{EP2S} and iterate over all \emph{distinct} integer right hand sides of the form
$q^2\cdot \tfrac{g\cdot \ell_g\cdot d^2}{p^2}$, where $d^2$ divides $\tfrac{G}{g}$.
\end{remark}
\section{Finding integral points on biquadratic curves}\label{SecBiQ}
Now we are ready to prove our main result:
\begin{theorem}\label{TBiqCur}
Finding integral points on a biquadratic curve
\begin{equation}\label{EqBiqCur}
y^2 = a\cdot x^4 + b\cdot x^2 + c
\end{equation}
with integer coefficients $a,b,c$ and $ac(b^2 - 4ac)\ne 0$, reduces to solving a finite number of quartic Thue equations.
\end{theorem}
\begin{proof} Multiplying the equation \eqref{EqBiqCur} by $4c$, we can rewrite it as a linear combination of three squares
with non-zero integer coefficients:\footnote{Alternatively, we can multiply \eqref{EqBiqCur} by $4a$ and obtain
another linear combination of three squares: $(2ax^2+b)^2+(4ac-b^2)\cdot 1^2 - 4ay^2 = 0$, which has smaller coefficients and thus may be preferable when $c\gg a$.}
$$(b^2 - 4ac) (x^2)^2 + 4cy^2 - (bx^2 + 2c)^2 = 0.$$
Denoting $X=x^2$, $Y=y$, $Z=bx^2 + 2c$, $A=b^2 - 4ac$, $B=4c$, $C=-1$, we get a Diophantine equation:
\begin{equation}\label{EqABCXYZ}
A\cdot X^2 + B\cdot Y^2 + C\cdot Z^2 = 0.
\end{equation}
If this equation is solvable with a particular solution $(X,Y,Z) = (X_0,Y_0,Z_0)$, $Z_0\ne 0$,
then by Theorem~\ref{ThABC} its general solution is given by
$$(X,Y,Z) = r\cdot \left( P_x(m,n),\ P_y(m,n),\ P_z(m,n) \right),$$
where $P_i(m,n)$ are polynomials defined by \eqref{Pxyz},
$m,n$ are coprime integers, and $r$ is a rational number.
In our case, this solution should additionally satisfy the relation:
$$2c = Z - bX = r\cdot \left( P_z(m,n) - b\cdot P_x(m,n) \right),$$
implying that
$$r = \frac{2c}{ P_z(m,n) - b\cdot P_x(m,n) }.$$
So we get a constraining Diophantine equation:
\begin{equation}\label{EqXfrac}
x^2 = \frac{2c\cdot P_x(m,n)}{ P_z(m,n) - b\cdot P_x(m,n) },
\end{equation}
which reduces to a finite number of Thue equations by Theorem~\ref{TFracSq}, unless $\mathop{\mathrm{Res}}(P_x,P_z-b\cdot P_x)=0$ or $\mathop{\mathrm{Disc}}(P_z-b\cdot P_x)=0$.
The case of $\mathop{\mathrm{Res}}(P_x,P_z-b\cdot P_x)=\mathop{\mathrm{Res}}(P_x,P_z)=0$ is impossible since by direct computation we have $\mathop{\mathrm{Res}}(P_x,P_z) = -4AB^2CZ_0^4 \ne 0$.
In the case of $\mathop{\mathrm{Disc}}(P_z-b\cdot P_x)=0$,
we have
$\mathop{\mathrm{Disc}}(P_z-b\cdot P_x) = 4b^2B^2Y_0^2 - 4 AB (Z_0^2 - b^2 X_0^2) = 0$
and thus $b^2BY_0^2 = A(Z_0^2 - b^2 X_0^2)$.
Since $BY_0^2 = -A X_0^2 - C Z_0^2$, we further get $b^2(-A X_0^2 - C Z_0^2) = A(Z_0^2 - b^2 X_0^2)$, which reduces to $A+b^2C=0$.
However this is impossible since $A+b^2C = -4ac \ne 0$.
Therefore, Theorem~\ref{TFracSq} is applicable.
\end{proof}
As a corollary we get
\begin{theorem}\label{Tzz2}
The system of Diophantine equations:
$$\begin{cases}
z = ax^2 + d_1, \\
z^2 = by^2 + d_2,
\end{cases}
$$
where $a,b,d_1,d_2$ are rational numbers with $abd_2(d_1^2-d_2)\ne 0$,
reduces to a finite number of quartic Thue equations.
\end{theorem}
\begin{proof} The system implies that $(ax^2 + d_1)^2 = by^2 + d_2$ or $(by)^2 = a^2bx^4 + 2abd_1x^2 + b(d_1^2 - d_2)$. Since
$(2d_1ab)^2 - 4a^2b^2(d_1^2 - d_2) = 4a^2b^2d_2\ne 0$, Theorem~\ref{TBiqCur} applies.
\end{proof}
\subsection*{Example: Ljunggren equation}
We illustrate our method on the Ljunggren equation $y^2 = 2x^4 - 1$, whose reduction to Thue equations was first obtained in~\cite{Steiner1991}.
Here we have $(a,b,c)=(2,0,-1)$.
First we compute $A=b^2-4ac=8$, $B=4c=-4$, and $C=-1$ and consider equation \eqref{EqABCXYZ}.
Its particular solution is $(1,1,2)$, which yields by Theorem~\ref{ThABC} a general solution:
$$
(X, Y, Z) = \frac{p}{q}\; ( P_x(m, n),\; P_y(m, n),\; P_z(m, n))
$$
with
\begin{eqnarray*}
P_x(m,n) & = & 8 m^2 - 8mn + 4 n^2,\\
P_y(m,n) & = & -8 m^2 + 16 mn - 4 n^2,\\
P_z(m,n) & = & 16 m^2 - 8 n^2.
\end{eqnarray*}
Now we consider equation \eqref{EqXfrac}:
$$x^2 = \frac{2cP_x(m,n)}{P_z(m,n)-b\cdot P_x(m,n)} = \frac{-2(8 m^2 - 8mn + 4 n^2)}{16 m^2 - 8 n^2} = \frac{-2 m^2 + 2mn - n^2}{2 m^2 - n^2}$$
and use Theorem~\ref{TFracSq} to solve it.
We take the resultant matrix of $P_1(x,y)=-2 x^2 + 2xy - y^2$ and $P_2(x,y)=2 x^2 - y^2$ and solve the two linear systems \eqref{EqTwoSys}
to obtain $(t_1,t_2,t_3,t_4)=\tfrac{1}{4}(-2, -1, 0, 1)$ and $(t_1,t_2,t_3,t_4)=(-1, -1, -1, 0)$. So we get $G=4$.
Let $g$ range over the divisors of $G$, i.e., $g\in\{-4,-2,-1,1,2,4\}$.
We use Theorem~\ref{TPol2Sq} to represent $P_1(x,y)$ as a linear combination of squares:
$$P_1(x,y)=-2 x^2 + 2xy - y^2 = - \frac{1}{8} \cdot (-4x + 2y)^2 - \frac{1}{2}\cdot y^2 = - \frac{1}{2} \cdot (-2x + y)^2 - \frac{1}{2}\cdot y^2$$
and obtain the equation \eqref{EqQ1Q2} (multiplied by $-2$):
$$(-2x+y)^2 + y^2 + 2gz^2 = 0.$$
Clearly, it may have non-trivial solutions only for $g<0$ and only when $-1$ is a quadratic residue modulo $2g$, which leaves us the only suitable value $g=-1$.
The corresponding equation has a particular solution $(1,1,1)$ and by Theorem~\ref{ThABC} its general solution is
$$(-2x+y,y,z) = \frac{p}{q}\cdot\left( m^2 + 2mn - n^2, -m^2 + 2mn + n^2, m^2 + n^2 \right)$$
where $(p,q)=1$ and $q>0$ divides 4.
From this solution we express $x = \tfrac{p}{q}\cdot(-m^2 + n^2)$ and $y=\tfrac{p}{q}\cdot(-m^2 + 2mn + n^2)$ and plug them into the equation $P_2(x,y)=g$
to obtain the following quartic equations:
$$m^4 + 4m^3n - 6m^2n^2 - 4mn^3 + n^4 = q^2\cdot \frac{-1}{p^2}$$
and conclude that $p^2=1$. Since the polynomial in the left hand side is irreducible, these are Thue equations.
We used PARI/GP to solve the resulting three Thue equations (for $q=1,2,4$) and found that only the one with $q=2$ has solutions,
which are $(m,n) = \pm(5,-1)$, $\pm(1,5)$, $(\pm 1,\pm 1)$ (the same solutions were earlier obtained by Lettl and Peth\H{o} \cite{LettlPetho}).
The corresponding solutions to $\tfrac{P_1(m,n)}{P_2(m,n)}=x^2$ are
$(m,n) = \pm(12,17)$ and $\pm(0,1)$, giving respectively $x=\pm 13$ and $\pm 1$.
Thus the solutions to the Ljunggren equation are $(\pm 13,\pm 239)$ and $(\pm 1,\pm 1)$.
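For reproducibility, the final step can be carried out in Sage through its PARI/GP interface; the minimal sketch below is ours, with \texttt{thueinit} and \texttt{thue} being the standard PARI functions (to our understanding, second argument $1$ in \texttt{thueinit} requests an unconditional certification of the result).
\begin{verbatim}
# Solve m^4 + 4m^3n - 6m^2n^2 - 4mn^3 + n^4 = -q^2 for q = 1, 2, 4.
gp('T = thueinit(x^4 + 4*x^3 - 6*x^2 - 4*x + 1, 1)')
for q in (1, 2, 4):
    print(q, gp('thue(T, %d)' % (-q**2)))
# Only q = 2 yields solutions: (m, n) = +-(5, -1), +-(1, 5), (+-1, +-1).
\end{verbatim}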
\section{Near-multiples of squares in Lucas sequences}\label{SecInLucas}
The pair of Lucas sequences $U(P,Q)$ and $V(P,Q)$ are defined by the same linear recurrent relation with the coefficient $P,Q\in\mathbb{Z}$ but
different initial terms:
$$
\begin{array}{lll}
U_0(P,Q)=0, & U_1(P,Q)=1, & U_{n+1}(P,Q) = P\cdot U_n(P,Q) - Q\cdot U_{n-1}(P,Q),\, n\geq 1;\\
V_0(P,Q)=2, & V_1(P,Q)=P, & V_{n+1}(P,Q) = P\cdot V_n(P,Q) - Q\cdot V_{n-1}(P,Q),\, n\geq 1.
\end{array}
$$
Some Lucas sequences have their own names:
\begin{center} \begin{tabular}{|l|l|l|l|}
\hline
Sequence & Name & Initial terms & Index in \cite{OEIS}\\
\hline\hline
$U(1,-1)$ & Fibonacci numbers & $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, \ldots$ & \seqnum{A000045} \\
\hline
$V(1,-1)$ & Lucas numbers & $2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, \ldots$ & \seqnum{A000032} \\
\hline
$U(2,-1)$ & Pell numbers & $0, 1, 2, 5, 12, 29, 70, 169, 408, 985, \ldots$ & \seqnum{A000129} \\
\hline
$V(2,-1)$ & Pell-Lucas numbers & $2, 2, 6, 14, 34, 82, 198, 478, 1154, \ldots$ & \seqnum{A002203} \\
\hline
\end{tabular}
\end{center}
Other examples include Jacobsthal numbers $U(1,-2)$, Mersenne numbers $U(3,2)$ etc.
The characteristic polynomial of Lucas sequences $U(P,Q)$ and $V(P,Q)$ is $\lambda^2 - P\lambda + Q$ with the discriminant $D=P^2 - 4Q$.
For non-degenerate sequences, the discriminant $D$ is a positive non-square integer.
Terms of Lucas sequences satisfy the following identity:
\begin{equation}\label{MainEq}
V_n(P,Q)^2 - D\cdot U_n(P,Q)^2 = 4 Q^n.
\end{equation}
In the current paper, we focus on the case of $Q=1$ or $Q=-1$, implying that
the pairs $(V_n(P,Q),U_n(P,Q))$ satisfy the equation:\footnote{Here and everywhere below $\pm$ in the r.h.s. of an equation means that we accept both signs as solutions.}
\begin{equation}\label{EqXDY}
x^2 - D y^2 = \pm 4.
\end{equation}
The converse statement can be used to prove that given positive integers belong to $V(P,Q)$ or $U(P,Q)$ respectively:
\begin{theorem}[Theorem~1 in \cite{Alekseyev2011}]\label{Trecog}
Let $P$, $Q$ be integers such that $P>0$, $|Q|=1$, $(P,Q)\ne(3,1)$,
and $D=P^2 - 4Q > 0$.
If positive integers $u$ and $v$ are such that
$$v^2 - D u^2 = \pm 4,$$
then
$u = U_n(P,Q)$ and $v = V_n(P,Q)$ for some integer $n\geq 0$.
\end{theorem}
\begin{theorem}\label{finit1} For fixed integers $a\ne 0$ and $b$, finding terms of the form $am^2 + b$
in a nondegenerate Lucas sequence $U(P,Q)$ or $V(P,Q)$ with $Q=\pm 1$ reduces to a finite number of Thue equations,
unless this sequence is $V(P,Q)$ and $b=\pm 2$.
\end{theorem}
\begin{proof} Let $ax^2 + b$ be a term of $U(P,\pm 1)$. By Theorem~\ref{Trecog} we have $y^2 = D (ax^2+b)^2 \pm 4 = Da^2x^4 + 2Dabx^2 + (Db^2 \pm 4)$.
Theorem~\ref{TBiqCur} applies, since $Db^2 \pm 4\ne 0$ (notice that $D\ne\pm 1$) and $(2Dab)^2 - 4 Da^2(Db^2\pm 4) = \mp 16 Da^2\ne 0$.
Now let $ax^2 + b$ be a term of $V(P,\pm 1)$. By Theorem~\ref{Trecog} we have $(Dy)^2 = D((ax^2+b)^2 \mp 4) = Da^2x^4 + 2Dabx^2 + D(b^2 \mp 4)$.
We have $(2Dab)^2 - 4D^2a^2(b^2 \mp 4) = \pm 16D^2a^2\ne 0$. Theorem~\ref{TBiqCur} applies here as soon as $b^2 \mp 4\ne 0$ (i.e., $b\ne \pm 2$).
\end{proof}
\begin{remark} For $b=\pm 2$, the sequence $V(P,Q)$ may have an infinite number of terms of the form $am^2+b$.
In particular, Lucas sequence $V(1,-1)$ (Lucas numbers) has infinitely many terms of the forms $m^2+2$ and $m^2-2$ since
$V_{4n}(1,-1) = V_{2n}(1,-1)^2 - 2$ and $V_{4n+2}(1,-1) = V_{2n+1}(1,-1)^2 + 2$.
\end{remark}
The following theorem allows one to find all terms of the form $am^2 \pm 2$ in $V(P,\pm 1)$.
\begin{theorem}\label{finit2} For fixed integers $a\ne 0$ and $b=\pm 2$, finding terms of the form $am^2 + b$
in a nondegenerate Lucas sequence $V(P,Q)$ with $Q=\pm 1$ reduces to a finite number of Thue equations and a Pell-Fermat equation.
\end{theorem}
\begin{proof}
Let $ax^2 + b$ with $b=\pm 2$ be a term of $V(P,\pm 1)$. By Theorem~\ref{Trecog}, we have the following equations:
$$(Dy)^2 = D((ax^2+b)^2 + 4) = Da^2x^4 + 2Dabx^2 + 8D$$
and
$$(Dy)^2 = D((ax^2+b)^2 - 4) = Da^2x^4 + 2Dabx^2.$$
The former equation is addressed by Theorem~\ref{TBiqCur}, while the latter equation always has solution $x=0$ (corresponding to the term $V_0=2$) and
for $x\ne 0$ is equivalent to the Pell-Fermat equation:
$$\left(\tfrac{Dy}{x}\right)^2 - Da^2x^2 = 2Dab.$$
For solution of this equation, we refer to \cite[Section~6.3.5]{Cohen07}.
\end{proof}
Theorem~\ref{finit1} implies finiteness of the terms of the form $am^2+b$ in a Lucas sequence $U(P,Q)$ or $V(P,Q)$ with $Q=\pm 1$, unless
this sequence is $V(P,Q)$ and $b=\pm 2$. The latter case is addressed by Theorem~\ref{finit2} and may sometimes result in infinitely many terms.
While this characterization represents a special case of the results by Nemes and Peth\H{o} \cite{NemesPetho} and by Peth\H{o} \cite{PethoGn},
our proofs rely on the simple transformation method developed in this paper.
In Table~\ref{Tab1} we list all the terms of the form $am^2 + b$ for $1\leq a\leq 3$ and $-3\leq b\leq 3$ in Fibonacci, Lucas, Pell, and Pell-Lucas numbers.
\begin{footnotesize}
\begin{table}
\begin{center}
\begin{tabular}{|l||l|l|l|l|}
\hline
& Fibonacci numbers & Lucas numbers & Pell numbers & Pell-Lucas numbers \\
Form & $U(1,-1)$ & $V(1,-1)$ & $U(2,-1)$ & $V(2,-1)$ \\
\hline\hline
$m^2$ & $0, 1, 144$ & $1,4$ & $0, 1, 169$ & none \\
\hline
$m^2 + 1$ & $1, 2, 5$ & $1, 2$ & $1,2,5$ & $2,82$ \\
\hline
$m^2 - 1$ & $0, 3, 8$ & $3$ & $0$ & none \\
\hline
$m^2 + 2$ & $2,3$ & $2, 11$, and $V_{4n+2}$ & $2$ & $2$ and $V_{4n+2}$ \\
\hline
$m^2 - 2$ & $2, 34$ & $V_{4n}$ & $2$ & $14$ and $V_{4n}$ \\
\hline
$m^2 + 3$ & $3$ & $3, 4, 7, 199$ & $12$ & none \\
\hline
$m^2 - 3$ & $1, 13, 1597$ & $1$ & $1$ & $6$ \\
\hline
$2m^2$ & $0, 2, 8$ & $2, 18$ & $0, 2$ & $2$ \\
\hline
$2m^2 + 1$ & $1, 3$ & $1, 3$ & $1$ & none \\
\hline
$2m^2 - 1$ & $1$ & $1, 7, 199$ & $1$ & none \\
\hline
$2m^2 + 2$ & $2, 34$ & $2,4$ & $2$ & $2$ and $V_{4n}$ \\
\hline
$2m^2 - 2$ & $0$ & none & $0, 70$ & $V_{4n+2}$ \\
\hline
$2m^2 + 3$ & $3,5,21$ & $3, 11$ & $5$ & none \\
\hline
$2m^2 - 3$ & $5$ & $29, 47, 64079$ & $5, 29$ & none \\
\hline
$3m^2$ & $0, 3$ & $3$ & $0, 12$ & none \\
\hline
$3m^2 + 1$ & $1, 13$ & $1, 4, 76$ & $1$ & none \\
\hline
$3m^2 - 1$ & $2$ & $2, 11, 47$ & $2$ & $2$ \\
\hline
$3m^2 + 2$ & $2, 5$ & $2, 29$ & $2, 5, 29$ & $2,14$ \\
\hline
$3m^2 - 2$ & $1$ & $1$ & $1$ & none \\
\hline
$3m^2 + 3$ & $3$ & $3$ & none & $6$ \\
\hline
$3m^2 - 3$ & $0,144$ & none & $0$ & none \\
\hline
\end{tabular}
\caption{Terms of the form $am^2 + b$ for $1\leq a\leq 3$ and $-3\leq b\leq 3$ in Fibonacci, Lucas, Pell, and Pell-Lucas numbers.\label{Tab1}}
\end{center}
\end{table}
\end{footnotesize}
\section*{Acknowledgements}
We thank Dr. John Cremona, Dr. Nikos Tzanakis, and Dr. Noam Elkies for a number of helpful suggestions and discussions.
The first author was supported by the National Science Foundation under Grant No. IIS-1253614. The second author was partially
supported by the European Union and the European Social Fund through project ``Supercomputer, the national virtual lab"
under Grant No. T\'AMOP-4.2.2.C-11/1/KONV-2012-0010.
\bibliographystyle{plain}
\label{sec:introduction}
Let $p,q$ be two fixed integers, and let $E=\{p^aq^b:(a,b)\in\mathbb{N}^2\}$ be endowed
with the divisibility order, i.e., $x \succeq y \iff y \mid x$. A \mbox{$(p,q)$-ary chain}\ is a
finite non-increasing sequence in $E$. For example, $(72, 12, 4, 4,1)$ is a
$(2,3)$-ary chain, whereas $(72, 12, 4, 3, 1)$ is not since $4 \not\succeq 3$.
We define the \emph{weight} of a \mbox{$(p,q)$-ary chain}\ as the sum of its terms:
\begin{equation}
\label{eq:weight}
w = \sum_{i\geq 1} p^{a_i}q^{b_i}, \; \text{ where }
p^{a_i}q^{b_i} \succeq p^{a_{i+1}}q^{b_{i+1}} \text{ for } i \geq 1
\end{equation}
Expansions of this type have been proposed and successfully used by
Dimitrov~et~al. in the context of digital signal processing and cryptography
under the name \emph{double-base number system}. (For more details
see~\cite{DimJulMil99,DimImbMis08:mathcomp} and the references therein.)
From a different point of view, a \mbox{$(p,q)$-ary chain}\ can be seen as a partition of its
weight, where the parts are restricted to the set $E$ and constrained by a
divisibility condition. Surprisingly, works on integer partitions with
divisibility constraints on the parts are very scarce. Erd\H{o}s and Loxton
considered two types of such unconventional partitions, called chain and
umbrella partitions~\cite{ErdLox79a}, and obtained ``\emph{some rather weak
estimates for various partition functions}''. More recently, motivated by
some theoretical questions behind Dimitrov's number system, the second and third
authors refined some of Erd\H{o}s and Loxton's earlier results in a paper
entitled \emph{strictly chained $(p,q)$-ary
partitions}~\cite{ImbPhi10:pqary:CDM}. A strictly chained $(p,q)$-ary
partition, or \mbox{$(p,q)$-\textsc{scp}}{} for short, is a decreasing \mbox{$(p,q)$-ary chain}, i.e., it has distinct
parts. The original motivation for the present work was to extend the results
from~\cite{ImbPhi10:pqary:CDM} to the unconventional situation where the parts
of a \mbox{$(p,q)$-\textsc{scp}}{} can be either positive or negative. The results of such a study are
expected to provide significant improvements for some cryptographic primitives,
e.g. the computation of the multiple of a point on an elliptic curve. In
this context the first, natural question that we tackle in the present paper is:
``What is the maximal weight of a \mbox{$(p,q)$-\textsc{scp}}{} whose parts are bounded by some given
integer $m$?'' Although the problem may seem elementary at first glance, we
show that the answer is not so trivial. In particular, assuming $p<q$, we prove
that this maximal weight asymptotically grows as $pm/(p-1)$, independently of
$q$.
If the first part is given, the heaviest \mbox{$(p,q)$-\textsc{scp}}{} may be computed using a greedy
strategy by successively taking the next greatest part satisfying the
divisibility condition. Nevertheless, given a bound $m > 0$ on the parts,
determining how to best select the first part is not immediate and the greedy
approach fails in general. Indeed, we shall see that choosing the largest part
less than or equal to $m$ does not always provide a partition of maximal weight.
These facts are established in Sections~\ref{sec:preliminaries}
and~\ref{sec:maxim-elem} among other preliminary definitions, examples, and
results. The cases where the greedy choice fails are fully characterized in
Section~\ref{sec:Ym}. Section~\ref{sec:asymptotics} is devoted to the asymptotic
behavior of the maximal weight as a function of $m$. Finally, in
Section~\ref{sec:computing} we show how to compute a best choice for the first
part, thus the maximal weight, in $O(\log\log m)$ steps.
\section{Preliminaries}
\label{sec:preliminaries}
Let $m$ be a positive integer, and let $G(m)$ denote the maximal weight of a
\mbox{$(p,q)$-\textsc{scp}}{} whose greatest part does not exceed $m$. For example, with $p=2$ and
$q=3$, the first values of $G$ are: 1, 3, 4, 7, 7, 10, 10, 15, 15, 15, 15, 22,
22, 22, 22, 31, 31, \dots
In the following, we shall assume w.l.o.g. that $p<q$. Notice that the case
$p=1$ is irrelevant since $G(m)$ is simply the sum of all the powers of $q$ less
than or equal to $m$. More generally, and for the same reason, we shall
consider that $p$ and $q$ are not powers of the same integer, or equivalently
\emph{multiplicatively independent}. As a direct consequence, both $\log_p{q}$
and $\log_q{p}$ are irrational numbers (see e.g.~\cite[Theorem~2.5.7]{AllSha03}). Under
this assumption, the first values of $G(m)$ may be quickly computed with the
help of the following formula.
\begin{prop}
\label{lemm:Gamma(k)}
For $m \in \mathbb{N}^*$, let $G(m)$ denote the largest integer that can be expressed
as a strictly chained $(p,q)$-ary partition with all parts less than or equal
to $m$. Assume that $G(m) = 0$ if $m \not\in \mathbb{N}$. Then, we have $G(1)=1$, and
for $m>1$
\begin{equation}
\label{eq:Gamma}
G(m) = \max\left( G(m-1), 1+pG(m/p), 1+qG(m/q) \right)
\end{equation}
\end{prop}
\begin{proof}
Let $\lambda$ be a partition of weight $G(m)$ whose parts are all less than or
equal to $m$. First, notice that $\lambda$ must contain part $1$ by
definition of $G(m)$. If $m \not\in E$, then $G(m) = G(m-1)$. Otherwise, it
suffices to observe that removing part $1$ from $\lambda$ produces a partition
whose parts are all divisible by either $p$ or $q$.
\end{proof}
Computing $G(m)$ with relation~\eqref{eq:Gamma} requires $O(\log m)$ steps in
the worst case: Simply note that, for all $m$, in at most $p-1$ baby-steps,
i.e. $G(m) = G(m-1)$, one gets an integer that is divisible by $p$.
Formula~\eqref{eq:Gamma} may also be adapted to compute both $G(m)$ and a
\mbox{$(p,q)$-\textsc{scp}}{} of such weight. Nevertheless, it does not give any idea about the
asymptotic behavior of $G$. Moreover, we shall see in
Section~\ref{sec:computing} how to compute $G(m)$ and a \mbox{$(p,q)$-\textsc{scp}}{} of weight $G(m)$
in $O(\log\log m)$ steps.
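Formula~\eqref{eq:Gamma} is nevertheless straightforward to implement; the following memoized Python sketch (ours) reproduces the values of $G$ listed above for $(p,q)=(2,3)$.
\begin{verbatim}
from functools import lru_cache

def make_G(p, q):
    @lru_cache(maxsize=None)
    def G(m):
        if m < 1:
            return 0
        if m == 1:
            return 1
        best = G(m - 1)                      # baby-step
        if m % p == 0:
            best = max(best, 1 + p * G(m // p))
        if m % q == 0:
            best = max(best, 1 + q * G(m // q))
        return best
    return G

# (For large m, raise the recursion limit or fill the cache bottom-up.)
G23 = make_G(2, 3)
print([G23(m) for m in range(1, 18)])
# [1, 3, 4, 7, 7, 10, 10, 15, 15, 15, 15, 22, 22, 22, 22, 31, 31]
\end{verbatim}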
A natural graphic representation for \mbox{$(p,q)$-\textsc{scp}}{}s is obtained by mapping each part
$p^aq^b \in E$ to the pair $(a,b) \in \mathbb{N}^2$. Indeed, with the above assumptions
on $p$ and $q$, the mapping $(a,b) \mapsto p^aq^b$ is one-to-one. Since the
parts of a \mbox{$(p,q)$-\textsc{scp}}{} are pairwise distinct by definition, this graphic
representation takes the form of an increasing path in $\mathbb{N}^2$ endowed with the
usual product order. This is illustrated in Figure~\ref{fig:staircase-walk-72}
with the ten \pqsp{2}{3}s containing exactly six parts and whose greatest part
equals $72 = 2^33^2$. Note that a \mbox{$(p,q)$-\textsc{scp}}{} with largest part $p^aq^b$ possesses at
most $a+b+1$ parts, and that there are exactly $\binom{a+b}{b}$ of them with a
maximum number of parts.
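Both facts are easy to check computationally. In the Python sketch below (ours), a \mbox{$(p,q)$-\textsc{scp}}{} with a maximum number of parts is generated from each interleaving of $a$ multiplications by $p$ and $b$ multiplications by $q$:
\begin{verbatim}
from itertools import permutations
from math import comb

def maximal_scps(p, q, a, b):
    chains = set()
    for steps in set(permutations('p' * a + 'q' * b)):
        parts = [1]
        for s in steps:
            parts.append(parts[-1] * (p if s == 'p' else q))
        chains.add(tuple(reversed(parts)))   # decreasing (p,q)-scp
    return sorted(chains)

chains = maximal_scps(2, 3, 3, 2)
print(len(chains), comb(3 + 2, 2))   # 10 10, cf. the figure below
print(max(chains, key=sum))          # the heaviest one, cf. the lemma below
\end{verbatim}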
\begin{figure}[h]
\centering
\begin{tabular}{ccccc}
\begin{tikzpicture}[scale=0.38, every text node part/.style={font=\footnotesize}]
\draw[grid] (0,0) grid (3.5, 2.5);
\draw[->] (0,0) -- (3.5,0);
\draw[->] (0,0) -- (0,2.5);
\draw (3.5,0) node[anchor=west] {$2^a$};
\draw (0,2.5) node[anchor=south] {$3^b$};
\draw[thick] (0,0) |- (3,2);
\foreach \a/\b in {0/0, 0/1, 0/2, 1/2, 2/2, 3/2}
\fill (\a, \b) circle (4pt);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.38, every text node part/.style={font=\footnotesize}]
\draw[grid] (0,0) grid (3.5, 2.5);
\draw[->] (0,0) -- (3.5,0);
\draw[->] (0,0) -- (0,2.5);
\draw (3.5,0) node[anchor=west] {$2^a$};
\draw (0,2.5) node[anchor=south] {$3^b$};
\draw[thick] (0,0) |- (1,1) |- (3,2);
\foreach \a/\b in {0/0, 0/1, 1/1, 1/2, 2/2, 3/2}
\fill (\a, \b) circle (4pt);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.38, every text node part/.style={font=\footnotesize}]
\draw[grid] (0,0) grid (3.5, 2.5);
\draw[->] (0,0) -- (3.5,0);
\draw[->] (0,0) -- (0,2.5);
\draw (3.5,0) node[anchor=west] {$2^a$};
\draw (0,2.5) node[anchor=south] {$3^b$};
\draw[thick] (0,0) |- (2,1) |- (3,2);
\foreach \a/\b in {0/0, 0/1, 1/1, 2/1, 2/2, 3/2}
\fill (\a, \b) circle (4pt);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.38, every text node part/.style={font=\footnotesize}]
\draw[grid] (0,0) grid (3.5, 2.5);
\draw[->] (0,0) -- (3.5,0);
\draw[->] (0,0) -- (0,2.5);
\draw (3.5,0) node[anchor=west] {$2^a$};
\draw (0,2.5) node[anchor=south] {$3^b$};
\draw[thick] (0,0) |- (3,1) |- (3,2);
\foreach \a/\b in {0/0, 0/1, 1/1, 2/1, 3/1, 3/2}
\fill (\a, \b) circle (4pt);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.38, every text node part/.style={font=\footnotesize}]
\draw[grid] (0,0) grid (3.5, 2.5);
\draw[->] (0,0) -- (3.5,0);
\draw[->] (0,0) -- (0,2.5);
\draw (3.5,0) node[anchor=west] {$2^a$};
\draw (0,2.5) node[anchor=south] {$3^b$};
\draw[thick] (0,0) -| (1,2) -| (3,2);
\foreach \a/\b in {0/0, 1/0, 1/1, 1/2, 2/2, 3/2}
\fill (\a, \b) circle (4pt);
\end{tikzpicture}
\\
\begin{tikzpicture}[scale=0.38, every text node part/.style={font=\footnotesize}]
\draw[grid] (0,0) grid (3.5, 2.5);
\draw[->] (0,0) -- (3.5,0);
\draw[->] (0,0) -- (0,2.5);
\draw (3.5,0) node[anchor=west] {$2^a$};
\draw (0,2.5) node[anchor=south] {$3^b$};
\draw[thick] (0,0) -| (1,1) -| (2,2) -| (3,2);
\foreach \a/\b in {0/0, 1/0, 1/1, 2/1, 2/2, 3/2}
\fill (\a, \b) circle (4pt);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.38, every text node part/.style={font=\footnotesize}]
\draw[grid] (0,0) grid (3.5, 2.5);
\draw[->] (0,0) -- (3.5,0);
\draw[->] (0,0) -- (0,2.5);
\draw (3.5,0) node[anchor=west] {$2^a$};
\draw (0,2.5) node[anchor=south] {$3^b$};
\draw[thick] (0,0) -| (1,1) -| (3,2);
\foreach \a/\b in {0/0, 1/0, 1/1, 2/1, 3/1, 3/2}
\fill (\a, \b) circle (4pt);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.38, every text node part/.style={font=\footnotesize}]
\draw[grid] (0,0) grid (3.5, 2.5);
\draw[->] (0,0) -- (3.5,0);
\draw[->] (0,0) -- (0,2.5);
\draw (3.5,0) node[anchor=west] {$2^a$};
\draw (0,2.5) node[anchor=south] {$3^b$};
\draw[thick] (0,0) -| (2,2) -| (3,2);
\foreach \a/\b in {0/0, 1/0, 2/0, 2/1, 2/2, 3/2}
\fill (\a, \b) circle (4pt);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.38, every text node part/.style={font=\footnotesize}]
\draw[grid] (0,0) grid (3.5, 2.5);
\draw[->] (0,0) -- (3.5,0);
\draw[->] (0,0) -- (0,2.5);
\draw (3.5,0) node[anchor=west] {$2^a$};
\draw (0,2.5) node[anchor=south] {$3^b$};
\draw[thick] (0,0) -| (2,1) -| (3,2);
\foreach \a/\b in {0/0, 1/0, 2/0, 2/1, 3/1, 3/2}
\fill (\a, \b) circle (4pt);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.38, every text node part/.style={font=\footnotesize}]
\draw[grid] (0,0) grid (3.5, 2.5);
\draw[->] (0,0) -- (3.5,0);
\draw[->] (0,0) -- (0,2.5);
\draw (3.5,0) node[anchor=west] {$2^a$};
\draw (0,2.5) node[anchor=south] {$3^b$};
\draw[thick] (0,0) -| (3,2);
\foreach \a/\b in {0/0, 1/0, 2/0, 3/0, 3/1, 3/2}
\fill (\a, \b) circle (4pt);
\end{tikzpicture}
\\
\end{tabular}
\caption{The set of \pqsp{2}{3}s with 6 parts and whose largest part equals
$2^33^2 = 72$.}
\label{fig:staircase-walk-72}
\end{figure}
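As a sanity check, the count $\binom{a+b}{b}$ is easily confirmed by brute
force. The following Python sketch (ours) enumerates the chains of maximum
length, in which each ratio of consecutive parts is exactly $p$ or $q$.
\begin{verbatim}
from math import comb

p, q = 2, 3

def chains(top, length):
    # Strictly chained (p,q)-ary partitions with the given greatest
    # part and number of parts, when the length is maximal (a+b+1):
    # each ratio between consecutive parts is then p or q exactly.
    if length == 1:
        return [[top]] if top == 1 else []
    out = []
    for d in (p, q):
        if top % d == 0:
            out += [[top] + c for c in chains(top // d, length - 1)]
    return out

assert len(chains(72, 6)) == comb(5, 2) == 10
\end{verbatim}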
With this representation in mind, one is easily convinced that the heaviest
\mbox{$(p,q)$-\textsc{scp}}{} with first part $p^aq^b$ looks like the top left \mbox{$(p,q)$-\textsc{scp}}{} in
Figure~\ref{fig:staircase-walk-72}. This is formalized in the following lemma.
\begin{lemm}
\label{lemm:characterize}
Given $a,b\in\mathbb{N}$, the heaviest \mbox{$(p,q)$-\textsc{scp}}{} with first part $p^aq^b$ is the one whose
parts are the elements of the set $\{q^i : 0 \leq i<b\} \cup \{q^bp^i : 0 \leq
i\leq a\}$.
\end{lemm}
\begin{proof}
Consider a \mbox{$(p,q)$-\textsc{scp}}{} $\lambda=(\lambda_i)_{i=1}^k$ with greatest part
$\lambda_1=p^aq^b$. Let $\lambda_i=p^{a_i}q^{b_i}$, and note that $b_i\leq b$
since $\lambda_i$ divides $\lambda_1$. If $a_i+b_i>b$ then
define $\lambda'_i=p^{a_i+b_i-b}q^b$, otherwise let $\lambda'_i=q^{a_i+b_i}$.
Note that $\lambda'_1=p^aq^b$ again. Since the sequence $(a_i+b_i)_{i=1}^k$ is
decreasing, $(\lambda'_i)_{i=1}^k$ is also a \mbox{$(p,q)$-\textsc{scp}}. Since $p<q$ we have
$\lambda'_i\geq\lambda_i$ for all $i$, with equality if, and only if, the
parts in $\lambda$ form a subset of $\{q^i : 0 \leq i<b\} \cup \{p^iq^b : 0
\leq i\leq a\}$. Therefore, the maximal weight is reached when taking the
whole set, and only in this case.
\end{proof}
As a consequence, a \mbox{$(p,q)$-\textsc{scp}}{} of weight $G(m)$ and whose parts do not exceed $m$ is
characterized by its greatest part only. Moreover, denoting by $p^aq^b$ this
greatest part, we have $G(m)=h(a,b)$, where $h$ is the mapping defined on $\mathbb{N}^2$
by
\begin{align}\label{def:h}
h(a,b)=\frac{q^b-1}{q-1}+\frac{p^{a+1}-1}{p-1}q^b
\end{align}
Accordingly, the definition of $G$ may be rewritten as
\begin{equation}
\label{def:G}
G(m) = \max_{P_m} h,\quad \text{where } P_m=\{(a,b)\in\mathbb{N}^2 : p^aq^b \leq m\}
\end{equation}
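Definition \eqref{def:G} yields a direct, if naive, way of evaluating $G$;
here is a Python transcription (ours), using exact integer arithmetic
throughout.
\begin{verbatim}
p, q = 2, 3

def h(a, b):
    # The mapping h; both fractions are exact integers.
    return (q**b - 1) // (q - 1) + (p**(a + 1) - 1) // (p - 1) * q**b

def G_direct(m):
    # G(m) as the maximum of h over P_m = {(a,b) : p^a q^b <= m}.
    best, b = 0, 0
    while q**b <= m:
        a = 0
        while p**a * q**b <= m:
            best = max(best, h(a, b))
            a += 1
        b += 1
    return best

assert G_direct(750) == h(3, 4) == 1255
\end{verbatim}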
Finally observe that the greatest part of a \mbox{$(p,q)$-\textsc{scp}}{} of weight $G(m)$ and whose
parts do not exceed $m$ must be a maximal element of $E \cap [0,m]$ for the
divisibility order. Otherwise, the partition could be augmented by a part,
resulting in a partition of larger weight. The next section is devoted to the
set of these maximal elements.
\section{On the set $Z_m$}
\label{sec:maxim-elem}
For convenience, let us denote by $\rho$ the logarithmic ratio of $q$ and $p$.
$$
\rho = \frac{\log q}{\log p} > 1
$$
Since $p$ and $q$ are multiplicatively independent, $\rho$ is irrational.
Let us further denote by $Z_m$ the set of all maximal elements in $E \cap [0,m]$
for the divisibility order. Recall that $E=\{p^aq^b:(a,b)\in\mathbb{N}^2\}$, so that any
element of $E$ may also be written as $p^{a+b\rho}$. There are exactly
$\lfloor\log_q m\rfloor +1$ elements in $Z_m$, described in the following Lemma.
\begin{lemm}
\label{lemm:Z_m}
Let $m$ be a positive integer. The following characterization holds:
$$
p^aq^b\in Z_m \Longleftrightarrow 0\leq b\leq \lfloor\log_q
m\rfloor\text{\ and\ }a=\lfloor\log_p m-b\rho\rfloor
$$
\end{lemm}
\begin{proof}
An element $p^aq^b$ of $E$ is in $Z_m$ if, and only if, $a$ and $b$ are
non-negative, $p^aq^b\leq m< p^{a+1}q^b$, and $p^aq^b\leq m< p^{a}q^{b+1}$.
Since $p<q$, the latter condition is superfluous. Checking that the former
inequalities are equivalent to the Lemma's claim is immediate.
\end{proof}
As a consequence, let us note for further use that
\begin{align}
Z_{qm} &= qZ_m\cup\{p^{\lfloor \rho + \log_pm\rfloor}\}, \label{eq:Z_qm}\\[1ex]
Z_{pm} &=
\begin{cases}
pZ_m & \text{if } \lfloor1/\rho+\log_qm\rfloor=\lfloor\log_qm\rfloor ,\\
pZ_m\cup\{q^{\lfloor\log_qm\rfloor+1}\} & \text{otherwise}
\end{cases}
\end{align}
The elements of $Z_m$ correspond exactly to the maximal integer points below or
on the line of equation $a\log{p} + b\log{q} - \log{m} = 0$. An example is
given in Figure~\ref{fig:Z_750}. The corresponding values $p^aq^b$ and $h(a,b)$
are reported in Table~\ref{tab:750}.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.65]
\draw[dotted, very thin] (0,0) grid (13.5,6.5);
\draw[->, very thin] (0,-0.3) -- (0,6.8) node[anchor=east] {$b$};
\draw[->, very thin] (-0.3,0) -- (14,0) node[anchor=north] {$a$};
\draw (-0.3,6.215) -- (10.027,-0.3);
\foreach \y/\x in {0/9, 1/7, 2/6, 3/4, 4/3, 5/1, 6/0}
\fill[red] (\x,\y) circle (2pt);
\draw[dashed, blue] (0,0) -- (0,1) -- (2,2) -- (2,6.5);
\foreach \y/\x in {0/0, 1/0, 2/2, 3/2, 4/2, 5/2, 6/2}
{
\draw[blue, thick] (\x - 0.1, \y - 0.1) -- (\x + 0.1, \y + 0.1);
\draw[blue, thick] (\x - 0.1, \y + 0.1) -- (\x + 0.1, \y - 0.1);
}
\end{tikzpicture}
\caption{The set $Z_{750}$ for $(p,q)=(2,3)$, represented as all maximal
integer points below the line of equation $x\log{2} + y\log{3} - \log{750} =
0$. The points along the dashed line correspond to the first values of the
sequence $\ell$ defined in Theorem~\ref{th:ell}.}
\label{fig:Z_750}
\end{figure}
\begin{table}[htbp]
\centering
\caption{The elements of $Z_{750}$ for $(p,q)=(2,3)$, together with the
corresponding values $p^aq^b$ and $h(a,b)$. Note that $G(750) = h(3,4) =
1255$.}
\label{tab:750}
\begin{tabular}{p{1.25cm}*7{c}}
$(a,b)$ & (0, 6) & (1, 5) & (3, 4) & (4, 3) & (6, 2) & (7, 1) & (9, 0) \\
\midrule
$p^aq^b$ & 729 & 486 & 648 & 432 & 576 & 384 & 512 \\
$h(a,b)$ & 1093 & 850 & 1255 & 850 & 1147 & 766 & 1023\\
\end{tabular}
\end{table}
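The characterization of Lemma~\ref{lemm:Z_m} is immediate to implement; the
following Python sketch (ours) recovers the exponent pairs of
Table~\ref{tab:750} with integer arithmetic only, avoiding floating-point
logarithms.
\begin{verbatim}
p, q = 2, 3

def Z(m):
    # Exponent pairs (a, b) of the maximal elements of E in [0, m].
    pairs, b = [], 0
    while q**b <= m:
        a = 0
        while p**(a + 1) * q**b <= m:
            a += 1
        pairs.append((a, b))
        b += 1
    return pairs

assert Z(750) == [(9, 0), (7, 1), (6, 2), (4, 3),
                  (3, 4), (1, 5), (0, 6)]
\end{verbatim}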
Further define $z_m$ as the greatest integer of the form $p^aq^b$ less than or
equal to $m$, that is,
$$
z_m = \max Z_m
$$
Since $q^{\lfloor \log_q{m}\rfloor}\in Z_m$, we have $z_m \rightarrow \infty$
when $m \rightarrow \infty$. The next proposition goes one step further.
\begin{prop}
\label{lemm:function_z}
We have $z_m \sim m$ as $m \rightarrow \infty$.
\end{prop}
\begin{proof}
Let $\hat{z}_m$ be the smallest integer of the form $p^aq^b$ greater than or
equal to $m$, so that $z_m \leq m \leq \hat{z}_m$. By a theorem of
Tijdeman~\cite{Tijdeman74a}, there exists a constant $C>0$ such that, for $m$
large enough,
$$ \hat{z}_m - z_m < \frac{z_m}{(\log z_m)^C} $$
Hence $0 \leq m - z_m < z_m / (\log z_m)^C$, and dividing by $z_m$ gives
$m/z_m \rightarrow 1$.
\end{proof}
From a greedy point of view, one might think that choosing $z_m$ for the largest
part of the \mbox{$(p,q)$-\textsc{scp}}{} formed as in Lemma~\ref{lemm:characterize} yields a \mbox{$(p,q)$-\textsc{scp}}{} of
weight $G(m)$; in which case our asymptotics problem would be solved using the
above proposition. Unfortunately, this is not true. In Table~\ref{tab:750}, we
see for instance that $z_{750}=729$, obtained for $(a,b) = (0,6)$, does not give
a \mbox{$(p,q)$-\textsc{scp}}{} of maximal weight. Instead, the maximal weight $G(750) = 1255$ is
obtained for $(a,b) = (3,4)$. Hence, even if $m$ is of the form $p^aq^b$, the
first part of a \mbox{$(p,q)$-\textsc{scp}}{} of maximal weight may be different from $m$. For
instance $G(729) = 1255$ comes from a unique \mbox{$(p,q)$-\textsc{scp}}{} whose first part is $648$.
In the next Section, we study the subset $Y_m$ of $Z_m$, which yields the
\mbox{$(p,q)$-\textsc{scp}}{}s of maximal weight $G(m)$.
\section{On the set $Y_m$}
\label{sec:Ym}
According to~\eqref{def:G}, $G(m)$ is equal to $h(a,b)$ for some values $a,b$.
First, notice that these values are not necessarily unique with respect to this
property, because $h$ is not necessarily one-to-one. For example, with $(p,q) =
(2,3)$, observe from Table~\ref{tab:750} that $h(1,5)=h(4,3)=850$. Similarly,
with $(p,q)=(2,5)$ one has $h(0,2)=h(4,0)=31$.
Let $Y_m$ be the set of all elements $p^aq^b$ in $E\cap[0,m]$ such that
$h(a,b)=G(m)$. As already noticed, $Y_m$ is a subset of $Z_m$. Next recall
that $z_m=\max Z_m$ need not be in $Y_m$. A particular relation between $Y_m$
and $z_m$ does however exist, as proved in the next proposition.
\begin{prop}
\label{prop:droite}
For $m \in \mathbb{N}^*$, let $z_m=p^{a}q^{b}$.
Then
\begin{equation}
\label{eq:Ym}
Y_m \subset \{ p^iq^j \in Z_m : j\leq b \}
\end{equation}
\end{prop}
\begin{proof}
Using \eqref{def:h}, we have
\begin{align*}
\frac {p-1}p\ h(a,b)
&= \frac{p-1}{p}\ \frac{q^b-1}{q-1} + \frac{q^b}{p}(p^{a+1}-1) \\[1ex]
&= p^aq^b - q^b\ \frac{q-p}{pq-p} - \frac{p-1}{p(q-1)},
\end{align*}
so that
\begin{equation}
\label{h(a,b)}
h(a,b) = \frac{p}{p-1}\left(p^aq^b - rq^b \right) -
\frac{1}{q-1}, \quad\text{where } r = \frac{q-p}{pq-p} \in (0,1/p)
\end{equation}
Note that $0 < r < 1/p$, because $pq-p > p(q-p)$.
As a consequence, we have
\begin{equation}
\label{h-h}
h(a,b)>h(a',b') \quad\Longleftrightarrow \quad
p^aq^b - p^{a'}q^{b'}>r(q^b-q^{b'})
\end{equation}
Let $p^{a'}q^{b'} \in Z_m$ with $b' > b$. Then $p^{a'}q^{b'} < z_m = p^aq^b$
while $q^{b'} > q^b$, so that in \eqref{h-h} the left-hand side is positive and
the right-hand side is negative; hence $h(a,b)>h(a',b')$, and no such element
belongs to $Y_m$, which concludes the proof.
\end{proof}
Geometrically, Proposition~\ref{prop:droite} tells us that the points $(a,b) \in
\mathbb{N}^2$ such that $h(a,b) = G(m)$ cannot be located ``above'', or equivalently
``to the left'', of $z_m$. In particular, when $z_m = p^a$, we have $G(m) = h(a,0) =
(p^{a+1}-1)/(p-1)$.
In Proposition~\ref{prop:Ym=2}, we will see that the set $Y_m$ has at most two
elements. For now, let us first focus on those elements of $E$ that provide the
heaviest \mbox{$(p,q)$-\textsc{scp}}{} in a unique way, i.e. those for which $Y_{p^aq^b}=\{p^aq^b\}$.
The following theorem shows that the corresponding points in $\mathbb{N}^2$ form an
infinite area whose boundary is a particular sequence as illustrated in
Figure~\ref{fig:Z_750}.
\begin{theo}
\label{th:ell}
There exists a sequence $\ell=(\ell_b)_{b\in\mathbb{N}}$ in $\mathbb{N}$ such that
\begin{equation}
\label{strong}
Y_{p^aq^b} = \{p^aq^b\} \Longleftrightarrow a \geq \ell_b
\end{equation}
Moreover, the sequence $\ell$ is non-decreasing, unbounded, and satisfies $\ell_0=0$.
\end{theo}
\begin{proof}
Let us first establish the following statements:
\begin{enumerate}[(i)]
\item For all $b\geq 0$, there exists $a\geq 0$ such that $p^aq^b\in Y_{p^aq^b}$.\\[-2ex]
\item If $p^aq^b\in Y_{p^aq^b}$ then, for all $k\geq 1$, $Y_{p^{a+k}q^b}=\{p^{a+k}q^b\}$.
\end{enumerate}
Let $b\in\mathbb{N}$. As already seen, the mapping $(i,j) \mapsto p^iq^j$ is
one-to-one. Therefore, we have $q^b-p^iq^j\geq 1$ for all $p^iq^j\in Z_{q^b}
\setminus \{q^b\}$. Choose $a$ such that
$$
p^a \geq r (q^{b}-1), \quad\text{where } r = \frac{q-p}{pq-p} \text{ as
in~\eqref{h(a,b)}}
$$
Then, for all $p^iq^j\in Z_{q^b}$ with $j<b$, we have
\begin{equation}
\label{pa}
p^{a}q^b - p^{a+i}q^j \geq p^a \geq r (q^b - 1) \geq r (q^b - q^j)
\end{equation}
Using~\eqref{h-h}, it follows that $h(a,b) \geq h(a+i,j)$; in other words, $h$
attains its maximum over $p^a Z_{q^b}$ at $p^aq^b$. According to Proposition~\ref{prop:droite}, we also
have $Y_{p^aq^b}\subset\{p^iq^j\in Z_{p^aq^b}:j\leq b\}$. Using
Lemma~\ref{lemm:Z_m}, it is immediate to check that the latter set is
identical to $p^aZ_{q^b}$, thus $p^aq^b\in Y_{p^aq^b}$ and (i) is proved.
Now, if $p^aq^b\in Y_{p^aq^b}$ then replacing $a$ by $a+k$ for any $k\geq 1$
turns~\eqref{pa} into a strict inequality. Therefore,
$Y_{p^{a+k}q^b}=\{p^{a+k}q^b\}$, and (ii) is proved too. Accordingly, letting
\begin{equation}
\label{eq:lb}
\ell_b = \min \left\{ a \in \mathbb{N} : Y_{p^aq^b} =\{p^aq^b\} \right\}
\end{equation}
provides the claimed sequence $\ell$. Since $Y_1=\{1\}$ and $1 = p^0q^0$, we
get $\ell_0=0$.
\medskip
Let us now prove that $\ell$ is non-decreasing. Given $b\in\mathbb{N}$, either
$\ell_b = 0$, in which case $\ell_{b+1} \geq \ell_b$ trivially, or $\ell_b\geq 1$. In the latter case,
there exists $p^iq^j \in Z_{p^{\ell_b-1}q^b}$ such that $j \neq b$ and
$h(i,j)\geq h(\ell_b-1,b)$. From~\eqref{def:h}, it is not difficult to see
that $h(i,j+1) = qh(i,j) + 1$, and thus $h(i,j+1) - h(\ell_{b}-1,b+1) = q
\left( h(i,j) - h(\ell_b-1,b) \right)$. Therefore $h(i,j+1) \geq
h(\ell_b-1,b+1)$, so that $\ell_{b+1}>\ell_b-1$. \medskip
Finally suppose that $\ell$ is bounded. This would imply that there exists an
integer $a$ such that, for all $b\in\mathbb{N}$, $Y_{p^aq^b}=\{p^aq^b\}$. The
following statement shows that this is impossible.
\begin{enumerate}
\item[(iii)] For all $a\in\mathbb{N}$ there exists $b\in\mathbb{N}$ such that $h(a+\lfloor
b\rho\rfloor,0)>h(a,b)$.
\end{enumerate}
\medskip
Indeed, by Lemma~\ref{lemm:Z_m} we know that $p^{a+\lfloor b\rho\rfloor}\in
Z_{p^aq^b}$. Fix $a\in\mathbb{N}$, choose $b\in\mathbb{N}^*$, and set $a'=a+\lfloor
b\rho\rfloor$. According to~\eqref{h-h}, and denoting by
$\{b\rho\}=b\rho-\lfloor b\rho\rfloor$ the fractional part of $b\rho$, we have
\begin{align}
h(a,b) < h(a',0)
& \Leftrightarrow p^aq^b-p^{a'} < r(q^b-1) \notag \\
& \Leftrightarrow p^{\lfloor b\rho\rfloor} > q^b \left(1 -
r/p^{a} \left(1- 1/q^b \right)\right) \notag \\
& \Leftrightarrow \{ b\rho \} <
-\log_p\left(1-r/p^{a}\left(1-1/q^b\right)\right)
\label{majfrac}
\end{align}
Now observe that the sequence $(\phi_{a,b})_{b \in \mathbb{N}}$ defined by
\begin{equation}
\label{eq:phiab}
\phi_{a,b} = -\log_p \left(1-\frac r{p^{a}}\left(1-\frac1{q^b}\right)\right),
\end{equation}
where $r = (q-p)/(pq-p)$, is increasing, with $\phi_{a,0}=0$. Since $\rho$ is
irrational, we know that $(b\rho)_{b\in\mathbb{N}}$ is equidistributed modulo 1. Thus,
there exists $b>0$ such that $\{b\rho\} < \phi_{a,1}$, hence $\{b\rho\} <
\phi_{a,b}$, which using~\eqref{majfrac} concludes the proof.
\end{proof}
Let us anticipate a result of the next section, implying that the sequence
$\ell$ is completely known as soon as we can compute its \emph{jump indices},
that is, the values $b>0$ for which $\ell_b>\ell_{b-1}$. Indeed, we shall
establish with statement~\eqref{lgap} that if $b$ is a jump index of $\ell$ then
$\ell_b=\left\lfloor\alpha(b) \right\rfloor$, where
\begin{equation}
\label{alpha}
\alpha(b)= \log_p \frac{q^{b}-1}{q^b-p^{\lfloor b\rho\rfloor}}+ \log_p \frac{q-p}{q-1}
\end{equation}
Computing the jump indices of $\ell$ may be done recursively as shown in the
next corollary. Referring to the sequence $\phi$ defined in~\eqref{eq:phiab},
let the mapping $\beta$ be defined on $\mathbb{N}$ by
\begin{equation}
\label{beta}
\beta(a) = \min\left\{ j\in\mathbb{N}^{*}:\;\{j\rho\} < \phi_{a,j} \right\}
\end{equation}
\begin{coro}
\label{coro:gaps}
The increasing sequence $(j_k)_{k\in\mathbb{N}}$ of the jump indices of $\ell$
satisfies
$$
j_0=\beta(0), \quad j_{k+1}=\beta(\ell_{j_k})
$$
\end{coro}
\begin{proof}
Given any $a\in\mathbb{N}$, we know that $Y_{p^a}=\{p^a\}$ by
Proposition~\ref{prop:droite}. Moreover, incrementing $b$ by 1 starting from
0, it follows from \eqref{eq:Z_qm} and \eqref{h-h} that
$Y_{p^aq^b}=\{p^aq^b\}$ as long as $h(a,b)>h(a+\lfloor b\rho\rfloor,0)$. As
it is shown in part (iii) of the proof of Theorem~\ref{th:ell}, the latter
inequality is equivalent to $\{b\rho\}<\phi_{a,b}$. Accordingly,
$Y_{p^aq^b}=\{p^aq^b\}$ if $b<\beta(a)$. Hence $\ell_{\beta(a)-1}\leq
a<\ell_{\beta(a)}$, so that $\beta(a)$ is a jump index for $\ell$. The result
follows immediately since $\ell$ is non-decreasing.
\end{proof}
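To illustrate Corollary~\ref{coro:gaps}, here is a floating-point Python
sketch (ours; double precision is adequate only for the small indices shown)
computing the first jump indices of $\ell$, and the corresponding values of
$\ell$, for $(p,q)=(2,3)$.
\begin{verbatim}
import math

p, q = 2, 3
rho = math.log(q) / math.log(p)
r = (q - p) / (p * q - p)

def phi(a, b):
    # phi_{a,b} of (eq:phiab)
    return -math.log(1 - r / p**a * (1 - 1 / q**b)) / math.log(p)

def beta(a):
    # beta(a) = min { j >= 1 : {j rho} < phi_{a,j} }
    j = 1
    while (j * rho) % 1.0 >= phi(a, j):
        j += 1
    return j

def alpha(b):
    # alpha(b); exact integer powers, float logarithms
    return (math.log((q**b - 1) / (q**b - p**math.floor(b * rho)))
            + math.log((q - p) / (q - 1))) / math.log(p)

def ell_at(b):
    # floor(alpha(b)); the tiny guard absorbs round-off when
    # alpha(b) happens to be an exact integer (e.g. alpha(2) = 2).
    return math.floor(alpha(b) + 1e-9)

jumps, a = [], 0
for _ in range(4):
    j = beta(a)        # next jump index of ell
    a = ell_at(j)      # value of ell there, by the jump formula
    jumps.append((j, a))

assert jumps == [(2, 2), (12, 5), (53, 7), (359, 8)]
\end{verbatim}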
As claimed before, we next show that $Y_m$ has at most 2 elements. This might
be established by directly using~\eqref{h-h} and studying the Diophantine
equation
$$
p^aq^b - p^{c} = r(q^b-1), \quad\text{where }
r = \frac{q-p}{pq-p}
$$
Unfortunately, the latter seems difficult to handle directly, whereas
Theorem~\ref{th:ell} proves handy.
\begin{prop}
\label{prop:Ym=2}
For all $m\in\mathbb{N}$, the set $Y_m$ has either one or two elements.
\end{prop}
\begin{proof}
Assume $\#Y_m \geq 2$ and denote by $p^aq^b$ its greatest element. Since
$Y_m=Y_{p^aq^b}$, we have $a<\ell_b$ by definition of sequence $\ell$ in
Theorem~\ref{th:ell}. To be more precise, statement (ii) in the proof of this
theorem even tells us that $a=\ell_b-1$. Now let $p^{c}q^{d}$ be the second
greatest element in $Y_m$. According to Proposition~\ref{prop:droite} we must
have $d<b$, thus $c>a$ by Lemma~\ref{lemm:Z_m}. Since $\ell$ is
non-decreasing, it follows that $\ell_{d}\leq\ell_b$. Since $a=\ell_b-1$ we
get $c\geq\ell_{d}$, which means that $Y_{p^{c}q^{d}}=\{p^{c}q^{d}\}$ from
Theorem~\ref{th:ell}. Therefore, there cannot exist a third element in $Y_m$
as it would also be in $Y_{p^{c}q^{d}}$.
\end{proof}
\section{Asymptotic behavior of $G$}\label{sec:asymptotics}
In this section, our goal is to prove that $G(m)$ is equivalent to $mp/(p-1)$ as
$m$ tends to infinity, independently of $q$. As a simple first step, let us
exhibit a sharp upper bound for $G$.
\begin{lemm}
\label{lemm:upperbound}
For all $m \in \mathbb{N}$ and all $n \in Y_m$, we have $ G(m)< np/(p-1)$. In
particular,
$$ \limsup_{m\rightarrow\infty} \frac{G(m)}{m} = \frac{p}{p-1} $$
\end{lemm}
\begin{proof}
Let $n=p^aq^b\in Y_m$. According to~\eqref{h(a,b)} we have
\begin{equation}
\label{h(n)-pn}
h(a,b) - \frac{np}{p-1} = - \left(\frac{rpq^b}{p-1} + \frac{1}{q-1}
\right) < 0, \quad \text{where } r \in (0,1/p)
\end{equation}
Hence $G(m)=h(a,b) < np/(p-1)$. Since $n\leq m$, it follows that $G(m)/m \leq
p/(p-1)$. To conclude, observe that for $m=p^a$ we have $G(p^a) =
(p^{a+1}-1)/(p-1)$ from Proposition~\ref{prop:droite}. Therefore,
$\lim_{a\rightarrow\infty}G(p^a)/p^a= p/(p-1)$.
\end{proof}
Let us now define a mapping $y$ as follows: For all $m$, let $y_m$ denote the
smallest integer of the form $p^aq^b$ such that $G(m)=h(a,b)$, that is
\begin{equation}
\label{ym}
y_m=\min Y_m
\end{equation}
According to Proposition~\ref{prop:droite}, $y_m$ is also the element of $Y_m$
with the smallest exponent in $q$. We shall next give a characterization of
$y_m$ using the sequence $\ell$ defined in Theorem~\ref{th:ell}. Recall that
this sequence is defined by $\ell_b=\min\{a\in\mathbb{N}:Y_{p^aq^b}=\{p^aq^b\}\}$ and
satisfies \eqref{strong}. Since $\ell$ is non-decreasing, the sequence
$(p^{\ell_b}q^b)_{b\in\mathbb{N}}$ is increasing. We may thus define, for all $m\in\mathbb{N}$,
\begin{equation}
\label{mell}
m_\ell = \max\{b\in\mathbb{N}:p^{\ell_b}q^b\leq m\}
\end{equation}
\begin{theo}
\label{prop:y_m}
For all $m \in \mathbb{N}$, we have
\begin{equation}
\label{eq:ym}
y_m = \max \{ p^aq^b \in Z_m : b \leq m_\ell \}
\end{equation}
Moreover, let $\bar{a}=\lfloor\log_p m-m_\ell\rho\rfloor$ and $\bar{m}=\lfloor
m/p^{\bar{a}}\rfloor$. Then $y_m=p^{\bar{a}}z_{\bar{m}}$.
\end{theo}
\begin{proof}
Let $y_m=p^iq^j$. Since $y_m=\min Y_m$, we have $Y_{y_m}=\{y_m\}$ thus $i\geq
\ell_j$ using~\eqref{strong}. Suppose $j > m_\ell$, then $p^iq^j \geq
p^{\ell_j}q^j$, and thus $p^iq^j >m$ from~\eqref{mell}, which contradicts the
fact that $y_m \leq m$. Therefore, $j \leq m_\ell$.
Now consider any $p^aq^b\in Z_m$ such that $b\leq m_\ell$. Since $\ell$ is
non-decreasing we have $\ell_{m_\ell}\geq\ell_b$. By Lemma~\ref{lemm:Z_m},
there exists $k\in \mathbb{N}$ such that $p^kq^{m_\ell}\in Z_m$, and we have $k \geq
\ell_{m_\ell}$ because $p^{\ell_{m_\ell}}q^{m_\ell} \leq m$. Since $p^aq^b\in
Z_m$, condition $b \leq m_\ell$ implies that $a \geq k$ by
Lemma~\ref{lemm:Z_m} again, so that $a \geq \ell_{m_\ell}\geq \ell_b$.
Therefore, $Y_{p^aq^b}=\{p^aq^b\}$. Supposing $p^aq^b>p^iq^j$ would then
imply that $h(i,j)<h(a,b)$, contradicting the definition of $y_m$. Thus
$p^aq^b\leq p^iq^j$, and \eqref{eq:ym} is established.
Accordingly, $j=\lfloor\log_p m-i\rho\rfloor\leq\bar{a}$, so that
$y_m/p^{\bar{a}}$ is an element of $E$. Therefore, $y_m/p^{\bar{a}}\leq
m/p^{\bar{a}}$ implies $y_m/p^{\bar{a}}\leq\lfloor m/p^{\bar{a}}\rfloor
=\bar{m}$, which in turn implies $\lfloor m/p^{\bar{a}}\rfloor\leq
z_{\bar{m}}$. We thus get
$$
y_m\leq p^{\bar{a}}z_{\bar{m}}\leq p^{\bar{a}}\bar{m}\leq m
$$
Let $z_{\bar{m}}=p^aq^b$. To conclude the proof by using \eqref{eq:ym} again,
it suffices to show that $b\leq m_\ell$. Since $p^{\bar{a}}q^{m_\ell}\in
Z_m$, we have $p^{\bar{a}}q^{m_\ell}\leq m<p^{\bar{a}}q^{m_\ell+1}$, so that
$q^{m_\ell}\leq \bar{m}<q^{m_\ell+1}$. Thus
$\lfloor\log_q\bar{m}\rfloor=m_\ell$, hence $b\leq m_\ell$ by
Lemma~\ref{lemm:Z_m}.
\end{proof}
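Continuing the previous listing (and reusing \texttt{h} from the first
sketch), Theorem~\ref{prop:y_m} can be put to work as follows; this is again
an illustrative sketch of ours, not the efficient algorithm of
Section~\ref{sec:computing}.
\begin{verbatim}
def ell(b):
    # ell_b: constant between jumps, updated at each jump index
    a, j = 0, beta(0)
    while j <= b:
        a = ell_at(j)
        j = beta(a)
    return a

def y_and_G(m):
    # y_m and G(m) via the characterization of y_m above
    b = 0
    while p**ell(b + 1) * q**(b + 1) <= m:
        b += 1                                # b is now m_ell
    exps = []
    for j in range(b + 1):                    # elements of Z_m, j <= m_ell
        a, cap = 0, m // q**j
        while p**(a + 1) <= cap:
            a += 1
        exps.append((a, j))
    a0, b0 = max(exps, key=lambda e: p**e[0] * q**e[1])
    return p**a0 * q**b0, h(a0, b0)

assert y_and_G(750) == (648, 1255)
\end{verbatim}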
Comparing with Proposition~\ref{prop:droite}, characterization \eqref{eq:ym} of
$y_m$ no more depends on $z_m$. Moreover, it provides a first improvement of
Lemma~\ref{lemm:upperbound}.
\begin{coro}
We have
\begin{equation}
\label{eq:Gm:ym}
\frac{G(m)}{y_m} \sim \frac{p}{p-1} \quad\text {as } m \rightarrow \infty
\end{equation}
\end{coro}
\begin{proof}
For $m\in\mathbb{N}$, let $y_m=p^{a_m}q^{b_m}$. According to~\eqref{h(n)-pn} we have
$$
\frac{G(m)}{y_m} - \frac{p}{p-1}
= -\frac{t_m}{p^{a_m}}, \quad\text{where }
t_m = \frac{q-p}{(p-1)(q-1)}+\frac{1}{q^{b_m}(q-1)}
$$
Observe that $t_m \in \left(\frac{q-p}{(p-1)(q-1)},\frac{1}{p-1}\right]$
is uniformly bounded. To conclude the proof, we need to show that $a_m
\rightarrow \infty$ as $m \rightarrow \infty$. According to
Theorem~\ref{prop:y_m}, we have $b_m \leq m_\ell$, thus
$a_m\geq\ell_{m_\ell}$. Since $m_\ell$ goes to infinity with $m$, and since
$\ell$ is non-decreasing and unbounded by Theorem~\ref{th:ell}, $a_m$ goes to
infinity with $m$ too. Hence the claim.
\end{proof}
According to the latter result and Proposition~\ref{lemm:function_z}, the final
task consists in showing that $y_m\sim z_m$. This is done next, so that our
main claim is established.
\begin{theo}
\label{theo:Gm/m}
We have
$$ \frac{G(m)}{m} \sim \frac{p}{p-1} \quad\text{as } m \rightarrow
\infty $$
\end{theo}
\begin{proof}
Assume $y_m \neq z_m$. According to Theorem~\ref{prop:y_m}, the elements in
$Z_m$ that exceed $y_m$ are of the form $p^iq^j$ with
$m_\ell<j\leq\lfloor\log_q m\rfloor$. Let us sort these elements together
with $y_m$ into an increasing sequence $(n_0, n_1, \dots, n_N)$, so that
$n_0=y_m$ and $n_N=z_m$. Observe that the elements of this sequence are
consecutive elements of $E$ for the usual order. As soon as $m$ is large
enough, we know by Tijdeman's result already mentioned~\cite{Tijdeman74a} that
$n_{i+1} - n_i \leq n_i/(\log n_i)^C$ for an explicitly computable constant
$C>0$. Moreover, $y_m\leq n_i\leq z_m\leq m<py_m$ for every $i$: indeed,
$p^{\bar{a}}q^{m_\ell}\leq y_m$ while $m<p^{\bar{a}+1}q^{m_\ell}$, with
$\bar{a}$ as in Theorem~\ref{prop:y_m}. Therefore,
$$
z_m - y_m
\leq \sum_{i=0}^{N-1}\frac{n_i}{(\log n_i)^C}
\leq \sum_{i=0}^{N-1}\frac{py_m}{(\log \frac{m}{p})^C}
$$
Accordingly, for all $m \in \mathbb{N}$,
\begin{equation}
\label{ecart}
0 \leq z_m - y_m \leq (\lfloor\log_q m\rfloor - m_\ell)\, \frac{py_m}{(\log \frac{m}{p})^C}
\end{equation}
At this point, what remains to be proved is that $\lfloor\log_q m\rfloor -
m_\ell$ grows asymptotically slower than $(\log \frac{m}{p})^C$ so that $z_m
\sim y_m$, and to conclude using~\eqref{eq:Gm:ym} and
Lemma~\ref{lemm:function_z}. Recall that $m_\ell$ is defined as the largest
value $b$ such that $p^{\ell_b}q^b \leq m$. Therefore, we have
$$
p^{\ell_{m_\ell}}q^{m_\ell} \leq m < p^{\ell_{m_\ell+1}}q^{m_\ell+1}
$$
Equivalently, using $\rho = \log{q}/\log{p}$, we have
\begin{equation}
\label{C0}
\frac{\ell_{m_\ell}}{\rho}+m_\ell \leq \log_q m <\frac{\ell_{m_\ell+1}}{\rho}+m_\ell+1
\end{equation}
so that
\begin{equation}
\label{C}
\lfloor\log_q m\rfloor-m_\ell<\frac{\ell_{m_\ell+1}}{\rho}+1
\end{equation}
It thus remains to evaluate the terms in sequence $\ell$. For that purpose,
we first give an explicit formula for $\ell$, valid at the jumps of $\ell$. We
claim that for all $b \in \mathbb{N}^*$,
\begin{equation}
\label{lgap}
\ell_b > \ell_{b-1} \quad\Rightarrow\quad
\ell_b = \left\lfloor \log_p \frac{(q-p)(q^b-1)}{(q-1)(q^b-p^{\lfloor
b\rho\rfloor})} \right\rfloor
\end{equation}
Indeed, assume that $\ell_b > \ell_{b-1}$.
Then, for all $p^iq^j \in Z_{p^{\ell_{b-1}}q^{b-1}}$, we know from~\eqref{h-h}
that
$$
p^{\ell_{b-1}}q^{b-1}-p^iq^j>r(q^{b-1}-q^j)
$$
Multiplying both sides by $q$ yields, for all $p^iq^j\in
qZ_{p^{\ell_{b-1}}q^{b-1}}$
$$
p^{\ell_{b-1}}q^{b}-p^iq^j>r(q^{b}-q^j)
$$
Now, $\ell_b>\ell_{b-1}$ implies that there exists an element $p^iq^j \in
Z_{p^{\ell_{b-1}}q^{b}}$ for which the latter inequation does not hold. By
Lemma~\ref{lemm:Z_m} and the definition of $Z_{qm}$ in~\eqref{eq:Z_qm}, this
element must be $p^{\ell_{b-1} + \lfloor b\rho \rfloor}$, so that
$$
p^{\ell_{b-1}}(q^{b}-p^{\lfloor b\rho\rfloor})\leq r(q^{b}-1)
$$
In fact, note that this inequality does not only hold for $p^{\ell_{b-1}}$;
by definition of $\ell$, it remains valid for $p^{\ell_{b-1}+1}, \dots,
p^{\ell_b-1}$. Accordingly, we get
$$
p^{\ell_{b}-1}(q^{b}-p^{\lfloor b\rho\rfloor})\leq
r(q^{b}-1)<p^{\ell_{b}}(q^{b}-p^{\lfloor b\rho\rfloor})
$$
which proves claim~\eqref{lgap}.
\medskip
It follows from~\eqref{lgap} that, for any $b$ such that $\ell_b>\ell_{b-1}$,
\begin{equation}
\label{A}
\ell_b < \log_p\frac{q^b}{q^b-p^{\lfloor b\rho\rfloor}}
\end{equation}
Using another result of Tijdeman (see~\cite[Theorem~1]{Tijdeman73}), we know
that, as soon as $p^{\lfloor b\rho\rfloor} > 3$, there exists another explicit
constant $C' > 1$ such that
\begin{equation}
\label{B}
q^b-p^{\lfloor b\rho \rfloor} \geq
\frac{p^{\lfloor b\rho \rfloor}}{\left(\log p^{\lfloor b\rho\rfloor}\right)^{C'}}
\end{equation}
Therefore, since $q^b=p^{b\rho}$,~\eqref{A} and~\eqref{B} imply that
\begin{equation}
\label{lbound}
\ell_b < \log_p \frac{q^b(\log p^{\lfloor b\rho\rfloor})^{C'}}{p^{\lfloor
b\rho\rfloor}}
= \{b\rho\} + \frac{C'}{\log p} \log\log p^{\lfloor b\rho\rfloor}
< 1 + \frac{C'}{\log p}\log\log q^b
\end{equation}
Putting all this together, we get the claimed result.
Indeed, let $b$ be the smallest index such that $\ell_{m_\ell+1} = \ell_b$.
Since $\ell_b > \ell_{b-1}$, we have
\begin{equation}
\label{algo}
\ell_{m_\ell+1} = \ell_b
< 1+\frac{C'}{\log p}\log\log q^{b}
\leq 1+\frac{C'}{\log p}\log\log q^{m_\ell+1}
\leq 1+\frac{C'}{\log p}\log\log qm
\end{equation}
Using~\eqref{C} we get
\begin{equation}\label{bm-ml}
\lfloor \log_q m \rfloor - m_\ell < 1 + \frac{1}{\rho} +\frac{C'}{\log
q}\log\log qm =o\left((\log m/p)^{C}\right)
\end{equation}
which implies, using~\eqref{ecart}, that $y_m \sim z_m$ and concludes the proof.
\end{proof}
Note that the above proof mainly relies on the fact that the sequence $\ell$ is
non-decreasing and that, due to the lower bound in \eqref{B} essentially, it
grows very slowly. The theorem of Tijdeman that provides this lower bound
hinges on a result of Fel'dman about linear forms in logarithms. More recent
results of Laurent et al.~\cite{LauMigNes95} about such forms in two logarithms
allow one to make the value of the effective constant $C'$ in \eqref{B} precise.
Nevertheless, this value remains large and is not convenient for computing
$m_\ell$ using~\eqref{bm-ml}, in particular when $m$ is not very large.
The algorithm presented in the next section provides one with an alternative
method.
\section{Computing $y_m$ and $G(m)$}
\label{sec:computing}
Using the mapping $h$, computing $G(m)$ is straightforward as soon as an element
of $Y_m$ is known, in particular $y_m$. In Theorem~\ref{prop:y_m}, we proved
that $y_m = p^{\bar{a}} z_{\bar{m}}$, where $\bar{a}$ and $z_{\bar{m}}$ both
depend on $m_\ell$. Once $m_\ell$ is known, computing $z_{\bar{m}}$ (the
greatest element in $Z_{\bar{m}}$) can be done efficiently with an algorithm
explained in~\cite{BerImb09:dmtcs}. We shall establish a slightly different and
simpler version of that algorithm at the end of this section.
Computing $m_\ell$ requires computing the values of $\ell$ (see~\eqref{mell}).
Theorem~\ref{th:beta} given below asserts that the jump indices of $\ell$ are
denominators of convergents of $\rho$. Furthermore, the relation $\ell_b =
$\lfloor \alpha(b) \rfloor$, see~\eqref{alpha}, also holds for \emph{all}
denominators of the even-index principal convergents of $\rho$ and of their mediants.
This provides an explicit method for computing any term of $\ell$, stated in
Corollary~\ref{coro:ellb}.
Some known facts about the convergents of $\rho$ are thus needed; let us recall
them (see, e.g.,~\cite{Lang95} or~\cite{AllSha03} for more details). Let
$[a_0,a_1,...]$ be the regular continued fraction expansion of $\rho$, and for $i
\geq 0$, denote by $h_i/k_i$ the $i^\mathrm{th}$ principal convergent of
$\rho$.
It is well known that the sequence $(h_i/k_i)_{i\geq0}$ converges to $\rho$ and
satisfies
$$|k_i\rho - h_i| = (-1)^i(k_i\rho - h_i),$$
and
$$
\frac{1}{k_{i}+k_{i+1}} < \left| k_i\rho - h_i \right| < \frac{1}{k_{i+1}}
$$
Given $i\geq 0$, the intermediate convergents of $h_i/k_i$, sometimes
referred to as mediants, are the rational numbers ${h_{i,j}}/{k_{i,j}}$, given
by
\begin{equation}
\label{def2cv}
h_{i,j} = h_i + jh_{i+1}, \quad k_{i,j} = k_i + jk_{i+1}, \quad\text{for } 0 <
j < a_{i+2}
\end{equation}
Let $(h_{2i}/k_{2i})_{i\geq 0}$ denote the sequence of convergents of $\rho$
of even index. We define $(H_n/K_n)_{n\in\mathbb{N}}$ as the increasing sequence of
all convergents of $\rho$ of even index together with their intermediate
convergents.
It is also known (see~\cite[Theorem~2]{Richards81}) that
$(H_n/K_n)_{n \in \mathbb{N}}$ is the best approximating sequence of $\rho$ from below,
that is, its terms are characterized by the following property: For each
$n\in\mathbb{N}$ and integers $h$ and $k$, we have
\begin{equation}
\label{convergents}
\frac{H_n}{K_n} < \frac{h}{k} < \rho \quad \Longrightarrow \quad k > K_n
\end{equation}
We shall need two immediate consequences of~\eqref{convergents}. The first one
is that, while the sequence $(K_n)_{n\in\mathbb{N}}$ increases to infinity, the sequence
$(\{K_n\rho\})_{n\in\mathbb{N}}$ decreases to 0. Indeed, property~\eqref{convergents}
implies that $H_n = \lfloor K_n\rho \rfloor$, so that~\eqref{def2cv} implies,
for $0 \leq j < a_{i+2}$,
\begin{equation}
\label{cvrho}
\{k_{2i,j+1}\rho\}-\{k_{2i,j}\rho\} = k_{2i+1}\rho - h_{2i+1} \in (-1/k_{2i+2},0)
\end{equation}
The second one rephrases the sufficient condition in~\eqref{convergents}: If
$\{b\rho\}\leq\{j\rho\}$ holds for all integers $0 < j \leq b$, then
$b$ is a term of $(K_n)_{n\in\mathbb{N}}$. Indeed, let $h,k$ be such that
$$
\frac {\lfloor b\rho\rfloor}b<\frac hk<\rho
$$
Then, since $h<k\rho$, the above inequalities still hold for $h=\lfloor
k\rho\rfloor$. This implies $\{k\rho\}<\frac{k}{b}\{b\rho\}$. Supposing $k\leq
b$ yields $\{k\rho\}<\{b\rho\}$, which contradicts our hypothesis. Hence
$\lfloor b\rho\rfloor/b$ satisfies~\eqref{convergents}.
We can now establish our main claims. According to~\eqref{lgap}, the values of
$\ell$ are known explicitly at every jump index $j$ of $\ell$. In these cases,
we have $\ell_j = \left\lfloor \alpha(j) \right\rfloor$ where, as already
defined in \eqref{alpha}:
$$
\alpha(b)= \log_p \frac{q^{b}-1}{q^b-p^{\lfloor b\rho\rfloor}}+ \log_p
\frac{q-p}{q-1}
$$
\begin{theo}
\label{th:beta}
Every jump index of $\ell$ is a term of the sequence $(K_n)_{n\in\mathbb{N}}$.
Moreover, for each $K\in(K_n)_{n\in\mathbb{N}}$, we have
$\ell_K = \left\lfloor \alpha(K) \right\rfloor$.
\end{theo}
\begin{proof}
According to Corollary~\ref{coro:gaps}, the jump indices $j_k$ of $\ell$ can
be computed starting from $j_0 = \beta(0)$ and iterating $j_{k+1} =
\beta(\ell_{j_k})$, where $\beta(a) = \min \{j \in \mathbb{N}^* : \{j\rho\} <
\phi_{a,j} \}$ (see~\eqref{eq:phiab} and~\eqref{beta}). Let us fix $a\in\mathbb{N}$.
Since the sequence $(\phi_{a,i})_{i\geq0}$ increases from 0 and the sequence
$(\{K_i\rho\})_{i\geq0}$ decreases to 0, there exists a unique $n$ such that
$\{K_n\rho\} < \phi_{a,K_n}$ while $\{K_i\rho\}\geq\phi_{a,K_i}$ for all $i<n$.
In particular, $K_n\geq \beta(a)$. We next establish that
$K_n=\beta(a)$. This is clear if $n=0$ since $K_0=1$ and $\beta(a)\geq 1$. In
the following, we assume $n\geq 1$.
Let $b=\beta(a)$ for short, and suppose $b<K_n$. Let $K=K_{n-1}$ for short
again, and let $c=\min\{j>0:\{j\rho\}<\{K\rho\}\}$. For all $i<c$ we have
$\{i\rho\}\geq\{K\rho\}>\{c\rho\}$, so that $c\in(K_i)$, which implies $c=K_n$
since $(\{K_i\rho\})$ decreases. Therefore, $b<K_n$ forces
$\{b\rho\}\geq\{K\rho\}$, so that
\begin{equation}
\label{ecarts}
\phi_{a,b}>\{b\rho\}\geq\{K\rho\}\geq\phi_{a,K}.
\end{equation}
Since $\phi_{a,i}$ increases with $i$, we get $K<b<K_n$. Property
\eqref{convergents} implies that $\lfloor b\rho\rfloor/b< \lfloor
K_n\rho\rfloor/K_n$. Since we cannot have $\lfloor K\rho\rfloor/K<\lfloor
b\rho\rfloor/b< \lfloor K_n\rho\rfloor/K_n$ (\cite{Richards81}, (ii) of Lemma~1), it
follows that $\lfloor b\rho\rfloor/b< \lfloor K\rho\rfloor/K$, that is,
$\{b\rho\}/b>\{K\rho\}/K$. Thus
$$
\{b\rho\}-\{K\rho\}>\frac{b-K}{K}\{K\rho\}\geq
\frac{\phi_{a,K}}{K}>\frac{x(1-q^{-K})}{K\log p},
$$
where $x=r/p^a$ for short. Nevertheless,
$$
\phi_{a,b}-\phi_{a,K}<\phi_{a,\infty}-\phi_{a,K}=
\log_p\bigg(1+\frac x{1-x}q^{-K}\bigg)<\frac{x}{(1-x)q^{K}\log p}.
$$
According to \eqref{ecarts} we should thus have
$$
\frac{(1-q^{-K})}{K}<\frac{1}{(1-x)q^{K}},
$$
which would imply, since $x\leq r$,
$$
q-1<\frac{q^{K}-1}{K}<\frac1{1-x}\leq\frac{p(q-1)}{q(p-1)}<q-1.
$$
Therefore, $K_n=\beta(a)$ as claimed, which proves the first assertion of the
Theorem.
Now, let $K\in(K_n)$. On one hand, $\ell_K\leq\lfloor\alpha(K)\rfloor$ holds.
Indeed, there is a unique jump index $K^*$ of $\ell$ such that
$\ell_K=\ell_{K^*}$, and $K^*\leq K$ because $\ell$ is non-decreasing.
According to the first assertion of the Theorem, $K^*\in(K_n)$. Since $(K_n)$
increases and $(\{K_n\rho\})$ decreases, $(\alpha(K_n))$ is increasing, thus
$(\lfloor\alpha(K_n)\rfloor)$ is non-decreasing. Therefore,
$\ell_K=\ell_{K^*}=\lfloor\alpha(K^*)\rfloor\leq\lfloor\alpha(K)\rfloor$. On
the other hand, we also have $\lfloor\alpha(K)\rfloor\leq \ell_K$. Indeed,
letting $a=\lfloor\alpha(K)\rfloor$ for short, we have $a\leq \alpha(K)$, that
is,
$$
p^a\leq \frac{(q-p)(q^K-1)}{(q-1)(q^K-p^{\lfloor K\rho\rfloor})}.
$$
Recalling \eqref{h-h}, this also reads
$$
(p^{a-1}q^K-p^{a-1+\lfloor K\rho\rfloor})\leq r(q^K-1).
$$
Since $\ell_K\geq 0$, we may assume $a\geq 1$. Letting $a'=a+\lfloor
K\rho\rfloor$, the above inequality means that
$$
h(a-1,K)\leq h(a'-1,0).
$$
Finally notice that $p^{a'-1}<p^{a-1}q^K$. Therefore,
$Y_{p^{a-1}q^K}\neq\{p^{a-1}q^K\}$, so that $a-1<\ell_K$ by \eqref{strong}.
Thus $\lfloor\alpha(K)\rfloor=a\leq\ell_K$ as claimed.
\end{proof}
Accordingly, computing $\ell_b$ for an arbitrary $b$ only requires applying
$\alpha$ to the largest term of $(K_n)_{n\in\mathbb{N}}$ not exceeding $b$. More
explicitly:
\begin{coro}
\label{coro:ellb}
Given $b\in\mathbb{N}$, let $s$ and $t$ be the integers defined by
$$
k_{2s} \leq b < k_{2s+2}, \quad\text{and}\quad t = \left\lfloor
\frac{b-k_{2s}}{k_{2s+1}} \right\rfloor
$$
Then $\ell_b=\lfloor\alpha(k_{2s,t})\rfloor$.
\end{coro}
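Corollary~\ref{coro:ellb} translates into the following Python sketch (ours,
continuing the previous listings); the continued fraction of $\rho$ is
expanded in floating point, which is trustworthy for the first dozen partial
quotients only.
\begin{verbatim}
def cf_denominators(n):
    # Denominators k_i of rho's continued fraction
    # (float expansion: trustworthy for small n only).
    a, x = [], rho
    for _ in range(n):
        a.append(int(x))
        x = 1.0 / (x - int(x))
    k = [1, a[1]]
    for i in range(2, n):
        k.append(a[i] * k[i - 1] + k[i - 2])
    return k

def ell_cf(b):
    # ell_b = floor(alpha(k_{2s,t})) per Corollary coro:ellb
    if b < 1:
        return 0
    k = cf_denominators(10)
    s = 0
    while 2 * s + 2 < len(k) and k[2 * s + 2] <= b:
        s += 1
    t = (b - k[2 * s]) // k[2 * s + 1]
    return ell_at(k[2 * s] + t * k[2 * s + 1])

assert [ell_cf(b) for b in (1, 2, 7, 11, 12, 53)] == [0, 2, 2, 2, 5, 7]
\end{verbatim}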
Let us finally turn to the computation of $y_m=p^aq^b$ and $G(m)=h(a,b)$. Of
course, writing these values requires $O(\log m)$ bits, but we show that the $a$
and $b$ exponents of $y_m$ can be obtained with $O(\log\log m)$ operations
involving numbers of $O(\log\log m)$ bits.
According to Lemma~\ref{lemm:Z_m}, for each integer $b\in[0,\log_q m]$ there is
a unique integer $a$ such that $p^aq^b\in Z_m$, given by $a = \left\lfloor
\log_p m - b\rho \right\rfloor$. Let us define, for any $b \in \mathbb{N}$,
$$ \zeta(b) = p^{\left\lfloor \log_p m - b\rho \right\rfloor} q^b $$
In particular, for each $0 \leq b \leq \left\lfloor \log_q m \right\rfloor$,
$\zeta(b)$ is the element in $Z_m$ whose exponent in $q$ is $b$. In the
example given in Figure~\ref{fig:Z_750} for $(p,q) = (2,3)$ and $m = 750$, the
terms of $\zeta$ for $0 \leq b \leq \left\lfloor \log_q m \right\rfloor
= 6$ are $(512, 384, 576, 432, 648, 486, 729)$. Let us now define the
sequence $(b_i)_{i\in\mathbb{N}}$ as follows: $b_0 = 0$ and for $i \geq 0$
$$
b_{i+1} = \min \{b\in\mathbb{N}: b>b_i \text{ and }
\zeta(b)>\zeta(b_i) \}
$$
This sequence is the basis for the algorithm in~\cite{BerImb09:dmtcs} that computes
$z_m$. The following lemma tells us that this same algorithm can also be used
to compute $y_m$.
\begin{lemm}
\label{lemm:basis}
Let $I = \max\{i\in\mathbb{N} : b_i\leq \log_q m \}$ and $J = \max\{i\in\mathbb{N} : b_i\leq
m_\ell\}$, then $\zeta(b_I) = z_m$ and $\zeta(b_J) = y_m$.
\end{lemm}
\begin{proof}
Let $z_m=\zeta(b^*)$. Since $\zeta(b_I)\in Z_m$, we have $\zeta(b^*)\geq
\zeta(b_{I})$. Suppose $\zeta(b^*)> \zeta(b_{I})$, then $b^*\geq b_{I+1}$,
which contradicts the definition of $I$. Thus $\zeta(b^*)= \zeta(b_{I})$,
hence $b^*= b_{I}$. Finally, it follows from Theorem~\ref{prop:y_m} that
$\zeta(b_J)=y_m$.
\end{proof}
In our running example (see Figure~\ref{fig:Z_750}) it can be read directly
from Table~\ref{tab:750} that sequence $(b_i)$ starts with $(0,2,4,6)$. We
thus have $I=3$ and, since $m_\ell=4$ (to be read on Figure~\ref{fig:Z_750}),
$J=2$.
In order to compute successive terms of $(b_i)_{i\in\mathbb{N}}$, we next state a
modified version of the result in~\cite{BerImb09:dmtcs}, which provides a simple
bound on the number of steps required to get $z_m$ from $(b_i)_{i\in\mathbb{N}}$ without
any particular assumption on the partial quotients of $\rho$.
For each principal convergent $h_s/k_s$ of $\rho$, let $\varepsilon_s=|k_s\rho-h_s|$.
We know that $(\varepsilon_s)_{s\in\mathbb{N}}$ is strictly decreasing and
converges towards 0. Besides, we also know that the convergents of even
index approach $\rho$ from below (see~\eqref{convergents}), whereas those
of odd index approach $\rho$ from above. Hence $\varepsilon_{2s} = k_{2s}\rho -
h_{2s}$, while $\varepsilon_{2s+1} = -k_{2s+1}\rho + h_{2s+1}$. Thus we have
$\{k_{2s}\rho\}=\varepsilon_{2s}$ and, for all $0 \leq t \leq a_{2s+2}$,
\begin{equation}
\label{eq:eps}
\{k_{2s,t}\rho\} = \varepsilon_{2s} - t \varepsilon_{2s+1} =
\varepsilon_{2s+2}+(a_{2s+2}-t)\varepsilon_{2s+1}
\end{equation}
Our main theorem regarding the computation of $y_m$ and $G(m)$ can now be
established.
\begin{theo}
\label{th:seqd}
For all $i\geq 0$, let $d_{i+1}=b_{i+1}-b_{i}$. The sequence $(d_{i})_{i\geq
1}$ is non-decreasing; its terms all belong to $(K_n)_{n\in\mathbb{N}}$ and satisfy
the following properties:
\begin{itemize}
\item[(i)] For each $s\in\mathbb{N}$, there exists at most one value of $t$ in
$(0,a_{2s+2})$ such that $k_{2s,t}$ belongs to $(d_{i})_{i\geq 1}$. If
$d_{i+1}=k_{2s,t}$ with $0<t<a_{2s+2}$, then $t=t_s$ is given by
\begin{equation}
\label{ts}
t_s = \left\lceil\frac{\varepsilon_{2s}-\{\log_p m-b_{i}\rho\}}{\varepsilon_{2s+1}}\right\rceil
\end{equation}
\item[(ii)] If $d_{i+1} = k_{2s}$ with either $i=0$ or $d_{i+1}>d_{i}$, then
$k_{2s}$ occurs $n_s$ consecutive times in $(d_{i})_{i\geq 1}$, where
\begin{equation}
\label{ns}
n_s = \left\lfloor \frac {\{\log_p m-b_{i}\rho\}}{\varepsilon_{2s}} \right\rfloor
\end{equation}
Moreover, if either $s=0$ or $d_{i}=k_{2s-2,t}$ with $0<t<a_{2s}$, then
$n_s\leq a_{2s+1}$, else $n_s\leq 1+a_{2s+1}$.
\end{itemize}
\end{theo}
\begin{proof}
Let $r_i=\{\log_p m-b_{i}\rho\}$ for short. For all integers $b$ such that
$b_i < b \leq \log_q m$, we have \footnote{Note that $0\leq y\leq x$ implies
$\lfloor x-y\rfloor +y -\lfloor x\rfloor = \lfloor \{x\}-\{y\}\rfloor+\{y\}$,
and expand~\eqref{diff-zeta} as $\log_p m - b\rho = \log_p m -
(b-b_i)\rho - b_i\rho$.}
\begin{align}
\log_p\zeta(b)-\log_p\zeta(b_i)
= & \lfloor\log_p m-b\rho\rfloor +b\rho - \lfloor\log_p m-b_i\rho\rfloor
-b_i\rho \label{diff-zeta}\\
= & \lfloor r_i-\{(b-b_i)\rho\}\rfloor+\{(b-b_i)\rho\}\notag
\end{align}
Thus $\zeta(b)>\zeta(b_i)$ if, and only if, $\{(b-b_i)\rho\}\leq r_i$. For
$b = b_{i+1}$, this reads $\zeta(b_{i+1}) > \zeta(b_i) \iff
\{d_{i+1}\rho\} \leq r_i$. Therefore, any positive integer $c < d_{i+1}$
satisfies $\{c\rho\}>r_i$, hence
\begin{equation}
\label{di}
d_{i+1} = \min \{ d\in\mathbb{N}^* : \{d\rho\}\leq r_i\}
\end{equation}
Using the second consequence of~\eqref{convergents}, $d_{i+1}$
is thus a term of $(K_n)_{n\in\mathbb{N}}$. Similarly, we next get $\{d_{i+2}\rho\}\leq
r_{i+1}$. Since $r_i \geq \{d_{i+1}\rho\}$ from~\eqref{di}, observe that
$r_{i+1} = \{\log_p m - b_i\rho - d_{i+1}\rho \} = r_i - \{d_{i+1}\rho\} <
$r_i$, so that $d_{i+2}\geq d_{i+1}$. Thus the first claims
regarding the terms of $(d_{i})_{i\geq 1}$ are proved.
\medskip
Let us now prove the properties in (i). Assume $d_{i+1}=k_{2s,t}$.
Since $\{k_{2s,t}\rho\}= \varepsilon_{2s}-t\varepsilon_{2s+1}$, we get using~\eqref{di}
again
$$
\varepsilon_{2s}-t\varepsilon_{2s+1}\leq r_i < \varepsilon_{2s}-(t-1)\varepsilon_{2s+1},
$$
thus $t \geq (\varepsilon_{2s} - r_i)/\varepsilon_{2s+1} > t-1$, hence~\eqref{ts}. It
follows that $r_{i+1} = r_i-\{k_{2s,t}\rho\} < \varepsilon_{2s+1}$, and since
the minimum value of $\{k_{2s,t'}\rho\}$, reached for $t'=a_{2s+2}-1$, is
$\varepsilon_{2s+2}+\varepsilon_{2s+1}$, we must have $d_{i+2}>k_{2s,a_{2s+2}-1}$, thus
$d_{i+2}\geq k_{2s+2}$.
\medskip
Turning to (ii), assume that $d_{i+1}=k_{2s}$. Hence $r_i\geq
\varepsilon_{2s}$, so that $n_s$ given in~\eqref{ns} is positive. If $n_s>1$ we get
$r_{i+1} = r_i-\varepsilon_{2s}\geq \varepsilon_{2s}$, thus $d_{i+2}=k_{2s}$. By iterating,
it follows that $r_{i+j-1}\geq\varepsilon_{2s}$ and $d_{i+j}=k_{2s}$ for each $j\leq
n_s$, and that $r_{i+n_s}<\varepsilon_{2s}$. Thus $d_{i+n_s+1}>k_{2s}$.
Finally assume without loss of generality that either $i=0$ or
$d_{i}<d_{i+1}$. Thus $s=0$ implies $i=0$. Since $r_0=\{\log_p m\}<1$ and
$\varepsilon_0=\{\rho\}$, $n_0 \leq \lfloor 1/\{\rho\} \rfloor = a_1$. If $s>0$,
since $d_{i+1}> k_{2s-2,a_{2s}-1}$ we get, using~\eqref{eq:eps}
$$
r_i<\varepsilon_{2s}+\varepsilon_{2s-1}=(a_{2s+1}+1)\varepsilon_{2s}+\varepsilon_{2s+1},
$$
thus $n_s\leq a_{2s+1}+1$. Finally, when $d_{i} = k_{2s-2,t}$ with
$0<t<a_{2s}$,~\eqref{ts} shows that $r_i < \varepsilon_{2s-1}$, which is tighter and
yields $n_s \leq a_{2s+1}$ in the same way.
\end{proof}
We now describe how Theorem~\ref{th:seqd} can be turned into an algorithm
that computes sequence $(b_{i})_{i\in\mathbb{N}}$. As seen in Lemma~\ref{lemm:basis}
the same algorithm can be used to compute either $z_m$ or $y_m$. For that
purpose, we use an additional input parameter, denoted $B_m$, standing for
$\left\lfloor \log_q m \right\rfloor$ or $m_\ell$ respectively. The
algorithm works as follows. Starting from values $s = 0$, $i_0=0$, $b_0=0$,
$r_0=\{\log_p m\}$, iterate:
\begin{enumerate}
\setlength{\itemsep}{2ex}
\item \textbf{if} $(r_{2s}\geq\varepsilon_{2s})$ \textbf{then}\\[-1ex]
\begin{enumerate}
\setlength{\itemsep}{2ex}
\item $n_s = \left\lfloor r_{2s}/\varepsilon_{2s} \right\rfloor$; \quad
$r_{2s+1} = r_{2s} - n_s\varepsilon_{2s}$; \quad
$i_{2s+1} = i_{2s} + n_s$;
\item \textbf{for} $i_{2s}<i\leq i_{2s+1}$ \textbf{do} \quad $d_i = k_{2s}$;
\quad $b_i=b_{i-1}+k_{2s}$;
\item \textbf{if} $(b_{i_{2s+1}}>B_m)$ \textbf{then}
\textbf{return} $b_{i_{2s}} + \left\lfloor (B_m - b_{i_{2s}})/k_{2s}
\right\rfloor k_{2s}$;\\[-1ex]
\end{enumerate}
\textbf{else} $r_{2s+1} = r_{2s}$; \quad $i_{2s+1}=i_{2s}$;
\item \textbf{if} $(r_{2s+1} \geq \varepsilon_{2s} - \varepsilon_{2s+1})$ \textbf{then}\\[-1ex]
\begin{enumerate}
\setlength{\itemsep}{2ex}
\item $t_s = \left\lceil (\varepsilon_{2s} - r_{2s+1})/\varepsilon_{2s+1} \right\rceil$;
\item
$r_{2s+2} = r_{2s+1} - \varepsilon_{2s} + t_s\varepsilon_{2s+1}$; \quad
$i_{2s+2} = i_{2s+1}+1$;
\item $d_{i_{2s+2}} = k_{2s,t_s}$; \quad $b_{i_{2s+2}} = b_{i_{2s+1}} + k_{2s,t_s}$;
\item \textbf{if} $(b_{i_{2s+2}} > B_m)$ \textbf{then return} $b_{i_{2s+1}}$\\[-1ex]
\end{enumerate}
\textbf{else} $r_{2s+2} = r_{2s+1}$; \quad $i_{2s+2}=i_{2s+1}$;
\end{enumerate}
\begin{coro}
The above algorithm requires at most $2+\lfloor\log_2\log_q m\rfloor$ iterations.
\end{coro}
\begin{proof}
Let $z_m=\zeta(b_I)$ and let $d_I=k_{2S,t}$, with $S\geq 0$ and either $t=0$
or $t=t_S$. Then $k_{2S}\leq b_I\leq \log_q m$. But the relation $k_{i+2}=
a_{i+2}k_{i+1}+k_i$ together with $k_0=1$ gives $k_{2i+2}\geq 2k_{2i}$, hence
$k_{2i}\geq 2^i$, with equality only if $i\leq 1$. Thus $S\leq \log_2\log_q m$.
Finally, checking the stopping condition requires $n_{S+1}$ to be computed in
the case $t=t_S$.
\end{proof}
Observe that the precision required is about $\log_2\log_q m$ bits for the
floating point calculations with fractional parts. Alternatively, the
computation may be carried out with integers, by approximating $\rho$ by the
convergent $H/K$, where $K$ is the greatest element in $(K_n)_{n\in\mathbb{N}}$ not
exceeding $\log_q m$, and by performing the operations modulo $K$.
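Here is a Python transcription (ours, continuing the previous listings) of the
above algorithm; in line with the remark on precision, it is reliable only
while the $\varepsilon_{2s}$ remain well above the machine precision.
\begin{verbatim}
def cf_errors(n):
    # k_i and eps_i = |k_i rho - h_i| for the convergents of rho.
    a, x = [], rho
    for _ in range(n):
        a.append(int(x))
        x = 1.0 / (x - int(x))
    k, h = [1, a[1]], [a[0], a[0] * a[1] + 1]
    for i in range(2, n):
        k.append(a[i] * k[i - 1] + k[i - 2])
        h.append(a[i] * h[i - 1] + h[i - 2])
    return k, [abs(k[i] * rho - h[i]) for i in range(n)]

def last_b(m, B):
    # Largest b_i <= B: B = floor(log_q m) gives the q-exponent of
    # z_m, B = m_ell that of y_m (Lemma lemm:basis).
    k, eps = cf_errors(12)
    b, r = 0, math.log(m, p) % 1.0
    for s in range(5):
        if r >= eps[2 * s]:                      # step 1
            n_s = int(r // eps[2 * s])
            if b + n_s * k[2 * s] > B:
                return b + (B - b) // k[2 * s] * k[2 * s]
            b += n_s * k[2 * s]
            r -= n_s * eps[2 * s]
        if r >= eps[2 * s] - eps[2 * s + 1]:     # step 2
            t = math.ceil((eps[2 * s] - r) / eps[2 * s + 1])
            if b + k[2 * s] + t * k[2 * s + 1] > B:
                return b
            b += k[2 * s] + t * k[2 * s + 1]
            r += t * eps[2 * s + 1] - eps[2 * s]
    return b

assert last_b(750, 6) == 6   # z_750 = zeta(6) = 729
assert last_b(750, 4) == 4   # y_750 = zeta(4) = 648
\end{verbatim}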
The above algorithm requires $m_\ell$ to be known in order to output $y_m$. If
we know the largest $n$ such that $K_n+\lfloor\alpha(K_n)\rfloor/\rho\leq\log_p
m$, we simply have $m_\ell=\lfloor\log_p
m-\lfloor\alpha(K_n)\rfloor/\rho\rfloor$. Indeed,
$\ell_b=\ell_{K_n}=\lfloor\alpha(K_n)\rfloor$ for all $b\in[K_n, K_{n+1})$. In
order to compute this $K_n$, it suffices to:
\begin{enumerate}
\item compute the largest $s$ such that $k_{2s}+\lfloor\alpha(k_{2s})\rfloor/\rho\leq\log_p m$;
\item compute the largest $t$ such that $k_{2s,t}+\lfloor\alpha(k_{2s,t})\rfloor/\rho\leq\log_p m$.
\end{enumerate}
Task 1 requires at most $2+\log_2\log_q m$ steps, each step
essentially consisting in computing next values of $k_{2s}$ and
$\alpha(k_{2s})$. Task 2 may be done by using a binary search of $t$ in
$[0,a_{2s+2})$, which requires $\log_2 a_{2s+2}$ similar steps. Whereas we are
sure that $a_{2s}<\log_q m$ because $a_{2s}<k_{2s}$, it may happen that
$a_{2s+1}$ or $a_{2s+2}$ exceed $\log_q m$. In the first case we conclude that
$t=0$, since $\log_q m<k_{2s,1}=k_{2s}+k_{2s+1}$. In the second case, we may
simply use a binary search of $t$ in $[0,\lfloor (\log_q
m-k_{2s})/k_{2s+1}\rfloor]$ because $k_{2s,t}\leq \log_q m$ must hold. We have
thus established:
\begin{prop}
Computing $m_\ell$ can be done by computing $K+\lfloor\alpha(K)\rfloor/\rho$
for at most $2+2\lfloor\log_2\log_q m\rfloor$ values of $K$ in the sequence
$(K_n)$.
\end{prop}
The final remaining problem is that computing $\alpha(b)$ may be expensive.
Recall that
\begin{align}\label{alphabis}
\alpha(b)= \log_p \frac{q^{b}-1}{q^b-p^{\lfloor b\rho\rfloor}}+ \log_p \frac{q-p}{q-1}
\end{align}
We next show that a fitting approximation of $\alpha$ is given by
\begin{align}\label{alpha+}
\alpha^+(b)=\log_p\frac{q-p}{(q-1)\log p}+\log_p\frac 1{\{b\rho\}}+\frac12\{b\rho\}.
\end{align}
\begin{prop}
For all $n\in\mathbb{N}$, we have
$$
0 < \alpha^+(K_n)-\alpha(K_n) < \frac{\log p}{6}\{K_n\rho\}^2+\frac{1}{(q^{K_n}-1)\log p}
$$
\end{prop}
\begin{proof}
Let $\delta_n=\alpha^+(K_n)-\alpha(K_n)$ and $u_n=\frac 12\{K_n\rho\}\log p$.
According to \eqref{alphabis} and \eqref{alpha+},
$$
\delta_n=\log_p\frac{\sinh u_n}{u_n}-\log_p(1-q^{-K_n})
$$
Observe that $\delta_n>0$ because $\sinh u_n> u_n>0$ and $0<q^{-K_n}<1$.
Furthermore, $-\log(1-x)<x/(1-x)$ for $0<x<1$, thus
$-\log_p(1-q^{-K_n})<1/\left((q^{K_n}-1)\log p\right)$.
Finally, $(1-e^{-x})/x<1-x/2+x^2/6$ for $0<x$, so that
$$
\log\frac{\sinh u_n}{u_n}=u_n+\log\frac{1-e^{-2u_n}}{2u_n}<\frac{2u_n^2}3
$$
Hence the claimed inequalities.
\end{proof}
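The bound is easily checked numerically at the first terms of
$(K_n)_{n\in\mathbb{N}}$; the following Python sketch (ours, reusing \texttt{alpha} and
\texttt{rho} from the earlier listings) does so for $(p,q)=(2,3)$.
\begin{verbatim}
def alpha_plus(b):
    # alpha^+ of (alpha+); log p is the natural logarithm of p
    f = (b * rho) % 1.0
    return (math.log((q - p) / ((q - 1) * math.log(p)))
            - math.log(f)) / math.log(p) + f / 2

for K in (1, 2, 7, 12, 53):
    f = (K * rho) % 1.0
    delta = alpha_plus(K) - alpha(K)
    bound = math.log(p) / 6 * f * f + 1 / ((q**K - 1) * math.log(p))
    assert 0 < delta < bound
\end{verbatim}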
\section{Concluding remark}
\label{sec:conclusion}
When $m = q^N$ for some positive integer $N$, applying Theorem~\ref{th:seqd} to
the search of $z_m$ provides one with a representation of $N$ as a finite
sum of terms of $(K_n)_{n\in\mathbb{N}}$. This representation is similar in spirit to
Ostrowski's number system~\cite{Ostrowski22, Berthe01a}. For example, take
$\rho=\log_2 3$, whose partial quotients start with $[1,1,1,2,2,3,1,...]$. In
this case, the sequence $(K_n)_{n\in\mathbb{N}}$ starts with $(1,2,7,12,53,...)$ and
the sequence $(k_n)_{n\in\mathbb{N}}$ starts with $(1,1,2,5,12,41,53,...)$. Therefore, we
get a representation of 6 given by $6 = 3K_1 = 3k_2$, whereas
$6 = k_1+k_3$ in Ostrowski's representation. Studying this novel representation
is beyond the scope of this paper and should be the topic of future work.
To conclude, the authors wish to thank an anonymous referee for her/his stimulating comments,
which, in particular, led them to discover and prove Theorem~\ref{th:seqd}.
\section{Introduction}
\label{Intro}
\z We present an outline of new methods for determining the position and
the type of singularities of certain solutions of difference
equations \cite{[CK5]} and use this information to perform Painlev\'e
analysis on them. The approach relies on advances in exponential
asymptotics and the theory of analyzable functions
\cite{[Br],[BR2],[C3],[C1], [C2], [CK2],[CK1],[Invent],[CT],
[CT2],[E3],[E1],[KS],[STL]}. The main concepts are discussed first.
{\em Analyzable functions}. Introduced by Jean
\'Ecalle, these are mostly analytic functions which at singular points
are completely described by {\em transseries}, much in the same way as
analytic functions are represented at regular points by convergent
series. In contrast with analytic functions (which are not closed under
division) and with meromorphic functions (which fail to be stable under
integration and composition), analyzable ones are conjectured to be
closed under all operations; whence the grand picture of this theory,
that all functions of natural origin are analyzable. In particular,
solutions of many classes of differential and difference equations have
been shown to be analyzable.
{\em Transseries}. Also introduced by J. \'Ecalle, transseries represent
the ``ultimate'' generalization of Taylor series. Transseries are formal
asymptotic combinations of power series, exponentials and logarithms and
contain a wealth of information not only about local but also about
global behavior of functions. One of the simplest nontrivial examples of
a transseries is
\begin{equation}\label{l1}\sum_{k=0}^{\infty}\frac{k!}{x^{k+1}}+Ce^{-x}\ \ \ \ (x\rightarrow +\infty)\end{equation}
\z What distinguishes the expression (\ref{l1}) from a usual asymptotic
expansion is the presence, beyond all orders of the principal series, of an
exponentially small term which cannot be captured by classical Poincar\'e
asymptotics. More general transseries arising in generic ordinary
differential and difference equations are doubly infinite series whose terms
are powers of $x$ multiplied by exponentials (see e.g. (\ref{transs})). These
transseries can be determined algorithmically and usually diverge factorially,
but can be shown to be Borel summable in a suitable sense in some sector. In
this sector the function to which the transseries sums has good analyticity
properties and on the edges of the sector typically singularities appear.
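To make the ``beyond all orders'' point concrete, here is a small numerical
experiment (ours, purely illustrative): summing the divergent series in
(\ref{l1}) up to its smallest term leaves an irreducible ambiguity of order
$e^{-x}$, precisely the size of the term $Ce^{-x}$ invisible to the power
series.
\begin{verbatim}
import math

x, N = 20.0, 40
terms = []
t = 1.0 / x                      # k = 0 term: 0!/x
for k in range(N):
    terms.append(t)
    t *= (k + 1) / x             # next term: (k+1)!/x^(k+2)

# The terms decrease until k is about x, then diverge; the smallest
# term is roughly sqrt(2*pi/x)*exp(-x) by Stirling's formula.
smallest = min(terms)
stirling = math.sqrt(2 * math.pi / x) * math.exp(-x)
assert 0.5 < smallest / stirling < 2.0
\end{verbatim}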
{\em The correspondence between the formal universe and actual
functions}. Envisaged by \'Ecalle to be a ``total'' isomorphism
implemented by generalized Borel summation, the correspondence has been
established rigorously in a number of problems including ODEs,
difference equations and PDEs.
\section{Difference equations: the isolated movable singularity
property (IMSP)} The problem of integrability has been and is a
subject of substantial research. In the context of differential
equations there exist numerous effective criteria of integrability,
among which a crucial role is played by the Painlev\'e test (for
extensive references see e.g. \cite{Kr-Cl}).
An analog of the Painlev\'e property for difference equations turns out
to be more delicate to define.
In the context of solvability, various methods have been proposed by
Joshi \cite{Joshi}, Ablowitz et al. \cite{[AHH]}, Ramani and Grammaticos
(see references in \cite{RG}), Conte and Musette \cite{[CM]}. See also
\cite{[AHH]} for a comparative discussion of the various approaches in
the literature. None of these is a proper extension of the Painlev\'e
test. One difficulty resides in continuing the solutions $x(n)$ of a
difference equation, which are defined on a {\em discrete set}, to the
complex plane of the independent variable $n$ in a natural and effective
fashion. The embedding of $x(n)$ must be done in such a way that {\em
properties are preserved}. It is important for the effectiveness of the
analysis that this embedding $x(n)\sqsubset x(z)$ is natural,
constructive and unique under proper conditions.
It is of course crucial that we are given infinitely many values of the
function $x(n)$; since the accumulation point of $n$ is infinity, the
behavior of $x$ at $\infty$ is key, and the question boils down to
when infinitely many values determine a function, and in which class.
To illustrate a point we start with a rather trivial example. Assume
that $x(n)$ is expressed as a convergent power series in powers of $1/n$
$$
x(n)=\sum_{k=1}^{\infty}c_kn^{-k}
$$
We would then naturally define $x(n)\sqsubset x(z)$ by
$$
x(z)=\sum_{k=1}^{\infty}c_kz^{-k}
$$
\z for large enough $z$. Uniqueness is ensured by the analyticity
at $\infty$ of $x(z)$. Still by analyticity, $\sqsubset$ preserves all
properties.
We cannot rely merely on analytic continuation since there are very few
classes of equations where solutions are given by convergent power
series. However, as discovered by \'Ecalle and proved in detail very
recently in a general setting by Braaksma \cite{[BR2]}, a wide class of
arbitrary order difference equations admit formal solutions as Borel
summable {\em transseries}. The class considered by Braaksma is of the
form
\begin{equation}
\label{DE}
\mathbf{x}_{n+1}=\mathbf G(\mathbf x_n,n)=\left(\hat\Lambda +\frac{1}{n}\hat{A}\right)\mathbf{x}_n +\mathbf{g}(n,\mathbf{x}_n)
\end{equation}
\z where $\mathbf{G}$ is analytic at $\infty$ in $n$ and at $0$ in $\mathbf{x}_n$,
under genericity assumptions \cite{[BR2]}. In particular,
a nonresonance condition is imposed: for $\mathbf{k}\in\mathbb{N}^n$,
\begin{equation}\label{condmu}
\mu_m=\mathbf{k}\cdot\boldsymbol\mu \mod 2\pi i
\quad\Longleftrightarrow\quad \mathbf{k}=\mathbf{e}_m
\end{equation}
\z Transseries solutions
for these equations have many similarities to transseries solutions of
differential equations. With some $m_1$, $0\le {m_1}\le n$,
\begin{equation}
\label{transs}
\tilde{\bf x}(n)
=\sum_{{\bf k}\in\mathbb{N}^{m_1}}{\bf C}^{\bf k} e^{-{\bf k}\cdot
\boldsymbol{\mu}n}
n^{{\bf k}\cdot {\bf a}}\tilde{\bf y}_\mathbf{k}(n)
\end{equation}
\z In (\ref{transs}) $\boldsymbol{\mu}=(\mu_1,...,\mu_{m_1})$ and
$\mathbf{a}= (a_1,...,a_{m_1})$ depend only on the recurrence (they are
the eigenvalues of $\hat{\Lambda}, \hat{A}$, respectively), ${\bf C}$ is
a free parameter and $\tilde{\bf y}_\mathbf{k}$ are formal series in negative
integer powers of $n$, independent of $\mathbf{C}$. The number $m_1$ is
chosen so that all the exponentials in (\ref{transs}) tend to zero in
the chosen sector.
Braaksma showed that $\tilde{\bf y}_\mathbf{k}(n)$ are Borel summable
uniformly in $\mathbf{k}$. Let $\mathbf{Y}_\mathbf{k}=\mathcal{L}^{-1}
\tilde{\bf y}_\mathbf{k}(n)$. Then ${\bf Y}_{\mathbf{k}}(p)$ are analytic
in a neighborhood of $\mathbb{R}^+$ (in fact in a larger sector). Defining
\begin{equation}\label{-2}
{\bf y}_{\mathbf{k}}=\int_0^{\infty}e^{-np}{\bf
Y}_{\mathbf{k}}(p)dp\end{equation}
\z we have uniform estimates $|\mathbf y_\mathbf
k|<A^\mathbf k$ and thus the series
\begin{equation}
\label{eq:transsS1}
{\bf x}(n)
=\sum_{{\bf k}\in\mathbb{N}^{m_1}}{\bf C}^{\bf k} e^{-{\bf k}\cdot
\boldsymbol{\mu} n}
n^{{\bf k}\cdot {\bf a}}{\bf y}_\mathbf{k}(n)
\end{equation}
\z is classically convergent for large enough $n$. Braaksma showed that
${\bf x}(n)$ is an actual solution of (\ref{DE}). It is natural to
replace $n$ by $z$ in (\ref{-2}) and define:
\begin{equation} \label{eq:transsS}
{\bf x}(z)
=\sum_{{\bf k}\in\mathbb{N}^{m_1}}{\bf C}^{\bf k} e^{-{\bf k}\cdot
\boldsymbol{\mu} z}
z^{{\bf k}\cdot {\bf a}}{\bf y}_\mathbf{k}(z)
\end{equation}
\z If $z$ and all constants are real and $\mu_i<0$, the functions
(\ref{eq:transsS}) are special cases of \'Ecalle's {\em analyzable}
functions. As explained before we are allowing for $z$ and constants to
be complex, under restriction $\Re({\bf k}\cdot \boldsymbol{\mu} z)>0$.
It is crucial that the values of $\mathbf{x}(n)$ for all large enough $n$ uniquely
determine the expansion. In \cite{[CK5]} it is shown that under suitable
conditions, {\em two distinct analyzable functions cannot agree on a set of
points accumulating at infinity.} Below is a simplified version of a theorem
in \cite{[CK5]}.
\begin{Theorem}[\cite{[CK5]}] {Assume
$$
\label{eq:nonres}
\left(\mathbf{Z}\cdot \boldsymbol\mu=0 \mod 2\pi i \ \ \mbox{with}\ \ \mathbf{Z}\in\mathbb{Z}^p
\right)
\Leftrightarrow \mathbf{Z}=0
$$
and $ {\bf x}(z)=\sum_{{\bf k}\in\mathbb{N}^{m_1}}{\bf C}^{\bf k} e^{-{\bf
k}\cdot \boldsymbol{\mu} z} z^{{\bf k}\cdot {\bf a}}{\bf y}_\mathbf{k}(z)$.
If $ {\bf x}(n)=0$ for all large enough $n\in\mathbb{N}$, then $
{\bf x}(z)$ is identically zero.}
\end{Theorem}
Analyzable functions behave in most respects as analytic functions. Among the
common properties, particularly important is the uniqueness of the extension
from sets with accumulation points, implying the principle of permanence of
relations. Under the assumptions \cite{[BR2]} that ensure Borel summability,
we have the following.
\begin{Definition}
\label{D3} If
$${\bf x}(n)
=\sum_{{\bf k}\in\mathbb{N}^{m_1}}{\bf C}^{\bf k} e^{-{\bf k}\cdot
\boldsymbol{\mu} n}
n^{{\bf k}\cdot {\bf a}}{\bf y}_\mathbf{k}(n)$$
\z then we call
$${\bf x}(z)
=\sum_{{\bf k}\in\mathbb{N}^{m_1}}{\bf C}^{\bf k} e^{-{\bf k}\cdot
\boldsymbol{\mu} z}
z^{{\bf k}\cdot {\bf a}}{\bf y}_\mathbf{k}(z)$$
\z the {\em analyzable embedding} of $\mathbf{x}(n)$
from $\mathbb{N}$ to a sector in $\mathbb{C}$.
\end{Definition}
\z Having now a suitable procedure of analytic continuation, we can
define the isolated movable singularity property, an extension of the
Painlev\'e property to difference equations, in a natural fashion. In
analogy with the case of differential equations we require that all
solutions are free from ``bad'' movable singularities.
\begin{Definition}
A
difference equation has the IMSP if all {\em movable}
singularities of all its solutions are {\bf isolated.}
\end{Definition}
{\bf Notes.} (i) We use the common convention that isolated singularities
exclude branch points, clusters of poles, and barriers of singularities.
(ii) To determine the singularities, the transasymptotic matching
methods introduced for differential equations in \cite{[Invent]} can be
extended with few changes to difference equations.
\section{Classification of some difference
equations with IMSP. Solvability.}
We look at autonomous difference equations of the form $x_{n+1}=G(x_n)$
where $G$ is meromorphic and has attracting fixed points. A more general
analysis is given in \cite{[CK5]}. We then write
\begin{equation}
\label{eq:01}
x_{n+1}=G(x_n):=ax_n+F(x_n)
\end{equation}
\z and restrict for simplicity to the case $F(0)=F'(0)=0$ and
$0<|a|<1$. There is a one-parameter family of solutions presented as
simple transseries {\em convergent} for large enough $n$, of the form
\begin{equation}
\label{eq:2}
x_n=x_n(C)=\sum_{k=1}^\infty e^{nk\ln a} C^k D_k
\end{equation}
\z for given values of $D_k$, independent of $C$. The analyzable
embedding of $x$, cf. Definition \ref{D3}, reads
\begin{equation}
\label{eq:2z} x(z)=x(z;C)=\sum_{k=1}^\infty e^{z k\ln a} C^k D_k
\end{equation}
\z which is analytic for large enough $z$. To look for the IMSP, we find the
properties of $x(z)$ beyond the domain of convergence of (\ref{eq:2z}), and
find the singular points of $x(z)$.
{\bf Note.} Although (\ref{eq:2z}) has one continuous parameter,
equation (\ref{eq:01}) is nonlinear and may admit further solutions.
This issue is addressed in \cite{[CK5]}.
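For a concrete quadratic instance $G(x)=ax+x^2$ (our illustration, not
taken from \cite{[CK5]}), substituting (\ref{eq:2}) and matching powers
of $t=Ca^n$ gives $D_k(a^k-a)=\sum_{i+j=k}D_iD_j$ with $D_1=1$; the
convergence of (\ref{eq:2}) can then be checked numerically, as in the
following minimal sketch.
\begin{verbatim}
# Sketch: coefficients D_k of the convergent transseries for the
# quadratic map G(x) = a x + x^2, and a check of x_{n+1} = G(x_n).
a, K = 0.5, 40
D = [0.0, 1.0] + [0.0] * (K - 1)      # D_1 = 1 fixes the scale of C
for k in range(2, K + 1):
    D[k] = sum(D[i] * D[k - i] for i in range(1, k)) / (a**k - a)

def x(n, C=0.05):                      # keep C a^n inside convergence
    t = C * a**n
    return sum(D[k] * t**k for k in range(1, K + 1))

for n in range(5):                     # residuals at machine precision
    print(n, abs(x(n + 1) - (a * x(n) + x(n) ** 2)))
\end{verbatim}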
\subsection{Embedding versus properties of the conjugation map} By the
Poincar\'e conjugation theorem applied to $x_{n+1}=G(x_n)$ there exists
a unique map $\phi$ with the properties
\begin{equation}
\label{assum1}
\phi(0)=0,\ \ \phi'(0)=1\ \ \mbox{and} \ \phi \ \mbox{analytic at }\ 0
\end{equation}
\z and such that $x_n=\phi(X_n)$ implies $X_{n+1}=aX_n$. The map $\phi$
is a conjugation map of $x_{n+1}$ with its linearization $X_{n+1}$. We
have $X_n=a^n X_0=Ca^n$.
We obtain an extension of $x$ from $\mathbb{N}$ to $\mathbb{C}$ through
\begin{equation}
\label{eq:con1}
x_n=\phi(Ca^n)\ \sqsubset\ x(z):=\phi(Ca^z)
\end{equation}
Then the conjugation map satisfies the equation
$$
\label{eq:conjug}
\phi(az)=G(\phi(z))=a\phi(z)+F(\phi(z))
$$
As expected by the uniqueness of the embedding in Definition~\ref{D3} we
have the following.
\begin{Lemma}\label{R1}
(i) For equations of the type (\ref{eq:01}) under the given
assumptions, the embeddings through analyzability and via the
conjugation map, cf.~(\ref{eq:con1}), agree.
(ii) The solutions $x(z;C)$ have only isolated movable singularities
iff $\phi$ has only isolated singularities.
\end{Lemma}
\z {\bf Corollary} {\em A necessary condition for (\ref{eq:01}) to have the IMSP
is that the conjugation map $\phi$, around every stable fixed point of
$G$ and of those of its iterates which are meromorphic (of the form
$G^{[m]}=G\circ\cdots\circ G$ $ m$ times) extends analytically into the
complex plane, except for isolated singularities.}
\begin{figure}{\label{Fig.1}}
\begin{picture}(500,250)\hskip -3cm
\epsfig{file=050.ps, height=10.10cm}
\end{picture}
\vskip -2.1cm
\begin{picture}(100,000)
\epsfig{file=090.ps, height=5cm}
\end{picture}
\vskip -0.4cm \hskip 6.5cm
\begin{picture}(100,000)
\epsfig{file=100.ps, height=5cm}
\end{picture}
\vskip 3cm
\caption{Julia sets for $G=ax(1-x)$ for $a=0.5$, $a=0.9$ and $a=1$ respectively. The set $\mathcal{K}_p$ is the interior of these curves (graphs generated using C and MapleV)}
\end{figure}
\subsection{Autonomous equations with the IMSP}
In (\cite{[CK5]}) we classify the equations of the form (\ref{eq:01})
with respect to the IMSP. In a way analogous to the case of first-order
ODEs, only Riccati equations have this property. On the other hand,
these equations are explicitly solvable, and thus autonomous equations
of the first order which have the IMSP can be solved in closed form.
\begin{Theorem}\label{T1}
The equation (\ref{eq:01}) under the given assumptions {\em fails} to
have the IMSP {\em unless}, for some $c\in\mathbb{C}$,
\begin{equation}
\label{(**)}
G(z)=\frac{az}{1+cz}
\end{equation}
\z In case (\ref{(**)}) we have $1/x_n=a^{-n}(C-c/(a-1))+c/(a-1)$.
\end{Theorem}
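Indeed, for (\ref{(**)}) the substitution $u_n=1/x_n$ linearizes the
recurrence,
$$
u_{n+1}=\frac{1}{a}\,u_n+\frac{c}{a},
$$
\z with general solution
$u_n=a^{-n}\left(u_0-\frac{c}{a-1}\right)+\frac{c}{a-1}$; the formula in
Theorem~\ref{T1} follows with $C=u_0$. Correspondingly, the conjugation
map is the M\"obius function $\phi(z)=z/\left(1+\frac{c}{a-1}\,z\right)$,
meromorphic in $\mathbb{C}$ with a single pole, consistent with
Lemma~\ref{R1}.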
It is also very interesting to see that in the case of failure of the
IMSP the analytic properties of the solutions preclude the existence of
nice constants of motion. This is discussed in the next section.
\subsection{Case of failure of IMSP} We illustrate this situation when
$G$ is a polynomial. The surprising conclusion is that for polynomial $G$
without the IMSP, constants of motion (defined as functions $C(x,n)$
constant along trajectories) develop barriers of singularities along
some fractal closed curves $\partial \mathcal{K}_p$, see below. In a
neighborhood of the origin, the function $\phi$ is invertible, and thus
for small $x$ we derive from $x_n=\phi(a^nC)$ that $ C=a^{-n}Q(x_n)$
where $Q=\phi^{-1}$.
In Fig.~1 we depict the Julia set $\partial \mathcal{K}_p$ of a simple
map. In that case, the compact region bounded by $\partial \mathcal{K}_p$
consists of the initial conditions for which the
iteration converges to zero. The Julia set is a closed curve of
nontrivial fractal dimension. For a comprehensive discussion of Julia
sets and iterations of rational maps see \cite{Beardon}.
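Figure~1 can be reproduced, up to resolution, by direct iteration; the
following minimal sketch (ours; the grid, iteration count, and
thresholds are arbitrary choices) marks the basin $\mathcal{K}_p$ for
$a=0.9$.
\begin{verbatim}
# Sketch: basin K_p of the attracting fixed point 0 of G(x)=a x(1-x);
# its boundary is the Julia set of Fig. 1.
import numpy as np

a = 0.9
x = np.linspace(-1.0, 2.0, 900)
y = np.linspace(-1.2, 1.2, 720)
Z = x[None, :] + 1j * y[:, None]
alive = np.ones(Z.shape, dtype=bool)
for _ in range(300):
    Znew = a * Z * (1 - Z)
    alive &= np.abs(Znew) < 1e6        # mark escaped orbits
    Z = np.where(alive, Znew, 2.0)     # park dead orbits at a finite value
basin = alive & (np.abs(Z) < 1e-4)     # orbits that converged to 0
# basin.astype(int) can be rendered with matplotlib's imshow
\end{verbatim}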
\begin{Theorem}\label{Barrier}
Assume $G$ is a polynomial map with an attracting fixed point at the
origin. Denote by $\mathcal{K}_p$ (cf.~Fig.~1) the connected
component of the Fatou set of $G$ containing the origin.
Then the domain of analyticity of $Q$ is $\mathcal{K}_p$, and
$\partial \mathcal{K}_p$ is a barrier of singularities of $Q$.
\end{Theorem}
The proofs of these results can be found in \cite{[CK5]}. The logistic
map discussed in relative detail in the next section represents a very
simple illustration of some of the relevant phenomena.
\section{Analysis of the logistic map at the superstable fixed
point infinity}
We show that the equation $x_{n+1}=ax_n(1-x_n)$ has the IMSP iff
$a\in\{-2,0,2,4\}$. The case $a=0$ needs no analysis. Otherwise, taking
$y=1/x$ we get
\begin{equation}
\label{eq:111}
y_{n+1}=\frac{y_{n}^2}{a(y_n-1)}
\end{equation}
\z For small $y_0$, the leading order form of equation (\ref{eq:111}) is
$y_{n+1}=-a^{-1}{y_{n}^2}$, whose solution is $-y_0^{2^n}a^{-2^{n}+1}$ (for $n\ge 1$). It is
then convenient to seek solutions of (\ref{eq:111}) in the form
$y_n=F(y_0^{2^n}a^{-2^{n}})$, whence the leading-order behavior implies $F(0)=0$,
$F'(0)=-a$. Denoting $y_0^{2^n}a^{-2^{n}}=z$, the functional relation
satisfied by $F$ is
\begin{equation}
\label{eq:112}
F(z^2)=\frac{F(z)^2}{a(F(z)-1)} \ \ \ \ (F(0)=0,\ F'(0)=-a)
\end{equation}
\begin{Lemma}\label{L11}
(i) There exists a unique analytic function $F$ in the neighborhood of the
origin satisfying (\ref{eq:112}) and such that $F(0)=0$ and
$F'(0)=-a$. This $F$ has only isolated singularities in $\mathbb{C}$ if and
only if $a\in\{-2,2,4\}$. In the latter case, the equations
(\ref{eq:112}) and (\ref{eq:111}) can be solved explicitly (see
(\ref{114.5}), (\ref{119}) and (\ref{120})).
If $a\not\in\{-2,2,4\}$ then the unit circle is a barrier of singularities
of $F$.
(ii) If $a\not\in\{-2,2,4\}$ then $\partial \mathcal{K}_p$ is a barrier of
singularities of the constant of motion $Q=F^{-1}$ (cf.
Theorem~\ref{Barrier}).
\end{Lemma}
\subsection{Proof of Lemma \ref{L11}}
\subsubsection{Analyticity at zero} We write
\begin{equation}
\label{eq:113}
F(z)=\frac{a}{2}F(z^2)-\frac{1}{2}\sqrt{a^2 F(z^2)^2-4aF(z^2)}
\end{equation}
\z (with the choice of branch consistent with $F(0)=0, F'(0)=-a$), and take
$F(z)=-az+h(z)$. This leads to the equation for $h$
\begin{equation}
\label{eq:1131}
h(z)=az+\frac{a}{2}(h(z^2)-az^2)-\frac{1}{2}\sqrt
{4a^2z^2-4ah(z^2)+a^2(h(z^2)-az^2)^2}=\mathcal{N}(h)
\end{equation}
\z It is straightforward to show that, for small $\epsilon$, $\mathcal{N}$
is contractive in the ball of radius $|a|^2$ in the space of functions
of the form $h(z)=z^2u(z)$, where $u$ is analytic in the disk
$D=\{z:|z|<\epsilon\}$ with the norm $\|h\|=\sup_{D}|u|$. The corresponding
$F$ is analytic for small $z$ and is a conjugation map between
(\ref{eq:111}) and its small-$y$ approximation.
\subsubsection{} Now the question is to determine the singularities of $F$ in the complex
plane. Inside the unit disk we can inductively continue $F$ analytically
through (\ref{eq:113}) as follows. If $F$ is analytic in a disk of radius
$r<1$ then (\ref{eq:113}) provides the analytic continuation in the disk of
radius $\sqrt{r}$ if $F(z^2)$ is not zero or $4/a$. Because $F$ is assumed to
be analytic in the disk of radius $r$, $F(z^2)$ cannot vanish; otherwise by
(\ref{eq:112}) it would vanish infinitely often in a neighborhood of zero,
which is inconsistent with its analyticity at $z=0$ and with $F'(0)=-a$. But
the equation $F(z^2)=4/a$ does have solutions in the unit disk if $|a|>5$.
Indeed, with $|a|>5$ it is immediate to show $F(z^2)=4/a$ implies
$F(z^{2^n}):=\epsilon_n\rightarrow 0$ as $n\rightarrow\infty$. Since $F$ is,
by the condition $F'(0)=-a$, bijective near $z=0$, the equation
$F(z)=\epsilon_n$ has a (unique) solution $z=z_0$ for sufficiently large $n$.
Now, if $z_1$ is such that $z_1^{2^n}=z_0$ then, for an appropriate choice of
the root, $F(z_1^2)=4/a$. But then $F$
has a square root branch point at ${z_1}$ since $F'(z)$ cannot vanish anywhere
inside the unit disk, otherwise again by (\ref{eq:112}), $F'(z)$ would vanish
infinitely often near the origin.
Thus for $|a|>5$ there is no $F$ with only isolated singularities. In fact, as
is seen in the last paragraph of the proof, in this case the unit circle is a
barrier of singularities of $F$.
We now consider the case where $|a|<5$.
\subsubsection{} We claim that unless $a=-2$, the point $z=1$ is a singular point of $F$.
Indeed, a Taylor series expansion $F=\sum_{k=0}^\infty c_k(z-1)^k$ gives
$c_0=0$ or $c_0=a/(a-1)$. It is straightforward to see that $c_0=0$ implies
$c_k=0$ for all $k$ which contradicts $F'(0)=-a$.
Therefore $c_0=a/(a-1)$, in which case a direct calculation shows that, unless
$a=-2(2^m-1)$ for some $m\in\mathbb{Z}$, all $c_k$ for $k\ge 1$ are zero, which is not
possible since $F(0)=0$. If $m>1$ then $|a|>5$.
Therefore $m=1$, whence $a=-2$. With this value of $a$, (\ref{eq:112}) has the
explicit solution
\begin{equation}\label{114.5} F(z)=\frac{2z}{z^2+z+1}\end{equation}
It remains to look at the cases when $z=1$ is a singular point of
$F$.
\subsubsection{} If $z=1$ is a meromorphic point of $F$, then we obtain from (\ref{eq:112}):
\begin{equation}\label{116}
\lim_{z\rightarrow 1}\frac{aF(z^2)}{F(z)}=1\end{equation}
\z If $F(z)=\sum_{k=-p}^{\infty}c_k(z-1)^k$ is the Laurent series of $F$ at
$z=1$, then (\ref{116}) implies
\begin{equation}
\label{eq:117}
a=2^p
\end{equation}
\z and, since $|a|<5$, we must have $a\in\{2,4\}$.
\z For $a=2$ (\ref{eq:112}) has the explicit solution
\begin{equation}\label{119}F(z)=\frac{2z}{z-1}\end{equation}
\z For $a=4$ the solution of (\ref{eq:112}) is
\begin{equation}\label{120}F(z)=-\frac{4z}{(z-1)^2}\end{equation}
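All three explicit solutions, together with the normalization $F(0)=0$,
$F'(0)=-a$, can be checked against (\ref{eq:112}) mechanically; a small
symbolic sketch (ours, for the reader's convenience):
\begin{verbatim}
# Sketch: verify F(z^2) = F(z)^2/(a(F(z)-1)) for a in {-2, 2, 4}.
import sympy as sp

z = sp.symbols('z')
cases = {-2: 2*z/(z**2 + z + 1),
          2: 2*z/(z - 1),
          4: -4*z/(z - 1)**2}
for a, F in cases.items():
    assert sp.simplify(F.subs(z, z**2) - F**2/(a*(F - 1))) == 0
    assert F.subs(z, 0) == 0 and sp.diff(F, z).subs(z, 0) == -a
    print(a, 'ok')
\end{verbatim}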
\subsubsection{} If $F$ is not meromorphic at $z=1$ then $F$ cannot be continued
analytically beyond the unit circle:
Assume first that $F$ is analytic in the open unit disk $D_1$. By
(\ref{eq:112}) we have
\begin{equation}\label{122}F(z^{2^n})=R_n(F(z))\end{equation}
\z where $R_n$ is a rational function. Thus if $z_0^{2^n}=1$ and
$F$ is analytic at $z_0$ then it is meromorphic at $z=1$, while the set
of points such that $z_0^{2^n}=1$ is dense on the unit circle.
The last case to analyze is $R<1$, where $R$ is the (maximal) radius of
analyticity of $F$. Equation (\ref{eq:113}) shows that on the disk of radius
$R$ the singularities of $F$ are square root branch points. On the other hand,
if $z_0\in D_1$ is an algebraic branch point and $z_1^2=z_0$ then $z_1$ is
also an algebraic branch point as can be seen by continuing (\ref{eq:112})
around a small circle near the branch point $z^2=z_0$. It is then easy to see
that if $R<1$ the branch points accumulate densely towards the unit circle.
${\hfill\hbox{\enspace${\sqre}$}} \smallskip$
(ii) The proof follows the lines of \cite{[CK5]}, so we only sketch the
main steps. We note that by (i) $F$ is invertible near the origin,
so $Q$ is analytic near the origin, where it satisfies the relation
$Q(z)^2=Q\!\left(z^2/(a(z-1))\right)$.
Modeling of chemical reaction systems is important for describing various systems in chemistry, physics, and biology~\cite{van1992stochastic,gardiner1985handbook,anderson2015stochastic}.
When the number of molecules is large enough, fluctuations can be ignored, and
deterministic rate equations for species concentrations can be used to track the time evolution of reaction systems.
In contrast, in a situation where the number of molecules is not so large, the effect of fluctuations becomes important, and stochastic modeling is necessary.
Stochastic reaction systems are commonly described by continuous-time Markov chains, whose time evolution is governed by chemical master equations.
The properties of stochastic reaction systems in the long-time limit are characterized by their stationary distributions.
Calculating stationary distributions analytically\footnote{
For monomolecular reaction networks,
time-dependent solutions of master equations
can be obtained~\cite{jahnke2007solving}, which are parametrized by the solution of rate equations.
} is in general a difficult task because a chemical master equation is a collection of infinitely many coupled ordinary differential equations.
However, for a certain class of chemical reaction systems, stationary distributions can be obtained analytically~\cite{anderson2010product}: When the deterministic counterpart of a stochastic reaction system has a complex-balanced steady-state solution, the stationary distribution of the stochastic system is of a product-Poisson form, whose parameter is given by the deterministic steady-state solution.
This can be combined with the
classic result by Feinberg~\cite{FEINBERG19872229,feinberg2019foundations} and Horn--Jackson~\cite{horn1972general} that, if a chemical reaction network has zero deficiency and is weakly reversible, it has a unique steady-state solution in each stoichiometric compatibility class, and the solution is complex-balanced.
Hence, when a reaction network satisfies the two topological conditions, vanishing deficiency and weak reversibility,
its stationary distribution is given analytically.
To extend the applicability of these results,
one possible strategy is to transform one network,
which does not satisfy the two conditions,
into another that satisfies them,
while keeping the same chemical properties.
The method of network translation~\cite{johnston2014translated,johnston2019computing}, in which reactions with a common stoichiometry are combined to obtain another network with desirable topological properties, has been utilized~\cite{hong2021derivation}
to compute stationary distributions analytically for networks with nonzero deficiency and without weak reversibility.
In this paper, we introduce a different kind of network transformation based on the parallel between stochastic reaction systems and quantum mechanics~\cite{Doi_1976, Doi_1976_2, baez2018quantum} (see Fig.~\ref{fig:eg1-schematic} for a general idea).
There is a reformulation of stochastic reaction systems in which a probability distribution is represented by a vector spanned by the occupation-number basis.
In this formalism, a chemical master equation is written in the form analogous to the Schr\"{o}dinger equation.
Poissonian stationary distributions for complex-balanced systems correspond to coherent states in the context of quantum optics~\cite{Gardiner_Zoller}, which most closely approximate classical states, saturating the uncertainty relation.
We further pursue this analogy.
Starting from coherent states, there are other states that can be reached by acting with unitary operators.
In particular, a common operation is squeezing, with which the uncertainty of a certain physical observable can be reduced at the cost of increasing the uncertainty of another observable.
We find that we can perform squeeze operations on
the Poissonian stationary distributions of complex-balanced systems.
Under a squeeze transformation, the time-evolution operator is modified and it represents a different reaction network, which in general has a nonzero deficiency and is not weakly reversible.
The stationary distribution of the transformed network is the counterpart of a squeezed coherent state in quantum mechanics, and its expression can be obtained analytically.
Thus, the squeeze operation provides us with another way to analytically compute the stationary distributions of reaction networks which do not have complex-balanced steady-state solutions.
The remainder of the article is organized as follows.
In Sec.~\ref{sec:formulation},
we introduce the stochastic description of chemical reaction systems and its quantum-mechanical formulation using creation/annihilation operators.
We also review the theorem by Anderson, Craciun, and Kurtz.
In Sec.~\ref{sec:squeeze-single}, we introduce the squeeze transformation for a simple example.
In Sec.~\ref{sec:two-mode-sq}, we derive the stationary distribution with finite correlations of different species.
In Sec.~\ref{sec:nonlinear}, we discuss an example which involves nonlinear propensity functions.
In Sec.~\ref{sec:generic}, we discuss the structural changes of generic reaction networks under squeezing transformations.
Finally, we give a summary and further discussion in Sec.~\ref{sec:summary}.
\tikzcdset{scale cd/.style={every label/.append style={scale=#1},
cells={nodes={scale=#1}}}}
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\node at (2, 0) {
{
$
\begin{tikzcd}[scale cd=1.4,sep=large]
\Gamma: \,\,
\emptyset \rar["\ell_1", shift left]
& A \lar["\ell_2", shift left]
\end{tikzcd}
$
}
};
\node at (7.2, 0) {
\scalebox{1.4} { $\Gamma': $}
};
\node at (9, 0) {
{
$
\begin{tikzcd}[scale cd=1.3,sep=large]
\emptyset
\rar["k_1", shift left]
\dar["k_3"]
& A \lar["k_2", shift left]
\\
2 A &
\end{tikzcd}
$
}
};
\draw[mybrightblue, -stealth, line width=4pt]
(4.3,0) -- (6.1,0);
\node at (5.2,-0.6){ \scalebox{1.1} {\color{mybrightblue} Squeezing }};
\node at (5.2,-1.2){ \scalebox{1.2} {\color{mybrightblue} $S(\xi)$ }};
\node at (1.4, -1.7) {
\color{mypurple}
\scalebox{1.2} {
Deficiency:
}
};
\node at (2.9, -1.7) {
\scalebox{1.3} {
\color{mypurple} $\delta = 0$
}
};
\node at (2, -2.5) {
\scalebox{1.2} {
\color{mydarkred} Weakly reversible
}
};
\node at (8.8, -1.7) {
\scalebox{1.3} {
\color{mypurple} $\delta = 1$
}
};
\node at (8.8, -2.5) {
\scalebox{1.2} {
\color{mydarkred} Not weakly reversible
}
};
\node at (1, -3.5) {
\scalebox{1.2} {
Stationary dist.:
$\ket{\Psi} = \sum_n P(n) \ket{n}$
}
};
\node at (9.5, -3.5) {
\scalebox{1.2} {
$\ket{\Psi'} =
{\color{mybrightblue} S(\xi) }\ket{\Psi} = \sum_n P'(n) \ket{n}$
}
};
\end{tikzpicture}
\caption{ Schematic of the squeezing procedure.
On the action of squeeze operator,
the reaction network is transformed from $\Gamma$ to $\Gamma'$.
The transformed system $\Gamma'$
is of a nonzero deficiency and not weakly reversible.
In the quantum-mechanical formulation of stochastic chemical reaction systems, the probability distribution is represented by a vector
($\ket{\Psi}=\sum_n P(n)\ket{n}$ for $\Gamma$ and $\ket{\Psi'}=\sum_n P'(n)\ket{n}$ for $\Gamma'$).
Since the network $\Gamma$ has deficiency zero and is weakly reversible, the stationary distribution follows the Poisson distribution.
In the language of quantum optics, this state $\ket{\Psi}$ corresponds to a coherent state.
The stationary distribution of network $\Gamma'$
can be obtained by operating a squeeze operator $S(\xi)$ on $\ket{\Psi}$, and the probability distribution $P'(n)$
can be obtained analytically (see Eq.~\eqref{eq:eg1-prob-analytic} for this example).
}
\label{fig:eg1-schematic}
\end{figure}
\section{ Quantum-mechanical formulation of stochastic chemical reaction systems }\label{sec:formulation}
In this section, we
briefly review the description of stochastic chemical reaction systems using continuous-time Markov chains.
We also review a reformulation of the chemical master equation using
the language of quantum mechanics~\cite{Doi_1976, Doi_1976_2}.
\subsection{ Chemical reaction systems }
A chemical reaction network consists of
the triple $(V,K,E)$
where
$V$ is a set of chemical species,
$K$ is a set of complexes,
and
$E$ is a set of chemical reactions.
A complex is an element of $\mathbb N^V$,
where $\mathbb N$ denotes the set of nonnegative integers,
and a reaction
$e_A \in E$ is given by specifying
two complexes as its source and target,
\begin{equation}
e_A :
\sum_i s_{iA} v_i
\longrightarrow
\sum_i t_{iA} v_i,
\end{equation}
where $v_i \in V$.
Here, $s_A, t_A \in \mathbb N^V$ are the source and target complexes of reaction $e_A$.
A chemical reaction network can be represented as a directed graph of complexes, which is called a reaction graph.
The reaction vector for $e_A$ is defined by
$S_A \coloneqq t_A - s_A \in \mathbb Z^V$.
Seen as a matrix, $S_{iA}$ is called a stoichiometric matrix.
For a given chemical reaction network,
one can consider stochastic/deterministic dynamics on it.
In the stochastic description,
the variables that we use are the numbers of particles of chemical species, $n \in \mathbb N^V$.
The state of a reaction system at time $t$ is
characterized by the probability distribution of $n$, $P(t, n)$, and its time evolution is governed by the chemical master equation of a continuous-time Markov chain,
\begin{equation}
\frac{d}{dt}
P (t, n)
=
\sum_A
\left[
R_A (n - S_A) P(t, n-S_A) - R_A (n) P(t, n)
\right] ,
\label{eq:cme}
\end{equation}
where $R_A (n)$ is the intensity function for reaction $e_A$.
Throughout the paper,
we employ the mass-action kinetics,
\begin{equation}
R_A (n) = k_A \frac{n!}{(n-s_A)!},
\label{eq:mak}
\end{equation}
where $k_A$ is a constant.
Here, the factorial of a vector $n \in \mathbb N^V$ is the abbreviation of the following expression,
\begin{equation}
n! = \prod_i n_i ! .
\end{equation}
The mass-action rate \eqref{eq:mak} is proportional to the number of combinations to form the source complex of the reaction.
This form is justified when the molecules in the system are well-stirred.
When the number of molecules is large and random fluctuations can be ignored, the system can be described by deterministic equations.
In this case, the dynamical variables are the concentrations of chemical species,
$x_i = n_i / V$, where $V$ is a parameter controlling the system size.
The time evolution of $x_i$ is dictated by rate equations,
\begin{equation}
\frac{d}{dt} x_i (t)
=
\sum_A S_{iA} r_A (x),
\end{equation}
where $r_A (x)$ is the reaction rate of $e_A$.
In the deterministic version of mass-action kinetics, reaction rates are written as
\begin{equation}
r_A (x) = {\bar k}_A x^{s_A},
\label{eq:mak-det}
\end{equation}
where we have used the abbreviation $x^{s_A} = \prod_i x_i^{s_{iA}}$.
In many situations, the stochastic description reduces to the deterministic one in the limit of a large system size.
It is customary to take $R_A = O(V)$ and $n_i = O(V)$, where $V$ is a dimensionless parameter quantifying the system size.
In this case, the parameters in the continuous-time Markov chain are related to those of the rate equation as
\begin{equation}
\bar k_A = \lim_{V \to \infty } k_A V^{|s_A|_1 - 1} ,
\end{equation}
where $|s_A|_1$ denotes the L1 norm of $s_A$.
\subsection{ Quantum-mechanical formulation }
Quantum mechanics and stochastic chemical reaction systems have in common that measurement outcomes are probabilistic.
In the quantum-mechanical formalism of
chemical reaction systems
introduced in Refs.~\cite{Doi_1976,Doi_1976_2},
the chemical master equation~\eqref{eq:cme}
is formally written
in the form of the Schr\"{o}dinger equation.
This gives us the opportunity to import techniques established for the latter to the former problem, which is the strategy we take in this paper.
As we will see, this allows us to find analytic forms of stationary distributions for chemical reaction networks that, to our knowledge, have not been known before.
Below, we briefly review the quantum-mechanical formalism that we rely on throughout this paper. (For a recent review of the quantum-mechanical formulation and the associated path-integral method~\cite{Peliti1985},
see Ref.~\cite{Weber_2017}.)
Let us start by introducing annihilation/creation operators for each species $v_i \in V$,
$a_i$ and $a_i^\dag$, which obey the following commutation relations
\begin{equation}
[a_i, a^\dag_j]
= \delta_{ij},
\qquad
[a_i, a_j] =0,
\qquad
[a^\dag_i, a^\dag_j] =0,
\label{eq:ccr}
\end{equation}
for any $i,j$.
Roughly speaking, the creation (annihilation) operator $a_i^\dag$ ($a_i$) ``creates (annihilates)'' one species-$i$ molecule, as will become clear in a moment.
We introduce the vacuum state, $\ket{0}$,
as a state satisfying $a_i \ket{0} = 0$ and $\bra{0} a^\dag_i = 0$ for any $i$.
Occupation number states are defined as
\begin{equation}
\ket{n} := (a^\dag)^n \ket{0},
\end{equation}
where $n \in \mathbb N^V$
and we use the short-hand notation,
\begin{equation}
(a^\dag)^{n} = \prod_i (a_i^\dag)^{n_i}.
\end{equation}
As the name suggests, the vacuum state $\ket{0}$ and the occupation number state $\ket{n}$ describe a state with no molecules present
and a state with occupation numbers $n=(n_1,...,n_{|V|})$, respectively.
Note that the states are normalized as\footnote{
The Kronecker delta for
$n,m \in \mathbb N^V$ is
defined by $\delta_{n,m} = \prod_i \delta_{n_i,m_i}$.
}
\begin{equation}
\langle n | m \rangle = n! \delta_{n,m},
\end{equation}
which is different from the one employed in quantum mechanics.
The actions of $a$ and $a^\dag$ on occupation number states are given by
\begin{equation}
a^s \ket{n} =
\frac{n!}{(n-s)!}
\ket{n-s},
\qquad
(a^\dag)^s \ket{n} = \ket{n+s} ,
\end{equation}
for $s \in \mathbb N^V$.
This relation makes it clear why $a$ and $a^\dag$ are called the annihilation and creation operators, respectively; when an annihilation (creation) operator $a_i$ ($a_i^\dag$) is applied to an occupation number state $\ket n$ $s_i$ times,
the number of species-$i$ molecules is decreased (increased) by $s_i$.
Using the occupation number states $\ket {n}$ defined above, we represent the probability distribution of the chemical reaction network
at time $t$ by a vector $\ket{\psi(t)}$ as
\begin{equation}
\ket{\psi (t)} = \sum_n P(t, n) \ket{n} .
\end{equation}
Introducing the Hamiltonian $H$ by
\begin{equation}
H =
\sum_A
k_A
\left[ (a^\dag)^{t_A} - (a^\dag)^{s_A} \right]
a^{s_A} ,
\label{eq:chem-hamiltonian}
\end{equation}
the chemical master equation \eqref{eq:cme}
can be expressed in the form of the Schr\"{o}dinger equation,
\begin{equation}
\frac{d}{dt} \ket{\psi(t)} = H \ket{\psi(t)} .
\label{eq:s-eq}
\end{equation}
Indeed, one can check
the equivalence of Eq.~\eqref{eq:s-eq}
and Eq.~\eqref{eq:cme} by direct computation.
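Explicitly, writing $\ket{\psi(t)}=\sum_m P(t,m)\ket{m}$ and using
\begin{equation}
k_A \left[ (a^\dag)^{t_A} - (a^\dag)^{s_A} \right] a^{s_A} \ket{m}
=
R_A(m)
\left( \ket{m+S_A} - \ket{m} \right) ,
\end{equation}
the coefficient of $\ket{n}$ in $H\ket{\psi(t)}$ is
$\sum_A \left[ R_A(n-S_A) P(t,n-S_A) - R_A(n) P(t,n) \right]$,
which is precisely the right-hand side of Eq.~\eqref{eq:cme}.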
In evaluating observables, the following state plays a special role,
\begin{equation}
\ket{\mathcal P}
:=
e^{a^\dag} \ket{0}.
\end{equation}
This state satisfies
$\bra{n} \mathcal P \rangle = 1$
\footnote{
\begin{equation}
\bra{n} \mathcal P \rangle
=
\bra{0} a^n e^{a^\dag} \ket{0}
=
\bra{0}e^{a^\dag} (a+1)^n \ket{0}
= \langle 0 | 0 \rangle
= 1.
\end{equation}
}
for any $\bra{n}$.
For a given state $\ket{\psi}$,
the expectation value of
an observable $\mathcal O(n)$,
which is a function of
the numbers of molecules, is given by
\begin{equation}
\langle \mathcal O \rangle
=
\bra{\mathcal P} \mathcal O( a^\dag a )\ket{\psi(t)} .
\end{equation}
In order for the state $\ket{\psi(t)}$ to represent a probability distribution, it should satisfy
\begin{equation}
\bra{\mathcal P}\psi(t)\rangle = 1 ,
\label{eq:prob-cons}
\end{equation}
at any time $t$.
If Eq.~\eqref{eq:prob-cons} is satisfied in the initial condition, it is also satisfied at later times, since
\begin{equation}
\begin{split}
\bra{\mathcal P} H
&=
\sum_A
k_A
\bra{0}
e^{a}
\left[
(a^\dag)^{t_A}
-
(a^\dag)^{s_A}
\right]
a^{s_A} \\
&=
\sum_A
k_A
\bra{0}
\left[
(a^\dag+1)^{t_A}
-
(a^\dag+1)^{s_A}
\right]
e^{a}
a^{s_A} \\
&= 0 ,
\end{split}
\end{equation}
where we used the ``Doi shift,''
\begin{equation}
e^a f(a^\dag)
=
f(a^\dag + 1) e^a,
\end{equation}
and $\bra{0} a^\dag = 0$.
We can see that
the time evolution is consistent with
probability conservation
if the Hamiltonian $H = H(a, a^\dag)$
satisfies $H(a, a^\dag =1) = 0$.
\subsection{ Probability generating functions }
The formulation using creation/annihilation operators
is equivalent to considering the time evolution of probability generating functions.
The probability generating function is defined by
\begin{equation}
\Psi (t,z) \coloneqq \sum_n P(t, n) z^n,
\end{equation}
where $n \in \mathbb N^V$ and
$z^n = \prod_i (z_i)^{n_i}$.
To see the relation of the two formulations,
note that $\frac{\p}{\p z_i}$
and $z_i$ satisfy the same
commutation relations as Eq.~\eqref{eq:ccr}.
The correspondence of the quantum-mechanical notation and the formulation based on generating functions
can be made by the following replacements:
\begin{equation}
\frac{\p}{\p z_i} \leftrightarrow a_i,
\qquad
z_i \leftrightarrow a_i^\dag,
\qquad
z^n
\leftrightarrow
\ket{n} .
\end{equation}
Using probability generating functions,
the chemical master equation can be written as~\cite{gardiner1985handbook}
\begin{equation}
\frac{d}{dt}
\Psi (t,z )
=
\sum_A
k_A \left[ z^{t_A} - z^{s_A} \right] (\p_z)^{s_A} \Psi (t,z ).
\label{eq:cme-gf}
\end{equation}
Note that we are using the following notations,
\begin{equation}
z^{t_A}
=
\prod_i (z_i)^{t_{iA}},
\qquad
(\p_z)^{s_A} =
\prod_i
\left( \frac{\p}{\p z_i} \right)^{s_{iA}} .
\end{equation}
When all source complexes involve up to one species, the resulting equation for the probability generating function is a linear partial differential equation, which can be solved via the method of characteristics~\cite{doi:10.1073/pnas.0803850105}.
An efficient method to obtain the analytic solution for these cases has recently been proposed~\cite{PhysRevE.104.024408}.
\subsection{ Anderson--Craciun--Kurtz theorem }
Finding the analytic form of stationary distributions is not easy in general.
Anderson, Craciun, and Kurtz
\cite{anderson2010product} showed that chemical master equations
admit stationary distributions of a product-Poisson form when the deterministic counterpart (i.e. the rate equation) with the mass-action kinetics
has a complex-balanced steady-state solution.
In the quantum mechanical formulation of chemical master equations, these stationary distributions
correspond to coherent states~\cite{baez2015quantum,baez2018quantum}.
A steady-state solution of the rate equation
is said to be complex balanced
when the following condition is satisfied
\begin{equation}
\sum_{A: s_A =C_m} k_A \bar x^{s_A}
=
\sum_{A: t_A =C_m} k_A \bar x^{s_A} ,
\label{eq:cb-1}
\end{equation}
for any complex $C_m \in K$
in the reaction network.
Intuitively, this means that the
inflow and outflow rates are balanced at
each complex.
Equation~\eqref{eq:cb-1}
can be written equivalently as
\begin{equation}
\sum_A B_{mA} k_A {\bar x}^{s_A} =0,
\label{eq:cb-2}
\end{equation}
where $B_{mA}$ is the incidence matrix of
the reaction graph.
Note that the incidence matrix can be written
using the Kronecker delta as
$B_{mA} = \delta_{ t_A, C_m} - \delta_{ s_A, C_m}$\footnote{
The Kronecker delta of two complexes
$s,s' \in \mathbb N^V$ should be understood as
$\delta_{s, s'} \coloneqq \prod_i \delta_{s_{i}, s'_{i}}
$.
},
and we have
\begin{equation}
\sum_A B_{m A} k_A \bar x^{s_A}
=
\sum_{A}
( \delta_{ t_A, C_m} - \delta_{ s_A, C_m} ) k_A \bar x^{s_A}
=
\sum_{A : t_A = C_m} k_A \bar x^{s_A}
-
\sum_{A : s_A = C_m} k_A \bar x^{s_A} .
\end{equation}
Not every solution of rate equations
has this property.
An important class of reaction networks with complex-balanced steady states
are those with a zero deficiency
and weak reversibility.
The deficiency $\delta$ is a nonnegative integer
determined from the topological structure\footnote{
For an alternative approach to constrain the steady-state properties of deterministic chemical reaction systems with generic kinetics based on a different topological index, see Refs.~\cite{PhysRevLett.117.048101,PhysRevE.96.022322,PhysRevResearch.3.043123}.
} of reaction networks,
\begin{equation}
\delta \coloneqq |K| - \ell - {\rm rank}\, S,
\end{equation}
where
$|K|$ indicates the number of complexes,
$\ell$ is the number of linkage classes (connected components of reaction graph),
and the last term is the rank of the stoichiometric matrix.
A reaction network is said to be weakly reversible
if, whenever there is a path of reactions
from one complex $C_m$
to another complex $C_n$,
there is also a path from $C_n$ to $C_m$.
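Both the deficiency and the linkage-class count are elementary to
compute; as a minimal sketch (our illustration), for the transformed
network $\Gamma'$ of Fig.~\ref{fig:eg1-schematic}:
\begin{verbatim}
# Sketch: deficiency = |K| - l - rank(S) for the network Gamma' of
# Fig. 1 (complexes 0, A, 2A; a single species).
import numpy as np

complexes = [(0,), (1,), (2,)]          # 0, A, 2A as count vectors
reactions = [(0, 1), (1, 0), (0, 2)]    # (source index, target index)

S = np.array([[complexes[t][i] - complexes[s][i]
               for (s, t) in reactions]
              for i in range(len(complexes[0]))])

# linkage classes = connected components of the reaction graph
parent = list(range(len(complexes)))
def find(i):
    while parent[i] != i:
        i = parent[i]
    return i
for s, t in reactions:
    parent[find(s)] = find(t)
ell = len({find(i) for i in range(len(complexes))})

print(len(complexes) - ell - np.linalg.matrix_rank(S))   # -> 1
\end{verbatim}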
Feinberg~\cite{FEINBERG19872229}
and Horn--Jackson \cite{horn1972general}
showed that, if a reaction network has zero deficiency and is weakly reversible,
the rate equation with mass-action kinetics
admits a unique steady-state solution in each positive stoichiometric compatibility class for any choice of rate constants.
With a complex-balanced solution $\bar x \in \mathbb R^V$
of the rate equation with mass-action kinetics\footnote{
A tricky point here is that the rate equation is parametrized by $k_A$, which are the parameters of
the stochastic reaction systems, and not those of the rate equation obtained by the deterministic limit \eqref{eq:mak-det} of the stochastic reaction system under consideration.
In fact, the theorem holds even for
non-mass-action stochastic kinetics \cite{anderson2010product,hong2021derivation},
whose deterministic limit does not have
reaction rates with mass-action kinetics.
}, the Anderson--Craciun--Kurtz theorem claims that the following state is a stationary distribution\footnote{
Note that this is the abbreviation of the following expression,
\begin{equation}
\prod_i e^{- \bar x_i}
e^{ \bar x_i a^\dag_i }\ket{0} .
\end{equation}
},
\begin{equation}
\ket{\bar x}
:=
e^{- \bar x}
e^{ \bar x a^\dag }\ket{0} ,
\label{eq:coh-st}
\end{equation}
where we have normalized the state
so that $\bra{\mathcal P} \bar x \rangle =1$.
This type of state is called a coherent state \cite{Gardiner_Zoller},
and it is an eigenstate of annihilation operators,
\begin{equation}
a_i \ket{\bar x} = \bar x_i \ket{\bar x} .
\end{equation}
Let us explicitly show that
the state~\eqref{eq:coh-st}
is indeed a zero eigenstate of
the chemical Hamiltonian~\eqref{eq:chem-hamiltonian}\footnote{
The following derivation is a slightly simplified version of the one given in Ref.~\cite{baez2015quantum}.
See also Ref.~\cite{PhysRevE.96.062102}.
}.
To show this, a crucial step is
writing the summation over reactions as
\begin{equation}
\sum_A
=
\sum_m
\sum_{A: t_A =C_m },
\end{equation}
where $\sum_m$ is the summation over complexes,
and $\sum_{A: t_A =C_m }$
is a summation over reactions whose target complex is $C_m$.
Acting $H$ on the state \eqref{eq:coh-st},
\begin{equation}
\begin{split}
H
e^{- \bar x}
e^{\bar x a^\dag} \ket{0}
&=
e^{- \bar x}
\sum_A
k_A
{\bar x}^{s_A}
\left[ (a^\dag)^{t_A} - (a^\dag)^{s_A} \right]
e^{\bar x a^\dag} \ket{0}
\\
&=
e^{- \bar x}
\sum_{m}
\left[ \sum_{A: t_A =C_m }
k_A {\bar x}^{s_A} (a^\dag)^{t_A}
- \sum_{A: s_A =C_m }
k_A {\bar x}^{s_A} (a^\dag)^{s_A}
\right]
e^{\bar x a^\dag} \ket{0}
\\
&=
e^{- \bar x}
\sum_m
(a^\dag)^{C_m}
\left[ \sum_{A: t_A =C_m }
k_A {\bar x}^{s_A}
- \sum_{A: s_A =C_m }
k_A {\bar x}^{s_A} \right]
e^{\bar x a^\dag} \ket{0}
\\
&=
e^{- \bar x}
\sum_m
(a^\dag)^{C_m}
\left( \sum_{A} B_{mA} k_A {\bar x}^{s_A} \right)
e^{\bar x a^\dag} \ket{0}
\\
&=0,
\end{split}
\end{equation}
where we used the complex-balancing condition, Eq.~\eqref{eq:cb-1} or \eqref{eq:cb-2}.
\section{ Squeezing and stochastic chemical reaction systems }\label{sec:squeeze-single}
We have seen that
the stationary distributions of a product-Poisson form in the Anderson--Craciun--Kurtz theorem can be interpreted as
coherent states in the quantum-mechanical formulation of stochastic chemical reaction systems.
We consider the transformation of
the Hamiltonian and the coherent state by a squeeze operator \cite{Gardiner_Zoller}.
The transformed Hamiltonian represents a reaction network that is different from the original one.
In particular, the transformed network has nonzero deficiency and is not weakly reversible.
The obtained squeezed coherent state is
the zero eigenstate (i.e. the stationary distribution) of the transformed Hamiltonian, and its analytic form can be identified.
Therefore, although the transformed system has nonzero deficiency and is not weakly reversible in general,
we can obtain the analytical expression
for the stationary distribution through this procedure.
In this section, we illustrate the procedure
with a simple example.
\subsection{ Network transformation via squeezing }
We start with the following simple chemical reaction network, which we call $\Gamma$,
\begin{equation}
\begin{tikzcd}
\emptyset
\rar["\ell_1", shift left]
&
A
\lar["\ell_2", shift left]
\end{tikzcd} ,
\label{eq:network-0-a}
\end{equation}
where $\ell_1$ and $\ell_2$ are parameters
in the mass-action kinetics of the corresponding reactions.
The stochastic Hamiltonian for $\Gamma$ is given by
\begin{equation}
H =
\ell_1 (a^\dag - 1 )
+
\ell_2 (1 - a^\dag) a
=
(a^\dag - 1) (\ell_1 - \ell_2 a)
\eqqcolon
(a^\dag - 1)h(a),
\end{equation}
where
$a$ and $a^\dag$ are the annihilation
and creation operators of species $A$,
respectively,
and we defined $h(a) \coloneqq \ell_1 - \ell_2 a$.
The rate equation for this system is given by
\begin{eqnarray}
\frac{dx}{dt} = \ell_1 - \ell_2 x,
\end{eqnarray}
where $x(t)$ is the concentration of species $A$.
This network \eqref{eq:network-0-a} has zero deficiency and is weakly reversible.
Hence, the Anderson--Craciun--Kurtz theorem applies,
and a coherent state gives its stationary distribution.
The steady-state solution of the rate equation
is
\begin{equation}
\bar x = \frac{\ell_1}{\ell_2},
\end{equation}
and, indeed, the state
$\ket{\bar x} \coloneqq e^{- \bar x} e^{\bar x a^\dag}\ket{0}$ is a zero eigenstate of $H$, because
\begin{eqnarray}
h(a) \ket{\bar x}
=(\ell_1 - \ell_2 a) \ket{\bar x}
= (\ell_1 - \ell_2 \bar x) \ket{\bar x}
= 0.
\end{eqnarray}
We shall perform a squeezing and obtain
another reaction network whose
stationary distribution is given by a squeezed coherent state. A squeeze operator for $a$ is defined by \cite{Gardiner_Zoller}
\begin{equation}
S(\xi) =
\exp
\left[
\frac{1}2 \left( \xi^\ast a^2 - \xi (a^\dag)^2 \right)
\right] ,
\end{equation}
where $\xi$ is a complex parameter.
Under the action of $S(\xi)$,
the operators $a$ and $a^\dag$ are transformed as\footnote{
The operator $S(\xi)$ is a unitary operator
and $S^{-1}(\xi) = S^\dag(\xi) =S(-\xi)$.
}
\begin{align}
S (\xi ) a S^{-1} (\xi)
&=
\cosh r \, a + e^{i\theta } \sinh r \,a^\dag,
\label{eq:s-a-s}
\\
S (\xi) a^\dag S^{-1} (\xi)
&=
\cosh r \, a^\dag
+
e^{-i\theta } \sinh r \, a ,
\label{eq:s-a-s-2}
\end{align}
where $\xi \coloneqq r e^{i\theta}$.
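As a quick numerical sanity check of Eqs.~\eqref{eq:s-a-s} and
\eqref{eq:s-a-s-2} (a sketch with truncated matrices, not part of the
derivation):
\begin{verbatim}
# Sketch: verify S(xi) a S(xi)^{-1} = cosh(r) a + sinh(r) a^dag for
# theta = 0, using (N x N)-truncated matrix representations.
import numpy as np
from scipy.linalg import expm

N, r = 60, 0.3
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)   # annihilation operator
ad = a.T                                     # creation operator
X = 0.5 * r * (a @ a - ad @ ad)
lhs = expm(X) @ a @ expm(-X)
rhs = np.cosh(r) * a + np.sinh(r) * ad
# agreement away from the truncation edge:
print(np.max(np.abs((lhs - rhs)[: N // 2, : N // 2])))
\end{verbatim}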
Using the squeeze operator,
we define a new Hamiltonian by
\begin{eqnarray}
H'=(a^\dag - 1)S(\xi) h(a) S^{-1} (\xi).
\end{eqnarray}
Namely, we have transformed the part that involves the annihilation operator,
$h(a)$, by $S(\xi)$.
The transformed Hamiltonian $H'$
is probability-conserving,
since $H'(a,a^\dag = 1) = 0$.
We can obtain a zero eigenstate of $H'$ by
\begin{eqnarray}
\ket{\bar x,\xi} \coloneqq S(\xi)\ket{\bar x} .
\label{eq:sq-state-def}
\end{eqnarray}
Namely, the zero eigenstate
of the new Hamiltonian $H'$ is
a squeezed coherent state.
Indeed, we have
\begin{eqnarray}
S(\xi) h(a) S^{-1}(\xi) \ket{\bar x,\xi}
=
S(\xi) h(a) \ket{\bar x}
= 0.
\end{eqnarray}
In order for the state \eqref{eq:sq-state-def}
to represent a probability distribution,
we take the parameter $\xi$ to be real.
We use the convention $\theta = 0$ and
let $r$ be of either sign.
We will discuss more detailed properties of the stationary distribution given by Eq.~\eqref{eq:sq-state-def} in the next subsection.
Let us examine the chemical content of the transformed reaction system, which we call $\Gamma'$.
Using Eq.~\eqref{eq:s-a-s}, $H'$ is written as
\begin{equation}
H'
= (a^\dag - 1)
[\ell_1 - \ell_2 (\cosh r \, a + \sinh r \, a^\dag)].
\end{equation}
The Hamiltonian can be organized in the following form,
\begin{equation}
\begin{split}
H'
&=
(\ell_1 + \ell_2 \sinh r)
(a^\dag - 1)
+ \ell_2 \cosh r (1- a^\dag) a
- \ell_2 \sinh r ((a^\dag)^2 - 1)
\\
&
\eqqcolon
k_1
(a^\dag - 1)
+
k_2 (1- a^\dag) a
+
k_3 ((a^\dag)^2 - 1) ,
\end{split}
\label{eq:hp-k}
\end{equation}
where we have defined
\begin{equation}
k_1 = \ell_1 +\ell_2 \sinh r ,
\qquad
k_2 = \ell_2 \cosh r ,
\qquad
k_3 = - \ell_2 \sinh r .
\end{equation}
Comparing Eq.~\eqref{eq:hp-k}
with a generic chemical Hamiltonian \eqref{eq:chem-hamiltonian},
we can see that the transformed Hamiltonian $H'$ corresponds to the following reaction network:
\begin{equation}
\begin{tikzcd}
\emptyset
\rar["k_1", shift left]
\dar["k_3"]
&
A
\lar["k_2", shift left]
\\
2 A
&
\end{tikzcd}
\end{equation}
Compared to the original system $\Gamma$, a reaction $\emptyset \to 2A$ is added.
The deficiency of the transformed network $\Gamma'$ is nonzero, $\delta=3-1-1=1$, and it is not weakly reversible.
The parameters of $H$
can be expressed by those of $H'$ as
\begin{equation}
\ell_1 = k_1 + k_3,
\qquad
\ell_2 = \sqrt{(k_2)^2 - (k_3)^2} .
\end{equation}
One might wonder if there are restrictions in the choice of the parameters $\{k_1, k_2, k_3\}$
from the positivity of $\{\ell_1, \ell_2\}$.
In fact, there is no restriction and
$\{k_1, k_2, k_3\}$ can be taken to be arbitrary positive values (some of them can even be zero).
This is because the obtained probability generating function, once written in terms of the parameters
$\{k_1, k_2, k_3\}$, is the solution of the stationary condition~\eqref{eq:cme-gf}
of $\Gamma'$ for {\it any} positive values of $\{k_1, k_2, k_3\}$, even if some of $\ell_1,\ell_2$ are imaginary.
In this sense, the reaction network $\Gamma$ is fictitious and is used as a stepping stone to compute the stationary distribution of $\Gamma'$.
The fact that the stationary distribution of $\Gamma'$ from squeezing is indeed a stationary distribution can be
checked independently of the properties of the original system $\Gamma$.
To simplify the notations,
let us define
\begin{equation}
u \coloneqq \frac{k_1}{k_2},
\qquad
v \coloneqq \frac{k_3}{k_2},
\qquad
\gamma \coloneqq \frac{1}{\sqrt{1-v^2}}.
\end{equation}
The squeezing parameter is given by
$\tanh r = -v$,
and $\bar x$ is written as $\bar x = \gamma (u + v)$.
\subsection{ Stationary distribution }
We can utilize the mapping of $\Gamma$
and $\Gamma'$ to compute the exact stationary distribution of $\Gamma'$,
which has a nonzero deficiency and
is not weakly reversible:
the stationary distribution of $\Gamma'$ is given by the squeezed coherent state~\eqref{eq:sq-state-def}.
To find the probability distribution,
let us here use the representation using probability generating functions.
The reaction system $\Gamma$ has
a Poisson distribution as its stationary distribution,
and the corresponding probability generating function is written as
\begin{equation}
\Psi_{\rm c}(z)
=
e^{-\bar x}
\sum_n \frac{{\bar x}^n}{n!} z^n
=
e^{\bar x (z-1)}.
\end{equation}
It is an eigenfunction of $\p_z$
with eigenvalue $\bar x$,
\begin{equation}
\p_z
\Psi_{\rm c} (z)
=
\bar x \Psi_{\rm c} (z).
\label{eq:mom-gen-c-eigen}
\end{equation}
The probability generating function of
the stationary distribution of
$\Gamma'$
is obtained by operating a squeeze operator
on Eq.~\eqref{eq:mom-gen-c-eigen},
\begin{equation}
\Psi_{\rm sq}(z)
=
S(\xi)\Psi_{\rm c}(z)
=
e^{\frac{1}2 (\xi^\ast (\p_z)^2 - \xi z^2)}
e^{\bar x (z-1)}.
\label{eq:mom-gen-sq}
\end{equation}
Although it is possible to compute
Eq.~\eqref{eq:mom-gen-sq} directly,
let us take an easier path.
Here, we use the eigenvalue equation satisfied by $\Psi_{\rm sq}(z)$.
Acting $S(\xi)$ on both sides
of Eq.~\eqref{eq:mom-gen-c-eigen},
\begin{equation}
S(\xi) \p_z S^{-1}(\xi)
\Psi_{\rm sq} (z)
=
\bar x
\Psi_{\rm sq} (z).
\end{equation}
Namely,
$\Psi_{\rm sq} (z)$ is
the eigenfunction of the operator
$
S(\xi) \p_z S^{-1}(\xi)
$
with eigenvalue $\bar x$.
The operator $S(\xi) \p_z S^{-1}(\xi)$ is written as
\begin{equation}
S(\xi)
\p_z
S^{-1}(\xi)
=
\cosh r \, \p_z + \sinh r \, z ,
\end{equation}
which is equivalent to Eq.~\eqref{eq:s-a-s}.
Thus, we have a differential equation,
\begin{equation}
( \p_z + \tanh r \, z )
\Psi_{\rm sq} (z)
=
\frac{\bar x}{\cosh r}
\Psi_{\rm sq} (z).
\end{equation}
The solution can be readily obtained as
\begin{equation}
\Psi_{\rm sq}(z)
=
\exp
\left[
- \frac{\tanh r}{2} (z^2 -1)
+\frac{\bar x }{\cosh r} (z-1)
\right]
=
\exp
\left[
\frac{v}{2} (z^2 -1) + (u+v) (z-1)
\right] ,
\label{eq:psi-sq-analytic}
\end{equation}
where we used the parameters of $\Gamma'$
in the last expression.
We have fixed the normalization constant by the condition
$
\Psi_{\rm sq}(z =1) = 1
$.
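In line with the remark at the end of the previous subsection, one can
verify directly that Eq.~\eqref{eq:psi-sq-analytic} solves the
stationary version of Eq.~\eqref{eq:cme-gf} for $\Gamma'$ with arbitrary
positive parameters; a minimal symbolic sketch (ours):
\begin{verbatim}
# Sketch: Eq. (eq:psi-sq-analytic) satisfies the stationary
# generating-function equation of Gamma', divided through by k2.
import sympy as sp

z, u, v = sp.symbols('z u v')            # u = k1/k2, v = k3/k2
Psi = sp.exp(v/2 * (z**2 - 1) + (u + v) * (z - 1))
res = (u * (z - 1) * Psi                 # 0 -> A
       + (1 - z) * sp.diff(Psi, z)       # A -> 0
       + v * (z**2 - 1) * Psi)           # 0 -> 2A
print(sp.simplify(res))                  # -> 0
\end{verbatim}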
The probability generating function \eqref{eq:psi-sq-analytic} fully characterizes the stationary distribution,
and we can use this to evaluate the statistical properties of $\Gamma'$ in the long-time limit.
For example, we can compute cumulants using
the cumulant generating function,
\begin{equation}
C(w) \coloneqq \ln \Psi_{\rm sq}(e^w)
=
\frac{v}{2} (e^{2w} -1)
+
(u+v) (e^{w}-1).
\end{equation}
The $n$-th cumulant, $c_n$, is computed as
\begin{equation}
c_n =
u +
( 2^{n-1} + 1 ) v
=
\frac{k_1}{k_2}
+
( 2^{n-1} + 1 ) \frac{k_3}{k_2} .
\end{equation}
For example, the mean and variance are
\begin{align}
&\langle n \rangle
=
u+2v
=
\frac{k_1 + 2 k_3}{k_2} ,
\\
&\langle n^2 \rangle - \langle n \rangle^2
=
\langle n \rangle + v
=
\frac{k_1 + 3 k_3}{k_2} .
\end{align}
Recalling that all cumulants of a Poisson distribution are identical,
one sees that the stationary distribution
in the transformed system
is broader than a Poisson distribution.
To get the expression of the stationary
number distribution, note that the generating function of the Hermite polynomials is
given by
\begin{equation}
e^{- z^2 + 2 y z}
=
\sum_{n}
\frac{1}{n!}
H_n (y)
z^n .
\end{equation}
Using this, the probability generating function can be expanded as
\begin{equation}
\begin{split}
\Psi_{\rm sq}(z)
&=
\exp
\left(
\frac{\tanh r}{2}
- \frac{\bar x}{\cosh r}
\right)
\sum_n
\frac{1}{n!}
\left(
\frac{ \tanh r }{2}
\right)^{\frac{n}2}
H_n
\left( \frac{\bar x }{\sqrt{\sinh 2 r}} \right)
\,
z^n
\\
&=
e^{ - \frac{3}{2} v - u }
\sum_n
\frac{1}{n!}
\left(
- \frac{v}{2}
\right)^{\frac{n}2}
H_n
\left( \frac{u+v}{\sqrt{ -2 v}} \right)
\,
z^n .
\end{split}
\end{equation}
This expression coincides with
the photon number distribution
of squeezed coherent states~\cite{Gong1990}
up to normalization.
From the coefficients,
we can read off the stationary distribution $P_{\rm s} (n) $, which can be expressed
using $\{k_1,k_2,k_3\}$ as
\begin{equation}
P_{\rm s} (n)
=
e^{- \frac{1}{k_2}
\left(
\frac{3}{2}k_3+ k_1 \right)
}
\frac{1}{n!}
\left(
- \frac{k_3}{2k_2 }
\right)^{\frac{n}2}
H_n
\left( \frac{k_1+k_3}{\sqrt{ -2 k_2 k_3}} \right) .
%
\label{eq:eg1-prob-analytic}
\end{equation}
In Fig.~\ref{fig:eg1-comp}, we show a comparison of the analytic form of the stationary distribution~\eqref{eq:eg1-prob-analytic} with stochastic simulations using the Gillespie algorithm~\cite{gillespie1977exact}.
The numerically computed distribution agrees well with the analytic expression.
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{eg1.pdf}
\caption{
Probability distributions from
the analytic solution and numerical simulations.
The solid line shows the analytic solution,
which is consistent with the numerical simulations (boxes).
The distribution is wider than the Poisson distribution (dashed line) with the same mean.
Parameters are chosen as $k_1=10, k_2=1, k_3=10$.
The numerical data are based on $200,000$ simulated events.
}
\label{fig:eg1-comp}
\end{figure}
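The comparison in Fig.~\ref{fig:eg1-comp} can be reproduced along the
following lines (a sketch, not the code used for the figure; the
observation time and sample size here are smaller, illustrative
choices):
\begin{verbatim}
# Sketch: Gillespie samples of Gamma' (0 -> A with k1, A -> 0 with
# k2, 0 -> 2A with k3) versus Eq. (eq:eg1-prob-analytic).
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial

k1, k2, k3 = 10.0, 1.0, 10.0
rng = np.random.default_rng(1)

def sample(T=10.0):
    n, t = 0, 0.0
    while True:
        rates = np.array([k1, k2 * n, k3])
        t += rng.exponential(1.0 / rates.sum())
        if t > T:
            return n
        n += (1, -1, 2)[rng.choice(3, p=rates / rates.sum())]

data = np.array([sample() for _ in range(2000)])

def P_s(n):
    # the complex factors below combine into a real number
    y = (k1 + k3) / np.sqrt(-2.0 * k2 * k3 + 0j)
    Hn = hermval(y, [0.0] * n + [1.0])   # physicists' H_n(y)
    w = (-k3 / (2.0 * k2) + 0j) ** (n / 2.0)
    return (np.exp(-(1.5 * k3 + k1) / k2) * w * Hn / factorial(n)).real

for n in range(0, 45, 5):
    print(n, round(P_s(n), 4), round(float(np.mean(data == n)), 4))
\end{verbatim}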
\section{ Example with correlations from two-mode squeezing }\label{sec:two-mode-sq}
A noticeable feature of stationary distributions for complex-balanced systems is that they are of a product form and each species is statistically independent when there are no conserved quantities.
Here, we discuss an example where
the transformed reaction system has
a correlated stationary distribution among different species, which is obtained by the so-called two-mode squeezing used in the context of continuous-variable quantum information processing \cite{Weedbrook2012}.
\subsection{ Network transformation }
As an original network $\Gamma$, we consider the following:
\begin{equation}
\begin{tikzcd}
A
\rar["\ell_2", shift left]
&
\emptyset
\lar["\ell_1", shift left]
\rar["\ell_3", shift left]
&
B
\lar["\ell_4", shift left]
\end{tikzcd}
\end{equation}
The corresponding Hamiltonian reads
\begin{equation}
H =
\ell_1 (1 - a^\dag) a
+ \ell_2 (a^\dag - 1)
+\ell_3 (1 - b^\dag) b
+ \ell_4 (b^\dag - 1)
=
(a^\dag-1)
(\ell_1 - \ell_2 a)
+ (b^\dag-1)
(\ell_3 - \ell_4 b),
\end{equation}
where $a(a^\dag)$ and $b(b^\dag)$
are the annihilation (creation)
operators for species $A$ and $B$, respectively.
This network is weakly reversible and its deficiency is zero,
so the rate equations admit a complex-balanced steady-state solution.
The rate equations are
\begin{equation}
\frac{d}{dt} x_a = \ell_1 - \ell_2 x_a,
\qquad
\frac{d}{dt} x_b = \ell_3 - \ell_4 x_b,
\end{equation}
where $x_a(t)$ and $x_b(t)$ are concentrations of species $A$ and $B$,
respectively.
The steady-state concentrations are
given by
\begin{equation}
\bar x_a = \frac{\ell_1}{\ell_2},
\qquad
\bar x_b = \frac{\ell_3}{\ell_4}.
\label{eq:xa-xb}
\end{equation}
The stationary state of this system is the product of Poisson distributions with parameters \eqref{eq:xa-xb}.
We act on the stationary state
and the Hamiltonian of this system
with a two-mode squeeze operator,
\begin{equation}
S_2 (\zeta) = \exp \left(
\zeta^\ast a b - \zeta a^\dag b^\dag \right) ,
\end{equation}
which mixes the operators of different species.
The operators $a$ and $b$
are transformed as
\begin{align}
a
\mapsto
S_2 (\zeta) a S^{-1}_2 (\zeta)
&= \cosh q \, a + e^{i \phi} \sinh q \, b^\dag,
\\
b
\mapsto
S_2 (\zeta) b S^{-1}_2 (\zeta)
&= \cosh q \, b + e^{i \phi} \sinh q\, a^\dag ,
\end{align}
where $\zeta = q e^{i\phi}$.
We define the transformed Hamiltonian by
\begin{equation}
H' = (a^\dag-1)
S_2 (\zeta)(\ell_1 - \ell_2 a)S^{-1}_2 (\zeta)
+
(b^\dag-1)
S_2 (\zeta)(\ell_3 - \ell_4 b) S^{-1}_2 (\zeta)
,
\end{equation}
where we take $\phi = 0$, and $q$ can be either positive or negative.
To read off its chemical content,
let us rewrite the Hamiltonian as
\begin{equation}
\begin{split}
H' &=(a^\dag-1)
(\ell_1 - \ell_2 \cosh q\, a - \ell_2 \sinh q\, b^\dag)
+ (b^\dag-1)
(\ell_3 - \ell_4 \cosh q\, b - \ell_4 \sinh q\, a^\dag)
\\
&=(a^\dag-1)
(\ell_1 - \ell_2 \cosh q\, a)
+ (b^\dag-1)
(\ell_3 - \ell_4 \cosh q\, b)
-\ell_2 \sinh q(a^\dag b^\dag
\textcolor{blue}{-1}- b^\dag
\textcolor{blue}{+1})
- \ell_4 \sinh q (a^\dag b^\dag \textcolor{blue}{-1}-a^\dag\textcolor{blue}{+1})
\\
&=(a^\dag-1)
((\ell_1+
{\ell_4 \sinh q}) - \ell_2 \cosh q \, a)
+ (b^\dag-1)
((\ell_3
{+\ell_2 \sinh q}) - \ell_4 \cosh q\, b)
-(\ell_2+\ell_4) \sinh q(a^\dag b^\dag-1)
\\
&\eqqcolon
(a^\dag-1) (k_1 - k_2 a)
+ (b^\dag-1) (k_3 - k_4 b)
+ k_5(a^\dag b^\dag-1),
\end{split}
\end{equation}
where we have inserted $+1 -1$ (colored in blue) in the second line.
Comparing this with the form of a generic Hamiltonian \eqref{eq:chem-hamiltonian},
the transformed reaction system corresponds to a network $\Gamma'$ with the following reactions,
\begin{equation}
\begin{tikzcd}
A
\rar["k_2", shift left]
&
\emptyset
\lar["k_1", shift left]
\rar["k_3", shift left]
\dar["k_5"]
&
B
\lar["k_4", shift left]
\\
&
A+B
&
\end{tikzcd}
\end{equation}
The deficiency of this network is one, $\delta = 1$, and is not weakly reversible.
The parameters of $\Gamma'$ are written
by those of $\Gamma$ as
\begin{equation}
k_1 = \ell_1+\ell_4 \sinh q ,
\quad
k_2 = \ell_2\cosh q,
\quad
k_3 = \ell_3+\ell_2 \sinh q,
\quad
k_4 = \ell_4\cosh q,
\quad
k_5 = -(\ell_2 + \ell_4)\sinh q.
\end{equation}
Note that $k_1$ and $k_3$
can be taken to be zero,
in which case the corresponding reaction
is absent in the network.
The parameters of $\Gamma$ can
be expressed by
the parameters of $\Gamma'$ as
\begin{equation}
\ell_1 = k_1 + v k_4,
\qquad
\ell_2 = k_2 \sqrt{1-v^2},
\qquad
\ell_3 = k_3 + v k_2,
\qquad
\ell_4 = k_4 \sqrt{1-v^2},
\end{equation}
where we have defined
\begin{equation}
v \coloneqq \frac{k_5}{k_2 + k_4}.
\end{equation}
The parameter $q$ is also determined from
the parameters $k_i$
as $\tanh q = -v$.
A similar comment to the previous example also applies here.
For some choice of the parameters
$\{k_i\}_{i=1\ldots5}$,
some of $\{\ell_i \}_{i=1\ldots4}$ can become imaginary.
However, the stationary distribution
obtained through squeezing is in fact correct for {\it any} positive values
$\{k_i\}_{i=1\ldots5}$,
because the squeezed coherent state remains
the zero mode of $H'$
regardless of whether the $\ell_i$ are real or imaginary.
\subsection{ Stationary distribution }
Here, we look at the properties
of the stationary distribution.
For this purpose,
we will use the probability generating function.
The original state is a coherent state
and the corresponding
probability generating function is written as
\begin{equation}
\Psi_{\rm c} (z_a, z_b)
=
e^{\bar x_a (z_a - 1)}
e^{\bar x_b (z_b - 1)}.
\label{eq:coh-s-corr}
\end{equation}
We will denote the derivatives
with respect to $z_a$ and $z_b$ as
\begin{equation}
\p_a \coloneqq \frac{\p}{\p z_a},
\qquad
\p_b \coloneqq \frac{\p}{\p z_b}.
\end{equation}
Equation~\eqref{eq:coh-s-corr} is an eigenfunction of derivative operators,
\begin{equation}
\p_a
\Psi_{\rm c} (z_a, z_b)
=
\bar x_a \Psi_{\rm c} (z_a, z_b),
\qquad
\p_b
\Psi_{\rm c} (z_a, z_b)
=
\bar x_b \Psi_{\rm c} (z_a, z_b).
\label{eq:coh}
\end{equation}
The zero eigenstate of the transformed reaction system is obtained by acting $S_2(\zeta)$
on $\Psi_{\rm c}(z_a,z_b)$,
\begin{equation}
\Psi_{\rm sq} (z_a, z_b)
\coloneqq
S_2 (\zeta) \Psi_{\rm c}(z_a, z_b).
\end{equation}
Similarly to the case of single-mode squeezing,
to find the expression of $\Psi_{\rm sq} (z_a, z_b)$, we use the following eigenvalue equations
obtained by acting $S_{2}(\zeta)$
on Eq.~\eqref{eq:coh},
\begin{align}
S_2 (q) \p_a S^{-1}_2 (q)
\Psi_{\rm sq} (z_a, z_b)
&=
\bar x_a \Psi_{\rm sq} (z_a, z_b),
\label{eq:sps-sq-sq-a}
\\
S_2 (q) \p_b S^{-1}_2 (q)
\Psi_{\rm sq} (z_a, z_b)
&=
\bar x_b \Psi_{\rm sq} (z_a, z_b),
\label{eq:sps-sq-sq-b}
\end{align}
where we have taken the parameter to be real,
$\zeta = q \in \mathbb R$.
Namely, $\Psi_{\rm sq}(z_a,z_b)$
is an eigenfunction of
operators,
$S_2 (q) \p_a S^{-1}_2 (q)$
and
$S_2 (q) \p_b S^{-1}_2 (q)$.
Noting that
\begin{equation}
S_2 (q) \p_a S^{-1}_2 (q)
=
\cosh q \, \p_a + \sinh q \, z_b,
\qquad
S_2 (q) \p_b S^{-1}_2 (q)
=
\cosh q \, \p_b + \sinh q \, z_a,
\end{equation}
Eqs.~\eqref{eq:sps-sq-sq-a}
and \eqref{eq:sps-sq-sq-b}
are written as
\begin{align}
(\p_a + \tanh q \, z_b)\Psi_{\rm sq} (z_a, z_b)
&=
\frac{\bar x_a}{\cosh q} \Psi_{\rm sq} (z_a, z_b),
\\
(\p_b + \tanh q \, z_a)\Psi_{\rm sq} (z_a, z_b)
&=
\frac{\bar x_b}{\cosh q} \Psi_{\rm sq} (z_a, z_b).
\end{align}
These differential equations can be immediately solved to give
\begin{equation}
\begin{split}
\Psi_{\rm sq} (z_a, z_b)
&=
\exp
\left(
- \tanh q (z_a z_b -1 )
+ \frac{\bar x_a}{\cosh q} (z_a -1)
+ \frac{\bar x_b}{\cosh q} (z_b -1)
\right)
\\
&=
\exp
\left(
v (z_a z_b -1 )
+
\frac{k_1 + v k_4}{k_2} (z_a -1)
+
\frac{k_3 + v k_2}{k_4} (z_b -1)
\right) ,
\end{split}
\end{equation}
where we used the parameters of $\Gamma'$
in the second line.
We have fixed the normalization constant
using the condition
$\Psi_{\rm sq}(z_a=1,z_b=1)=1$.
The probability generating functions of
marginalized distributions for $n_a$ and $n_b$ are
given by
\begin{align}
\psi_a (z_a)
\coloneqq
\Psi_{\rm sq} (z_a, z_b = 1)
&=
\exp
\left[
\frac{k_1 + k_5}{k_2}
(z_a -1)
\right] ,
\\
\psi_b (z_b)
\coloneqq
\Psi_{\rm sq} (z_a=1, z_b)
&=
\exp
\left[
\frac{k_3 + k_5}{k_4} (z_b -1)
\right]
.
\end{align}
Thus, the marginal distributions are Poissonian,
and they are characterized by
the following means:
\begin{equation}
\langle n_a \rangle
=
\frac{k_1 + k_5}{k_2},
\qquad
\langle n_b \rangle
=
\frac{k_3 + k_5}{k_4}.
\end{equation}
As a result of the two-mode squeezing,
the joint distribution is not a product of Poisson distributions,
and $n_a$ and $n_b$ are correlated.
The covariance is given by
\begin{equation}
\langle n_a n_b \rangle
-
\langle n_a \rangle
\langle n_b \rangle
=
v
= \frac{k_5}{k_2 + k_4} .
\end{equation}
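These moment formulas can be checked directly from the generating function. The following sympy sketch (our own script; the parameter values are those used in Fig.~\ref{fig:eg-corr-marginal}) differentiates $\Psi_{\rm sq}$ and verifies the means and the covariance:
\begin{verbatim}
# Sketch: verify <n_a>, <n_b>, and the covariance from Psi_sq.
import sympy as sp

za, zb = sp.symbols('z_a z_b')
k1, k2, k3, k4, k5 = 20, 1, 10, 1, 10      # values of the figure
v = sp.Rational(k5, k2 + k4)

Psi = sp.exp(v*(za*zb - 1) + (k1 + v*k4)/k2*(za - 1)
             + (k3 + v*k2)/k4*(zb - 1))

na_mean = sp.diff(Psi, za).subs({za: 1, zb: 1})      # <n_a>
nb_mean = sp.diff(Psi, zb).subs({za: 1, zb: 1})      # <n_b>
nanb    = sp.diff(Psi, za, zb).subs({za: 1, zb: 1})  # <n_a n_b>

assert sp.simplify(na_mean - sp.Rational(k1 + k5, k2)) == 0
assert sp.simplify(nb_mean - sp.Rational(k3 + k5, k4)) == 0
assert sp.simplify(nanb - na_mean*nb_mean - v) == 0  # covariance = v
\end{verbatim}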
\subsection{ Derivation of the number distribution }
Here, we derive the analytic expression of
the stationary distribution of this reaction system.
Let us write the probability generating function in the following form
\begin{equation}
\Psi_{\rm sq} (z_a, z_b)
=
\exp
\left[
v (z_a z_b -1 )
+ c_a (z_a -1) + c_b (z_b -1)
\right] ,
\end{equation}
where we defined
\begin{equation}
c_a \coloneqq \frac{k_1 + v k_4}{k_2} ,
\qquad
c_b \coloneqq \frac{k_3 + v k_2}{k_4} .
\end{equation}
Expanding the generating function in powers of $z_a$ and $z_b$,
\begin{equation}
\begin{split}
\Psi_{\rm sq} (z_a, z_b)
&=
e^{ - v - c_a - c_b}
\sum_{k,l,m}
\frac{1}{k! \, l! \, m!}
v^k (c_a)^l (c_b)^m
(z_a)^{k+l} (z_b)^{k+m}
\\
&=
e^{ - v - c_a - c_b}
\sum_{n_a, n_b}
\sum_{k,l,m}
\delta_{k+l,n_a}
\delta_{k+m,n_b}
\frac{v^k (c_a)^l (c_b)^m }{k! \, l! \, m!}
(z_a)^{k+l} (z_b)^{k+m}
\\
&=
e^{ - v - c_a - c_b}
\sum_{n_a, n_b}
\sum_{k=0}^{\min (n_a,n_b)}
\frac{v^k (c_a)^{n_a - k} (c_b)^{n_b - k} }{k! \, (n_a-k)! \, (n_b - k)!}
(z_a)^{n_a} (z_b)^{n_b} .
\end{split}
\end{equation}
We can read off the number distribution
from the coefficient of $(z_a)^{n_a} (z_b)^{n_b}$.
To further simplify the expression,
let us first consider the case
$n_a \le n_b$:
\begin{equation}
\begin{split}
P_{\rm s}(n_a,n_b)
&=
e^{ - v - c_a - c_b}
\sum_{k=0}^{n_a}
\frac{v^k (c_a)^{n_a - k} (c_b)^{n_b - k} }{k! \, (n_a-k)! \, (n_b - k)!}
\\
&=
\frac{e^{ - v - c_a - c_b}}{n_b !}
v^{n_a} (c_b)^{n_b - n_a}
\sum_{k=0}^{n_a}
\frac{n_b ! }{k! \, (n_a-k)! \, (n_b - n_a + k)!}
\left(
\frac{c_a c_b}{v}
\right)^k
\\
&=
\frac{e^{ - v - c_a - c_b}}{n_b !}
v^{n_a} (c_b)^{n_b - n_a}
L_{n_a}^{(n_b - n_a )}
\left(- \frac{c_a c_b}{v} \right) ,
\end{split}
\end{equation}
where
$L_{n}^{(p)}\left(x \right)$
are generalized Laguerre polynomials,
and we changed the summation label
$k \leftrightarrow (n_a - k)$ in the second line. We also used the following expression of generalized Laguerre polynomials
\begin{equation}
L_n^{(p)} (x)
= \sum_{k=0}^{n} \frac{(n+p)!}{(p+k)!(n-k)!k!} (-x)^k.
\end{equation}
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{eg2m.pdf} \caption{
Marginal probability distributions of $n_a$ and $n_b$ from the analytic solution and numerical simulations.
Solid lines are from the analytic solution,
which agrees well with the result of numerical simulations indicated by boxes (species A) and circles (species B).
The plot is based on $100,000$ events.
Parameters are chosen as $k_1=20,k_2=1,k_3=10,k_4=1,k_5=10$.
}
\label{fig:eg-corr-marginal}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=9cm]{eg2joint.pdf}
\caption{
Joint probability distributions from
the analytic solution and numerical simulations.
Parameters and the number of events are the same as Fig.~\ref{fig:eg-corr-marginal}.
}
\label{fig:eg-corr-joint}
\end{figure}
A similar expression can be
obtained for the case $n_a \ge n_b$.
Introducing
$p \coloneqq \min (n_a,n_b)$
and
$q \coloneqq \max (n_a,n_b)$
(not to be confused with the squeezing parameter),
the stationary distribution is finally written as
\begin{equation}
P_{\rm s}(n_a,n_b)
=
\frac{ e^{-v-c_a-c_b} }{q !}
v^p (c_a)^{n_a - p} (c_b)^{n_b - p}
L_{p}^{(q-p)}
\left( - \frac{c_a c_b}{v} \right) .
\end{equation}
The expression matches the coefficients
of two-mode squeezed states \cite{PhysRevA.43.3854} (up to a normalization constant).
We have validated the analytically computed probability distributions
with numerically computed ones
using the Gillespie algorithm.
In Fig.~\ref{fig:eg-corr-marginal},
we plot the analytically calculated marginal distributions of species $A$ and $B$,
which are consistent with numerical simulations.
Figure~\ref{fig:eg-corr-joint} shows the analytic form of the joint distribution
and a histogram based on Monte Carlo simulations. The two are consistent.
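For readers who wish to reproduce this comparison, the following Python sketch evaluates the closed-form distribution with \texttt{scipy} and estimates the stationary means by a time-weighted Gillespie simulation of $\Gamma'$; all function names and simulation choices (seed, horizon, burn-in) are ours:
\begin{verbatim}
# Sketch: closed-form stationary distribution vs. Gillespie simulation.
import numpy as np
from scipy.special import eval_genlaguerre

k1, k2, k3, k4, k5 = 20.0, 1.0, 10.0, 1.0, 10.0   # values of the figures
v = k5/(k2 + k4)
ca, cb = (k1 + v*k4)/k2, (k3 + v*k2)/k4

def P_s(na, nb):
    p, q = min(na, nb), max(na, nb)
    logfact = np.sum(np.log(np.arange(1, q + 1)))  # log(q!)
    pref = np.exp(-v - ca - cb - logfact)
    return pref * v**p * ca**(na - p) * cb**(nb - p) \
                * eval_genlaguerre(p, q - p, -ca*cb/v)

# time-weighted Gillespie averages for <n_a>, <n_b>
rng = np.random.default_rng(1)
na, nb, t = 0, 0, 0.0
wsum, Tobs = np.zeros(2), 0.0
while t < 5000.0:
    rates = np.array([k1, k2*na, k3, k4*nb, k5])  # 0->A,A->0,0->B,B->0,0->A+B
    dt = rng.exponential(1.0/rates.sum())
    if t > 100.0:                                 # discard burn-in
        wsum += dt*np.array([na, nb]); Tobs += dt
    t += dt
    r = rng.choice(5, p=rates/rates.sum())
    na += (r == 0) - (r == 1) + (r == 4)
    nb += (r == 2) - (r == 3) + (r == 4)

print("simulated <n_a>, <n_b>:", wsum/Tobs)       # approx. 30 and 20
print("analytic  <n_a>, <n_b>:", (k1 + k5)/k2, (k3 + k5)/k4)
print("P_s(30, 20) =", P_s(30, 20))
\end{verbatim}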
\section{ Example with non-linear reactions }\label{sec:nonlinear}
The present method can be applied
to reaction systems that contain
reactions whose source complexes involve two or more species.
In such cases, the differential equations satisfied by the generating functions (in the stationary state) contain second- or higher-order partial derivatives, and obtaining analytic solutions
is a nontrivial task.
Squeezing can be applied in such situations as well.
Here, we discuss such an example.
As a starting point, let us consider the following network $\Gamma$,
\begin{equation}
\label{eq:network 0 B A+B}
\begin{tikzcd}
\emptyset
\rar["\ell_3", shift left]
&
B
\lar["\ell_4", shift left]
\rar["\ell_1", shift left]
&
A + B
\lar["\ell_2", shift left]
\end{tikzcd}
\end{equation}
The reaction $A+B \to B$ involves two species.
The network $\Gamma$ has zero deficiency and is weakly reversible.
Hence, the stationary distribution
is a product of Poisson distributions,
whose means are given by the steady-state solution,
\begin{equation}
\bar x_a = \frac{\ell_1}{\ell_2} ,
\qquad
\bar x_b = \frac{\ell_3}{\ell_4} .
\end{equation}
The stochastic Hamiltonian is written as
\begin{equation}
\begin{split}
H
&=
\ell_1
( a^\dag b^\dag - b^\dag )
b
+
\ell_2 ( b^\dag - a^\dag b^\dag )
a b
+ \ell_3 ( b^\dag - 1)
+ \ell_4 (1-b^\dag) b
\\
&=
( a^\dag b^\dag - b^\dag )
(\ell_1 b - \ell_2 a b )
+
( b^\dag - 1) (\ell_3 - \ell_4 b) .
\end{split}
\end{equation}
On this system, we perform the single-mode squeezing of species $A$.
The annihilation and creation operators of species $A$
are transformed as Eqs.~\eqref{eq:s-a-s} and \eqref{eq:s-a-s-2}, respectively.
The transformed Hamiltonian reads
\begin{equation}
\begin{split}
H'
&=
( a^\dag b^\dag - b^\dag )
S_a(r)
(\ell_1 b - \ell_2 a b )
S_a^{-1}(r)
+
( b^\dag - 1) (\ell_3 - \ell_4 b)
\\
&=
( a^\dag b^\dag - b^\dag )
(\ell_1 b - \ell_2 \cosh r\, a b )
+
( b^\dag - 1) (\ell_3 - \ell_4 b)
-
\ell_2 \sinh r
( (a^\dag)^2 b^\dag - a^\dag b^\dag ) b .
\end{split}
\end{equation}
We write the last term as
\begin{equation}
-
\ell_2 \sinh r
( (a^\dag)^2 b^\dag - b^\dag - a^\dag b^\dag + b^\dag) b
=
- \ell_2 \sinh r
( (a^\dag)^2 b^\dag - b^\dag ) b
+
\ell_2 \sinh r ( a^\dag b^\dag - b^\dag) b .
\end{equation}
Hence, the total Hamiltonian is written as
\begin{equation}
\begin{split}
H'
&=
( a^\dag b^\dag - b^\dag )
((\ell_1 + \ell_2 \sinh r) b - \ell_2 \cosh r\, a b )
+
( b^\dag - 1) (\ell_3 - \ell_4 b)
- \ell_2 \sinh r
( (a^\dag)^2 b^\dag - b^\dag ) b
\\
&\eqqcolon
( a^\dag b^\dag - b^\dag )
(k_1 b - k_2 a b )
+
( b^\dag - 1) (k_3 - k_4 b)
+
k_5 ( (a^\dag)^2 b^\dag - b^\dag ) b .
\end{split}
\end{equation}
This Hamiltonian represents the following reaction network,
\begin{equation}
\label{eq:network 0 B A+B 2A+B}
\begin{tikzcd}
\emptyset
\rar["k_3", shift left]
&
B
\lar["k_4", shift left]
\rar["k_1", shift left]
\dar["k_5"]
&
A+B
\lar["k_2", shift left]
\\
&
2 A + B
&
\end{tikzcd}
\end{equation}
which we call $\Gamma'$.
The parameters of $\Gamma'$ are written
in terms of those of $\Gamma$ as
\begin{equation}
k_1 = \ell_1 + \ell_2 \sinh r,
\quad
k_2 = \ell_2 \cosh r,
\quad
k_3 = \ell_3,
\quad
k_4 = \ell_4,
\quad
k_5 = - \ell_2 \sinh r .
\end{equation}
We can solve this for $\{\ell_i\}_{i=1\ldots4}$ and $r$ as
\begin{equation}
\ell_1 = k_1 + k_5,
\quad
\ell_2 = \sqrt{(k_2)^2 - (k_5)^2},
\quad
\ell_3 = k_3,
\quad
\ell_4 = k_4,
\quad
\tanh r = - \frac{k_5}{k_2}.
\end{equation}
The probability generating function $\Psi'(z_a,z_b)$
for the system $\Gamma'$ satisfies the following differential equations,
\begin{align}
\left( \p_a
- \frac{k_5}{k_2}
z_a \right) \Psi' (z_a,z_b)
&=
\frac{k_1 + k_5}{k_2}
\Psi' (z_a,z_b),
\\
\p_b \Psi' (z_a,z_b) &=
\frac{k_3}{k_4}
\Psi' (z_a,z_b) .
\end{align}
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{egnl.pdf}
\caption{
Marginal probability distributions of $n_a$ and $n_b$ from the analytic solution and numerical simulations.
Solid lines are from the analytic solution,
which are consistent with the result of Gillespie simulations indicated by boxes (species A) and circles (species B).
Parameters are chosen as $k_1=10,k_2=1,k_3=20,k_4=1,k_5=20$,
and we have simulated $50,000$ events for the plot.
}
\label{fig:eg-nl-marginal}
\end{figure}
The normalized solution is given by
\begin{equation}
\Psi' (z_a,z_b)
=
\exp
\left[
\frac{k_5}{2k_2} ((z_a)^2-1) +
\frac{k_1 + k_5}{k_2} (z_a - 1)
+ \frac{k_3}{k_4} (z_b-1)
\right].
\end{equation}
The stationary distribution of $\Gamma'$
is analytically obtained as
\begin{equation}
P_{\rm s} (n_a,n_b)
=
e^{
- \frac{1}{k_2}
\left( \frac{3}{2} k_5 + k_1 \right)
-
\frac{k_3}{k_4}
}
\frac{1}{n_a !}
\left(
- \frac{k_5}{2k_2 }
\right)^{\frac{n_a}2}
H_{n_a}
\left( \frac{k_1+k_5}{\sqrt{ -2 k_2 k_5}} \right)
\frac{1}{n_b!}
\left(\frac{k_3}{k_4}\right)^{n_b} .
\label{eq:eg-nl-prob-analytic}
\end{equation}
We have checked that this expression is consistent with numerical simulations based on the Gillespie algorithm (Fig.~\ref{fig:eg-nl-marginal}).
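The following Python sketch (our own script) evaluates the $n_a$ marginal of Eq.~\eqref{eq:eg-nl-prob-analytic} in two independent ways, by direct series expansion of $\Psi'$ and via the Hermite closed form with complex arithmetic, and checks normalization and the mean; the $n_b$ marginal is simply Poisson with mean $k_3/k_4$ and is omitted:
\begin{verbatim}
# Sketch: two independent evaluations of the n_a marginal.
import numpy as np
from math import factorial, exp

k1, k2, k5 = 10.0, 1.0, 20.0      # values used in the figure
alpha = k5/(2*k2)
beta = (k1 + k5)/k2

def P_a(n):
    """Coefficient of z^n in exp(alpha*(z^2-1) + beta*(z-1))."""
    s = sum(alpha**j/factorial(j)*beta**(n - 2*j)/factorial(n - 2*j)
            for j in range(n//2 + 1))
    return exp(-alpha - beta)*s

def P_a_hermite(n):
    """Same quantity via the Hermite closed form (complex arithmetic)."""
    x = (k1 + k5)/np.emath.sqrt(-2*k2*k5)       # purely imaginary argument
    h_prev, h = 1.0 + 0j, 2*x                   # H_0, H_1
    for m in range(1, n):                       # H_{m+1} = 2x H_m - 2m H_{m-1}
        h_prev, h = h, 2*x*h - 2*m*h_prev
    H = h_prev if n == 0 else h
    val = np.exp(-(1.5*k5 + k1)/k2) \
          * np.emath.sqrt(-k5/(2*k2))**n / factorial(n) * H
    return float(val.real)

p = np.array([P_a(n) for n in range(150)])
assert abs(p.sum() - 1.0) < 1e-10                       # normalization
assert abs(p @ np.arange(150) - (k1 + 2*k5)/k2) < 1e-8  # mean <n_a> = 50
assert all(abs(P_a(n) - P_a_hermite(n)) < 1e-12 for n in range(30))
\end{verbatim}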
\section{ Squeezing generic chemical reaction systems }\label{sec:generic}
So far, we have discussed squeezing transformations in three simple examples.
One can perform squeezing on more complicated reaction networks.
For example, the single-mode squeezing of
species $A$ on the following network results in:
\begin{equation}
\begin{tikzpicture}
\node at (-1, 0) {
{
$
\begin{tikzcd}
A
\rar["\ell_1", shift left]
&
B
\lar["\ell_2", shift left]
&
\emptyset
\rar["\ell_3", shift left]
&
A+B
\lar["\ell_4", shift left]
\end{tikzcd}
$
}
};
\node at (9, -0.3) {
{
$
\begin{tikzcd}
A
\rar["k_1", shift left]
&
B
\lar["k_2", shift left]
\dar["k_6", shift left]
&
\emptyset
\rar["k_3", shift left]
\dar["k_5", shift left]
&
A+B
\lar["k_4", shift left]
\\
& 2A + B & 2A
&
\end{tikzcd}
$
}
};
\draw[mybrightblue, -stealth, line width=4pt]
(2.5,0) -- (4.8,0);
\node at (3.6,-0.6){ \scalebox{1} {\color{mybrightblue} Squeezing }};
\end{tikzpicture}
\label{eq:squeeze-tr-eg}
\end{equation}
It is natural to ask
what kind of structural transformation
is induced by squeezing a
generic reaction network,
which we discuss here.
Recall that the stochastic Hamiltonian
for a generic reaction network is
written as
\begin{equation}
H =
\sum_A k_A \left[
(a^\dag)^{t_A} - (a^\dag)^{s_A}
\right] a^{s_A} .
\end{equation}
We shall pick one species $v_i$
and apply the two-mode squeezing operator
mixing $v_i$ and $v_j$ (the case of single-mode squeezing can be obtained by setting $j=i$).
The annihilation operator $a_i$ is transformed as
\begin{equation}
a_i \mapsto S_j (r) a_i S_j^{-1} (r)
= \cosh r \, a_i + \sinh r \, a^\dag_j ,
\end{equation}
For simplicity, we here assume that species $v_i$ appears only once as a source in a reaction
(namely, $s_{iA} \le 1$ for any $A$).
As a transformed Hamiltonian,
we consider the following:
\begin{equation}
H' =
\sum_A
k_A \left[
(a^\dag)^{t_A} -
(a^\dag)^{s_A}
\right]
S_j (r) a^{s_A} S^{-1}_j (r) .
\end{equation}
If the reaction $e_A$ contains one $v_i$ as a reactant,
\begin{equation}
S_j (r) a^{s_A} S_j^{-1} (r)
=
(\cosh r\, a_i + \sinh r \, a_j^\dag)
a^{s_A - e_{i}} ,
\end{equation}
where $e_{i} \in \mathbb N^V$ is the vector
whose $i$-th component is one
and whose other components are zero
if $v_i$ is a source of reaction $A$,
and the zero vector otherwise.
This Hamiltonian $H'$ has a steady state that satisfies $H'\ket{\Psi'}=0$ given by a squeezed coherent state,
\begin{eqnarray}
\ket{\Psi'} = S_j(r) \ket{\Psi}.
\end{eqnarray}
In the transformed Hamiltonian,
the part containing one $v_i$
as a source is written as
\begin{equation}
\begin{split}
&
k_A \left[
(a^\dag)^{t_A} -
(a^\dag)^{s_A}
\right]
(\cosh r\, a_i + \sinh r\, a_j^\dag)
a^{s_A - e_{i}}
\\
&=
k_A \cosh r
\left[
(a^\dag)^{t_A} -
(a^\dag)^{s_A}
\right]
a^{s_A}
+
k_A \sinh r
\left[
(a^\dag)^{t_A + e_{j} }
- (a^\dag)^{s_A + e_{j} }
\right]
a^{s_A - e_{i}}
.
\end{split}
\end{equation}
We write the second term as
\begin{equation}
\begin{split}
\text{(2nd term)}
&=
k_A \sinh r
\left[
(a^\dag)^{t_A + e_{j}} - (a^\dag)^{s_A + e_{j}}
\right]
a^{s_A - e_{i}}
\color{mydarkred}
+
k_A \sinh r
(a^\dag)^{s_A - e_{i}}
a^{s_A - e_{i}}
\color{mydarkpurple}
-
k_A \sinh r
(a^\dag)^{s_A - e_{i}}
a^{s_A - e_{i}}
\color{black}
\\
&=
k_A \sinh r
\left[
(a^\dag)^{t_A + e_{j}} - (a^\dag)^{s_A - e_{i}}
\right]
a^{s_A - e_{i}}
-
k_A \sinh r
\left[
(a^\dag)^{s_A + e_{j}} - (a^\dag)^{s_A - e_{i}}
\right]
a^{s_A - e_{i}} ,
\end{split}
\label{eq:2nd-term}
\end{equation}
where the colored terms sum to zero.
From Eq.~\eqref{eq:2nd-term},
we can see that two additional reactions appear,
whose chemical content can be read off.
Namely, if the reaction $e_A$ contains
one $v_i$ as a reactant,
the transformed reaction system
contains the following two
additional reactions\footnote{
Similarly, if there is a reaction containing one $v_j$ as a source, two additional reactions appear from the reaction via the two-mode squeezing.
},
\begin{align}
s_A - e_{i}
\quad &\longrightarrow
\quad s_A + e_{j},
\label{eq:add-reac-1}
\\
s_A - e_{i}
\quad
&\longrightarrow
\quad
t_A + e_{j}.
\label{eq:add-reac-2}
\end{align}
The reaction constants of
these reactions are given by
$-k_A \sinh r$
and $k_A \sinh r$, respectively.
If the original reaction network contains
either of the two additional reactions,
it is possible to take all the reaction coefficients positive in the resulting system.
If we set $j=i$, we obtain the additional reactions in the case of single-mode squeezing of species $v_i$.
One can check that all the examples discussed
earlier can be understood from the rules~\eqref{eq:add-reac-1} and
\eqref{eq:add-reac-2}.
For instance, in the example~\eqref{eq:squeeze-tr-eg},
there are two reactions that involve $A$ as a source.
From the reaction $A \to B$, there appear the following two reactions,
\begin{align}
\emptyset &\longrightarrow 2A, \\
\emptyset &\longrightarrow A+B.
\end{align}
The second one already exists in the original set of reactions,
and the transformation renormalizes the rate constant of the reaction.
From the reaction $A +B \to \emptyset$, we have
\begin{align}
B &\longrightarrow 2A+B, \\
B &\longrightarrow A,
\end{align}
where the second one is in the original reactions.
As a result, we obtain the network shown on the right of \eqref{eq:squeeze-tr-eg}.
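The rules~\eqref{eq:add-reac-1} and \eqref{eq:add-reac-2} are purely combinatorial and easy to automate. The following Python sketch (all names and the data format are ours) generates the additional reactions for the example above; single-mode squeezing corresponds to choosing $j=i$:
\begin{verbatim}
# Sketch: the structural rules (add-reac-1/2) applied programmatically.
# Complexes are dicts {species: multiplicity}; rates stay symbolic.
from collections import Counter

def squeeze_additions(reactions, i, j):
    """Extra reactions from squeezing the pair (v_i, v_j)."""
    extra = []
    for src, tgt, k in reactions:
        if src.get(i, 0) == 1:                       # one v_i as a reactant
            new_src = Counter(src); new_src[i] -= 1  # s_A - e_i
            s_plus = Counter(src); s_plus[j] += 1    # s_A + e_j
            t_plus = Counter(tgt); t_plus[j] += 1    # t_A + e_j
            extra.append((dict(+new_src), dict(+s_plus), f"-({k})*sinh(r)"))
            extra.append((dict(+new_src), dict(+t_plus), f"+({k})*sinh(r)"))
    return extra

# left-hand network of the displayed example: A <-> B and 0 <-> A+B
Gamma = [({'A': 1}, {'B': 1}, 'l1'), ({'B': 1}, {'A': 1}, 'l2'),
         ({}, {'A': 1, 'B': 1}, 'l3'), ({'A': 1, 'B': 1}, {}, 'l4')]
for reac in squeeze_additions(Gamma, 'A', 'A'):      # single-mode, species A
    print(reac)
# from A -> B:    0 -> 2A    and  0 -> A+B
# from A+B -> 0:  B -> 2A+B  and  B -> A
\end{verbatim}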
\section{ Summary and discussion }\label{sec:summary}
In this paper, we studied the stationary distributions of stochastic chemical reaction systems using an analogy to quantum mechanics.
Stationary distributions of the product Poisson form
in the Anderson--Craciun--Kurtz theorem correspond to coherent states in the quantum-mechanical formulation, and we considered squeezing of the coherent states.
Using the same squeeze operator,
the stochastic Hamiltonian is also transformed,
and the squeezed coherent states are zero eigenstates of the new Hamiltonian.
The transformed Hamiltonian represents a different chemical reaction network from the original one,
and in general its deficiency is nonzero and
weak reversibility is lost.
From the squeezed coherent states, we can obtain analytical expressions of the stationary distribution of the new reaction network.
We validated the obtained expressions of stationary distributions via comparison with stochastic simulations.
We also discussed the form of additional reactions that appear in a squeezing for a generic chemical reaction network.
The present method can be applied even when the reaction network is higher than first order and certain reactions involve two or more species as reactants.
However, we admit that the reaction networks that can be reached by squeezing those with complex-balanced equilibria are rather limited.
Still, we believe that it would be possible to find analytic stationary distributions by considering other kinds of transformation\footnote{
A duality relation for stochastic processes has been discussed~\cite{Ohkubo_2013} based on the Doi--Peliti formalism.
}, which may or may not come from the analogy to quantum mechanics.
In this paper, we considered single-mode and two-mode squeezed states.
There are other types of multiphoton coherent states~\cite{DELLANNO200653}
and it would be interesting to examine their counterparts in stochastic chemical systems.
\begin{acknowledgments}
The authors are grateful to Hyukpyo Hong and Bryan Hernandez for helpful discussions.
Y.~H. and R.~H. are supported by an appointment of the JRG Program at the APCTP, which is funded through the Science and Technology Promotion Fund and Lottery Fund of the Korean
Government, and is also supported by the Korean Local
Governments of Gyeongsangbuk-do Province and Pohang City.
Y.~H. is also supported by the National Research Foundation (NRF) of Korea (Grant No. 2020R1F1A1076267) funded by the Korean Government (MSIT).
\end{acknowledgments}
\section{Introduction}
In network communication, latency is defined as the time elapsed between the transmission of a packet from the source and its recovery at the receiver. Ensuring reliable communication with low latency is the main challenge in designing next-generation communication systems. To this end, we focus on studying error-correcting codes that ensure reliability in the presence of low decoding delay constraints. Our motivation stems from several modern applications such as Extended Reality (XR), Virtual Reality (VR), and Cloud Gaming.
In this work, we focus on forward error correction (FEC). FEC codes for correcting erasures (i.e., packet drops) with low decoding delay were first investigated by Martinian {\em et al.} in~\cite{Martinian2004, Martinian2007}. The works in \cite{Martinian2004, Martinian2007} focused on designing codes that correct a burst of $B$ erasures with a decoding delay that is at most $T$, and showed that the optimal code rate for this setting is $R=\frac{T}{T+B}$. In~\cite{Martinian2007}, Martinian and Trott presented an explicit rate-optimal code that can be constructed for all values of $B, T$, and $R$, with a linear field size $q=O(T)$. In~\cite{Badr2017}, the authors generalized the study on codes with low decoding delay by introducing and studying a channel model that considers both burst and arbitrary erasures. This channel model was referred to as the sliding-window erasure channel, denoted by $C(N, B, W)$, where in any sliding window of size $W$, the channel can introduce either an erasure burst of maximum length $B$, or else, up to $N$ arbitrary erasures. The authors in \cite{Badr2017} showed that, without loss of generality, the channel parameter $W$ could be set to $W=T+1$, and the optimal code rate for this channel is~$R=\frac{T+1-N}{T+1-N+B}$.
The term {\em streaming codes} is widely used in the literature to refer to codes that can correct {\em all} erasure patterns generated by the $C(N, B, W)$ sliding-window channel model. A series of works focused on designing rate-optimal streaming codes, e.g.,~\cite{Badr2017,Kumar2018, Fong2019l, dudzicz2019explicit, Rakumar2020, Domanovitz2020, Kumar2020IT, Kumar2020L,fong2020optimal,badr2017fec,badr2011diversity}. These codes have different field size requirements in terms of $T$, ranging from exponential $q=O(e^T)$ to linear $q=O(T)$. Existing streaming codes with linear field sizes can be constructed only for certain values of $N, B,$ and $T$, e.g., \cite{Rakumar2020, Kumar2020L}. The state-of-the-art optimal streaming code that can be constructed for {\em all} values of the parameters $N, B, T$, is the one that was recently presented by Domanovitz {\em et al.}~\cite{Domanovitz2020}, which requires a quadratic field size $q=O(T^2)$. Some works also considered variations of the $C(N, B, W)$ sliding-window channel, the authors in~\cite{badr2015streaming} proposed a code with cubic field size $q=O(T^3)$ that can correct most erasures when a window of size $W$ contains a {\em single} arbitrary erasure {\em in addition} to a burst erasure of maximum size $B$.
The goal of introducing sliding-window channels such as the $C(N, B, W)$ channel in~\cite{Badr2017} was to provide a theoretically tractable approximation of the Gilbert-Elliot (GE) channel\cite{Gilbert, Elliot}, which is a random channel with memory that is commonly used to model packet erasures over networks. However, the accuracy of this approximation heavily depends on the values of $N,B,T$, and the GE channel parameters. Namely, some realizations of the GE channel may include both burst and multiple arbitrary erasures occurring in the same window, and such cases are not covered by the sliding-window erasure channels studied in~\cite{Badr2017} and~\cite{badr2015streaming}. These realizations could have a relatively high probability of occurrence depending on the values of the system parameters.
In our work, we adopt a different approach from the literature for constructing low-delay erasure codes, where we do not optimize the performance of the code over sliding-window channels of any kind. Instead, we focus on constructing low-delay codes with linear field size that can also correct erasure patterns consisting of both burst and multiple arbitrary erasures occurring simultaneously in the same window. Moreover, our code design is focused on minimizing the packet loss probability by allowing the recovery of a maximal number of erased packets in scenarios where the erasure pattern is beyond the error correction capability of the code. The parameters of our code are $N,B,$ and $T$, where $T$ represents the maximum decoding delay and $(N,B)$ determines the erasure correction capability of the code. Our codes are explicit, systematic and can be constructed for all values of $N, B$, and $T$, with a linear field size $q\geq T$. We provide simulation results on the packet loss probability of our code over the GE channel and compare the results to existing constructions in the literature. The comparison shows that our code can outperform existing streaming codes in terms of both packet loss probability and maximum delay, for the same code rate and lower field size. An important implication of our results is that constructing rate-optimal streaming codes with zero error over theoretical channels, such as sliding-window channels, does not necessarily translate into codes with optimal performance over practical channels, such as the GE channel.
\section{Preliminaries}
\label{sec:2}
\subsection{System Model}
\label{model}
In what follows, we discuss coding approaches for tackling the low-delay packet transmission problem. Convolutional codes are typically used for the low-delay packet transmission setting, and the delay is measured in terms of the time units that one needs to wait for before being able to decode an erased packet. These codes can be constructed in multiple ways but they share the basic idea of introducing redundant symbols or packets through time. The redundancy is generated by encoding the information symbols or packets causally. More specifically, the parity symbols generated at time $i$ can only depend on the information symbols at times $j\leq i$. We will be discussing two main approaches for constructing such convolutional codes that differ in terms of the allocation of redundancy across time instances. We will focus on systematic code constructions since the systematic property ensures that unerased packets are recovered with zero delay.
The first known approach~\cite{Martinian2004} for designing low-delay codes is the following. Consider the setting where at each time~$i$, a source generates an information packet denoted by \mbox{$\mathbf{u}[i]=(u_1[i],u_2[i],\ldots,u_{k}[i])\in \mathbb{F}_q^k$} consisting of $k$ symbols over $\mathbb{F}_q$. At time $i$, a systematic encoder maps the information packet $\mathbf{u}[i]$ into an $n$-symbol coded packet denoted by \begin{equation}
\label{de}
\mathbf{x}[i]=(u_1[i],\ldots,u_{k}[i],p_1[i],\ldots,p_{n-k}[i])\in \mathbb{F}_q^n,
\end{equation}
where the parity symbols $p_1[i],\ldots,p_{n-k}[i]$ are generated as a function of $\mathbf{u}[j]$ with $j\leq i$. The rate of the code is $R=k/n$. Our goal is to design the code such that it allows decoding an erased packet under a maximum delay constraint $T$, where $T$ is a code parameter. Therefore, in the case where $\mathbf{x}[i]$ is erased, the underlying code should allow recovering the information packet $\mathbf{u}[i]$ by the time the packet $\mathbf{x}[i+T]$ is received.
In the previous description, a delay-constrained code with rate $R=k/n$ is constructed by introducing $n-k$ redundant parity symbols at each time $i$. Alternatively, a second known approach~\cite{VajhaITW, Vajha} for constructing a code with the same delay properties and same rate $R=k/n$, is to transmit $k$ information packets, followed by $n-k$ parity packets, in $n$ consecutive time instances. More specifically, the first $n$ transmitted packets would be given by
\begin{equation}
\label{he}
\mathbf{x}[i] = \left\{\def\arraystretch{1.2}%
\begin{array}{@{}c@{\quad}l@{}}
\mathbf{u}[i] \in \mathbb{F}_q^k, & \text{for $i=1,\ldots,k$},\\
\mathbf{p}[i] \in \mathbb{F}_q^k, & \text{for $i=k+1,\ldots,n$},\\
\end{array}\right.
\end{equation}
where $\mathbf{p}[k+1], \ldots, \mathbf{p}[n]$ are $n-k$ parity packets of length $k$ that are generated as a function of the $k$ preceding information packets $\mathbf{u}[1],\ldots,\mathbf{u}[k]$. This process can be repeated for every group of $k$ information packets.
\subsection{Diagonal and Horizontal Interleaving}
\label{interleaving}
Next, we present two known interleaving techniques for transforming a low-delay block code into a convolutional one with equivalent code properties. The importance of these interleaving techniques is that they reduce our problem to designing classical block codes that satisfy certain properties.
Consider an $(n,k)$ block code $\mathcal{C}\subseteq \mathbb{F}_q^n$. Suppose that a codeword $\mathbf{x}=(x_1,\ldots,x_n)\in \mathcal{C}$ is affected by erasures resulting in a string $\mathbf{y}=(y_1,\ldots,y_n)$, where $y_i\in \mathbb{F}_q \cup \{\star\}$. We say that an erased codeword symbol $x_i$ is decoded with maximum delay $T$, if this symbol can be recovered using $(y_1,y_2,\ldots,y_{i+T^*})$, where $T^*=\min \{T,n-i\}$.
The two interleaving techniques that we explain next, called diagonal and horizontal interleaving, allow transforming any block code $\mathcal{C}$ into a convolutional code that has the same code rate and equivalent decoding delay properties.
The diagonal interleaving technique~\cite{Martinian2007} is applied in the setting corresponding to~\eqref{de}. In diagonal interleaving, the convolutional code is obtained from its block code counterpart~$\mathcal{C}$ by positioning the codeword symbols diagonally across time instances, i.e., for all $i$, the symbols of the transmitted packet $\mathbf{x}[i]=(x_1[i],\ldots,x_n[i])\in \mathbb{F}_q^n$ satisfy the following property
$\left(x_1[i],x_2[i+1],x_3[i+2],\ldots,x_n[i+n-1]\right)\in \mathcal{C}.$
The horizontal interleaving technique~\cite{VajhaITW,Vajha} is applied in the setting corresponding to~\eqref{he}. Let $\mathcal{I}\triangleq \{1+mn~|~m\in \mathbb{N}\}$. In horizontal interleaving, each transmitted packet $\mathbf{x}[i]$ is of length~$k$, such that for $i\in \mathcal{I}$ we have
\[
\mathbf{x}[i+a-1] = \left\{\def\arraystretch{1.2}%
\begin{array}{@{}c@{\quad}l@{}}
\mathbf{u}[i+a-1] \in \mathbb{F}_q^k, & a=1,2,\ldots,k,\\
\mathbf{p}^{a-k+1}[i+a-1] \in \mathbb{F}_q^k, & a=k+1,\ldots,n,\\
\end{array}\right.
\]
where $\mathbf{p}^1[i+k],\mathbf{p}^2[i+k+1], \ldots,\mathbf{p}^{n-k}[i+n-1]$ represent the $n-k$ parity packets generated by horizontally encoding the symbols of $\mathbf{u}[i], \mathbf{u}[i+1],\ldots,\mathbf{u}[i+k-1]$ using the underlying systematic code $\mathcal{C}$. Namely, for $i\in \mathcal{I}$ and $j=1,\ldots,k$,
$$\left( u_j[i],\ldots,u_j[i+k-1], p^1_j[i+k],\ldots,p^{n-k}_j[i+n-1]\right) \in \mathcal{C}.$$
One can easily show that the aforementioned interleaving techniques result in a convolutional code that has equivalent decoding properties as the block code $\mathcal{C}$~\cite{Martinian2007, Kumar2020IT, VajhaITW, Vajha}. For more details on these interleaving techniques, we refer interested readers to~\cite{Martinian2007, Vajha} and references therein.
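As a concrete illustration of horizontal interleaving, the following Python sketch (our own bookkeeping code, with a placeholder row encoder) maps $k$ information packets onto one block period of $n$ transmitted packets:
\begin{verbatim}
# Sketch: horizontal interleaving of an (n, k) systematic block code
# into a packet stream ('encode_row' is a placeholder encoder).
import numpy as np

def horizontal_interleave(info_packets, encode_row, n):
    """info_packets: list of k length-k arrays (one per time slot).
    Returns the n packets sent in one block period."""
    U = np.stack(info_packets)               # k x k, rows are u[i]
    # encode each of the k symbol streams (columns) with the block code
    C = np.stack([encode_row(U[:, j]) for j in range(U.shape[1])], axis=1)
    return [C[t] for t in range(n)]          # packets x[1..n]

# toy (k+1, k) single-parity row code over GF(2), k = 3
encode_row = lambda u: np.append(u, u.sum() % 2)
packets = horizontal_interleave([np.array([1, 0, 1]), np.array([0, 1, 1]),
                                 np.array([1, 1, 0])], encode_row, 4)
# packets[0..2] are the information packets u[1..3]; packets[3] is parity
\end{verbatim}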
\begin{comment}
\subsection{Gilbert-Elliot (GE) and Sliding-Window Channels}
\label{GE}
The GE channel is a commonly-accepted model for packet erasures in networks. A GE channel with parameters $(\alpha,\beta,\epsilon_0,\epsilon_1)$ can be represented by a 2-state discrete-time Markov chain. At any instance of the GE channel, the channel is either in a good state or in a bad one (Figure~\ref{fig2}). In the good and bad states, the GE channel behaves as an i.i.d. erasure channel with erasure probabilities $\epsilon_0$ and $\epsilon_1$, respectively (typically $\epsilon_1 \gg \epsilon_0$). The probability of transitioning from the good state to the bad one between any two consecutive instances of the GE channel is $\alpha$, and the probability of transitioning from the bad state to the good one is $\beta$.
\begin{figure}[h!]
\vspace{0.2cm}
\centering
\begin{tikzpicture}
\node[state, text width=1cm, align=center] (s) {Good $\epsilon_0$};
\node[state, text width=1cm, align=center, right=of s] (r) {Bad \\ $\epsilon_1$};
\draw[every loop]
(r) edge[bend right, auto=right] node {$\beta$} (s)
(s) edge[bend right, auto=right] node {$\alpha$} (r)
(s) edge[loop left] node {$1-\alpha$} (s)
(r) edge[loop right] node {$1-\beta$} (r);
\end{tikzpicture}
\caption{2-state Markov chain illustration of the GE channel.}
\label{fig2}
\end{figure}
The GE channel has memory, which makes the theoretical analysis of the performance of codes over this channel challenging.
The $C(N, B, W)$ sliding-window-erasure channel was introduced in~\cite{Badr2017} as a theoretically tractable channel that approximates the behavior of the GE channel.
The $C(N, B, W)$ channel model assumes that in any sliding window of length~$W$, the channel can introduce either a single erasure burst of maximum length $B$, or a maximum of $N\leq B$ arbitrary erasures. The main intuition behind this approximation is that in a given time interval (window), the channel is likely to be and remain in one of the two states (good and bad). In the bad state, the channel is expected to introduce up to a certain number of burst erasures ($B$), and in the good state, the channel is expected to introduce up to a certain number of arbitrary erasures ($N\leq B$).
The previous works in the literature adopt this approximation and focus on optimizing their code constructions over the $C(N, B, W)$ channel. More specifically, the term streaming codes is used to refer to codes that have zero-error over the $C(N, B, W)$ channel, i.e., can correct all erasure patterns generated by this channel under the maximum delay constraint. As for the empirical performance of the code, the GE channel with $\epsilon_0=\epsilon$ and $\epsilon_1=1$ is typically used to simulate the performance of streaming codes. The accuracy of the aforementioned approximation depends on several factors including, but not limited to, the size of the window and values of the GE channel parameters $(\alpha,\beta,\epsilon_0,\epsilon_1)$. In general, a realization of the GE channel over a certain time interval may include both burst and arbitrary erasures occurring simultaneously in that interval, which is not covered by the $C(N, B, W)$ channel model. Therefore, streaming codes have a non-zero packet loss probability when applied over the GE channel.
In our work, we do not restrict ourselves to optimizing our construction over the $C(N, B,W)$ channel, instead, we construct codes that can also correct some combinations consisting of both burst and arbitrary erasures. Furthermore, towards minimizing the packet loss probability, our design aims at maximizing the number of packets that can be successfully recovered in scenarios where the erasure pattern is uncorrectable, i.e., not all erasures can be recovered.
\end{comment}
\section{Code Construction}
\label{cons}
\subsection{Encoding}
\label{encoding}
Suppose that we are given a target maximum delay $T$, and we choose certain code design parameters $B\geq 0$ and $N\geq 0$. Let \mbox{$\mathcal{C}\subseteq \mathbb{F}_q^n$} be an $(n,k)$ block code with $k=T-N$ and $n=T+B$, such that $B+N\leq T$ and $q\geq T$. Let $\mathbf{G}$ be the $k\times n$ generator matrix of $\mathcal{C}$. Let $\mathbf{u}=(u_1,\ldots, u_k)\in \mathbb{F}_q^k$ be the information message and $\mathbf{x}\in \mathbb{F}_q^n$ be the codeword. Since the code is systematic, we have $\mathbf{x}=\mathbf{u}\mathbf{G}=(u_1,\ldots, u_k, p_1, \ldots, p_{n-k}),$ where $p_1, p_2, \ldots, p_{n-k}$ represent the $n-k=B+N$ parity symbols. Next, we present our construction by giving the expressions of the parity symbols $p_1, \ldots, p_{B+N}$, as a function of the information symbols $u_1,\ldots, u_k$.
\begin{enumerate}[leftmargin=*]
\item The first $N$ parity symbols $p_{1},p_{2},\ldots,p_{N}$ are generated using a systematic MDS code over $\mathbb{F}_q$. For explicitness, the systematic Reed-Solomon (RS) encoder based on the Cauchy matrix construction is used.
\item The $B$ parity symbols $p_{N+1},p_{N+2},\ldots,p_{N+B}$ are generated as interleaved parity checks with an interleaving factor of $B$, such that $\forall i \in \{1,2,\ldots,B\}$,
\begin{equation}
\label{p2}
p_{N+i}=u_i+u_{i+B}+\ldots+u_{i+(m-1)B}+ \mathds{1}_{\{i\leq r\}} u_{k-r+i},
\end{equation}
\end{enumerate}
where $k=mB+r$, $r=k\Mod B$, and $m=\lfloor k/B \rfloor$ (we denote the quotient by $m$ to avoid confusion with the field size $q$). The final step of the construction is to transform this block code into a convolutional one by applying either diagonal or horizontal interleaving, as explained in Section~\ref{interleaving}.
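A minimal Python sketch of this encoder is given below. It assumes a prime field $\mathbb{F}_q$ (so that inverses can be computed via Fermat's little theorem) and uses the Cauchy evaluation points $a_i=i-1$, $b_j=k+j-1$; these choices are one valid instantiation, not the only one:
\begin{verbatim}
# Sketch of the encoder of Section III-A over a prime field GF(q), q >= T.
import numpy as np

def encode(u, N, B, q):
    """u: length-k integer array over GF(q); returns the length-n codeword."""
    k = len(u)
    # N MDS (Cauchy-based RS) parities: M[i][j] = 1/(a_i - b_j) mod q
    a, b = np.arange(k), np.arange(k, k + N)
    M = np.array([[pow(int(ai - bj) % q, q - 2, q) for bj in b] for ai in a])
    rs = (u @ M) % q                          # p_1 ... p_N
    # B interleaved parities with interleaving factor B, cf. eq. (p2)
    m, r = divmod(k, B)
    inter = []
    for i in range(1, B + 1):
        s = sum(u[i - 1 + t*B] for t in range(m))
        if i <= r:
            s += u[k - r + i - 1]
        inter.append(s % q)
    return np.concatenate([u, rs, inter]) % q

# example: T = 10, N = 2, B = 4  =>  k = 8, n = 14, field GF(11)
cw = encode(np.arange(1, 9), N=2, B=4, q=11)
\end{verbatim}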
\begin{comment}
\begin{example}[Encoding, $T=10, B=4, N=2$]
\label{ex3}
Suppose that we are given that $T=10$ and we want to construct a code with design parameters $B=4$ and $N=2$. It follows from the construction that $k=T-N=8, n=k+B+N=14$. Consider a field size $q=11$ which satisfies $q\geq k+N$.
One way to construct the Cauchy matrix $\mathbf{M}$ is to choose the elements $a_i=i-1$ for $i=1,\ldots,k$, and $b_j=k+i-1$ for $j=1,\ldots,N$, from the field $\mathbb{F}_{11}$. This choice results in a $8 \times 2$ matrix
$$\mathbf{M}= \begin{pmatrix}
4 & 6 \\
3 & 4 \\
9 & 3 \\
2 & 9 \\
8 &2 \\
7 &8 \\
5 &7 \\
10 &5
\end{pmatrix},
$$
where the elements $m_{i,j}$ of $\mathbf{M}$ follow from~\eqref{eq:m} by computing the multiplicative inverses of $a_i-b_j$ in $\mathbb{F}_{11}$. Therefore, the codeword $\mathbf{x}\in \mathcal{C}$ is given by
$$\mathbf{x}=\left(u_1,u_2,u_3,u_4,u_5,u_6,u_7,u_8, \begin{matrix} 4u_1+3u_2+9u_3+2u_4 \\ + \\ 8u_5+7u_6+5u_7+10u_8 \end{matrix}, \begin{matrix} 6u_1+4u_2+3u_3+9u_4 \\ + \\ 2u_5+8u_6+7u_7+5u_8 \end{matrix}, \begin{matrix} u_1 \\ + \\ u_5 \end{matrix},\begin{matrix} u_2 \\ + \\ u_6 \end{matrix},\begin{matrix} u_3 \\ + \\ u_7 \end{matrix},\begin{matrix} u_4 \\ + \\ u_8 \end{matrix} \right).$$
\end{example}
\begin{example}[Encoding, $T=16,B=4,N=4$]
\label{ex4}
Suppose that we are given that $T=16$ and we want to construct a code with design parameters $B=4$ and $N=4$. It follows from the construction that $k=T-N=12, n=k+B+N=20$. Consider a field size $q=17$ which satisfies $q\geq k+N$.
One way to construct the Cauchy matrix $\mathbf{M}$ is to choose the elements $a_i=i-1$ for $i=1,\ldots,k$, and $b_j=k+i-1$ for $j=1,\ldots,N$, from the field $\mathbb{F}_{17}$. This choice results in a $12 \times 4$ matrix
$$\mathbf{M}= \begin{pmatrix}
7 &13 &6 &9 \\
3 &7 &13 &6 \\
5 &3 &7 &13 \\
15 &5 &3 &7 \\
2 &15 &5 &3 \\
12 &2 &15 &5 \\
14 &12 &2 &15 \\
10 &14 &12 &2 \\
4 &10 &14 &12 \\
11 &4 &10 &14 \\
8 &11 &4 &10 \\
16 &8 &11 &4
\end{pmatrix}, $$
where the elements of $\mathbf{M}$ follow from~\eqref{eq:m} by computing the multiplicative inverses of $a_i-b_j$ in $\mathbb{F}_{17}$. Therefore, it follows from~\eqref{p1} and \eqref{p2} that the parities of the codeword $\mathbf{x}\in \mathcal{C}$ are given by
\begin{align*}
p_1 &= 7u_1+3u_2+5u_3+15u_4+2u_5+12u_6+14u_7+10u_8+4u_9+11u_{10}+8u_{11}+16u_{12}, \\
p_2 &= 13u_1+7u_2+3u_3+5u_4+15u_5+2u_6+12u_7+14u_8+10u_9+4u_{10}+11u_{11}+8u_{12},\\
p_3 &= 6u_1+13u_2+7u_3+3u_4+5u_5+15u_6+2u_7+12u_8+14u_9+10u_{10}+4u_{11}+11u_{12}, \\
p_4 &= 9u_1+6u_2+13u_3+7u_4+3u_5+5u_6+15u_7+2u_8+12u_9+14u_{10}+10u_{11}+4u_{12}, \\
p_5 &= u_1+u_5+u_9, \\
p_6 &= u_2+u_6+u_{10}, \\
p_7 &= u_3+u_7+u_{11}, \\
p_8 &= u_4+u_8+u_{12}.
\end{align*}
\end{example}
\end{comment}
\subsection{Decoding}
\label{decoding}
To explain the decoding scheme, we start by examining the simple case when the total number of erasures in $(x_1,\ldots,x_{k+N})$ is $\leq N$. Notice that the punctured code obtained by only considering the first $k+N$ symbols of each codeword, is an $(k+N,k)$ systematic RS code. Hence, if the total number of erasures in the first $k+N$ symbols is $\leq N$, we apply RS decoding to recover these erasures. In this case, it follows from the code design that the erased symbols can be recovered with a delay that is $<T$.
Next, we examine the case when the total number of erasures in $(x_1,\ldots,x_{k+N})$ is $> N$. In this case, we are interested in decoding as many information symbols (packets) as possible while respecting the maximum delay constraint $T$. It follows from the code design that the first $B-1$ information symbols $u_1,u_2,\ldots,u_{B-1}$ fall within the maximum decoding delay constraint $T$ since these symbols need to be decoded before the end of the code. Whereas the delay constraint for the remaining information symbols $u_{B},u_{B+1},\ldots,u_{k}$ is inactive because the constraint falls right at the end or past the end of the code. We refer to the symbols $u_1,\ldots,u_{B-1}$ as urgent symbols, and to $u_{B},\ldots,u_{k}$ as non-urgent symbols.
We start our decoding process by decoding the urgent symbols $u_1,\ldots,u_{B-1}$ symbol-by-symbol from left to right. Suppose that $u_1$ is erased, we attempt to decode $u_1$ using the interleaved parity check $p_{N+1}$. If $u_1$ is the only summand that is erased in the interleaved parity check $p_{N+1}$, then $u_1$ can be successfully recovered by a simple addition/subtraction operation. Otherwise, $u_1$ is considered to be lost because it could not be decoded within its delay constraint. Then, similarly, we proceed to decode the next erased urgent symbol. If at any point of this symbol-by-symbol decoding process, the number of erased symbols in $(x_1,\ldots,x_{k+N})$ becomes $\leq N$, we use the punctured RS code to decode all symbols, and the decoding is subsequently terminated. In some scenarios, an urgent symbol may not be decodable with delay $\leq T$, but could be decoded at later stages of the decoding process with delay $>T$. In such scenarios, we still consider the urgent symbol to be lost since its recovery was not successful within the maximum decoding delay constraint $T$.
As for the non-urgent symbols $u_{B},\ldots,u_{k}$, the general decoding approach for these symbols consists of two phases. In the first phase, we decode as many symbols as possible using the interleaved parity checks $p_{N+1},\ldots,p_{N+B}$, given in~\eqref{p2}. In the second phase, we decode the remaining symbols that were not recovered in phase 1 by using the RS parities $p_1,\ldots,p_N$. The worst-case overall decoding complexity is $O(T^2)$~\cite{berlekamp}.
\subsection{Erasure Correction Capability}
In this section, we discuss the erasure correction capability of the code in terms of the code parameters $N,B,$ and $T$. Namely, we detail some of the cases where {\em all} erasures can be decoded, and we present a general condition under which the partial recovery of {\em some} erasures is possible.
{\em Case 1:} Suppose that the erasure pattern consists of \mbox{$N'\leq N$} arbitrary erasures and no other erasures are present. In this case, all erased information symbols can be decoded using the punctured RS code obtained by only considering the first $k+N$ codeword symbols. Furthermore, it follows from the construction that the corresponding decoding delay is $<T$.
{\em Case 2:} Suppose that the erasure pattern consists of a burst of size $B'\leq B$ and no other erasures are present. In this case, we show that, under certain conditions, all the erased symbols can be decoded using the interleaved parity checks $p_{N+1},\ldots,p_{N+B}$. We focus our discussion on the case where all the symbols are erased in a given burst since it is the most challenging one. Recall that $k=mB+r$, with $r=k\Mod B$. For $i\in \{1,2,\ldots,B\}$, let $S_i= \{u_{k-r+i}\}$ if $i\leq r$, and $S_i= \emptyset$ otherwise, and define the sets
$ U_i\triangleq \{u_i, u_{i+B}, u_{i+2B}, \ldots, u_{i+(m-1)B}\} \cup S_i, $
which partition the set of information symbols $\{u_1,\ldots,u_k\}$. Notice that $U_i$ contains all the information symbols that appear as summands in the interleaved parity check $p_{N+i}$. Assume that the burst erases the systematic symbols $u_{i'},u_{i'+1},\ldots,u_{i'+B'-1}$, with $i'\in \{1,\ldots,k-B'+1\}$. It follows from the construction of the interleaved parity checks that the \mbox{$B'\leq B$} consecutive message symbols $u_{i'},\ldots,u_{i'+B'-1}$ belong to distinct sets $U_i$. Therefore, any burst of size $B'\leq B$ can erase at most one summand in each of $p_{N+1},p_{N+2},\ldots,p_{N+B}$. Hence, the erased symbols $u_{i'},\ldots,u_{i'+B'-1}$ can be easily decoded by a simple subtraction/addition operation applied on the parity checks.
The argument above covers the case where the burst affects only the systematic symbols. Since the code is systematic, the case where the burst affects only the parity symbols is trivial. For this reason, we examine next the case where the burst affects some systematic and parity symbols simultaneously. Suppose that a systematic symbol $u_i$, $i\in \{1,\ldots,k\}$, appears as a summand in the interleaved parity check $p_j$, $j\in \{N+1,\ldots,N+B\}$. If $B \vert k$, i.e., $r=0$, it follows from the definition of the interleaved parity checks and their positioning in the codeword that any burst of size $B'\leq B$ cannot erase $u_i$ and $p_j$ simultaneously. Namely, if $r=0$, the construction guarantees the availability of every systematic symbol in at least one coded symbol after any burst of $B'\leq B$ erasures. Hence, all erased information symbols can be decoded using $p_{N+1},\ldots,p_{N+B}$. Also, if $B\nmid k$, i.e., $r>0$, one can easily show that any burst of size $B'$ can be decoded if $B'\leq N+r$. Furthermore, it follows from the construction that the decoding delay is $\leq T$ for all recovered symbols.
{\em Case 3:} Suppose that the erasure pattern consists of a burst of size $B'\leq B$ in addition to $1\leq N'\leq N$ arbitrary erasures. We will focus on the case where $B'=B>N$. Consider the case where the burst affects the non-urgent symbols $u_{B},\ldots,u_{k}$. In this case, the following general rule applies. The code can correct $B$ burst erasures in addition to $N'\leq N$ arbitrary erasures if the number $N_2$ of erasures to be decoded in phase~2 of the decoding process (Section~\ref{decoding}) satisfies $N_2\leq N$. To understand this, recall that any burst of size $B$ can erase at most one summand in each of $p_{N+1},\ldots,p_{N+B}$. Hence, if $N'$ arbitrary erasures occur in addition to the $B$ burst erasures, then at least $B-N'$ out of the $B$ interleaved parity checks $p_{N+1},\ldots,p_{N+B}$ will have only one erased summand (that can be recovered by a simple addition/subtraction operation). Therefore, by the end of decoding phase~1, at least $B-N'$ out of the $B+N'$ erasures can be successfully decoded. Consequently, the number of erasures to be decoded in phase~2 satisfies $N_2 \leq B+N'-(B-N')=2N'$. Using the $(k+N,k)$ punctured
systematic RS code, up to $N$ erasures can always be decoded in phase 2. Notice that if $N'\leq N/2$, then $N_2\leq N$, and hence, all erased symbols can be decoded successfully. Whereas if $N'>N/2$, then some erasure patterns, but not all, can be corrected depending on the locations of the $N'$ arbitrary erasures. We omit the discussion of more cases due to space limitations.
Furthermore, a key feature of our code is that it allows the recovery of a large number of erased symbols in many cases where the erasure pattern is not correctable. Note that this is contrary to codes such as $(n,k)$ systematic RS codes, where not a single erasure can be recovered when the total number of erasures is $> n-k$. In practice, this decoding feature translates into a low packet loss probability as we show in Section~\ref{simulnew}. The general condition under which an information symbol can be recovered with maximum delay $T$ is given in Proposition~\ref{prop1}.
\begin{proposition}
\label{prop1}
Let $\bm{\varepsilon}\in\{0,1\}^{n}$ denote an erasure pattern, where $\varepsilon_j=1$, $j=1,\ldots,n$, indicates that the codeword symbol $x_j$ is erased. Let $\mathbf{e}_i\in \{0,1\}^k$, $i=1,\ldots,k$, be the $i^{th}$ canonical basis vector, and $\Tilde{\mathbf{G}}(\bm{\varepsilon}_1^{j})\in \mathbb{F}_q^{k\times n^*}$ be the generator matrix of the code $\mathcal{C}$ (Section~\ref{cons}) punctured column-wise at the indices of $\bm{\varepsilon}_1^{j}=(\varepsilon_1,\ldots,\varepsilon_j)$ whose entries are equal to one, with $n^*=j-\sum_{i=1}^j \varepsilon_i$. For a given $\bm{\varepsilon}$, an information symbol $u_i$ can be decoded with maximum delay $T$ if and only if $\mathbf{e}_i$ is in the column span of $\Tilde{\mathbf{G}}(\bm{\varepsilon}_1^{i+T^*})$, with $T^*=\min \{T,n-i\}$.
\end{proposition}
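Proposition~\ref{prop1} also yields a simple computational decodability test. The following Python sketch (our own code, assuming a prime field and zero-based indexing) checks the column-span condition by comparing ranks over $\mathbb{F}_q$:
\begin{verbatim}
# Sketch: decodability test of Proposition 1 over a prime field GF(q).
import numpy as np

def solvable(G, eps, i, T, q):
    """True iff u_i (zero-based) is recoverable with delay <= T
    under the 0/1 erasure pattern eps (1 = erased)."""
    k, n = G.shape
    Tstar = min(T, n - i - 1)
    cols = [j for j in range(i + Tstar + 1) if eps[j] == 0]
    A = G[:, cols] % q                       # received columns only
    e = np.zeros(k, dtype=int); e[i] = 1
    # e in column span of A  iff  rank(A) == rank([A | e]) over GF(q)
    return rank_gf(np.c_[A, e], q) == rank_gf(A, q)

def rank_gf(A, q):
    """Rank over GF(q) by Gauss-Jordan elimination (q prime)."""
    A = A.copy() % q
    rank = 0
    for c in range(A.shape[1]):
        piv = next((r for r in range(rank, A.shape[0]) if A[r, c]), None)
        if piv is None:
            continue
        A[[rank, piv]] = A[[piv, rank]]
        A[rank] = (A[rank] * pow(int(A[rank, c]), q - 2, q)) % q
        for r in range(A.shape[0]):
            if r != rank:
                A[r] = (A[r] - A[r, c] * A[rank]) % q
        rank += 1
        if rank == A.shape[0]:
            break
    return rank
\end{verbatim}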
\begin{comment}
\setcounter{example}{2}
\begin{example}[Continued, Decoding Scenario 1, $T=10, B=4, N=2$]
We continue Example~\ref{ex3} given in Section~\ref{encoding} by providing a decoding example. Suppose that a burst of size $4$ erases the symbols $u_2,u_3,u_4,u_5$, and $1$ arbitrary erasure affects $u_8$. The erased symbols are highlighted in red:
$$\mathbf{x}=\left(u_1,{\color{red} u_2},{\color{red} u_3},{\color{red} u_4},{\color{red} u_5},u_6,u_7,{\color{red}u_8}, \begin{matrix} 4u_1+3{\color{red} u_2}+9{\color{red}u_3}+2{\color{red}u_4} \\ + \\ 8{\color{red}u_5}+7u_6+5u_7+10{\color{red} u_8} \end{matrix}, \begin{matrix} 6u_1+4{\color{red}u_2}+3{\color{red}u_3}+9{\color{red}u_4} \\ + \\ 2{\color{red}u_5}+8u_6+7u_7+5{\color{red} u_8} \end{matrix}, \begin{matrix} u_1 \\ + \\ {\color{red}u_5} \end{matrix},\begin{matrix} {\color{red}u_2} \\ + \\ u_6 \end{matrix},\begin{matrix} {\color{red}u_3} \\ + \\ u_7 \end{matrix},\begin{matrix} {\color{red} u_4} \\ + \\ {\color{red} u_8} \end{matrix} \right).$$
The urgent symbols $u_2,u_3$ can be recovered with delay $10$ from the interleaved parity checks $p_4$ and $p_5$, respectively. The non-urgent symbol $u_5$ can be recovered with delay $6$ from the interleaved parity check $p_3$. After the symbols $u_2, u_3,$ and $u_5$ are recovered using $p_3,p_4,$ and $p_5$, respectively, the only symbols yet to be decoded are $u_4$ and $u_8$. These two symbols can be recovered using the Reed-Solomon parities $p_1$ and $p_2$ by inverting the $2 \times 2$ matrix $\begin{pmatrix} 2 & 9 \\ 10 & 5 \end{pmatrix}$ in $\mathbb{F}_{11}$. The invertibility of $\mathbf{A}_1$ follows from the MDS property of the encoding. The decoding delays for $u_4$ and $u_8$ are $9$ and $5$, respectively. Note that the parity $p_6$ was not used in the decoding process, so the decoding would still have been successful if $p_6$ were also affected by an additional arbitrary erasure.
\end{example}
\setcounter{example}{2}
\begin{example}[Continued, Decoding Scenario 2, $T=10, B=4, N=2$]
\label{inv1}
We continue Example~\ref{ex3} given in Section~\ref{encoding} by providing another decoding example. Suppose that a burst of size $4$ erases the symbols $u_4,u_5,u_6,u_7$, and $2$ arbitrary erasures affect $u_8$ and $p_1$. The erased symbols are highlighted in red:
$$\mathbf{x}=\left(u_1,u_2,u_3,{\color{red} u_4},{\color{red} u_5},{\color{red} u_6},{\color{red} u_7},{\color{red}u_8}, {\color{red}\begin{matrix} 4u_1+3{\color{red} u_2}+9{\color{red}u_3}+2{\color{red}u_4} \\ + \\ 8{\color{red}u_5}+7u_6+5u_7+10{\color{red} u_8} \end{matrix}}, \begin{matrix} 6u_1+4u_2+3u_3+9{\color{red}u_4} \\ + \\ 2{\color{red}u_5}+8{\color{red}u_6}+7{\color{red}u_7}+5{\color{red} u_8} \end{matrix}, \begin{matrix} u_1 \\ + \\ {\color{red}u_5} \end{matrix},\begin{matrix} u_2 \\ + \\ {\color{red} u_6} \end{matrix},\begin{matrix} u_3 \\ + \\ {\color{red} u_7} \end{matrix},\begin{matrix} {\color{red} u_4} \\ + \\ {\color{red} u_8} \end{matrix} \right).$$
The symbols $u_5, u_6$, and $u_7$ are recovered with delay $6$ from $p_3, p_4,$ and $p_5$, respectively. Then, the symbols $u_4$ and $u_8$ are recovered using the parities $p_2$ and $p_6$ by inverting the $2 \times 2$ matrix $\begin{pmatrix} 9 & 1 \\ 5 & 1 \end{pmatrix}$ in $\mathbb{F}_{11}$. The invertibility of this matrix follows from the fact that the Cauchy matrix $\mathbf{M}$ has distinct elements in each column, and hence any vector composed of any 2 elements from a column in $\mathbf{M}$ is linearly independent of the ones vector $(1~~1)$. The decoding delays for $u_4$ and $u_8$ are $10$ and $5$, respectively.
\end{example}
\setcounter{example}{2}
\begin{example}[Continued, Decoding Scenario 3, $T=10, B=4, N=2$]
We continue Example~\ref{ex3} given in Section~\ref{encoding} by providing an additional decoding example. Suppose that a burst of size $4$ erases the symbols $u_1,u_2,u_3,u_4$, and $2$ arbitrary erasures affect $u_5$ and $p_3$. The erased symbols are highlighted in red:
$$\mathbf{x}=\left({\color{red} u_1},{\color{red} u_2},{\color{red} u_3},{\color{red} u_4},{\color{red} u_5},u_6,u_7,u_8, \begin{matrix} 4 {\color{red} u_1}+3{\color{red} u_2}+9{\color{red}u_3}+2{\color{red}u_4} \\ + \\ 8{\color{red}u_5}+7u_6+5u_7+10 u_8 \end{matrix}, \begin{matrix} 6 {\color{red} u_1}+4{\color{red}u_2}+3{\color{red}u_3}+9{\color{red}u_4} \\ + \\ 2{\color{red}u_5}+8u_6+7u_7+5u_8 \end{matrix}, {\color{red} \begin{matrix} u_1 \\ + \\ {\color{red}u_5} \end{matrix}},\begin{matrix} {\color{red}u_2} \\ + \\ u_6 \end{matrix},\begin{matrix} {\color{red}u_3} \\ + \\ u_7 \end{matrix},\begin{matrix} {\color{red} u_4} \\ + \\ u_8 \end{matrix} \right).$$
The symbol $u_1$ cannot be recovered within the maximum delay constraint $T=10$, and therefore $u_1$ is considered to be lost. The symbols $u_2, u_3$, and $u_4$ are recovered with delay $10$ from $p_4, p_5,$ and $p_6$, respectively. After recovering $u_2, u_3$, and $u_4$, the symbol $u_5$ can recovered from $p_1$ and $p_2$ with delay $9$ by inverting the $2 \times 2$ matrix $ \begin{pmatrix} 4 & 6 \\ 8 & 2 \end{pmatrix}$ in $\mathbb{F}_{11}$. Note that at this point, the symbol $u_1$ can also be recovered; however, for delay considerations we consider this symbol to be lost. Therefore, in this example, all symbols except $u_1$ were successfully recovered within the delay constraint $T=10$.
\end{example}
\begin{example}[Continued, Decoding Scenario 1, $T=16,B=4,N=4$]
We continue Example~\ref{ex4} given in Section~\ref{encoding} by providing a decoding example. Suppose that a burst of size $4$ erases the symbols $u_4,u_5,u_6,u_7$, and $2$ arbitrary erasures affect $u_{10}$ and $u_{12}$. The erased symbols are highlighted in red in the parities:
\begin{align*}
p_1 &= 7u_1+3u_2+5u_3+15{\color{red} u_4}+2{\color{red} u_5}+12{\color{red}u_6}+14{\color{red} u_7}+10u_8+4u_9+11{\color{red} u_{10}}+8u_{11}+16{\color{red} u_{12}}, \\
p_2 &= 13u_1+7u_2+3u_3+5{\color{red} u_4}+15{\color{red} u_5}+2{\color{red} u_6}+12{\color{red} u_7}+14u_8+10u_9+4{\color{red} u_{10}}+11u_{11}+8{\color{red} u_{12}},\\
p_3 &= 6u_1+13u_2+7u_3+3{\color{red} u_4}+5{\color{red} u_5}+15{\color{red} u_6}+2{\color{red} u_7}+12u_8+14u_9+10{\color{red} u_{10}}+4u_{11}+11{\color{red} u_{12}}, \\
p_4 &= 9u_1+6u_2+13u_3+7{\color{red} u_4}+3{\color{red} u_5}+5{\color{red} u_6}+15{\color{red} u_7}+2u_8+12u_9+14{\color{red} u_{10}}+10u_{11}+4{\color{red} u_{12}}, \\
p_5 &= u_1+{\color{red} u_5}+u_9, \\
p_6 &= u_2+{\color{red} u_6}+{\color{red} u_{10}}, \\
p_7 &= u_3+{\color{red} u_7}+u_{11}, \\
p_8 &= {\color{red} u_4}+u_8+{\color{red} u_{12}}.
\end{align*}
The symbol $u_5$ can be decoded with delay $12$ using $p_5$. The symbol $u_7$ can be decoded with delay $12$ using $p_7$. Then, the symbols $u_4,u_6, u_{10}, u_{11}$ can be decoded by inverting the $4 \times 4$ matrix
$$\begin{pmatrix}
15 & 5 & 3 & 7 \\
12 & 2 & 15 & 5 \\
11 & 4 & 10 & 14 \\
16 & 8 & 11 & 4
\end{pmatrix},$$ in $\mathbb{F}_{17}$.
The invertibility of follows from the MDS property of the encoding.
\end{example}
\setcounter{example}{3}
\begin{example}[Continued, Decoding Scenario 2, $T=16,B=4,N=4$]
\label{inv2}
We continue Example~\ref{ex4} given in Section~\ref{encoding} by providing an additional decoding example. Suppose that a burst of size $3$ erases the symbols $u_1,u_2,u_3$, and $5$ arbitrary erasures affect $u_{7}, u_{11},p_1,p_2,p_8$. The erased symbols are highlighted in red in the parities:
\begin{align*}
{\color{red} p_1} &{\color{red} = 7u_1+3u_2+5u_3+15{\color{red} u_4}+2{\color{red} u_5}+12{\color{red}u_6}+14{\color{red} u_7}+10u_8+4u_9+11{\color{red} u_{10}}+8u_{11}+16{\color{red} u_{12}}}, \\
{\color{red} p_2} &{\color{red} = 13u_1+7u_2+3u_3+5{\color{red} u_4}+15{\color{red} u_5}+2{\color{red} u_6}+12{\color{red} u_7}+14u_8+10u_9+4{\color{red} u_{10}}+11u_{11}+8{\color{red} u_{12}}},\\
p_3 &= 6{\color{red} u_1}+13{\color{red} u_2}+7{\color{red} u_3}+3u_4+5u_5+15u_6+2{\color{red} u_7}+12u_8+14u_9+10 u_{10}+4{\color{red} u_{11}}+11u_{12}, \\
p_4 &= 9{\color{red} u_1}+6{\color{red} u_2}+13{\color{red} u_3}+7u_4+3u_5+5u_6+15{\color{red}u_7}+2u_8+12u_9+14u_{10}+10{\color{red} u_{11}}+4u_{12}, \\
p_5 &= {\color{red}u_1}+u_5+u_9, \\
p_6 &= {\color{red}u_2}+u_6+u_{10}, \\
p_7 &= {\color{red}u_3}+{\color{red} u_7}+{\color{red}u_{11}}, \\
{\color{red} p_8} &{\color{red} = {\color{red} u_4}+u_8+u_{12}}.
\end{align*}
The symbols $u_1$ and $u_2$ are recovered with delay $16$ using $p_5$ and $p_6$, respectively. Then, the symbols $u_3,u_7, u_{11}$ can be decoded by inverting the $3 \times 3$ matrix
$$\begin{pmatrix}
7 & 13 & 1 \\
2 & 15 & 1 \\
4 & 10 & 1 \\
\end{pmatrix},$$ in $\mathbb{F}_{17}$. The invertibility of this matrix follows from the fact that the Cauchy matrix $\mathbf{M}$ has distinct elements in each column, and hence any vector composed of any 3 elements from a column in $\mathbf{M}$ is linearly independent of the ones vector $(1~~1~~1)$. The decoding delays for $u_3, u_7,$ and $u_{11}$ are $16, 12,$ and $8$, respectively.
\end{example}
\end{comment}
\section{Simulation Results}
\label{simulnew}
In this section, we compare the packet loss probability (PLP) of our proposed code with other existing constructions over the GE channel (Figure~\ref{fig2}). The empirical PLP is computed as the ratio of the total number of lost information packets to the total number of transmitted information packets. The properties of the simulated codes are shown in Table~\ref{tabn1}. Due to space limitations, we only show results for the case of \mbox{$\alpha=5\times 10^{-3}$}, $\beta=0.45$, $\epsilon_1=1$, and varying $\epsilon_0=\epsilon$.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=0.2]
\node[state, text width=1cm, align=center] (s) {Good \\ $\epsilon_0$};
\node[state, text width=1cm, align=center, right=of s] (r) {Bad \\ $\epsilon_1$};
\draw[every loop]
(r) edge[bend right, auto=right] node {$\beta$} (s)
(s) edge[bend right, auto=right] node {$\alpha$} (r)
(s) edge[loop left] node {$1-\alpha$} (s)
(r) edge[loop right] node {$1-\beta$} (r);
\end{tikzpicture}
\caption{{\small 2-state Markov chain illustration of the GE channel. The erasure probabilities in the good and bad states are denoted by $\epsilon_0$ and $\epsilon_1$, respectively. The transition probabilities between the good and bad states are denoted by $\alpha$ and $\beta$.}}
\label{fig2}
\end{figure}
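To make the simulation setup concrete, the erasure process of this channel can be generated with a short Python sketch (a hypothetical helper, not the code used to produce Figure~\ref{newf1}; the decoders themselves are omitted):
\begin{verbatim}
import random

# Two-state Markov (Gilbert-Elliott) erasure channel: in the good
# state packets are erased w.p. eps0, in the bad state w.p. eps1;
# transitions good->bad w.p. alpha and bad->good w.p. beta.
def ge_erasures(n, alpha=5e-3, beta=0.45, eps0=0.01, eps1=1.0,
                seed=0):
    rng = random.Random(seed)
    bad = False
    out = []
    for _ in range(n):
        out.append(rng.random() < (eps1 if bad else eps0))
        bad = (rng.random() >= beta) if bad else (rng.random() < alpha)
    return out  # True marks an erased packet

pattern = ge_erasures(10**6)
print("raw erasure rate:", sum(pattern) / len(pattern))
\end{verbatim}
The empirical PLP is then obtained by running each decoder over such a pattern and counting the information packets that cannot be recovered within their delay constraints.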
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& $T$ & $(n,k)$ & Rate & $(N,B)$ \\ \hline
MDS Code & $15$ & $(16,8)$ & $0.5$ &$(8,8)$ \\ \hline
Martinian-Trott Code~\cite{Martinian2007} & $15$ & $(30,15)$ & $0.5$ &$(1,16)$ \\ \hline
Domanovitz et al. Code~\cite{Domanovitz2020} & $15$ & $(24,12)$ & $0.5$ &$(4,12)$\\ \hline
New Code 1 & $15$ & $(22,11)$ & $0.5$ &$(4,7)$ \\ \hline
New Code 2 & $14$ & $(20,10)$ & $0.5$ &$(4,6)$ \\ \hline
\end{tabular}
\caption{{\small The properties of the codes simulated in Figure~\ref{newf1}.}}
\label{tabn1}
\end{table}
\begin{figure}[h!]
\vspace{-0.5cm}
\centering
\includegraphics[width=0.47\textwidth]{Figures/PEICASSP.pdf}
\caption{{\small PLP of the codes in Table~\ref{tabn1} over the GE channel with $\alpha=5\times 10^{-3}$, $\beta=0.45$, and varying $\epsilon$. The horizontal interleaving setting is considered (Section \ref{interleaving}) and the channel length is set to $10^7$.}}
\vspace{-0.7cm}
\label{newf1}
\end{figure}
\begin{comment}
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{Figures/Pall1.pdf}
\caption{Comparing the packet loss probability of different code construction over the GE channel while varying $\epsilon$. The simulation results are for $\alpha=5\times 10^{-4}, \beta=0.5$. The properties of the compared codes are given in Table~\ref{tabn1}. The channel length is set to $10^7$.}
\label{newf2}
\end{figure}
\end{comment}
The results in Figure~\ref{newf1} show that for $\epsilon=0$, the Martinian-Trott code, which is optimized for correcting bursts, achieves the best performance. However, its performance rapidly deteriorates as $\epsilon$ increases. For $0<\epsilon\leq 0.04$, our code with $(N=4, B=7, T=15)$ achieves the lowest PLP among all constructions. Also, for $\epsilon\geq 0.005$, our code with $(N=4, B=6, T=14)$ presents a twofold gain, since both the PLP and the maximum delay $T$ are lower than those of the other constructions. It is also worth noting that these constructions require different field sizes. For $R=0.5$, the Martinian-Trott code can be constructed in $GF(2)$, the MDS code and our code require a linear field size $q\geq T$, and the Domanovitz code requires a quadratic field size $q=O(T^2)$.
In conclusion, the results demonstrate that optimizing code constructions over theoretical sliding-window channels does not necessarily give the best performance over practical channels such as the GE channel. Namely, the results in Figure~\ref{newf1} show that with the same code rate and lower field size, our code can outperform existing rate-optimal streaming codes in terms of both packet loss probability and maximum decoding delay. As part of future work, our goal is to derive a theoretical bound on the PLP of our code over the GE channel.
\bibliographystyle{ieeetr}
\section{Introduction}
Probability sampling is regarded as the gold-standard in survey statistics
for finite population inference. Fundamentally, probability samples
are selected under known sampling designs and therefore are representative
of the target population. Because the selection probability is known,
the subsequent inference from a probability sample is often design-based
and respects the way in which the data were collected; see \citet{sarndal2003model,cochran2007sampling,fuller2009sampling}
for textbook discussions. \citet{kalton2019} provided a comprehensive
overview of the survey sampling research in the last 60 years.
However, many practical challenges arise in collecting and analyzing
probability sample data (\citealp{baker13}; \citealp{keiding2016perils}).
Large-scale survey programs continually face heightened demands coupled
with reduced resources. Demands include requests for estimates for
domains with small sample sizes and desires for more timely estimates.
Simultaneously, program budget cuts force reductions in sample sizes,
and decreasing response rates make non-response bias an important
concern.
Data integration is a new area of research to provide a timely solution
to the above challenges. The goal is multi-fold: i) minimize the cost
associated with surveys, ii) minimize the respondent burden and iii)
maximize the statistical information or equivalently the efficiency
of survey estimation. Narrowly speaking, survey integration means
combining separate probability samples into one survey vehicle \citep{bycroft2010integrated}.
Broadly speaking, one can consider combining probability samples with
non-probability samples. Recently in survey statistics, non-probability
data have become increasingly available for research purposes and provide
unprecedented opportunities for new scientific discovery; however,
they also present \textcolor{black}{additional} challenges such as
heterogeneity, selection bias, high dimensionality, etc. The past
years have seen immense progress in theories, methods, and algorithms
for surmounting important challenges arising from non-probability
data analysis. This article provides a systematic review of data integration
for combining probability samples, probability and non-probability
samples, and probability and big data samples.
Section \ref{sec:integProbSamples} establishes notation and reviews
these methods \textcolor{black}{in the context of combining multiple
probability samples}. Existing methods for probability data integration
can be categorized into two types depending on the level of information
to be combined: a macro approach combining the summary statistics
from multiple surveys and a micro approach creating synthetic imputations.
Section \ref{sec:PNP} describes the motivation, challenges and methods
for integrating probability and emergent non-probability samples.
We also draw connections between survey data integration and methods for combining randomized clinical trials and real-world data in Biostatistics. We then discuss a
wide range of integration methods including calibration weighting,
inverse probability weighting, mass imputation and doubly robust methods.
We then consider data integration methods for combining probability
and big non-probability samples. Depending on the roles in statistical
inference, there are two types of \textit{big data}: one with large
sample sizes (large $n$) and the other with rich covariates (large
$p$). In the first type, the non-probability sample can be large
in sample size. How to leverage the rich information in the big data
to improve the finite population inference is an important research problem.
In the second type, there are a large number of variables. There is
a large literature on variable selection methods for prediction, but
little work on variable selection for data integration that \textcolor{black}{can
successfully recognize the strengths and the limitations of each data
source and utilize all information captured for finite population
inference.} Section \ref{sec:PbigD} presents robust data integration
and variable selection methods in this context.
Finally, Section \ref{sec:Concluding-remark} describes directions
of future research on data integration, including sensitivity
analysis to assess the robustness of study conclusions to unverifiable
assumptions, hierarchical modeling, and some cautionary remarks.
\section{Combining probability samples\label{sec:integProbSamples}}
\subsection{Multiple probability samples and missingness patterns}
Combining two or more independent survey probability samples is a
problem frequently encountered in \textcolor{black}{the practice of
} survey sampling. For simplicity of exposition, let $\mathcal{U}=\{1,\ldots,N\}$
be the index set of $N$ units for the finite population, with $N$
being the known population size. Let $(x_{i}^{\mathrm{\scriptscriptstyle T}},y_{i})^{\mathrm{\scriptscriptstyle T}}$ be
the realized value of a vector of random variables $(X^{\mathrm{\scriptscriptstyle T}},Y)^{\mathrm{\scriptscriptstyle T}}$
for unit $i$. Let $I_{i}$ be the sample indicator such that $I_{i}=1$
indicates the selection of unit $i$ into the sample and $I_{i}=0$
otherwise. The probability $\pi_{i}=\pr(I_{i}=1\mid i\in\mathcal{U})$ is called
the first-order inclusion probability and is known by \textcolor{black}{the
sampling design}. The design weight is $d_{i}=\pi_{i}^{-1}$. The
joint probability $\pi_{ij}=\pr(I_{i}I_{j}=1\mid i,j\in\mathcal{U})$ is called
the second-order inclusion probability and is often used for variance
estimation of the design-weighted estimator. \textcolor{black}{The
sample size is} $n=\sum_{i=1}^{N}I_{i}.$
The \textcolor{black}{main} advantage of probability sampling is to
ensure design-based inference. For example, the Horvitz-Thompson (HT)
estimator of the population mean of $y$, denoted by $\mu_{y}$, is
$\widehat{\mu}_{\mathrm{HT}}=N^{-1}\sum_{i:I_{i}=1}\pi_{i}^{-1}y_{i}$, and
the design-variance estimator is
\[
\widehat{V}_{\mathrm{HT}}=nN^{-2}\sum_{i:I_{i}=1}\sum_{j:I_{j}=1}\frac{(\pi_{ij}-\pi_{i}\pi_{j})}{\pi_{ij}}\frac{y_{i}}{\pi_{i}}\frac{y_{j}}{\pi_{j}}.
\]
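For concreteness, both quantities can be computed directly from the sampled values and inclusion probabilities; the following Python sketch (with hypothetical inputs) mirrors the two formulas above:
\begin{verbatim}
import numpy as np

# y, pi: values and first-order inclusion probabilities of the
# sampled units; PI[i, j]: joint inclusion probabilities pi_ij
# (with PI[i, i] = pi_i); N: known population size.
def ht_mean(y, pi, N):
    return np.sum(y / pi) / N

def ht_var(y, pi, PI, N):
    z = y / pi
    D = (PI - np.outer(pi, pi)) / PI   # (pi_ij - pi_i pi_j)/pi_ij
    return len(y) / N**2 * (z @ D @ z)
\end{verbatim}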
We consider multiple sources of probability data. For multiple datasets,
we use the subscript letter to indicate the respective sample; for
example, we use $d_{A,i}$ as the design weight of unit $i$ in sample
A.
Depending on the available information from multiple data sources,
\textcolor{black}{each sample has planned missingness by design.}
As illustrated in Table \ref{tab:Missingness-patterns}, the combined
sample exhibits different missingness patterns: monotone and non-monotone.
For monotone missingness, our framework covers two common types of
studies. First, we have a large main dataset, and then collect more
information on important variables for a subset of units, e.g., using
a two-phase sampling design \citep{neyman1938contribution,cochran2007sampling,wang2009causal}.
Consider the U.S. Census of housing and population as an example.
The short form consists of $100\%$ sample, for which basic demographic
information was obtained. The long form consists about $16\%$ sample,
for which other social and economic information as well as demographic
information were obtained. \citet{deming1940least} considered this
setup as a classical two-phase sampling problem and use calibration
weighting for demographic variable to match the known population counts
from the short form.
Second, we have a smaller and carefully designed validation dataset
with rich covariates, and then link it to a larger main dataset with
fewer covariates. The setup of two independent samples with common
items is often called non-nested two-phase sampling. Consider the
U.S. consumer expenditure survey as an example. Two independent samples
were selected from the same finite population, including a diary survey
sample, referred to as sample A, and a face-to-face survey sample,
referred to as sample B. In sample A, observe auxiliary information
$X$ and outcome $Y$; whereas in sample B, observe common auxiliary
information $X$. \citet{zieschang1990sample} considered using sample
weighting to estimate detailed expenditure and
income items by combining sample A and sample B. Another example is
the Canadian Survey of Employment, Payrolls and Hours considered by
\citet{hidi01}. Sample A is a small sample from Statistics Canada
Business Register, in which the study variables $Y$, number of hours
worked by employees and summarized earnings, were observed. Sample
B is a large sample drawn from a Canadian Customs and Revenue Agency
administrative data, in which auxiliary variables $X$ were observed.
Finally, we consider combining two independent surveys with non-monotone missingness patterns. The statistical matching technique is introduced in Section 2.3 as a general statistical tool for this setup.
\begin{table}[H]
\caption{\label{tab:Missingness-patterns}Missingness patterns in the combined
samples: ``$\checked$'' means ``is measured''}
\centering%
\begin{tabular}{ccccc}
\hline
\multicolumn{5}{c}{Monotone missingness}\tabularnewline
\hline
& $d$ & $X$ & $Y$ & \tabularnewline
Sample A & $\checked$ & $\checked$ & $\checked$ & \tabularnewline
Sample B & $\checked$ & $\checked$ & & \tabularnewline
\hline
\hline
\multicolumn{5}{c}{Non-monotone missingness I}\tabularnewline
\hline
& $d$ & $X$ & $Y_{1}$ & $Y_{2}$\tabularnewline
Sample A & $\checked$ & $\checked$ & $\checked$ & $\checked$\tabularnewline
Sample B & $\checked$ & $\checked$ & $\checked$ & \tabularnewline
Sample C & $\checked$ & $\checked$ & & $\checked$\tabularnewline
\hline
\hline
\multicolumn{5}{c}{Non-monotone missingness II}\tabularnewline
\hline
& $d$ & $X$ & $Y_{1}$ & $Y_{2}$\tabularnewline
Sample A & $\checked$ & $\checked$ & $\checked$ & \tabularnewline
Sample B & $\checked$ & $\checked$ & & $\checked$\tabularnewline
\hline
\end{tabular}
$d$ is the sampling weight, where the subscript indicates the sample,
$X$ is the vector of auxiliary variables, $Y$, $Y_{1}$ and $Y_{2}$
are scalar outcome variables.
\end{table}
\subsection{Two approaches for probability data integration}
We classify probability data integration methods based on the level
of information to be combined: a macro approach and a micro approach.
In the macro approach, we obtain summary information such as the point
and variance estimates from multiple data sources and combine those
\textcolor{black}{to obtain} a more efficient estimator of the parameter
of interest. In the micro approach, we create a single synthetic data
that contains all available information from all data sources.
\subsubsection{Macro approach: Generalized least squares (GLS) estimation}
\citet{ren97}, \citet{hidi01}, \citet{mer04}, \citet{wu04}, \citet{ybarra08}
and \citet{mer10} considered the problem of combining data from two
independent probability samples to estimate totals at the population
and domain levels. \citet{mer04} and \citet{mer10} provided a rigorous
treatment of the survey integration through the generalized method
of moments.
We focus on the monotone missingness pattern. The same discussion
applies to the other patterns. From each probability sample, we obtain
different estimators for the means of common items. The GLS approach
combines those estimates as an optimal estimator. Let $\widehat{\mu}_{x,A}$
and $\widehat{\mu}_{x,B}$ be unbiased estimators of $\mu_{x}$ from
sample A and sample B, respectively. Let $\widehat{\mu}_{B}$ be an
unbiased estimator of $\mu_{y}$ from sample B.
To combine the multiple estimates, \textcolor{black}{we may build
a linear model of three estimates with two parameters as follows:}
\begin{equation}
\left(\begin{array}{c}
\widehat{\mu}_{x,A}\\
\widehat{\mu}_{x,B}\\
\widehat{\mu}_{B}
\end{array}\right)=\left(\begin{array}{cc}
1 & 0\\
1 & 0\\
0 & 1
\end{array}\right)\left(\begin{array}{c}
\mu_{x}\\
\mu_{y}
\end{array}\right)+\left(\begin{array}{c}
e_{1}\\
e_{2}\\
e_{3}
\end{array}\right),\label{eq:GLS}
\end{equation}
where $(e_{1},e_{2},e_{3})^{\mathrm{\scriptscriptstyle T}}$ has mean $(0,0,0)^{\mathrm{\scriptscriptstyle T}}$, variance-covariance
\[
V=\left(\begin{array}{ccc}
\mathrm{var}(\widehat{\mu}_{x,A}) & \mathrm{cov}(\widehat{\mu}_{x,A},\widehat{\mu}_{x,B}) & \mathrm{cov}(\widehat{\mu}_{x,A},\widehat{\mu}_{B})\\
\mathrm{cov}(\widehat{\mu}_{x,A},\widehat{\mu}_{x,B}) & \mathrm{var}(\widehat{\mu}_{x,B}) & \mathrm{cov}(\widehat{\mu}_{x,B},\widehat{\mu}_{B})\\
\mathrm{cov}(\widehat{\mu}_{x,A},\widehat{\mu}_{B}) & \mathrm{cov}(\widehat{\mu}_{x,B},\widehat{\mu}_{B}) & \mathrm{var}(\widehat{\mu}_{B})
\end{array}\right),
\]
and $\mathrm{var}(\cdot)$ and $\mathrm{cov}(\cdot)$ are the variance and covariance
induced by the sampling probability.
Based on model (\ref{eq:GLS}), treat $(\widehat{\mu}_{x,A},\widehat{\mu}_{x,B},\widehat{\mu}_{B})$
as observations and define a sum of squared error term
\[
Q(\mu_{x},\mu_{y})=\left(\begin{array}{c}
\widehat{\mu}_{x,A}-\mu_{x}\\
\widehat{\mu}_{x,B}-\mu_{x}\\
\widehat{\mu}_{B}-\mu_{y}
\end{array}\right)^{\mathrm{\scriptscriptstyle T}}V^{-1}\left(\begin{array}{c}
\widehat{\mu}_{x,A}-\mu_{x}\\
\widehat{\mu}_{x,B}-\mu_{x}\\
\widehat{\mu}_{B}-\mu_{y}
\end{array}\right).
\]
The optimal estimator of $(\mu_{x},\mu_{y})$ that minimizes $Q(\mu_{x},\mu_{y})$
is
\begin{equation}
\widehat{\mu}_{x}^{*}=\alpha^{*}\widehat{\mu}_{x,A}+(1-\alpha^{*})\widehat{\mu}_{x,B}\label{eq:(3)}
\end{equation}
and
\begin{equation}
\widehat{\mu}_{\mathrm{GLS}}=\widehat{\mu}_{B}+\left(\begin{array}{c}
\widehat{\mathrm{cov}}(\widehat{\mu}_{x,A},\widehat{\mu}_{B})\\
\widehat{\mathrm{cov}}(\widehat{\mu}_{x,B},\widehat{\mu}_{B})
\end{array}\right)^{\mathrm{\scriptscriptstyle T}}\left(\begin{array}{cc}
\widehat{\mathrm{var}}(\widehat{\mu}_{x,A}) & \widehat{\mathrm{cov}}(\widehat{\mu}_{x,A},\widehat{\mu}_{x,B})\\
\widehat{\mathrm{cov}}(\widehat{\mu}_{x,A},\widehat{\mu}_{x,B}) & \widehat{\mathrm{var}}(\widehat{\mu}_{x,B})
\end{array}\right)^{-1}\left(\begin{array}{c}
\widehat{\mu}_{x}^{*}-\widehat{\mu}_{x,A}\\
\widehat{\mu}_{x}^{*}-\widehat{\mu}_{x,B}
\end{array}\right),\label{eq:(4)}
\end{equation}
where
\[
\alpha^{*}=\frac{\widehat{\mathrm{var}}(\widehat{\mu}_{x,B})-\widehat{\mathrm{cov}}(\widehat{\mu}_{x,A},\widehat{\mu}_{x,B})}{\widehat{\mathrm{var}}(\widehat{\mu}_{x,A})+\widehat{\mathrm{var}}(\widehat{\mu}_{x,B})-2\widehat{\mathrm{cov}}(\widehat{\mu}_{x,A},\widehat{\mu}_{x,B})}.
\]
To see the efficiency gain of $\widehat{\mu}_{\mathrm{GLS}}$ over $\widehat{\mu}_{B}$,
using (\ref{eq:(3)}), we express
\[
\widehat{\mu}_{\mathrm{GLS}}=\widehat{\mu}_{B}-\widehat{\mathrm{cov}}(\widehat{\mu}_{B},\widehat{\mu}_{x,B}-\widehat{\mu}_{x,A})\left\{ \widehat{\mathrm{var}}(\widehat{\mu}_{x,B}-\widehat{\mu}_{x,A})\right\} ^{-1}(\widehat{\mu}_{x}^{*}-\widehat{\mu}_{x,B}).
\]
The variance of $\widehat{\mu}_{\mathrm{GLS}}$ is
\[
\mathrm{var}(\widehat{\mu}_{B})-\mathrm{cov}(\widehat{\mu}_{B},\widehat{\mu}_{x,B}-\widehat{\mu}_{x,A})\left\{ \mathrm{var}(\widehat{\mu}_{x,B}-\widehat{\mu}_{x,A})\right\} ^{-1}\mathrm{cov}(\widehat{\mu}_{B},\widehat{\mu}_{x,B}-\widehat{\mu}_{x,A}),
\]
which is not larger than $\mathrm{var}(\widehat{\mu}_{B})$. \textcolor{black}{The
GLS estimator for non-monotone missingness can be constructed similarly.
See \citet{fuller99} for an application in the National Resource Inventory.
}
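As a numerical illustration, the GLS combination requires only the summary statistics and their estimated variance-covariance matrix. The following Python sketch (with hypothetical summary values, and assuming independent samples so that the cross-covariances between samples A and B vanish) implements (\ref{eq:(3)}) and (\ref{eq:(4)}):
\begin{verbatim}
import numpy as np

mu_x_A, var_x_A = 10.2, 0.40   # from sample A
mu_x_B, var_x_B = 10.8, 0.25   # from sample B
mu_B, cov_xB_B  = 52.0, 0.30   # mean of y and cov(mu_x_B, mu_B)
cov_xA_xB = cov_xA_B = 0.0     # A and B independent

a = (var_x_B - cov_xA_xB) / (var_x_A + var_x_B - 2 * cov_xA_xB)
mu_x_star = a * mu_x_A + (1 - a) * mu_x_B

c = np.array([cov_xA_B, cov_xB_B])
V = np.array([[var_x_A, cov_xA_xB], [cov_xA_xB, var_x_B]])
r = np.array([mu_x_star - mu_x_A, mu_x_star - mu_x_B])
mu_gls = mu_B + c @ np.linalg.solve(V, r)
print(mu_x_star, mu_gls)
\end{verbatim}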
\subsubsection{Micro approach: mass imputation}
Mass imputation (also called synthetic data imputation) is a technique
of creating imputed values for items not observed in the current survey
by incorporating information from other surveys. \citet{breidt1996two}
discussed mass imputation for two-phase sampling. \citet{rivers2007}
proposed a mass imputation approach using nearest neighbor imputation
but the theory is not fully developed. \citet{schenker07} reported
several applications of synthetic data imputation, using a model-based
method to estimate totals and other parameters associated with variables
not observed in a larger survey but observed in a much smaller survey.
\citet{legg2009two} and \citet{kim2012combining} developed synthetic
imputation approaches to combining two surveys. \citet{chipperfield2012combining}
discussed composite estimation when one of the surveys is mass imputed.
\citet{bethlehem2016solving} discussed practical issues in sample
matching \textcolor{black}{for mass imputation}.
The primary goal is to create a single synthetic dataset of proxy
values $\widehat{y}_{i}$ for the unobserved $y_{i}$ in sample B
and then use the proxy data together with the associated design weights
of sample A to produce projection estimators of the population mean
$\mu_{y}$. This is particularly useful when sample B is a large scale
survey and item $Y$ is very expensive to measure. The proxy values
$\widehat{y}_{i}$ are generated by first fitting a working model
relating $Y$ to $X$, $E(Y\mid X)=m(X;\beta_{0})$ based on the data
$\{(x_{i},y_{i}):i\in A\}$ from sample A. Then the synthetic values
of $Y$ can be created by $\widehat{y}_{i}=m(x_{i};\widehat{\beta})$
for $i\in B$. \textcolor{black}{Thus, sample A is used as a training
sample for predicting $Y$ in sample B.} The mass imputation estimator
of $\mu_{y}$ is $\widehat{\mu}_{\mathrm{I}}=N^{-1}\sum_{i\in B}d_{B,i}\widehat{y}_{i}$.
\citet{kim2012combining} showed that $\widehat{\mu}_{\mathrm{I}}$ is asymptotically
design-unbiased if $\widehat{\beta}$ satisfies
\begin{equation}
\sum_{i\in A}d_{A,i}\{y_{i}-m(x_{i};\widehat{\beta})\}=0.\label{eq:model-cal}
\end{equation}
With (\ref{eq:model-cal}),
\begin{eqnarray*}
\widehat{\mu}_{\mathrm{I}} & = & N^{-1}\sum_{i\in B}d_{B,i}\widehat{y}_{i}+N^{-1}\sum_{i\in A}d_{A,i}(y_{i}-\widehat{y}_{i})\\
& = & N^{-1}\sum_{i\in B}d_{B,i}m(x_{i};\beta_{0})+N^{-1}\sum_{i\in A}d_{A,i}\{y_{i}-m(x_{i};\beta_{0})\}=\widehat{P}_{B}+\widehat{Q}_{A},
\end{eqnarray*}
and
\[
\mathrm{var}(\widehat{\mu}_{\mathrm{I}})=\mathrm{var}(\widehat{P}_{B})+\mathrm{var}(\widehat{Q}_{A}).
\]
The asymptotic unbiasedness holds regardless of whether the regression
model is true or not. However, a good regression model will reduce
the variance of $\widehat{\mu}_{\mathrm{I}}$. For variance estimation, either
linearization or replication-based sampling \citep{kim2012combining}
can be used.
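A minimal Python sketch of this estimator is given below (hypothetical inputs; a design-weighted linear working model is used, so that the calibration condition (\ref{eq:model-cal}) holds automatically when an intercept is included):
\begin{verbatim}
import numpy as np

# xA, yA, dA: covariates, outcomes, design weights in sample A;
# xB, dB: covariates and design weights in sample B;
# N: population size.
def mass_impute_mean(xA, yA, dA, xB, dB, N):
    XA = np.column_stack([np.ones(len(xA)), xA])
    XB = np.column_stack([np.ones(len(xB)), xB])
    # design-weighted least squares fit on sample A
    beta = np.linalg.solve(XA.T @ (dA[:, None] * XA),
                           XA.T @ (dA * yA))
    return np.sum(dB * (XB @ beta)) / N
\end{verbatim}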
\subsection{Mass imputation with non-monotone missingness}
For non-monotone missingness, \textcolor{black}{{} the mass imputation
method of \citet{kim2012combining} is not directly applicable as
the sample with partial observations may contain additional information
for parameter estimation.} Often, one can consider a joint model of
all variables and use the EM algorithm to estimate the model parameters.
The joint model deduces the conditional distribution of the missing
variables given the observed values for imputation.
For illustration, consider the non-monotone missingness I structure
in Table \ref{tab:Missingness-patterns}. The goal is to develop mass
imputation for both $Y_{2}$ in sample B and $Y_{1}$ in sample C.
It is tempting to specify the conditional distribution of $Y_{2}$
given $(X,Y_{1})$ to impute $Y_{2}$ in sample B and the conditional
distribution of $Y_{1}$ given $(X,Y_{2})$ to impute $Y_{1}$ in
sample C. However, this approach may result in model incompatibility.
That is, there does not exist a joint model of $(Y_{1},Y_{2})$ given
$X$ that leads to the corresponding conditional distributions. To
avoid model incompatibility, we use a joint model for $(Y_{1},Y_{2})$
given $X$ for prediction though specifying the sequential conditional
distribution
\begin{equation}
f(Y_{1},Y_{2}\mid X;\theta)=f_{1}(Y_{1}\mid X;\theta_{1})f_{2}(Y_{2}\mid X,Y_{1};\theta_{2}),\label{eqn5}
\end{equation}
where $\theta=(\theta_{1}^{\mathrm{\scriptscriptstyle T}},\theta_{2}^{\mathrm{\scriptscriptstyle T}})^{\mathrm{\scriptscriptstyle T}}$, $\theta_{1}$
and $\theta_{2}$ are unknown parameters.
For parameter estimation, it suffices to use observations in sample
A; however, this approach ignores the partial information in sample
B and sample C and therefore is not efficient. Let the joint set of
sampling indexes be $S=A\cup B\cup C$. \textcolor{black}{{} Assuming
no overlap between the samples, we define
\[
\pi_{S,i}=\pr(i\in S\mid i\in\mathcal{U})=\left\{ \begin{array}{ll}
\pi_{A,i} & \mbox{if }i\in A\\
\pi_{B,i} & \mbox{if }i\in B\\
\pi_{C,i} & \mbox{if }i\in C
\end{array}\right.
\]
}and let $d_{i}$ be the design weight for unit $i\in S$ without
specifying which sample it belongs to. That is,
$d_{i}=d_{A,i}$ if $i\in A$. To incorporate all available information,
the EM algorithm can be used as follows.
\begin{description}
\item [{[}E-step{]}] Let $\theta^{(t)}$ be the parameter estimate
at iteration $t.$ Compute the conditional expectation of the pseudo
log-likelihood functions:
\begin{eqnarray*}
Q_{1}(\theta_{1}\mid\theta^{(t)}) & = & \sum_{i\in S}d_{i}E\left\{ \log f_{1}(y_{1i}\mid x_{i};\theta_{1})\mid x_{i},y_{i,\mathrm{obs}};\theta^{(t)}\right\} \\
Q_{2}(\theta_{2}\mid\theta^{(t)}) & = & \sum_{i\in S}d_{i}E\left\{ \log f_{2}(y_{2i}\mid x_{i},y_{1i};\theta_{2})\mid x_{i},y_{i,\mathrm{obs}};\theta^{(t)}\right\}
\end{eqnarray*}
where $y_{i,\mathrm{obs}}$ is the observed part of $(y_{1i},y_{2i})$.
\item [{[}M-step{]}] Update the parameter $\theta$ by maximizing
$Q_{1}(\theta_{1}\mid\theta^{(t)})$ and $Q_{2}(\theta_{2}\mid\theta^{(t)})$
with respect to $\theta_{1}$ and $\theta_{2}$.
\end{description}
The E-step and M-step can be iteratively computed until convergence,
leading to the pseudo maximum likelihood estimator $\widehat{\theta}$.
Given $\widehat{\theta}$, mass imputation can be done for both $Y_{2}$
in sample B and $Y_{1}$ in sample C. The imputation model for $Y_{2}$
in sample B is $f_{2}(Y_{2}\mid X,Y_{1};\widehat{\theta}_{2})$. Also,
the imputation model for $Y_{1}$ in sample C is
\begin{equation}
f(Y_{1}\mid X,Y_{2};\widehat{\theta})=\frac{f_{1}(Y_{1}\mid X;\widehat{\theta}_{1})f_{2}(Y_{2}\mid X,Y_{1};\widehat{\theta}_{2})}{\int f_{1}(Y_{1}\mid X;\widehat{\theta}_{1})f_{2}(Y_{2}\mid X,Y_{1};\widehat{\theta}_{2})\mathrm{d} Y_{1}}.\label{5}
\end{equation}
To generate imputed values from (\ref{5}), one may use Markov Chain
Monte Carlo methods or the parametric fractional imputation of \citet{kim2011parametric}.
We now consider the non-monotone missingness II structure in Table
\ref{tab:Missingness-patterns}. Sample A and sample B are probability
samples selected from the same finite population. In sample A,
observe $(X,Y_{1})$ and in sample B, observe $(X,Y_{2})$. The question
of interest is the associational relationship of $Y_{1}$ and $(X,Y_{2})$.
If $(X,Y_{1},Y_{2})$ were jointly observed, one could fit a simple
regression model of $Y_{1}$ on $(X,Y_{2})$. However, based on the
available data, $Y_{1}$ and $Y_{2}$ are not available simultaneously.
This problem fits into the statistical matching framework \citep{dorazio06}.
In statistical matching, the goal is to create $Y_{1}$ for each unit
in sample B by finding a ``statistical twin'' from the sample A.
Typically, one assumes the conditional independence assumption that
$Y_{1}$ and $Y_{2}$ are conditionally independent given $X,$ or
equivalently,
\begin{equation}
f(Y_{1}\mid X,Y_{2})=f(Y_{1}\mid X).\label{cia}
\end{equation}
Then, the ``statistical twin'' is solely determined by ``how close''
they are in terms of $X$'s. However, in a regression model of $Y_{1}$
on $(X,Y_{2})$, (\ref{cia}) sets the regression coefficient associated
with $Y_{2}$ to be zero \textit{a priori}, which is contrary to the
study question of interest.
For a joint modeling of $(X,Y_{1},Y_{2})$ without assuming (\ref{cia}),
identification is an important issue. Consider the following joint
model of $(Y_{1},Y_{2})$ given $X,$
\begin{eqnarray}
Y_{1} & = & \alpha_{0}+\alpha_{1}X+e_{1},\label{eq:ncia}\\
Y_{2} & = & \beta_{0}+\beta_{1}X+\beta_{2}Y_{1}+e_{2},\label{eq:ncia2}
\end{eqnarray}
where $\mathrm{cov}(e_{1},e_{2})=0$. Because $(X,Y_{1})$ is observed in
sample A, $(\alpha_{0},\alpha_{1})$ is identifiable. Because $(X,Y_{2})$
is observed in sample B, $f(Y_{2}\mid X)$ is identifiable.
Coupling (\ref{eq:ncia}) and (\ref{eq:ncia2}) leads to
\[
Y_{2}=(\beta_{0}+\alpha_{0}\beta_{2})+(\beta_{1}+\alpha_{1}\beta_{2})X+\beta_{2}e_{1}+e_{2}.
\]
Thus, only $\beta_{0}+\alpha_{0}\beta_{2}$ and $\beta_{1}+\alpha_{1}\beta_{2}$
are identifiable and $(\beta_{0},\beta_{1},\beta_{2})$ is not.
In general, adding non-linear terms in the parametric assumption can
help achieve identification. For example, consider adding $X^{2}$
in (\ref{eq:ncia}) such that
\begin{eqnarray}
Y_{1} & = & \alpha_{0}+\alpha_{1}X+\alpha_{2}X^{2}+e_{1}.\label{eq:ncia3}
\end{eqnarray}
Again, $(\alpha_{0},\alpha_{1},\alpha_{2})$ is identifiable from
sample A. Coupling (\ref{eq:ncia2}) and (\ref{eq:ncia3}) leads to
\[
Y_{2}=(\beta_{0}+\alpha_{0}\beta_{2})+(\beta_{1}+\alpha_{1}\beta_{2})X+(\alpha_{2}\beta_{2})X^{2}+\beta_{2}e_{1}+e_{2}.
\]
Thus, $\beta_{0}+\alpha_{0}\beta_{2}$, $\beta_{1}+\alpha_{1}\beta_{2}$
and $\alpha_{2}\beta_{2}$ are identifiable from sample B. As long
as $\alpha_{2}\neq0$, $(\beta_{0},\beta_{1},\beta_{2})$ is then
identifiable. For an identifiable model, parameter estimation can
be implemented either using the EM algorithm or GLS.
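To make the identification argument concrete, write the reduced-form regression of $Y_{2}$ on $(1,X,X^{2})$ as $Y_{2}=\gamma_{0}+\gamma_{1}X+\gamma_{2}X^{2}+\text{error}$, with $\gamma_{0}=\beta_{0}+\alpha_{0}\beta_{2}$, $\gamma_{1}=\beta_{1}+\alpha_{1}\beta_{2}$ and $\gamma_{2}=\alpha_{2}\beta_{2}$. The structural coefficients are then recovered in closed form, as in the following sketch (all numerical values hypothetical):
\begin{verbatim}
a0, a1, a2 = 1.0, 0.5, 0.3   # alpha: Y1 on (1, X, X^2), sample A
g0, g1, g2 = 2.2, 1.1, 0.6   # gamma: Y2 on (1, X, X^2), sample B
b2 = g2 / a2                 # requires a2 != 0
b1 = g1 - a1 * b2
b0 = g0 - a0 * b2
print(b0, b1, b2)            # 0.2 0.1 2.0
\end{verbatim}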
Other assumptions can be invoked to achieve model identification.
\citet{kim2016} used an instrumental variable assumption for model
identification and developed fractional imputation methods \textcolor{black}{for
statistical matching}. \citet{park2016} presented an application
of the statistical matching technique using fractional imputation
in the context of handling mixed-mode surveys. \citet{park2017} applied
the method to combine two surveys with measurement errors.
\section{Combining probability and non-probability samples\label{sec:PNP}}
\subsection{Combining a probability sample with a non-probability sample}
Statistical analysis of non-probability survey
samples faces many challenges as documented by \citet{baker13}. Non-probability
samples have unknown selection/inclusion mechanisms and are typically
biased, and they do not represent the target population. A popular
framework in dealing with the biased non-probability samples is to
assume that auxiliary variable information on the same population
is available from an existing probability survey sample. This framework
was first used by \citet{rivers2007} and followed by a number of
other authors including \citet{Vavreck2008}, \citet{Lee2009}, \citet{Valliant2011},
\citet{Brick2015}, \citet{elliott2017inference} and \citet{chen2018doubly},
among others. Combining the up-to-date information from a non-probability
sample and auxiliary information from a probability sample can be
viewed as data integration, which is an emerging area of research
in survey sampling \citep{lohr2017combining}.
Data integration for finite population inference is similar to the
problem of combining randomized experiments and non-randomized real-world
evidence studies for causal inference of treatment effects \citep{keiding2016perils}.
In a randomized clinical trial, the treatment assignment mechanism is
known, and therefore treatment effect evaluation based on the trial
is unconfounded. However, due to restrictive inclusion
and exclusion criteria, the trial sample may be narrowly defined and
cannot represent the real-world patient population. On the other
hand, by the real-world data collection mechanism, the real-world
evidence study is often representative of the target population. Combining
trial and real-world evidence studies can achieve more robust and
efficient inference of treatment effect for a target patient population.
Table \ref{tab:Data-integration} draws a parallel comparison of data
sources between data integration in survey sampling and that in treatment
effect evaluation.
\begin{table}[H]
\caption{\label{tab:Data-integration}Data integration in survey sampling and
Biostatistics}
\centering%
\begin{tabular}{cccc}
\hline
& & Representative of & unbiased\tabularnewline
Survey sampling & Treatment effect evaluation & the finite population & estimation$^{*}$\tabularnewline
\hline
Probability sample & Real world evidence study & $\checked$ & \tabularnewline
Non-probability sample & Randomized experiment & & $\checked$\tabularnewline
\hline
\end{tabular}
$^{*}$In survey sampling, some probability samples may not observe
the study variable of interest; for treatment effect evaluation, randomized
experiments provide unbiased estimation of treatment effect due to
treatment randomization.
\end{table}
Survey statisticians and biostatisticians have provided different
methods for combining information from multiple data sources. \citet{lohr2017combining}
and \citet{rao2019} provided comprehensive reviews of statistical
methods for finite population inference. Existing methods for data
integration of a probability sample and a non-probability sample can
be categorized into three types as follows. The first type is the
so-called propensity score adjustment \citep{rosenbaum1983central}.
In this approach, the probability of a unit being selected into the
non-probability sample, which is referred to as the propensity or
sampling score, is modeled and estimated for all units in the non-probability
sample. The subsequent adjustments, such as propensity score weighting
or stratification, can then be used to adjust for selection biases;
see, e.g., \citet{lee2009estimation,valliant2011estimating,elliott2017inference}
and \citet{chen2018doubly}. \citet{stuart2011use,stuart2015assessing}
and \citet{buchanan2018generalizing} used propensity score weighting
to generalize results from randomized trials to a target population.
\citet{o2014generalizing} proposed propensity score stratification
for analyzing a non-randomized social experiment. One notable disadvantage
of the propensity score methods is that they rely on an explicit propensity
score model and are biased and highly variable if the model is misspecified
\citep{kang2007demystifying}. The second type uses calibration weighting
\citep{deville1992calibration,kott2006using}. This technique calibrates
auxiliary information in the non-probability sample with that in the
probability sample, so that after calibration the weighted distribution
of the non-probability sample is similar to that of the target population
\citep{disogra2011calibrating}. The third type is mass imputation,
which imputes the missing values for all units in the probability
sample. In the usual imputation for missing data analysis, the respondents
in the sample constitute a training dataset for developing an imputation
model. In the mass imputation, an independent non-probability sample
is used as a training dataset, and imputation is applied to all units
in the probability sample; see, e.g., \citet{breidt1996two}; \citet{rivers2007};
\citet{kim2012combining}; \citet{chipperfield2012combining}; \citet{bethlehem2016solving};
and \citet{yang2018integration}.
\subsection{Setup and assumptions}
Non-probability samples become increasingly popular in survey statistics
but may suffer from selection bias that limits the generalizability
of results to the target population. We consider integrating a non-probability
sample with a carefully designed probability sample which provides
the representative covariate information of the target population.
Let $X\in\mathbb{R}^{p}$ be a vector of auxiliary variables (including an
intercept) that are available from two data sources, and let $Y\in\mathbb{R}$
be the study variable of interest. We consider combining a probability
sample with $X$, referred to as sample A, and a non-probability sample
with $(X,Y)$, referred to as sample B, to estimate $\mu_{y}$ the
population mean of $Y$. We focus on the case when the study variable
$Y$ is observed in \textcolor{black}{sample B} only, but the other
auxiliary variables are commonly observed in both data. Although the
big data source has a large sample size, the sampling mechanism is
often unknown, and we cannot compute the first-order inclusion probability
for Horvitz-Thompson estimation. The naive estimators without adjusting
for the sampling process are subject to selection biases, as illustrated
in Table \ref{tab:Illustration-of-bias-big-Data}. On the other hand,
although the probability sample with sampling weights represents the
finite population, it does not observe the study variable. The complementary
features of probability samples and non-probability samples raise
the question of whether it is possible to develop data integration
methods that leverage the advantages of both sources.
\begin{table}[H]
\caption{\label{tab:Illustration-of-bias-big-Data}Illustration of the total
error from the simple mean estimator of $\bar{Y}_{N}$ based on a probability
simple random sample and a big non-probability sample}
\centering%
\begin{tabular}{ccccc}
\hline
Total Error (MSE) & = & Variance & + & Bias$^{2}$\tabularnewline
\hline
Probability sample & & $\{(1-f_{A})/n_{A}\}S_{Y}^{2}$ & & $0$\tabularnewline
Non-probability sample (Big data) & & $\approx0$ & & $r_{B}^{2}\{(1-f_{B})/f_{B}\}S_{Y}^{2}$\tabularnewline
\hline
\end{tabular}
$f_{A}=n_{A}/N$ and $f_{B}=n_{B}/N$ are the sampling fractions of
sample A and sample B, respectively; $r_{B}$ is the correlation between
the outcome $Y$ and the inclusion indicator $I_{B}$; $S_{Y}$ is
the population variance of $Y$.
\end{table}
Unlike the setting in Section \ref{sec:integProbSamples}, the sampling
mechanism of the non-probability sample B is unknown, and therefore the
target population quantity $\mu_{y}$ is not identifiable in general
without further assumptions.
As an illustrative example, consider two datasets from the 2005 Pew Research Centre (PRC) and the 2005 Behavioral Risk Factor Surveillance
System (BRFSS). The goal of the PRC study was to evaluate the relationship
between individuals and community \citep{chen2018doubly,kim2018combining}.
The 2005 PRC data are non-probability sample data provided by eight
different vendors, which consist of $n_{B}=9,301$ subjects. \citet{yang2019doubly}
focused on two study variables, a continuous $Y_{1}$ (days had at least
one drink last month) and a binary $Y_{2}$ (an indicator of voted
local elections). The 2005 BRFSS sample is a probability sample, which
consists of $n_{A}=441,456$ subjects with survey weights. This dataset
does not have measurements on the study variables of interest; however,
it contains a rich set of common covariates with the PRC dataset. The
covariate distributions from the PRC sample and the BRFSS sample are
considerably different, e.g., age, education (high school or less),
financial status (no money to see doctors, own house), retirement
rate, and health (smoking). Therefore, the PRC dataset is not representative
of the target population, and the naive analyses of the study variables
are subject to selection biases.
Let $f(Y\mid X)$ be the conditional distribution of $Y$ given $X$
in the superpopulation model $\zeta$ that generates the finite population.
We make the following assumption.
\begin{assumption} \label{asmp:MAR} (i) The sampling indicator $I_{B}$
of sample B and the \textcolor{black}{study} variable $Y$ are independent
given $X$; i.e. $P(I_{B}=1\mid X,Y)=P(I_{B}=1\mid X)$, referred
to as the sampling score $\pi_{B}(X)$; and (ii)\textcolor{black}{{}
$\pi_{B}(X)>0$ }for all $X$.
\end{assumption}
Assumption \ref{asmp:MAR} (i) and (ii) constitute the strong ignorability
condition \citep{rosenbaum1983central}. This assumption holds if
the set of covariates contains all predictors for the outcome that
affect the possibility of being selected in sample B. This setup has
previously been used by several authors; see, e.g., \citet{rivers2007}
and \citet{Vavreck2008}. Assumption \ref{asmp:MAR} (i) states the
ignorability of the selection mechanism to sample B conditional upon
the covariates. Under Assumption \ref{asmp:MAR} (i), $E(Y\mid X)=E(Y\mid X,I_{B}=1)$,
denoted by $m(X)$, can be estimated based on sample B. Assumption
\ref{asmp:MAR} (ii) implies that the support of $X$ in sample B is the
same as that in the finite population. Assumption \ref{asmp:MAR}
(ii) does not hold if certain units would never be included in the
\textcolor{black}{non-probability} sample. The plausibility of this
assumption can be \textcolor{black}{easily checked by comparing the
marginal distributions of the auxiliary variables in sample B with
those in sample A.}
Under the sampling ignorability assumption, there are two main approaches:
i) the weighting approach by constructing weights for sample B to
improve the representativeness of sample B; ii) the imputation approach
by creating mass imputation for sample A using the observations in
sample B. There is considerable interest in bridging the findings
from a randomized clinical trial to the target population. This problem
has been termed as generalizability \citep{cole2010generalizing,stuart2011use,hernan2011compound,tipton2013improving,o2014generalizing,stuart2015assessing,keiding2016perils,buchanan2018generalizing},
external validity \citep{rothwell2005external} or transportability
\citep{pearl2011transportability,rudolph2017robust} in the statistics
literature and has connections to the covariate shift problem in machine
learning \citep{sugiyama2012machine}.
\subsection{Propensity score weighting}
\textcolor{black}{Under Assumption \ref{asmp:MAR} (i) and (ii), we can build a model
for $\pi_{B}(X)=P(I_{B}=1\mid X)$ and use it to adjust for the selection
bias in sample B. } In practice, the propensity score function $\pi_{B}(X)$
is unknown and needs to be estimated from the data. Let $\pi_{B}(X;\alpha)$
be the posited models for $\pi_{B}(X)$, where $\alpha$ is the unknown
parameter. Several authors have proposed different estimation strategies.
For example, $\widehat{\alpha}$ can be obtained by a weighted regression
of $I_{B,i}$ on $x_{i}$ combining sample A and sample B ($I_{B,i}=0$
for $i\in A$ and $I_{B,i}=1$ for $i\in B$), weighted by the design
weights from sample A, which is valid if the size of sample B is relatively
small \citep{valliant2011estimating}. \citet{chen2018doubly} proposed
estimating $\alpha$ by solving
\begin{equation}
\widehat{S}_{1}(\alpha)=\sum_{i\in B}x_{i}-\sum_{i\in A}d_{A,i}\pi_{B}(x_{i};\alpha)x_{i}=0,\label{eq:(5)}
\end{equation}
which is a sample version of the population estimating equation $S(\alpha)=\sum_{i\in U}\left\{ I_{B,i}-\pi(x_{i};\alpha)\right\} x_{i}=0.$
Instead of using (\ref{eq:(5)}), one can also use
\[
\widehat{S}_{2}(\alpha)=\sum_{i\in B}\frac{1}{\pi_{B}(x_{i};\alpha)}x_{i}-\sum_{i\in A}d_{A,i}x_{i}=0,
\]
which is closely related to the calibration weighting approach for
nonresponse adjustment.
Given $\widehat{\alpha}$, the inverse probability of sampling weighting
estimator of $\mu_{y}$ is
\begin{equation}
\widehat{\mu}_{\mathrm{IPW}}=\widehat{\mu}_{\mathrm{IPW}}(\widehat{\alpha})=N^{-1}\sum_{i=1}^{N}\frac{I_{B,i}}{\pi_{B}(x_{i};\widehat{\alpha})}y_{i}.\label{eq:psw}
\end{equation}
Variance estimation of $\widehat{\mu}_{\mathrm{IPW}}$ can be obtained by
the standard M-estimation theory.
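A minimal Python sketch of this procedure is given below (hypothetical inputs), solving (\ref{eq:(5)}) by Newton's method under a logistic model $\pi_{B}(x;\alpha)=\{1+\exp(-x^{\mathrm{\scriptscriptstyle T}}\alpha)\}^{-1}$ and then forming (\ref{eq:psw}):
\begin{verbatim}
import numpy as np

# XA, dA: covariates (with intercept) and design weights, sample A;
# XB, yB: covariates and outcomes, sample B; N: population size.
def mu_ipw(XA, dA, XB, yB, N, iters=25):
    alpha = np.zeros(XA.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-XA @ alpha))
        S = XB.sum(axis=0) - XA.T @ (dA * p)   # estimating eq.
        J = -(XA * (dA * p * (1 - p))[:, None]).T @ XA
        alpha -= np.linalg.solve(J, S)         # Newton update
    pB = 1 / (1 + np.exp(-XB @ alpha))
    return np.sum(yB / pB) / N
\end{verbatim}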
One of the notable disadvantages of the propensity score methods is
that they rely on an explicit propensity score model and are biased
if the model is mis-specified \citep{kang2007demystifying}. Moreover,
if the estimated propensity score is close to zero, $\widehat{\mu}_{\mathrm{IPW}}$
will be highly unstable.
\subsection{Calibration weighting}
The second weighting strategy is calibration weighting, or benchmark
weighting \citep{deville1992calibration,kott2006using}. This technique
can be used to calibrate auxiliary information in the non-probability
sample with that in the probability sample, so that after calibration
the non-probability sample is similar to the target population \citep{disogra2011calibrating}.
Instead of estimating the propensity score model and inverting the
propensity score to correct for the selection bias of the non-probability
sample, the calibration strategy estimates the weights directly. Toward
this end, we assign a weight $\omega_{B,i}$ to each unit $i$ in
the sample B so that
\begin{equation}
\sum_{i\in B}\omega_{B,i}x_{i}=\sum_{i\in A}d_{A,i}x_{i}.\label{eq:calibration constraints}
\end{equation}
where $\sum_{i\in A}d_{A,i}x_{i}$ is a design-weighted estimate of
the population total of $X$ from the probability sample. Constraint
\eqref{eq:calibration constraints} is referred to as the covariate
balancing constraint \textcolor{black}{{} \citep{imai2014}}, and weights
$\mathcal{Q}_{B}=\{\omega_{B,i}:i\in B\}$ are the calibration weights.
The balancing constraint calibrates the covariate distribution of
the non-probability sample to the target population in terms of $X$.
Instead of calibrating on each component of $X,$ one can use model-based calibration
\citep{mcconville2017model,chen2018model,chen2019calibrating}. In
this approach, one can posit a parametric model for $E(Y\mid X)=m(X;\beta)$
and estimate the unknown parameter $\beta$ based on sample B. The
model-based calibration specifies the constraints for $\mathcal{Q}_{B}$
as
\begin{equation}
\sum_{i\in B}\omega_{B,i}m(x_{i};\widehat{\beta})=\sum_{i\in A}d_{A,i}m(x_{i};\widehat{\beta}).\label{eq:model-cal-constraint}
\end{equation}
We estimate $\mathcal{Q}_{B}$ by solving the following optimization
problem:
\begin{equation}
\underset{\mathcal{Q}_{B}}{\text{min}}\left\{ L(\mathcal{Q}_{B})=\sum_{i\in B}\omega_{B,i}\log\omega_{B,i}\right\} ,\label{eq:optQ}
\end{equation}
subject to $\omega_{B,i}\ge0,\;$ for all $i\in B$; $\sum_{i\in B}\omega_{B,i}=N$,
and the balancing constraint (\ref{eq:calibration constraints}) or
(\ref{eq:model-cal-constraint}).
The objective function in \eqref{eq:optQ} is the entropy of the calibration
weights; thus, minimizing this criteria ensures that the empirical
distribution of calibration weights are not too far away from the
uniform, such that it minimizes the variability due to heterogeneous
weights. This optimization problem can be solved using convex optimization
with Lagrange multiplier. Other objective functions, such as $L(\mathcal{Q}_{B})=\sum_{i\in B}\omega_{B,i}^{2}$,
can also be considered. This optimization problem can be solved using
convex optimization with Lagrange multiplier. By introducing Lagrange
multiplier $\lambda$, the objective function becomes
\begin{equation}
L(\lambda,\mathcal{Q}_{B})=\sum_{i\in B}\omega_{B,i}\log\omega_{B,i}-\lambda^{\top}\left\{ \sum_{i\in B}\omega_{B,i}x_{i}-\sum_{i\in A}d_{A,i}x_{i}\right\} .\label{eq:lagrange_obj}
\end{equation}
Thus by minimizing \eqref{eq:lagrange_obj}, the estimated weights
are
\[
\omega_{B,i}=\omega_{B}(x_{i};\widehat{\lambda})=\frac{N\exp\left(\widehat{\lambda}^{\top}x_{i}\right)}{\sum_{i\in B}\exp\left(\widehat{\lambda}^{\top}x_{i}\right)},
\]
and $\widehat{\lambda}$ solves the equation
\begin{equation}
U(\lambda)=\sum_{i\in B}\exp\left(\lambda^{\top}x_{i}\right)\left\{ x_{i}-\textcolor{black}{ \frac{1}{N} }\sum_{i\in A}d_{A,i}x_{i}\right\} =0,\label{e:lam}
\end{equation}
which is the dual problem to the optimization problem (\ref{eq:optQ}).
The calibration weighting estimator is
\begin{equation}
\widehat{\mu}_{\mathrm{cal}}=\frac{1}{N}\sum_{i=1}^{N}\omega_{B,i}I_{B,i}y_{i}.\label{eq:cal-1}
\end{equation}
Variance estimation of $\widehat{\mu}_{\mathrm{cal}}$ can be obtained by
the standard M-estimation theory by treating $\lambda$ as the nuisance
parameter and (\ref{e:lam}) as the corresponding estimating equation.
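A minimal Python sketch of this weighting scheme (hypothetical inputs) solves the dual equation (\ref{e:lam}) numerically and then evaluates (\ref{eq:cal-1}):
\begin{verbatim}
import numpy as np
from scipy.optimize import root

# XB, yB: covariates (with intercept) and outcomes in sample B;
# tx: sum over sample A of d_Ai * x_i; N: population size.
def mu_cal(XB, yB, tx, N):
    mu = tx / N
    U = lambda lam: (np.exp(XB @ lam)[:, None]
                     * (XB - mu)).sum(axis=0)
    lam = root(U, np.zeros(XB.shape[1])).x
    e = np.exp(XB @ lam)
    w = N * e / e.sum()       # entropy calibration weights
    return np.sum(w * yB) / N
\end{verbatim}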
The justification for $\widehat{\mu}_{\mathrm{cal}}$ subject to constraint
(\ref{eq:calibration constraints}) relies on the linearity of the
outcome model, i.e., $m(X)=X^{\mathrm{\scriptscriptstyle T}}\beta^{*}$ for some $\beta^{*}$,
or the linearity of the inverse probability of sampling weight, i.e.,
$\{\pi_{B}(X)\}^{-1}=X^{\mathrm{\scriptscriptstyle T}}\alpha^{*}$ for some $\alpha^{*}$ (\citealp{fuller2009sampling};
Theorem 5.1). The linearity conditions are unlikely to hold for non-continuous
variables. In these cases, $\widehat{\mu}_{\mathrm{cal}}$ may be biased.
The justification for $\widehat{\mu}_{\mathrm{cal}}$ subject to constraint
(\ref{eq:model-cal-constraint}) relies on a correct specification
of $m(X;\beta)$ in the data integration problem.
\textcolor{black}{\citet{chan2016} generalized this idea further
to develop a general calibration weighting method that satisfies the
covariate balancing property with increasing dimensions of the control
variables $m(x)$. \citet{zhao2019} developed a unified approach
to covariate balancing propensity score methods using tailored loss functions.
Regularization techniques that add penalty terms to the loss function
can be naturally incorporated into this framework, and machine learning
methods, such as boosting, can be used. The covariate balancing condition,
or calibration condition, in (\ref{eq:calibration constraints}),
can be relaxed. \citet{zubizarreta2015} relaxed the exact balancing
constraints to some tolerance level. \citet{wong2019} used the theory
of reproducing kernel Hilbert spaces to develop a uniform approximate
balance for covariate functions.}
\subsection{Mass imputation approach}
The third type is mass imputation, where the imputed values are created
for the whole elements in the probability sample. In the usual imputation
for missing data analysis, the respondents in the sample provide a
training dataset for developing an imputation model. In the mass imputation,
an independent big data sample is used as a training dataset, and
imputation is applied to all units in the probability sample. While
the mass imputation idea for incorporating information from big data
is very natural, the literature on mass imputation itself is very
sparse.
In a parametric approach, let $m(X;\beta)$ be the posited model for
$m(X)$, where $\beta\in\mathbb{R}^{p}$ is the unknown parameter.
Under Assumption \ref{asmp:MAR}, $\widehat{\beta}$ can be obtained
by fitting the model to sample B. We assume that $\widehat{\beta}$
is the unique solution to
\[
\widehat{U}(\beta)=\sum_{i\in B}\left\{ y_{i}-m(x_{i};\beta)\right\} h(x_{i};\beta)=0
\]
for some $p$-dimensional vector $h(x_{i};\beta)$. Thus, we use the
observations in sample B to obtain $\widehat{\beta}$ and use it to
construct $\widehat{y}_{i}=m(x_{i};\widehat{\beta})$ for all $i\in A$.
Under some regularity conditions, the mass imputation estimator
\[
\widehat{\mu}_{\mathrm{I}}=\widehat{\mu}_{\mathrm{I}}(\widehat{\beta})=N^{-1}\sum_{i\in A}d_{A,i}m(x_{i};\widehat{\beta})
\]
satisfies $\widehat{\mu}_{\mathrm{I}}=\widehat{\mu}_{\mathrm{I}}(\beta_{0})+o_{P}(n_{B}^{-1/2})$
where
\begin{eqnarray*}
\widehat{\mu}_{\mathrm{I}}(\beta) & = & N^{-1}\sum_{i\in A}d_{A,i}m(x_{i};\beta)+n_{B}^{-1}\sum_{i\in B}\left\{ y_{i}-m(x_{i};\beta)\right\} h(x_{i};\beta)^{\mathrm{\scriptscriptstyle T}}c^{*},\\
c^{*} & = & \left\{ n_{B}^{-1}\sum_{i\in B}\dot{m}(x_{i};\beta_{0})h^{\mathrm{\scriptscriptstyle T}}(x_{i};\beta_{0})\right\} ^{-1}\left\{ N^{-1}\sum_{i=1}^{N}\dot{m}(x_{i};\beta_{0})\right\} ,
\end{eqnarray*}
where $\beta_{0}$ is the true value of $\beta$ and $\dot{m}(x;\beta)=\partial m(x;\beta)/\partial\beta$.
Also,
\[
E\{\widehat{\mu}_{\mathrm{I}}(\beta_{0})-\mu_{y}\}=0,
\]
and
\begin{multline*}
\mathrm{var}\left\{ \widehat{\mu}_{\mathrm{I}}(\beta_{0})-\mu_{y}\right\} =\mathrm{var}\left\{ N^{-1}\sum_{i\in A}d_{A,i}m(x_{i};\beta_{0})-N^{-1}\sum_{i\in U}m(x_{i};\beta_{0})\right\} \\
+E\left[n_{B}^{-2}\sum_{i\in B}E\left(e_{i}^{2}\mid x_{i}\right)\left\{ h(x_{i};\beta_{0})^{\mathrm{\scriptscriptstyle T}}c^{*}\right\} ^{2}\right],
\end{multline*}
where $e_{i}=y_{i}-m(x_{i};\beta_{0})$. The justification for $\widehat{\mu}_{\mathrm{I}}$
relies on a correct specification of $m(X;\beta)$ and the consistency
of $\widehat{\beta}$. If $m(X;\beta)$ is misspecified or $\widehat{\beta}$
is inconsistent, $\widehat{\mu}_{\mathrm{I}}$ can be biased. For variance
estimation, either linearization method or bootstrap method can be
used. See \citet{kim2018combining} for more details.
\subsection{Doubly robust estimation}
To improve the robustness against model misspecification, one can
consider combining the weighting and imputation approaches \citep{kim2018sampling}.
The doubly robust estimator employs both the propensity score and
the outcome models, which is given by
\begin{equation}
\widehat{\mu}_{\mathrm{dr}}=\widehat{\mu}_{\mathrm{dr}}(\widehat{\alpha},\widehat{\beta})=N^{-1}\sum_{i=1}^{N}\left[\frac{I_{B,i}}{\pi_{B}(x_{i};\widehat{\alpha})}\{y_{i}-m(x_{i};\widehat{\beta})\}+I_{A,i}d_{A,i}m(x_{i};\widehat{\beta})\right].\label{eq:dr}
\end{equation}
The estimator $\widehat{\mu}_{\mathrm{dr}}$ is doubly robust in the sense
that it is consistent if either the propensity score model or the
outcome model is correctly specified, not necessarily both. Moreover,
it is locally efficient if both models are correctly specified \citep{bang2005doubly,cao2009improving}.
Let $\widehat{\mu}_{\mathrm{HT}}=N^{-1}\sum_{i\in A}d_{A,i}y_{i}$ be the
Horvitz--Thompson estimator that could be used if $y_{i}$ were observed
in sample A. We express $\widehat{\mu}_{\mathrm{dr}}-\widehat{\mu}_{\mathrm{HT}}=N^{-1}[-\sum_{i\in A}d_{A,i}\widehat{e}_{i}+\sum_{i\in B}\{\pi_{B}(x_{i};\widehat{\alpha})\}^{-1}\widehat{e}_{i}]$, where
$\widehat{e}_{i}=y_{i}-\widehat{y}_{i}$. To show the double robustness
of $\widehat{\mu}_{\mathrm{dr}}$, we consider two scenarios. In the first
scenario, if $\pi_{B}(X;\alpha)$ is correctly specified, then
\[
E\left(\widehat{\mu}_{\mathrm{dr}}-\widehat{\mu}_{\mathrm{HT}}\mid\mathcal{F}_{N}\right)\cong N^{-1}\left(-\sum_{i\in A}d_{A,i}\widehat{e}_{i}+\sum_{i\in U}\widehat{e}_{i}\right)
\]
which is design-unbiased for zero. In the second scenario, if $m(X;\beta)$
is correctly specified, then $E(\widehat{e}_{i})\cong0$. In both
cases, $\widehat{\mu}_{\mathrm{dr}}-\widehat{\mu}_{\mathrm{HT}}$ is unbiased for zero,
and therefore $\widehat{\mu}_{\mathrm{dr}}$ is unbiased for $\mu_{y}$.
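A minimal Python sketch of (\ref{eq:dr}) is given below; \texttt{pB} holds the fitted sampling scores $\pi_{B}(x_{i};\widehat{\alpha})$ for $i\in B$, and \texttt{mB}, \texttt{mA} the fitted values $m(x_{i};\widehat{\beta})$ on samples B and A (all hypothetical inputs, e.g.\ produced by the sketches above):
\begin{verbatim}
import numpy as np

def mu_dr(yB, pB, mB, mA, dA, N):
    bias_corr = np.sum((yB - mB) / pB)   # sum over sample B
    projection = np.sum(dA * mA)         # sum over sample A
    return (bias_corr + projection) / N
\end{verbatim}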
If either $\pi_{B}(X^{\mathrm{\scriptscriptstyle T}}\alpha)$ or $m(X^{\mathrm{\scriptscriptstyle T}}\beta)$ is correctly
specified,
\[
n^{1/2}\left\{ \widehat{\mu}_{\mathrm{dr}}(\widehat{\alpha},\widehat{\beta})-\mu\right\} \rightarrow\mathcal{N}\left(0,V\right),
\]
as $n\rightarrow\infty$, where $V=\lim_{n\rightarrow\infty}(V_{1}+V_{2}),$
\begin{eqnarray*}
V_{1} & = & E\left\{ \frac{n}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}(\pi_{A,ij}-\pi_{A,i}\pi_{A,j})\frac{m(x_{i}^{\mathrm{\scriptscriptstyle T}}\beta^{*})}{\pi_{A,i}}\frac{m(x_{j}^{\mathrm{\scriptscriptstyle T}}\beta^{*})}{\pi_{A,j}}\right\} ,\\
V_{2} & = & \frac{n}{N^{2}}\sum_{i=1}^{N}E\left[\left\{ \frac{I_{B,i}}{\pi_{B,i}(x_{i}^{\mathrm{\scriptscriptstyle T}}\alpha^{*})}-1\right\} ^{2}\left\{ y_{i}-m(x_{i}^{\mathrm{\scriptscriptstyle T}}\beta^{*})\right\} ^{2}\right].
\end{eqnarray*}
To estimate $V_{1}$, we can use the design-based variance estimator
applied to $m(X_{i}^{\mathrm{\scriptscriptstyle T}}\widehat{\beta})$ as
\begin{equation}
\widehat{V}_{1}=\frac{n}{N^{2}}\sum_{i\in A}\sum_{j\in A}\frac{(\pi_{A,ij}-\pi_{A,i}\pi_{A,j})}{\pi_{A,ij}}\frac{m(X_{i}^{\mathrm{\scriptscriptstyle T}}\widehat{\beta})}{\pi_{A,i}}\frac{m(X_{j}^{\mathrm{\scriptscriptstyle T}}\widehat{\beta})}{\pi_{A,j}}.\label{eq:Vhat1}
\end{equation}
To estimate $V_{2},$ we further express $V_{2}$ as
\begin{equation}
V_{2}=\frac{n}{N^{2}}\sum_{i=1}^{N}E\left[\left\{ \frac{I_{B,i}}{\pi_{B,i}(X_{i}^{\mathrm{\scriptscriptstyle T}}\alpha^{*})^{2}}-\frac{2I_{B,i}}{\pi_{B,i}(X_{i}^{\mathrm{\scriptscriptstyle T}}\alpha^{*})}\right\} \left\{ Y_{i}-m(X_{i}^{\mathrm{\scriptscriptstyle T}}\beta^{*})\right\} ^{2}+\left\{ Y_{i}-m(X_{i}^{\mathrm{\scriptscriptstyle T}}\beta^{*})\right\} ^{2}\right].\label{eq:V2-1}
\end{equation}
Let $\sigma^{2}(X_{i}^{\mathrm{\scriptscriptstyle T}}\beta^{*})=E\left[\left\{ Y_{i}-m(X_{i}^{\mathrm{\scriptscriptstyle T}}\beta^{*})\right\} ^{2}\right]$,
and let $\widehat{\sigma}^{2}(X_{i})$ be a consistent estimator of
$\sigma^{2}(X_{i}^{\mathrm{\scriptscriptstyle T}}\beta^{*})$. We can then estimate $V_{2}$
by
\begin{equation}
\widehat{V}_{2}=\frac{n}{N^{2}}\sum_{i=1}^{N}\left[\left\{ \frac{I_{B,i}}{\pi_{B}(X_{i}^{\mathrm{\scriptscriptstyle T}}\widehat{\alpha})^{2}}-\frac{2I_{B,i}}{\pi_{B}(X_{i}^{\mathrm{\scriptscriptstyle T}}\widehat{\alpha})}\right\} \left\{ Y_{i}-m(X_{i}^{\mathrm{\scriptscriptstyle T}}\widehat{\beta})\right\} ^{2}+I_{A,i}d_{A,i}\widehat{\sigma}^{2}(X_{i})\right].\label{eq:Vhat2}
\end{equation}
By the law of large numbers, $\widehat{V}_{2}$ is consistent for
$V_{2}$ as long as either $\pi_{B}(X_{i}^{\mathrm{\scriptscriptstyle T}}\alpha)$
or $m(X_{i}^{\mathrm{\scriptscriptstyle T}}\beta)$ is correctly specified, and therefore it
is doubly robust.
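To make these formulas concrete, the following Python sketch (ours and purely
illustrative: the simulated population, the logistic sampling score, the linear
outcome model, and the simple random sampling design for sample A are hypothetical
choices, not prescriptions from the text) computes $\widehat{\mu}_{\mathrm{dr}}$ together
with $\widehat{V}_{1}$ in (\ref{eq:Vhat1}) and $\widehat{V}_{2}$ in (\ref{eq:Vhat2}).
For brevity the true sampling score is plugged in rather than an estimate of $\alpha$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population (illustrative only)
N = 100_000
x = rng.normal(size=N)
y = 1.0 + 2.0 * x + rng.normal(size=N)

# Non-probability sample B with logistic sampling score pi_B(x'alpha)
piB = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * x)))
IB = rng.random(N) < piB

# Probability sample A: SRS with design weights d_A = N / n_A
nA = 1_000
A = rng.choice(N, size=nA, replace=False)
dA = N / nA

# Outcome model m(x'beta) fitted on sample B by least squares
XB = np.column_stack([np.ones(IB.sum()), x[IB]])
beta = np.linalg.lstsq(XB, y[IB], rcond=None)[0]
m = lambda v: beta[0] + beta[1] * v

piB_hat = piB   # true score plugged in for brevity

# Doubly robust estimator: prediction over A + bias correction over B
mu_dr = (np.sum((y[IB] - m(x[IB])) / piB_hat[IB])
         + np.sum(dA * m(x[A]))) / N

# V1 (eq. Vhat1) under SRS: pi_i = nA/N, pi_ij = nA(nA-1)/(N(N-1))
pi1, pi2 = nA / N, nA * (nA - 1) / (N * (N - 1))
t1 = m(x[A]) / pi1
Pij = np.full((nA, nA), pi2)
np.fill_diagonal(Pij, pi1)
V1 = nA / N**2 * np.sum((Pij - pi1**2) / Pij * np.outer(t1, t1))

# V2 (eq. Vhat2), with sigma^2(x) estimated by a constant
res = y[IB] - m(x[IB])
sigma2 = np.mean(res**2)
V2 = nA / N**2 * (np.sum((1 / piB_hat[IB]**2 - 2 / piB_hat[IB]) * res**2)
                  + nA * dA * sigma2)

# V scales n^{1/2}(mu_dr - mu), so Var(mu_dr) is roughly (V1 + V2) / nA
se = np.sqrt((V1 + V2) / nA)
print(f"mu_dr = {mu_dr:.4f} (se {se:.4f}); true mean = {y.mean():.4f}")
\end{verbatim}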
\section{Combining probability and big data\label{sec:PbigD}}
\subsection{Big data sample}
\textcolor{black}{To meet new challenges in probability sampling,
statistical offices face increasing pressure to utilize convenient
but often uncontrolled big data sources, such as satellite information
\citep{mcroberts2010advances}, mobile sensor data \citep{palmer2013new},
and web survey panels \citep{tourangeau2013science}. \citet{couper2013sky},
\citet{citro2014multiple}, \citet{tam2015big}, and \citet{pfeffermann2015methodological}
articulated the promise of harnessing big data for official and survey
statistics but also raised many issues regarding big data sources.
While such data sources provide timely data for a large number of
variables and population elements, they are non-probability samples
and often fail to represent the target population of interest because
of inherent selection biases. \citet{tam2018big} also covered some
ethical challenges of big data for official statisticians and discussed
some preliminary methods of correcting for selection bias in big data.}
Combining information from several sources to improve estimates of
population parameters is an important practical problem in survey
sampling. In the past decade, more and more auxiliary information
has become available, including large administrative record datasets
and remote sensing data derived from satellite images. How to combine
such information with survey data to provide better estimates of
population parameters is a new challenge that survey statisticians
face today. \citet{tam2015big} presented an overview of some initiatives
of big data applications in the official statistics of the Australian
Bureau of Statistics. Such big data are becoming increasingly popular,
and they come from a variety of sources, including remote sensing data
and administrative data such as tax records.
Suppose that there are two data sources, one from a probability sample,
referred to as sample A, and the other from a big data source, referred
to as sample B. Table \ref{tab:Data-structure-for-big} illustrates
the observed data structure.
\begin{table}[H]
\caption{\label{tab:Data-structure-for-big}Data structure for data integration
with big data}
\centering%
\begin{tabular}{ccccc}
\hline
& & $d$ & $X$ & $Y$\tabularnewline
\hline
Scenario 1 & Sample A & $\checked$ & $\checked$ & $\checked$\tabularnewline
& Sample B & & $\checked$ & \tabularnewline
\hline
Scenario 2 & Sample A & $\checked$ & $\checked$ & \tabularnewline
& Sample B & & $\checked$ & $\checked$\tabularnewline
\hline
\end{tabular}
Sample A is a probability sample, and Sample B is a big data sample,
which may not be representative of the population.
\end{table}
\subsection{Scenario 1: leverage auxiliary information in big data to improve
efficiency}
In Scenario 1, the probability sample contains $Y$ observations.
Therefore, $\mu_{y}$ is identifiable and can be estimated by the
commonly-used estimator solely from sample A, denoted by $\widehat{\mu}_{A}$.
We can leverage the $X$ information in the big data sample to improve
the sample A estimator. We consider the case where, in addition, the
big data membership can be determined for every unit in the probability
sample. The key insight is that the subsample of units in sample A
with big data membership constitutes a second-phase sample from
the big data sample, which acts as a new population. We calibrate
the information in the second-phase sample to that of the new
acting population. The calibration step in turn improves the efficiency
of the sample A estimator without requiring any model assumptions.
Let $h=(I_{B},1-I_{B},I_{B}X)$.
Following \citet{yang2018combining}, we can consider a class of estimators
satisfying
\begin{equation}
n_{A}^{1/2}\left(\begin{array}{c}
\widehat{\mu}_{A}-\mu_{y}\\
\widehat{h}_{A}-\widehat{h}_{B}
\end{array}\right)\rightarrow\mathcal{N}\left\{ 0,\left(\begin{array}{cc}
V_{yy,A} & \Gamma^{\mathrm{\scriptscriptstyle T}}\\
\Gamma & V
\end{array}\right)\right\} ,\label{eq:asymp}
\end{equation}
in distribution, as $n_{A}\rightarrow\infty$, where $\widehat{h}_{A}=N^{-1}\sum_{i\in A}d_{A,i}h_{i}$
and $\widehat{h}_{B}=N^{-1}\sum_{i\in B}h_{i}$. Heuristically, if
(\ref{eq:asymp}) held exactly rather than asymptotically, then by
multivariate normal theory we would have the conditional
distribution
\[
n_{A}^{1/2}(\widehat{\mu}_{A}-\mu_{y})\mid n_{A}^{1/2}(\widehat{h}_{A}-\widehat{h}_{B})\sim\mathcal{N}\left\{ n_{A}^{1/2}\Gamma^{\mathrm{\scriptscriptstyle T}}V^{-1}(\widehat{h}_{A}-\widehat{h}_{B}),V_{yy,A}-\Gamma^{\mathrm{\scriptscriptstyle T}}V^{-1}\Gamma\right\} .
\]
Let $\widehat{V}_{yy,A},$ $\widehat{\Gamma}$ and $\widehat{V}$
be consistent estimators for $V_{yy,A},$ $\Gamma$ and $V$. We set
$n_{A}^{1/2}(\widehat{\mu}_{A}-\mu_{y})$ to equal its estimated conditional
mean $n_{A}^{1/2}\widehat{\Gamma}^{\mathrm{\scriptscriptstyle T}}\widehat{V}^{-1}(\widehat{h}_{A}-\widehat{h}_{B})$,
leading to an estimating equation for $\mu_{y}$:
\[
n_{A}^{1/2}(\widehat{\mu}_{A}-\mu_{y})=n_{A}^{1/2}\widehat{\Gamma}^{\mathrm{\scriptscriptstyle T}}\widehat{V}^{-1}(\widehat{h}_{A}-\widehat{h}_{B}).
\]
Solving this equation for $\mu_{y}$, we obtain the estimator
\begin{equation}
\widehat{\mu}=\widehat{\mu}_{A}-\widehat{\Gamma}^{\mathrm{\scriptscriptstyle T}}\widehat{V}^{-1}(\widehat{h}_{A}-\widehat{h}_{B}).\label{eq:proposed estimator}
\end{equation}
Under certain regularity conditions, if (\ref{eq:asymp}) holds, then
$\widehat{\mu}$ is consistent for $\mu_{y}$, and
\begin{equation}
n_{A}^{1/2}(\widehat{\mu}-\mu_{y})\rightarrow\mathcal{N}(0,V_{yy,A}-\Gamma^{\mathrm{\scriptscriptstyle T}}V^{-1}\Gamma),\label{eq:asymp var}
\end{equation}
in distribution, as $n_{A}\rightarrow\infty$. Given a nonzero $\Gamma$,
the asymptotic variance, $V_{yy,A}-\Gamma^{\mathrm{\scriptscriptstyle T}}V^{-1}\Gamma,$ is smaller
than the asymptotic variance of $\widehat{\mu}_{A}$, $V_{yy,A}$.
The asymptotic variance of $\widehat{\mu}$ can be estimated by
\begin{equation}
\widehat{V}_{\widehat{\mu}}=(\widehat{V}_{yy,A}-\widehat{\Gamma}^{\mathrm{\scriptscriptstyle T}}\widehat{V}^{-1}\widehat{\Gamma})/n_{A},\label{eq:VE}
\end{equation}
where the subscript on $\widehat{V}_{\widehat{\mu}}$ distinguishes it from the estimator $\widehat{V}$ of $V$ in (\ref{eq:asymp}).
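The estimator (\ref{eq:proposed estimator}) and the variance estimator (\ref{eq:VE})
are straightforward to implement. The sketch below (ours and deliberately simplified)
uses a hypothetical population, simple random sampling for sample A, and the reduced
vector $h=(I_{B}, I_{B}X)$; the component $1-I_{B}$ is dropped because under simple
random sampling the design weights sum exactly to $N$, which would make $\widehat{V}$
singular. Under this design, $\Gamma$ and $V$ can be estimated by sample (co)variances
computed on sample A, treating $\widehat{h}_{B}$ as fixed, since its variability is of
smaller order when sample B is much larger than sample A.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population with an informative big-data sample B
N = 200_000
x = rng.normal(size=N)
y = 3.0 + x + rng.normal(size=N)
IB = rng.random(N) < 1.0 / (1.0 + np.exp(-x))

# Benchmarks computed from sample B alone; h = (I_B, I_B * x)
hB = np.array([IB.sum(), x[IB].sum()]) / N

# Probability sample A (SRS); big-data membership observable on A
nA = 2_000
A = rng.choice(N, size=nA, replace=False)
yA, xA, IBA = y[A], x[A], IB[A].astype(float)
hA_mat = np.column_stack([IBA, IBA * xA])

mu_A = yA.mean()               # usual estimator from sample A alone
hA = hA_mat.mean(axis=0)

# Gamma and V estimated by sample (co)variances on A
Gamma = np.array([np.cov(yA, hA_mat[:, k])[0, 1] for k in range(2)])
V = np.cov(hA_mat, rowvar=False)

mu_hat = mu_A - Gamma @ np.linalg.solve(V, hA - hB)
V_hat = (np.var(yA, ddof=1) - Gamma @ np.linalg.solve(V, Gamma)) / nA
print(f"muA = {mu_A:.4f}, integrated = {mu_hat:.4f}, "
      f"se = {np.sqrt(V_hat):.4f}")
\end{verbatim}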
\citet{kimtam18} also explored similar ideas. They developed a calibration
weighting method to incorporate big data auxiliary information
and applied the method to official statistics at the Australian Bureau
of Statistics. In their application, the big data source is the Australian
Agricultural Census, with an 85\% response rate; the probability sample
is the Rural Environment and Agricultural Commodities Survey; and the
measurement from the Census data is the auxiliary variable used for calibration.
\subsection{Scenario 2: leverage probability sampling designs to correct for
selection bias}
In Scenario 2, we have a similar setup as in Section \ref{sec:PNP}.
Depending on the roles in statistical inference, there are two types
of \textit{big data}: one with large sample sizes (large $n$) and
the other with rich covariates (large $p$). We review methods for
the two types of big data.
\subsubsection{Robust mass imputation estimation}
In the first type, the non-probability sample can be large in sample
size. How to leverage the rich information in the big data to improve
finite population inference is an important research problem. We review
robust mass imputation methods.
When the sample size of the big data is large, mass imputation is
particularly attractive. In mass imputation, we can train a predictive model
on the big data and impute the missing $y_{i}$ in sample A. Instead
of a parametric approach, we can also consider nonparametric approaches.
To find suitable imputed values, we consider nearest neighbor imputation;
that is, find the closest matching unit from sample B based on the
$X$ values and use the corresponding $Y$ value from this unit as
the imputed value.
Using sample B (big data) as training data, we find the nearest neighbor
of each unit $i\in A$ using a distance measure $d(x_{i},x_{j})$.
Let $i(1)$ be the index of its nearest neighbor, which satisfies
\[
d(x_{i(1)},x_{i})\leq d(x_{j},x_{i}),\forall j\in B.
\]
The nearest neighbor imputation estimator of $\mu$ is
\[
\widehat{\mu}_{\mathrm{nni}}=N^{-1}\sum_{i\in A}d_{A,i}y_{i(1)}.
\]
\citet{yang2018integration} showed that under some regularity conditions,
$\widehat{\mu}_{\mathrm{nni}}$ has the same asymptotic distribution as $\widehat{\mu}_{\mathrm{HT}}=N^{-1}\sum_{i\in A}d_{A,i}y_{i}$.
Therefore, the variance of $\widehat{\mu}_{\mathrm{nni}}$ is the same as
the variance of $\widehat{\mu}_{\mathrm{HT}}.$ This implies that the standard
point estimator can be applied to the imputed data $\{(x_{i},y_{i(1)}):i\in A\}$
as if the $y_{i(1)}$'s were observed values. Let $\pi_{A,ij}$ be
the joint inclusion probability for units $i$ and $j$. They showed that the direct variance estimator based
on the imputed data
\[
\widehat{V}_{\mathrm{nni}}=\frac{n_{A}}{N^{2}}\sum_{i\in A}\sum_{j\in A}\frac{(\pi_{A,ij}-\pi_{A,i}\pi_{A,j})}{\pi_{A,ij}}\frac{y_{i(1)}}{\pi_{A,i}}\frac{y_{j(1)}}{\pi_{A,j}}
\]
is consistent for $V_{\mathrm{nni}}$, the asymptotic variance of $\widehat{\mu}_{\mathrm{nni}}$.
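A minimal sketch of nearest neighbor mass imputation is given below (ours; the
data are simulated, the covariate is one-dimensional so that matching reduces to
a sort and a binary search, and the variance is computed with the
simple-random-sampling form of $\widehat{V}_{\mathrm{nni}}$ rather than the general
double sum):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical population (illustrative only)
N = 50_000
x = rng.uniform(-2, 2, size=N)
y = np.sin(x) + 0.1 * rng.normal(size=N)

IB = rng.random(N) < 0.3          # big non-probability sample B
xB, yB = x[IB], y[IB]

nA = 500                          # small probability sample A (SRS)
A = rng.choice(N, size=nA, replace=False)
xA = x[A]

# Nearest-neighbor matching in one dimension: sort B, binary search
order = np.argsort(xB)
pos = np.clip(np.searchsorted(xB[order], xA), 1, len(xB) - 1)
left, right = order[pos - 1], order[pos]
donor = np.where(np.abs(xB[left] - xA) <= np.abs(xB[right] - xA),
                 left, right)
y_imp = yB[donor]                 # imputed y_{i(1)} for each i in A

mu_nni = np.mean(y_imp)           # = N^{-1} sum_A d_A y_{i(1)} under SRS

# SRS form of the design-based variance, treating y_imp as observed
V_nni = (1 - nA / N) * np.var(y_imp, ddof=1) / nA
print(f"mu_nni = {mu_nni:.4f}, se = {np.sqrt(V_nni):.4f}")
\end{verbatim}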
\citet{yang2018integration} also considered two strategies for improving the nearest neighbor
imputation estimator, one using $K$-nearest neighbor imputation \citep{mack1979multivariate}
and the other using generalized additive models \citep{wood2006generalized}.
In $K$-nearest neighbor imputation, instead of using one nearest
neighbor, they identify multiple nearest neighbors in the big data sample
and use the average response as the imputed value. This method is
popular in the international forest inventory community for combining
ground-based observations with images from remote sensors \citep{mcroberts2010advances}.
In the second strategy, they investigated modern techniques of prediction
for mass imputation with flexible models. They used generalized additive
models \citep{wood2006generalized} to learn the relationship of the
outcome and covariates from the big data and create predictions for
the probability samples. We note that this strategy can apply to a
wider class of semi- and non-parametric estimators such as single
index models, Lasso estimators \citep{belloni2015some}, and machine
learning methods such as random forests \citep{breiman1984classification}.
\subsubsection{Variable selection in the presence of a large number of covariates}
In the second type, when there are a large number of variables, there
is a large literature on variable selection methods for prediction,
but little work on variable selection for data integration that \textcolor{black}{can
successfully recognize the strengths and the limitations of each data
source and utilize all information captured for finite population
inference.}
In practice, subject matter experts recommend a rich set of potentially
useful variables but typically will not identify the set of variables
to adjust for. In the presence of a large number of auxiliary variables,
variable selection is important, because existing methods may become
unstable or even infeasible, and irrelevant auxiliary variables can
introduce a large variability in estimation. \citet{gao2017data}
proposed a pseudo-likelihood approach for combining multiple non-survey
data with high dimensionality; this approach requires all likelihoods
be correctly specified and therefore is sensitive to model misspecification.
\citet{chen2018model} proposed a model-based calibration approach
using LASSO; this approach relies on a correctly specified outcome
model.
\citet{yang2019doubly} proposed a doubly robust variable selection
and estimation strategy. In the first step, it selects a set of variables
that are important predictors of either the sampling score or the
outcome model using penalized estimating equations. In the second
step, it re-estimates the nuisance parameter $(\alpha,\beta)$ based
on the joint set of covariates selected from the first step and considers
a doubly robust estimator of $\mu_{y}$, $\widehat{\mu}_{\mathrm{dr}}(\widehat{\alpha},\widehat{\beta})$
in (\ref{eq:dr}), where the estimating functions are
\begin{equation}
J(\alpha,\beta)=\left(\begin{array}{c}
J_{1}(\alpha,\beta)\\
J_{2}(\alpha,\beta)
\end{array}\right)=\left(\begin{array}{c}
N^{-1}\sum_{i=1}^{N}I_{B,i}\left\{ \frac{1}{\pi_{B}(x_{i}^{\mathrm{\scriptscriptstyle T}}\alpha)}-1\right\} \{y_{i}-m(x_{i}^{\mathrm{\scriptscriptstyle T}}\beta)\}x_{i}\\
N^{-1}\sum_{i=1}^{N}\left\{ \frac{I_{B,i}}{\pi_{B}(x_{i}^{\mathrm{\scriptscriptstyle T}}\alpha)}-d_{A,i}I_{A,i}\right\} \partial m(x_{i}^{\mathrm{\scriptscriptstyle T}}\beta)/\partial\beta
\end{array}\right).\label{eq:jee}
\end{equation}
Importantly, the two-step estimator allows model misspecification
of either the sampling score or the outcome model. In the existing
high-dimensional causal inference literature, the doubly robust estimators
have been shown to be robust to selection errors using penalization
\citep{farrell2015robust} or approximation errors using machine learning
\citep{chernozhukov2018double}. However, this double robustness feature
requires both nuisance models to be correctly specified. Using (\ref{eq:jee})
relaxes this requirement by allowing one of the nuisance models to
be misspecified. This also enables one to construct a simple and consistent
variance estimator, $\widehat{V}_{1}+\widehat{V}_{2}$ from (\ref{eq:Vhat1}) and (\ref{eq:Vhat2}),
allowing for doubly robust inference.
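The two-step procedure can be mimicked with off-the-shelf penalized regression, as
in the rough sketch below (ours; it substitutes penalized maximum likelihood and
the Lasso for penalized versions of the estimating equations $J(\alpha,\beta)$ in
(\ref{eq:jee}), and it pretends that the covariates are observed for the entire
finite population so that the sampling score can be fitted directly):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import (Lasso, LinearRegression,
                                  LogisticRegression)

rng = np.random.default_rng(3)

# Hypothetical high-dimensional population (illustrative only)
N, p = 20_000, 50
X = rng.normal(size=(N, p))
y = 1.0 + X[:, 0] - X[:, 1] + rng.normal(size=N)          # sparse outcome
piB = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 2] - 1.0)))  # sparse score
IB = rng.random(N) < piB

nA = 1_000
A = rng.choice(N, size=nA, replace=False)                 # SRS sample A
dA = N / nA

# Step 1: select variables important for either nuisance model
sel_pi = LogisticRegression(penalty='l1', solver='liblinear', C=0.05)
sel_pi.fit(X, IB)
sel_m = Lasso(alpha=0.05).fit(X[IB], y[IB])
keep = np.union1d(np.flatnonzero(sel_pi.coef_[0]),
                  np.flatnonzero(sel_m.coef_))

# Step 2: re-estimate (alpha, beta) on the union of selected covariates
refit = LogisticRegression(C=1e6, max_iter=1000)   # ~ unpenalized
pi_hat = refit.fit(X[:, keep], IB).predict_proba(X[:, keep])[:, 1]
m_hat = LinearRegression().fit(X[IB][:, keep], y[IB]).predict(X[:, keep])

mu_dr = (np.sum((y[IB] - m_hat[IB]) / pi_hat[IB])
         + np.sum(dA * m_hat[A])) / N
print(f"mu_dr = {mu_dr:.4f}; true mean = {y.mean():.4f}")
\end{verbatim}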
\section{Concluding remarks\label{sec:Concluding-remark}}
Data integration is an emerging area of research with many potential
research topics. We have reviewed statistical techniques and applications
for data integration in the survey sampling context. Probability sampling
remains the gold standard for obtaining a representative sample, but
the measurement of the study variable can be obtained from an independent
non-probability sample or big data. In this case, assumptions about
the sampling model or the outcome model are required. Most data integration
methods are based on the unverifiable assumption that the sampling
mechanism for the non-probability sample (or big data) is non-informative
(corresponding to missingness at random in the missing data literature).
If the sampling mechanism is informative, imputation techniques can
be developed under strong model assumptions on the sampling mechanism
\citep[e.g.,][]{riddles2016propensity,morikawa2018note}. Like the
non-informative sampling case, the informative sampling assumption
is unverifiable. In such settings, sensitivity analysis is recommended
to assess the robustness of the study conclusions to unverifiable
assumptions. This recommendation echoes Recommendation 15 of the National
Research Council (NRC) report entitled ``The Prevention and Treatment
of Missing Data in Clinical Trials'' \citep{NRC2010prevention}.
Chapter 5 of the NRC Report describes ``global'' sensitivity analysis
procedures that rigorously evaluate the robustness of study findings
to untestable assumptions about how missingness might be related to
the unobserved outcome.
When the training dataset has a hierarchical structure, multi-level
or hierarchical models can be used to develop mass imputation. This
is closely related to unit-level small area estimation in survey sampling
\citep{rao2015small}. The small area estimation is particularly promising
when we apply data integration using big data. \textcolor{black}{That
is, when we use big data as a training sample for prediction, a
multi-level model can be used to reflect the possible correlation
structure among observations. The parameter estimates for the multi-level
model computed from the big data can then be used to predict the unobserved
study variables in the survey sample, provided the same multi-level model
holds for both samples. Further work in this direction, including mean
squared error estimation for the resulting small area estimators, is a
topic of future research.}
Finally, the uncertainty due to errors in record linkage and statistical matching is also an important problem. A sample matched using record linkage techniques \citep{fellegi1969} is subject to linkage errors.
\cite{zhang2019} covers several research topics in the statistical analysis of combined or fused data.
\section*{Acknowledgment}
Dr. Yang is partially supported by the National Science Foundation grant DMS-1811245. Dr. Kim is partially supported by the
National Science Foundation grant MMS-1733572 and the Iowa Agriculture and Home Economics Experiment Station, Ames, Iowa.
\bibliographystyle{dcu}
Brownian motion takes a central place in modern probability theory and its applications,
and is a basic building block
for modeling many random phenomena. Brownian motion has intimate
connections to analysis since its infinitesimal generator
is the Laplacian operator. Brownian motion in Euclidean spaces has been
studied by many authors in depth.
Brownian motion on manifolds and on fractals
has also been investigated vigorously, and is shown to have intrinsic interplay with the geometry of
the underlying spaces. See \cite{H, KS, MP, RY} and the references therein.
In most of these studies, the underlying metric measure spaces are assumed to satisfy volume doubling (VD) property.
For Brownian motion on manifolds with walk dimension 2, a remarkable fundamental result obtained independently by Grigor'yan \cite{Gr} and
Saloff-Coste \cite{LSC1} asserts that the following are equivalent: (i) two-sided Aronson type Gaussian bounds for the heat kernel,
(ii) parabolic Harnack inequality, and (iii) VD and Poincar\'e inequality.
This result is then extended to strongly local Dirichlet forms on metric measure space in \cite{BM, St1, St2}
and to graphs in \cite{De}.
For Brownian motion on fractals with walk dimension larger than 2,
the above equivalence still holds but one needs to replace (iii)
with (iii') VD, Poincar\'e inequality and a cut-off Sobolev inequality; see \cite{BB3, BBK, AB}.
Recently, analysis on non-smooth spaces has attracted lots of interest.
In the real world, there are many objects having varying dimension.
It is natural to study Brownian motion and ``Laplacian operator" on such spaces.
A simple example of spaces with varying dimension is a large square with a thin
flag pole. Mathematically, it is modeled by a plane with a vertical line installed
on it:
\begin{equation}\label{e:1.1a}
{\mathbb{R}}^2 \cup {\mathbb{R}}_+: =\left\{(x_1, x_2, x_3)\in {\mathbb{R}}^3: x_3=0 \textrm{ or } x_1=x_2=0 \hbox{ and } x_3>0\right\}.
\end{equation}
Here and in the sequel, we use $:=$ as a way of definition
and
denote $[0, \infty)$ by ${\mathbb{R}}_+$.
Spaces with varying dimension arise in many disciplines including statistics, physics and engineering
(e.g. molecular dynamics, plasma dynamics).
See, for example, \cite{LS, Xia} and the references therein.
The goal of this paper is to construct and study Brownian motion and Laplacian on spaces
of varying dimension, in particular, to investigate how heat propagates on such spaces.
Intuitively, Brownian motion on the space ${\mathbb{R}}^2\cup {\mathbb{R}}_+$ of \eqref{e:1.1a} should behave like a two-dimensional
Brownian motion when it is on the plane, and like a one-dimensional Brownian motion
when it is on the vertical line (flag pole). However the space ${\mathbb{R}}^2\cup {\mathbb{R}}_+$ is quite singular in the sense
that the base $O$ of the flag pole where the plane and the vertical line meet
is a singleton. A singleton
would never be visited by a two-dimensional Brownian motion, which means
Brownian motion starting from a point on the plane will never visit $O$.
Hence there is no chance for such a process to climb up the flag pole. So the idea is to collapse or short (imagine putting
an infinite conductance on) a small closed disk $\overline{ B(0, \varepsilon)}\subset {\mathbb{R}}^2$ centered at the origin into a point $a^*$
and consider the resulting
Brownian motion with darning on the collapsed plane,
for which $a^*$ will be visited. The notion of Brownian motion with darning
is coined in \cite{CF} and its potential theory has been studied in details
in \cite{Chen} and \cite[Sections 2-3]{CFR}.
Through $a^*$ we put a vertical pole and construct Brownian motion
with varying dimension on ${\mathbb{R}}^2 \cup {\mathbb{R}}_+$ by joining together
the Brownian motion with
darning on the plane and the one-dimensional Brownian motion along the pole.
It is possible to construct the process rigorously
via Poisson point process of excursions. But we find that
the most direct way to construct BMVD is by using
a Dirichlet form approach, which will be
carried out in Section 2.
To be more precise, the state space of BMVD on $E$ is defined as follows.
Fix $\varepsilon>0$ and $p>0$.
Denote by $B_\varepsilon $ the closed disk on ${\mathbb{R}}^2$ centered at $(0,0)$ with radius $\varepsilon $. Let $D_0={\mathbb{R}}^2\setminus B_\varepsilon $. By identifying $B_\varepsilon $ with a singleton denoted by $a^*$, we can introduce a topological space $E:=D_0\cup \{a^*\}\cup {\mathbb{R}}_+$, with the origin of ${\mathbb{R}}_+$ identified with $a^*$ and
a neighborhood of $a^*$ defined as $\{a^*\}\cup \left(U_1\cap {\mathbb{R}}_+ \right)\cup \left(U_2\cap D_0\right)$ for some neighborhood $U_1$ of $0$ in ${\mathbb{R}}^1$ and $U_2$ of $B_\varepsilon $ in ${\mathbb{R}}^2$. Let $m_p$ be the measure on $E$ whose restriction on ${\mathbb{R}}_+$ and $D_0$ is the Lebesgue measure multiplied by $p$ and $1$, respectively.
In particular, we have $m_p (\{a^*\} )=0$.
\medskip
\begin{dfn}\label{D:1.1} \rm Let $\varepsilon >0$ and $p>0$. A Brownian motion with varying dimensions (BMVD in abbreviation) with parameters $(\varepsilon, p)$ on $E$ is an $m_p$-symmetric diffusion $X$ on $E$ such that
\begin{description}
\item{(i)} its part process in $(0, \infty)$ or $D_0$ has the same law as the absorbing Brownian motion in $(0, \infty)$ or $D_0$, respectively;
\item{(ii)} it admits no killings on $a^*$.
\end{description}
\end{dfn}
It follows from the $m_p$-symmetry of $X$ and the fact $m_p(\{a^*\})=0$
that BMVD $X$ spends zero Lebesgue amount of time at $a^*$.
\medskip
The following will be established in Section 2.
\begin{thm}\label{T:1.2}
For every $\varepsilon>0$ and $p>0$, BMVD with parameters $(\varepsilon, p)$
exists and is unique in law.
\end{thm}
We point out that BMVD on $E$ can start from every point in $E$.
We further characterize the $L^2$-infinitesimal generator ${\mathcal{L}}$
of BMVD $X$ in Section 2, which can be viewed as the Laplacian operator on this singular
space. We show that $u\in L^2(E; m_p)$
is in the domain of the generator ${\mathcal{L}}$ if and only if
$\Delta u$ exists as an $L^2$-integrable function
in the distributional sense when restricted to $D_0$ and ${\mathbb{R}}_+$,
and $u$ satisfies zero-flux condition at $a^*$; see Theorem \ref{T:2.3}
for details. It is not difficult to see that BMVD $X$ has a continuous transition
density function $p(t, x, y)$ with respect to the measure $m_p$, which is
also called the fundamental solution (or heat kernel) for $\mathcal{L}$.
Note that $p(t, x, y)$ is symmetric in $x$ and $y$.
The main purpose of this paper is to investigate how the BMVD $X$ propagates in $E$;
that is, starting from $x\in E$, how likely $X_t$ travels to position $y\in E$
at time $t$. This amounts to study the properties of $p(t, x, y)$ of $X$.
In this paper, we will establish the following sharp two-sided estimates on $p(t,x,y)$
in Theorem \ref{T:smalltime} and Theorem \ref{largetime}.
To state the results, we first need to introduce some notation.
Throughout this paper, we will denote the geodesic metric on $E$ by $\rho$.
Namely, for $ x,y\in E$, $\rho (x, y)$ is the shortest path distance (induced
from the Euclidean space) in $E$ between $x$ and $y$.
For notational simplicity, we write $|x|_\rho$ for $\rho (x, a^*)$.
We use $| \cdot |$ to denote the usual Euclidean norm. For example, for $x,y\in D_0$,
$|x-y| $ is the Euclidean distance between $x$ and $y$ in ${\mathbb{R}}^2$.
Note that for $x\in D_0$, $|x|_\rho =|x|-\varepsilon$.
Apparently,
\begin{equation}\label{e:1.1}
\rho (x, y)=|x-y|\wedge \left( |x|_\rho + |y|_\rho \right)
\quad \hbox{for } x,y\in D_0
\end{equation}
and $\rho (x, y)= |x|+|y|-\varepsilon$ when $x\in {\mathbb{R}}_+$ and $y\in D_0$ or vice versa.
Here and in the rest of this paper, for $a,b\in {\mathbb{R}}$, $a\wedge b:=\min\{a,b\}$.
The following are the main results of this paper.
\begin{thm}\label{T:smalltime}
Let $T>0$ be fixed. There exist positive constants $C_i$, $1\le i\le 14$ so that the transition density $p(t,x,y)$ of BMVD satisfies the following estimates when $t\in (0, T]$:
\begin{description}
\item{\rm (i)} For $x \in {\mathbb{R}}_+$ and $y\in E$,
\begin{equation}\label{stuff1}
\frac{C_1}{\sqrt{t}}e^{-C_2\rho (x, y)^2/t } \le p(t,x,y)\le\frac{C_3}{\sqrt{t}}e^{- C_4\rho (x, y)^2/t}.
\end{equation}
\item{\rm (ii)} For $x,y\in D_0\cup \{a^*\}$, when $|x|_\rho+|y|_\rho<1$,
\begin{eqnarray}\label{stuff3}
&& \frac{C_{5}}{\sqrt{t}}e^{- C_{6}\rho(x,y)^2/t}+\frac{C_{5}}{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{- C_{7}|x-y|^2/t} \\
&\le & p(t,x,y)
\le \frac{C_{8}}{\sqrt{t}}e^{-C_{9}\rho(x,y)^2/t}
+\frac{C_{8}}{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{-C_{10}|x-y|^2/t}; \nonumber
\end{eqnarray}
and when $|x|_\rho+|y|_\rho\geq 1$,
\begin{equation}\label{stuff333333}
\frac{C_{11}}{t}e^{- C_{12}\rho(x,y)^2/t} \le p(t,x,y) \le \frac{C_{13}}{t}e^{- C_{14}\rho(x,y)^2/t}.
\end{equation}
\end{description}
\end{thm}
Since $p(t, x, y)$ is symmetric in $(x, y)$, the above two cases cover all the cases for $x,y\in E$.
Theorem \ref{T:smalltime} shows that the transition density function $p(t, x, y)$ of BMVD on $E$
has one-dimensional character when at least one of $x, y$ is in the pole
(i.e. in ${\mathbb{R}}_+$);
it has two-dimensional character when both points are on the plane and at least one of them is away from the pole base $a^*$.
When both $x$ and $y$ are in the plane and both are close to $a^*$,
$p(t, x, y)$ exhibits
a mixture of one-dimensional and two-dimensional characters; see \eqref{stuff3}.
Theorem \ref{T:smalltime} will be proved through Theorems \ref{T:4.5}-\ref{T:4.7}.
The large time heat kernel estimates for BMVD, which are very different from the small time estimates, are given by the next theorem.
\begin{thm}\label{largetime}
Let $T>0$ be fixed. There exist positive constants $C_i$, $15\le i \le 32$, so that the transition density $p(t,x,y)$ of BMVD satisfies the following estimates when $t\in [T, \infty)$:
\begin{enumerate}
\item[\rm (i)] For $x,y\in D_0\cup \{a^*\}$,
\begin{equation*}
\frac{C_{15}}{t}e^{- C_{16}\rho(x,y)^2/t} \le p(t,x,y)\le \frac{C_{17}}{t}e^{- C_{18}\rho(x,y)^2/t}.
\end{equation*}
\item[\rm (ii)] For $x\in {\mathbb{R}}_+$, $y\in D_0\cup \{a^*\}$,
\begin{equation*}
\frac{C_{19}}{t}\left( 1 + \frac{|x| \log t}{\sqrt{t}} \right)
e^{- C_{20}\rho(x,y)^2/t} \le p(t,x,y)\le \frac{C_{21}}{t}
\left( 1 + \frac{|x| \log t}{\sqrt{t}} \right)
e^{-C_{22}\rho(x,y)^2/t}
\end{equation*}
when $|y|_\rho\le 1$, and
\begin{align*}
&\frac{C_{23}}{t} \left( 1 + \frac{|x|}{\sqrt{t}} \log \left( 1+\frac{\sqrt{t}}{|y|_\rho} \right) \right)
e^{-{C_{24}\rho(x,y)^2}/{t}} \le p(t,x,y)
\\
&\le \frac{C_{25}}{t} \left( 1 + \frac{|x|}{\sqrt{t}} \log \left( 1+\frac{\sqrt{t}}{|y|_\rho} \right) \right)
e^{-{C_{26}\rho(x,y)^2}/{t}} \qquad \hbox{when } |y|_\rho>1.
\end{align*}
\item[\rm (iii)] For $x,y\in {\mathbb{R}}_+$,
\begin{eqnarray*}
&&\frac{C_{27}}{\sqrt{t}}\left(1\wedge \frac{|x|}{\sqrt{t}}\right)\left(1\wedge \frac{|y|}{\sqrt{t}}\right)e^{-C_{28}|x-y|^2/t}
+\frac{C_{27}}{t} \left( 1+ \frac{(|x|+|y|)\log t}{\sqrt{t}}\right) e^{-{C_{29}(x^2+y^2)}/{t}}
\\
&\le& p(t,x,y) \\
&\le & \frac{C_{30}}{\sqrt{t}}\left(1\wedge \frac{|x|}{\sqrt{t}}\right)\left(1\wedge \frac{|y|}{\sqrt{t}}\right)e^{-C_{31}|x-y|^2/t}
+\frac{C_{30}}{t} \left( 1+ \frac{(|x|+|y|)\log t}{\sqrt{t}}\right) e^{-{C_{32}(x^2+y^2)}/{t}} .
\end{eqnarray*}
\end{enumerate}
\end{thm}
Theorem \ref{largetime} will be proved through Theorems \ref{154}, \ref{T:5.15} and \ref{T:5.17}.
Due to the singular nature of the space, the standard Nash inequality and Davies' method
for obtaining heat kernel upper bounds do not give sharp bounds for our BMVD.
We cannot employ either the methods in \cite{Gr, LSC1, BM, St1, St2} on obtaining heat kernel estimates through volume doubling
and Poincar\'e inequality or the approach through parabolic Harnack inequality. In fact,
$(E, m_p)$ does not have volume doubling property, and
we will show the parabolic Harnack inequality fails for BMVD $X$; see Proposition \ref{P:2.1} and
Remark \ref{rmkonsmalltimeHKE}(iii).
Hence a new approach is needed to study the heat kernel
of BMVD.
A key role is played by the ``signed radial process" of BMVD, which we can analyze
and derive its two-sided heat kernel estimates. From it, by exploring the rotational symmetry
of BMVD, we can obtain short time sharp two-sided heat kernel estimates
by the following observation.
The sample paths of BMVD starting at $x$ reach $y$ at time $t$
in two possible ways: with or without passing through $a^*$.
The probability of the first scenario is given exactly by the probability transition density function of killed Brownian motion in $D_0$.
The probability of the second scenario can be computed by employing the strong Markov property of BMVD
at the first hitting time of the pole base $a^*$ and reducing it to the signed radial process of BMVD, exploring the symmetry of BMVD starting from $a^*$.
The large time heat kernel estimates are more delicate.
For large time estimate, the key is to obtain the correct on-diagonal estimate.
This is done through some delicate analysis of BMVD and Bessel process on the plane.
As a corollary of the sharp two-sided heat kernel estimates, we find that the usual form
of the parabolic Harnack inequality fails for parabolic functions of BMVD.
Nevertheless, we will show in Section \ref{S:6} that joint
H\"older regularity holds for bounded parabolic functions of $X$.
Let $X^D$ be the part process of BMVD killed upon exiting a bounded connected $C^{1,1}$ open subset $D$ of $E$. Denote by $p_{_D}(t,x,y)$ its transition density function. Using the Green function estimates and boundary Harnack inequality for absorbing Brownian motion in Euclidean spaces, we can derive sharp two-sided estimates on Green function
\begin{equation*}
G_{_D}(x,y):=\int_{0}^\infty p_{_D}(t,x,y)dt
\end{equation*}
for BMVD in $D$. Recall that an open set $D\subset {\mathbb{R}}^d$ is said to be $C^{1,1}$ if there exist a localization radius $R_0>0$ and a constant $\Lambda_0>0$ such that for every $z\in \partial D$, there exists a $C^{1,1}$-function $\phi=\phi_z: {\mathbb{R}}^{d-1}\rightarrow {\mathbb{R}}$ satisfying $\phi(0)=0$, $\nabla \phi(0)=(0,\dots, 0)$, $\|\nabla \phi\|_\infty \le \Lambda_0$, $|\nabla \phi (x)-\nabla \phi (z)|\le \Lambda_0 |x-z|$ and an orthonormal coordinate system $CS_z:\, y=(y_1, \dots , y_d):=(\widetilde {y}, y_d)$ with its origin at $z$ such that
\begin{equation*}
B(z, R_0)\cap D =\{y\in B(0, R_0)\textrm{ in }CS_z: y_d>\phi (\widetilde {y})\}.
\end{equation*}
For the state space $E$, an open set $D\subset E$ will be called $C^{1,1}$ in $E$, if $D\cap ({\mathbb{R}}_+ \setminus \{a^*\})$ is a $C^{1,1}$ open set in ${\mathbb{R}}_+$, and $D\cap D_0$ is a $C^{1,1}$ open set in ${\mathbb{R}}^2$.
\begin{thm}\label{greenfunctionestimate}
Suppose $D$ is a bounded $C^{1,1}$ domain of $E$ that contains $a^*$.
Let $G_{D} (x,y)$ be the Green function of BMVD $X$ killed upon exiting $D$.
Then for $x\not= y$ in $D$, we have
\begin{equation*}
G_{_D}(x,y)\asymp\left\{
\begin{array}{ll}
\delta_D(x)\wedge \delta_D(y),
\quad &
x, y \in D\cap {\mathbb{R}}_+ ;
\\ \\
\left(\delta_D(x)\wedge 1\right)\left(\delta_D(y)\wedge 1\right)+\ln\left(1+\frac{\delta_{D\cap D_0}(x)\delta_{D\cap D_0}(y)}{|x-y|^2}\right),
& x, y\in D\cap D_0 ;
\\ \\
\delta_D(x) \delta_D(y),
& \ x\in D\cap {\mathbb{R}}_+ , \, y\in D\cap D_0 .
\end{array}
\right.
\end{equation*}
Here $\delta_D(x):={\rm dist}_{\rho} (x, \partial D):=\inf\{\rho (x, z): z\notin D\}$
and $\delta_{D\cap D_0} (x):=\inf\{\rho (x, z): z\notin D\cap D_0\}$.
\end{thm}
For two positive functions $f$ and $g$, $f\asymp g$ means that $f/g$ is bounded between two positive constants.
In the following, we will also use notation $f\lesssim g$ (respectively, $f\gtrsim g$) to mean that there is some constant $c>0$ so that $f\leq c g$
(respectively, $f\geq c g$).
\medskip
The above space $E$ of varying dimension is special.
It serves as a toy model for further study on Brownian motion on more general
spaces of varying dimension. Two more examples of spaces of varying dimension and BMVD on them
are given and studied in Section 8 of this paper.
Even for this toy model, several interesting and non-trivial phenomena have arisen.
The heat kernel estimates on spaces of varying dimension are quite delicate.
They are of Gaussian type but they are not of the classical Aronson
Gaussian type.
The different dimensionality is also reflected in the heat
kernel estimates for BMVD. Even when both points $x$ and $y$ are on the plane, the heat kernel
$p(t, x, y)$ exhibits both one-dimensional and
two-dimensional characteristics depending
on whether both points are close to the base point $a^*$ or not.
In addition, both the Euclidean distance $|x-y|$ and the geodesic distance $\rho (x, y)$
between the two points $x$ and $y$ play a role in the kernel estimates.
As far as we know, this is the first paper that is devoted to the detailed study of heat propagation on spaces of varying dimension
and related potential theory. Our approach is mainly probabilistic.
For other related work and approaches on
Markov processes living on spaces with
possibly different dimensions, we refer the reader to \cite{ES, HK, Ha, Ku} and the references therein.
The rest of the paper is organized as follows. Section 2 gives a Dirichlet form construction and characterization of BMVD,
as well as its infinitesimal generator. Nash-type inequality for $X$ is given in Section 3.
In Section 4, we present small time heat kernel estimates for $X$, while the large time estimates are given in
Section 5. H\"older continuity of parabolic functions of $X$ is established in Section 6.
Section 7 is devoted to the two-sided sharp estimates for Green function of $X$ in bounded $C^{1,1}$ domains in $E$ that contain the pole base $a^*$.
BMVD on a large square with multiple vertical flag poles or with an arch is studied in Section 8.
For notational convenience, in this paper we set
\begin{equation}\label{e:1.6}
\overline{p}_D(t,x,y):=p(t,x,y)-p_D(t,x,y),
\end{equation}
where $D$ is a domain of $E$ and $p_D(t,x,y)$ is the transition density of the part process killed upon exiting $D$.
In other words, for any non-negative function $f\geq 0$ on $E$,
\begin{equation}\label{e:1.7}
\int_E \overline p_D (t, x, y) f(y) m_p(dy) = {\mathbb{E}}_x \left[ f(X_t); t\geq \tau_D \right],
\end{equation}
where $\tau_D:=\inf\{t\geq 0: X_t\notin D\}$.
Thus while $p_D(t,x,y)$ gives the probability density
that BMVD starting from $x$ hits $y$ at time $t$ without exiting $D$,
$\overline{p}_D(t,x,y)$ is the probability density that BMVD starting from $x$ leaves $D$ before ending up at $y$ at time $t$.
We use $C^\infty_c(E)$ to denote the space of continuous functions with compact support in $E$ whose
restrictions to $D_0$ and ${\mathbb{R}}_+$ are smooth on $\overline D_0$ and on ${\mathbb{R}}_+$, respectively.
We also follow the convention that in the statements of the theorems or propositions $C, C_1, \cdots$ denote positive constants, whereas in their proofs $c, c_1, \cdots$ denote positive constants whose exact value is unimportant and may change
from line to line.
\section{Preliminaries}\label{S:2}
Throughout this paper, we denote the Brownian motion with varying dimension by $X$ and its state space by $E$.
In this section, we will construct BMVD using Dirichlet form approach.
For the definition and basic properties of Dirichlet forms, including the relationship between Dirichlet form,
$L^2$-infinitesimal generator, resolvents and semigroups, we refer the reader to \cite{CF} and \cite{FOT}.
For a connected open set $D\subset {\mathbb{R}}^d$, $W^{1,2} (D)$ is
the collection of $L^2(D; dx)$-integrable functions whose first order derivatives (in
the sense of distribution) exist and are also in $L^2(D; dx)$. Define
$$
{\mathcal{E}}^0 (f, g) =\frac12 \int_{D} \nabla f(x) \cdot \nabla g(x) dx ,
\qquad f, g\in W^{1,2}(D).
$$
It is well known that when $D$ is smooth,
$({\mathcal{E}}^0, W^{1,2}(D)) $ is a regular Dirichlet form on $L^2 (\overline D; dx)$ and its associated Hunt process
is the (normally) reflected Brownian motion on $\overline D$.
Moreover, every function $f$ in $W^{1,2}(D)$ admits a quasi-continuous version on $\overline D$,
which will still be denoted by $f$.
A quasi-continuous function is defined quasi-everywhere (q.e. in abbreviation) on $\overline D$.
When $d=1$, by the Cauchy-Schwarz inequality, every function in $W^{1,2}(D)$ is $1/2$-H\"older on $\overline D$.
Denote by $W^{1,2}_0 (D)$ the ${\mathcal{E}}^0_1$-closure of $C^\infty_c(D)$, where ${\mathcal{E}}^0_1(u, u):={\mathcal{E}}^0(u, u) + \int_D u(x)^2 dx$.
It is known that for any open set $D\subset {\mathbb{R}}^d$,
$({\mathcal{E}}^0, W^{1,2}_0(D))$ is a regular Dirichlet form on $L^2(D; dx)$ associated with the
absorbing Brownian motion in $D$.
For a subset $A\subset E$, we define $\sigma_A:=\inf\{t>0: X_t\in A\}$ and $\tau_{_A}:=\inf\{t\geq 0: X_t \notin A\}$.
Similar notations will be used for other stochastic processes.
We will use $B_\rho(x,r)$ (resp. $B_e(x,r)$) to denote the
open ball in $E$ under the path metric $\rho$
(resp. in ${\mathbb{R}}_+$ or ${\mathbb{R}}^2$ under the Euclidean metric) centered at $x\in E$ with radius $r>0$.
A measure $\mu$ on $E$ is said to have volume doubling property if
there exists a constant $C>0$
so that $m_p (B_\rho (x , 2r))\leq C m_p(B_\rho (x , r))$
for all $x\in E$ and every $r\in (0, 1]$.
\begin{prop}\label{P:2.1} For any $p>0$, volume doubling property fails for measure $m_p$.
\end{prop}
\begin{proof}
Let $r\in (0, 1]$ and take $x_0 \in D_0$ with $|x_0|_\rho=r$. Since
every point of the pole is at $\rho$-distance at least $r$ from $x_0$,
$B_\rho (x_0, r)$ is contained in a Euclidean disk of radius $r$ and so
$m_p(B_\rho (x_0, r))\le \pi r^2$. On the other hand, $B_\rho (x_0, 2r)$
contains the piece $\{z\in {\mathbb{R}}_+: |z|< r\}$ of the pole, whence
$m_p (B_\rho (x_0, 2r)) \geq pr$. Thus
$m_p (B_\rho (x_0, 2r))/ m_p(B_\rho (x_0, r)) \geq p/(\pi r) \to \infty$
as $r\to 0$, so there does not exist a constant $C>0$
with $m_p (B_\rho (x , 2r))\leq C m_p(B_\rho (x , r))$
for all $x\in E$ and every $r\in (0, 1]$.
\end{proof}
The following is an extended version of Theorem \ref{T:1.2}.
\begin{thm}\label{existence-uniqueness}
For every $\varepsilon >0$ and $p>0$,
BMVD $X$ on $E$ with parameter $(\varepsilon, p)$ exists and is unique.
Its associated Dirichlet form $({\mathcal{E}}, {\mathcal{F}})$ on $L^2(E; m_p)$ is given by
\begin{eqnarray}
{\mathcal{F}} &= & \left\{f: f|_{D_0}\in W^{1,2}(D_0), \, f|_{{\mathbb{R}}_+}\in W^{1,2}({\mathbb{R}}_+),
\hbox{ and }
f (x) =f (0) \hbox{ q.e. on } \partial D_0 \right\}, \label{e:2.1}
\\
{\mathcal{E}}(f,g) &=& \frac12 \int_{D_0}\nabla f(x) \cdot \nabla g(x) dx+\frac{p}2\int_{{\mathbb{R}}_+}f'(x)g'(x)dx . \label{e:2.2}
\end{eqnarray}
\end{thm}
\begin{proof}
Let ${\mathcal{E}}$ and ${\mathcal{F}}$ be defined as above.
Let $u_1(x)={\mathbb{E}}^x\left[e^{-\sigma_{B_\varepsilon}}\right]$ when $x\in D_0$ and $u_1(x)={\mathbb{E}}^x\left[e^{-\sigma_{D_0}}\right]$ when $x\in {\mathbb{R}}_+$.
It is known that $u_1|_{D_0}\in W^{1,2}(D_0)$, $u_1|_{{\mathbb{R}}_+} \in W^{1,2}({\mathbb{R}}_+)$, $u_1(x)=1$ for $x\in \partial D_0$ and $u_1(0)=1$.
Hence $u_1\in {\mathcal{F}}$.
{\it Existence:} Let
$$
{\mathcal{F}}^0 = \left\{f: f|_{D_0}\in W^{1,2}_0(D_0), \, f|_{{\mathbb{R}}_+}\in W^{1,2}_0({\mathbb{R}}_+) \right\} .
$$
Then ${\mathcal{F}}$ is the linear span of ${\mathcal{F}}^0 \cup \{u_1\}$.
It is easy to check that $({\mathcal{E}}, {\mathcal{F}})$ is a strongly local regular Dirichlet form on $L^2(E;m_p)$.
So there is an $m_p$-symmetric diffusion process $X$ on $E$ associated with it.
Using the Dirichlet form characterization, the part process of $X$ killed upon hitting $a^*$ is
an absorbing Brownian motion in $D_0$ or ${\mathbb{R}}_+$, depending on the starting point.
So $X$ is a BMVD on $E$. Moreover, $X$ is conservative; that is, it has infinite lifetime.
\medskip
{\it Uniqueness:} Conversely, if $X$ is a BMVD, it suffices to check from definition that its associated Dirichlet form $({\mathcal{E}}^*, {\mathcal{F}}^*)$
in $L^2(E; m_p)$ has to be $({\mathcal{E}}, {\mathcal{F}})$ given in \eqref{e:2.1}-\eqref{e:2.2}. Indeed, since $a^*$ is non-polar for $X$, for all $u\in {\mathcal{F}}^*$,
$$
H_{a^*}^1u(x):={\mathbb{E}}^x\left[e^{-\sigma_{a^*}}u(X_{\sigma_{a^*}})\right]=u(a^*){\mathbb{E}}^x[e^{-\sigma_{a^*}}]\in {\mathcal{F}} \cap {\mathcal{F}}^*
$$
and $u-H_{a^*}^1u\in {\mathcal{F}}^0$.
Thus ${\mathcal{F}}^*\subset {\mathcal{F}}$. On the other hand, since the part process of $X$ killed upon hitting $a^*$ has the same distribution
as the absorbing Brownian motion on $D_0\cup (0, \infty)$, which has Dirichlet form $({\mathcal{E}}, {\mathcal{F}}^0)$ on $L^2(E\setminus \{a^*\}; m_p)$,
we have ${\mathcal{F}}^0\subset {\mathcal{F}}^*$. It follows that ${\mathcal{F}}\subset {\mathcal{F}}^*$ and therefore ${\mathcal{F}} ={\mathcal{F}}^*$.
Since $X$ is a diffusion that admits no killings inside $E$, its Dirichlet form
$({\mathcal{E}}^*, {\mathcal{F}}^*)$ is strongly local. Let $\mu_{\<u\>}=\mu^c_{\<u\>}$
denote the energy measure associated with $u\in {\mathcal{F}}^*$; see \cite{CF, FOT}. Then by the strong locality of $\mu^c_{\<u\>}$
and the $m_p$-symmetry of $X$, we have for every bounded $u\in {\mathcal{F}}^*={\mathcal{F}}$,
\begin{eqnarray*}
{\mathcal{E}}^*(u,u) &= & \frac12 \left( \mu^c_{\langle u\rangle}(E\setminus \{a^*\})+\mu_{\langle u\rangle }^c(\{a^*\}) \right)
=\frac12 \mu_{\langle u\rangle }^c(E\setminus \{a^*\})=
\frac12 \mu_{\langle u\rangle}^c(D_0)+ \frac12 \mu_{\langle u\rangle}^c({\mathbb{R}}_+) \\
&=& \frac12 \int_{D_0}|\nabla u(x)|^2dx+ \frac{p}2\int_{{\mathbb{R}}_+}|u'(x)|^2dx = {\mathcal{E}} (u, u).
\end{eqnarray*}
This proves $({\mathcal{E}}^*, {\mathcal{F}}^*)=({\mathcal{E}}, {\mathcal{F}})$.
\end{proof}
Following \cite{CF}, for any $u\in {\mathcal{F}}$, we define its flux $\mathcal{N}_p(u)(a^*)$ at $a^*$ by
$$
\mathcal{N}_p(u)(a^*)=\int_E \nabla u(x)\cdot \nabla u_1(x)m_p(dx)+\int_E\Delta u(x)u_1(x)m_p(dx),
$$
which by the Green-Gauss formula equals
$$
\int_{\partial B_\varepsilon}\frac{\partial u(x)}{\partial \vec{n}}\sigma(dx)-pu'(a^*).
$$
Here $\vec{n}$ is the unit inward normal vector field of $B_\varepsilon$ at $\partial B_\varepsilon$.
The last formula justifies the name of ``flux" at $a^*$.
Let ${\cal L}$ be the $L^2$-infinitesimal generator of $({\mathcal{E}}, {\mathcal{F}})$, or equivalently, the BMVD $X$,
with domain of definition $\mathcal{D}(\mathcal{L})$.
It can be viewed as the ``Laplacian" on the space $E$ of varying dimension.
\begin{thm}\label{T:2.3}
A function $u\in {\mathcal{F}}$ is in $\mathcal{D}(\mathcal{L})$ if and only if the distributional Laplacian $\Delta u$ of $u$ exists as an $L^2$-integrable function on $E\setminus \{a^*\}$ and $u$ has zero flux at $a^*$.
Moreover,
$\mathcal{L}u=\frac12 \Delta u$ on $E\setminus \{a^*\}$
for $u\in \mathcal{D}(\mathcal{L})$.
\end{thm}
\begin{proof}
By the Dirichlet form characterization, $u\in \mathcal{D}(\mathcal{L})$ if and only if $u\in {\mathcal{F}}$ and there is some $f\in L^2(E; m_p)$ so that
\begin{equation*}
{\mathcal{E}}(u, v)=-\int_E f(x)v(x)m_p(dx) \quad \textrm{for every }v\in {\mathcal{F}}.
\end{equation*}
In this case, $\mathcal{L}u := f$. The above is equivalent to
\begin{equation}\label{961240}
\frac12 \int_E\nabla u(x)\cdot \nabla v(x)m_p(dx) =-\int_E f(x)v(x)m_p(dx) \quad \textrm{for every }v\in C_c^\infty (E\setminus \{a^*\})
\end{equation}
and
\begin{equation}\label{961241}
\frac12 \int_E\nabla u(x)\cdot \nabla u_1(x)m_p(dx)=-\int_E f(x)u_1(x)m_p(dx).
\end{equation}
Equation \eqref{961240} implies that $f=\frac12 \Delta u\in L^2(E; m_p)$,
and \eqref{961241} is equivalent to $\mathcal{N}_p(u)(a^*)=0$.
\end{proof}
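As a quick illustration of the zero-flux condition, consider the following example
(it is ours and only formal, since it ignores the integrability requirements at infinity):
take $u(x)=c\log (|x|/\varepsilon)$ for $x\in D_0$ and $u(x)=b x$ for $x \in {\mathbb{R}}_+$,
where $b$ and $c$ are constants. Both pieces are harmonic away from $a^*$ and $u(a^*)=0$.
On $\partial B_\varepsilon$ we have $\nabla u(x) = c x /|x|^2$ and $\vec{n}=-x/|x|$,
so $\partial u/\partial \vec{n} = -c/\varepsilon$ there, and hence
\begin{equation*}
\mathcal{N}_p(u)(a^*)=\int_{\partial B_\varepsilon} \frac{\partial u(x)}{\partial \vec{n}}\, \sigma(dx) - p u'(a^*)
= -2\pi c - p b ,
\end{equation*}
which vanishes precisely when $b=-2\pi c/p$. The zero-flux condition thus couples the
logarithmic growth rate of $u$ on the plane, entering through the boundary length
$2\pi \varepsilon$ of the collapsed disk, to the slope of $u$ along the pole, weighted by $p$.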
\section{Nash Inequality and Heat Kernel Upper Bound
Estimate} \label{S:3}
Recall that $D_0 :={\mathbb{R}}^2\setminus \overline{B_e(0, \varepsilon)}$.
In the following, if no measure is explicitly mentioned in the $L^p$-space, it is understood as being with respect to
the measure $m_p$; for instance, $L^p(E)$ means $L^p(E; m_p)$.
\begin{lem}\label{NashIneq}There exists $C_1>0$ so that
\begin{displaymath}
\|f\|_{L^2(E)}^2\le
C_1\left({\mathcal{E}}(f,f)^{1/2}\|f\|_{L^1(E)}+{\mathcal{E}}(f,f)^{1/3}\|f\|_{L^1(E)}^{4/3}\right)
\qquad \hbox{for every } f\in {\mathcal{F}}.
\end{displaymath}
\end{lem}
\begin{proof}
Since $D_0\subset {\mathbb{R}}^2$ and ${\mathbb{R}}_+$ are smooth domains, we have by the classical Nash's inequality,
\begin{equation*}
\|f\|_{L^2(D_0)}^2\le c\|\nabla f\|_{L^2(D_0)}\, \|f\|_{L^1(D_0)}
\quad \hbox{for } f\in W^{1,2}(D_0) \cap L^1(D_0),
\end{equation*}
and
$$
\|f\|^3_{L^2({\mathbb{R}}_+)}\le C\|f\|^2_{L^1({\mathbb{R}}_+)}\|f'\|_{L^2({\mathbb{R}}_+)}
\quad \hbox{for } f\in W^{1,2}({\mathbb{R}}_+) \cap L^1({\mathbb{R}}_+) .
$$
The desired inequality now follows by combining these two inequalities with
the identities $\|f\|^2_{L^2(E)}=\|f\|^2_{L^2(D_0)}+p\|f\|^2_{L^2({\mathbb{R}}_+)}$ and
$\|\nabla f\|^2_{L^2(D_0)}+p\|f'\|^2_{L^2({\mathbb{R}}_+)}=2{\mathcal{E}}(f,f)$,
together with $\|f\|_{L^1(D_0)}\le \|f\|_{L^1(E)}$ and $\|f\|_{L^1({\mathbb{R}}_+)}\le p^{-1}\|f\|_{L^1(E)}$.
\end{proof}
The Nash-type inequality in Lemma \ref{NashIneq} immediately implies that BMVD $X$ on $E$ has a symmetric density function
$p(t, x, y)$ with respect to the measure $m_p$ and that the following
on-diagonal estimate by \cite[Corollary 2.12]{CKS} holds.
\begin{prop}\label{P:3.2}
There exists $C_2>0$ such that
\begin{displaymath}
p(t, x, y)\le
C_2\displaystyle{\left(\frac{1}{t}+\frac{1}{t^{1/2}}\right)} \qquad \hbox{for all }
t>0 \hbox{ and } x, y\in E.
\end{displaymath}
\end{prop}
Since $X$ moves like Brownian motion in Euclidean spaces
before hitting $a^*$, it is easy to verify that for each $t>0$,
$(x, y) \mapsto p(t, x, y)$ is continuous in $(E\setminus \{a^* \}) \times (E\setminus \{a^* \})$.
For each $t>0$ and fixed $y\in E\setminus \{a^*\}$,
$$
\int_E p(t/2, y, z)^2 m_p(dz) = p(t, y, y)<\infty.
$$
So by the Dirichlet form theory,
$x\mapsto p(t, x, y)= \int_E p(t/2, x, z) p(t/2, z, y) m_p(dz)$ is ${\mathcal{E}}$-quasi-continuous on $E$.
Since $a^*$ is non-polar for $X$, $x\mapsto p(t, x, y)$ is continuous at $a^*$, and hence
is continuous on $E$. By the symmetry and Chapman-Kolmogorov equation again,
we conclude that
$$ x\mapsto p(t, x, a^*)= \int_{E } p(t/2, x, z) p(t/2, z, a^*) m_p(dz)
$$
is continuous on $E$. Consequently, $p(t, x, y)$ is well defined pointwisely on $(0, \infty) \times E\times E$
so that for each fixed $t>0$ and $y\in E$, $p(t, x, y)$ is a continuous function in $x\in E$.
We can use Davies' method to get an off-diagonal upper bound estimate.
\begin{prop}\label{offdiagUBE}
There exist $C_3, C_4>0$ such that
\begin{displaymath}
p(t,x,y)\le C_3\left(\frac{1}{t}+
\frac{1}{t^{1/2}}\right)e^{-C_4 {\rho(x,y)^2}/{t}}
\qquad \hbox{for all } t>0 \hbox{ and } x, y\in E.
\end{displaymath}
\end{prop}
\begin{proof}
Fix $x_0, y_0\in E$ and $t_0>0$. Set the constant $\alpha:=\rho(y_0,x_0)/4t_0$ and
$\displaystyle{\psi (x):=\alpha\,\rho (x, x_0)}$.
Then we define $\psi_n(x)=\psi(x)\wedge n$. Note that for $m_p$-a.e. $x\in E$,
\begin{displaymath}
e^{-2\psi_n (x)}|\nabla e^{\psi_n (x)} |^2=|\nabla
\psi_n (x) |^2=|\alpha|^2\, \mathbf{1}_{\{\rho (x, x_0) \le
\frac{n}{|\alpha|}\}}(x) \leq \alpha^2.
\end{displaymath}
Similarly, $ e^{2\psi_n (x)}|\nabla e^{-\psi_n (x)}|^2 \leq \alpha^2$.
By \cite[Corollary 3.28]{CKS},
\begin{equation}\label{proveoffdiagUB}
p(t,x,y)\le c\left(\frac{1}{t}+
\frac{1}{t^{1/2}}\right)\exp\left(-|\psi(y)-\psi(x)|+2t|\alpha|^2\right) .
\end{equation}
Taking $t=t_0, x=x_0$
and $y=y_0$ in \eqref{proveoffdiagUB}, and noting that $|\psi(y_0)-\psi(x_0)|=\rho(x_0,y_0)^2/(4t_0)$
while $2t_0|\alpha|^2=\rho(x_0,y_0)^2/(8t_0)$, completes the proof.
\end{proof}
As we have seen from Theorems \ref{T:smalltime} and \ref{largetime},
the upper bound estimate in Proposition \ref{offdiagUBE} is not sharp.
\section{Signed Radial Process and Small Time Estimate}\label{S:4}
In order to get the sharp two-sided heat kernel estimates, we consider the radial process of $X$. Namely, we project $X$ to ${\mathbb{R}}$ by applying the following
mapping from $E$ to ${\mathbb{R}}$:
\begin{equation}\label{848}
u(x)=\left\{
\begin{array}{ll}
-|x|, & \hbox{$x \in {\mathbb{R}}_+$;} \\ \\
|x|_\rho, & \hbox{$x\in D_0$.}
\end{array}
\right.
\end{equation}
We call $Y_t:=u(X_t)$ the signed radial process of $X$.
Observe that $u\in {\mathcal{F}}_{{\rm loc}}$, where ${\mathcal{F}}_{{\rm loc}}$ denotes the local Dirichlet space of $({\mathcal{E}}, {\mathcal{F}})$,
whose definition can be found, for instance,
in \cite{CF, FOT}. By Fukushima decomposition \cite[Chapter 5]{FOT},
\begin{displaymath}
Y_t-Y_0=u(X_t)-u(X_0)=M^{[u]}_t+N^{[u]}_t, \quad {\mathbb{P}}_x\textrm{-a.s. for q.e.
}x\in E,
\end{displaymath}
where $M^{[u]}_t$ is a local martingale additive functional of $X$, and
$N^{[u]}_t$ is a continuous additive functional of $X$ locally having zero energy.
We can explicitly compute $M^{[u]}$ and $N^{[u]}$.
For any $\psi\in C_c^\infty(E)$,
\begin{align*}
{\mathcal{E}}(u, \psi)&=\frac12 \int_{D_0}\nabla |x| \cdot \nabla \psi dx + \frac{p}2\int_{{\mathbb{R}}_+}(-1)\psi' dx
\\&=- \frac12 \int_{D_0}\textrm{div}\left(\frac{x}{|x|}\right)\psi dx
- \frac12 \int_{\partial B_e(0,\varepsilon)}\psi(0)\frac{\partial |x| }{\partial
\vec{n}} \sigma (dx) + \frac{p\psi(0)}2
\\&=- \frac12 \int_{D_0}\frac{1}{|x|}\psi dx- \frac{2\pi\varepsilon- p }2\, \psi(0) \\
&= - \int_E \psi (x) \nu (dx),
\end{align*}
where $\vec{n}$ is the outward-pointing unit normal vector of the
surface $\partial B_e(0,\varepsilon)$, $\sigma$ is the surface measure
on $\partial B_e (0, \varepsilon)\subset {\mathbb{R}}^2$, and
\begin{displaymath}
\nu(dx):= \frac{1}{2|x|}\mathbf{1}_{ D_0 } (x) dx+\frac{2\pi \varepsilon-p}2 \, \delta_{\{a^*\}}.
\end{displaymath}
Recall that we identify $0\in {\mathbb{R}}_+$ with $a^*$. It follows from \cite[Theorem 5.5.5]{FOT} that
\begin{equation}\label{Zero-energy-for-radial}
dN_t^{[u]}= \frac{1}{2( u(X_t)+\varepsilon )}\mathbf{1}_{\{X_t\in
D_0\}}dt+(2\pi \varepsilon-p)dL^0_t (X),
\end{equation}
where $L^0_t (X)$ is the positive continuous additive functional of $X$ having Revuz measure
$\frac12 \delta_{\{a^*\}}$. We call $L^0$ the local time of $X$ at $a^*$.
Next we compute $\<M^{[u]}\>$, the predictable quadratic variation process of the local martingale $M^{[u]}$.
Let $u_n=(-n)\vee u \wedge n$; it follows immediately that $u_n\in {\mathcal{F}}$. Let ${\mathcal{F}}_b$ denote the space of bounded functions in ${\mathcal{F}}$.
By \cite[Theorem 5.5.2]{FOT}, the Revuz measure $\mu_{\<u_n\>}$ for $\< M^{[u_n]}\>$ can be calculated as follows.
For any $f\in {\mathcal{F}}_b\cap C_c(E)$,
\begin{eqnarray*}
\int_E f (x) \mu_{\<u_n\>} (dx) =
2{\mathcal{E}}(u_n f, u_n)-{\mathcal{E}}(u_n^2, f)
= \int_E f(x) |\nabla u_n (x)|^2 m_p (dx) ,
\end{eqnarray*}
which shows that
\begin{displaymath}
\mu_{\langle u_n \rangle}(dx)= |\nabla
u_n (x)|^2 m_p(dx)= \mathbf{1}_{B_\rho(a^*, n)} m_p (dx).
\end{displaymath}
By the strong local property of $({\mathcal{E}}, {\mathcal{F}})$, we have $\mu_{\<u\>}= \mu_{\langle u_n \rangle}$
on $B_\rho (a^*, n)$. It follows that $\mu_{\langle u \rangle}(dx) = m_p (dx)$.
Thus by \cite[Proposition 4.1.9]{CF}, $\<M^{[u]}\>_t=t$ for $t\geq 0$
and so $B_t:= M^{[u ]}_t$ is a one-dimensional Brownian motion.
Combining this with \eqref{Zero-energy-for-radial}, we conclude
\begin{eqnarray}\label{710901}
dY_t &=& dB_t+\frac{1}{2 (Y_t+\varepsilon) }\mathbf{1}_{\{X_t\in
D_0\}}dt+(2\pi \varepsilon-p)dL^0_t(X) \nonumber \\
&=& dB_t+\frac{1}{2 (Y_t+\varepsilon) }\mathbf{1}_{\{Y_t >0\}} dt
+(2\pi \varepsilon-p)dL^0_t(X) .
\end{eqnarray}
We next find the SDE for the semimartingale $Y$.
The semi-martingale local time of $Y$ is denoted as $L_t^0(Y)$,
that is,
\begin{equation}\label{e:4.3}
L_t^0(Y):=\lim_{\delta \downarrow 0}\frac{1}{\delta}\int_0^t 1_{[0, \delta )}(Y_s)d\langle Y\rangle_s
= \lim_{\delta \downarrow 0}\frac{1}{\delta }\int_0^t 1_{[0, \delta )}(Y_s)ds,
\end{equation}
where $\< Y\>_t=t$ is the quadratic variation process of the semimartingale $Y$.
\begin{prop}\label{localtime}
$L_t^0(Y)= 4\pi\varepsilon L_t^0(X) $.
\end{prop}
\begin{proof}
By computation analogous to that for $Y_t=u(X_t)$,
one can derive using Fukushima's decomposition for $v(X_t):=|X_t|_\rho$,
that
\begin{displaymath}
dv(X_t)=d\widetilde{B_t}+\frac{1}{2( |X_t|_\rho +\varepsilon )}\mathbf{1}_{\{X_t\in
D_0 \}}dt+(2\pi\varepsilon+p)dL_t^0(X),
\end{displaymath}
where $\widetilde{B}$ is a one-dimensional Brownian motion.
Observe that $v(X_t)=|Y_t|$. Thus we have
\begin{displaymath}
d|Y_t|=d\widetilde{B_t}+\frac{1}{2( Y_t+\varepsilon ) }\mathbf{1}_{\{Y_t>0\}}dt+(2\pi\varepsilon+p)dL_t^0(X).
\end{displaymath}
On the other hand, by Tanaka's formula, we have
\begin{align*}
d|Y_t|&=\textrm{sgn}(Y_t)dY_t+dL_t^0(Y)\\
&=\textrm{sgn}(Y_t)dB_t+\textrm{sgn}(Y_t)\frac{1}{2( Y_t+\varepsilon )}\mathbf{1}_{\{Y_t>0\}}dt
+\textrm{sgn}(Y_t)(2\pi\varepsilon-p)dL_t^0(X)+dL_t^0(Y)\\
&=\textrm{sgn}(Y_t)dB_t+\frac{1}{ 2(Y_t+\varepsilon )}\mathbf{1}_{\{Y_t>0\}}dt+(2\pi\varepsilon-p)\textrm{sgn}(Y_t)dL_t^0(X)+dL_t^0(Y),
\end{align*}
where $\textrm{sgn}(x):=1$ if $x>0$
and $\textrm{sgn}(x):=- 1$ if $x\leq 0$.
Since the decomposition of a continuous semi-martingale as the
sum of a continuous local martingale and a continuous process with finite
variation is unique, one must have
\begin{equation}\label{e:4.4}
(2\pi\varepsilon+p)L_t^0(X)=\textrm{sgn}(Y_t)(2\pi\varepsilon-p)L_t^0(X)+L_t^0(Y).
\end{equation}
The local time $L_t^0(X)$ increases only when $ Y_t=0 $.
Therefore
\begin{displaymath}
(2\pi\varepsilon +p)L_t^0(X)=- (2\pi\varepsilon -p)L_t^0(X)+L_t^0(Y),
\end{displaymath}
and so $ 4\pi\varepsilon L_t^0(X)=L_t^0(Y)$.
\end{proof}
The semi-martingale local time in \eqref{e:4.3} is non-symmetric in the sense that
it only measures the occupation time of $Y_t$ in the one-sided interval $[0, \delta )$ instead of the symmetric interval $(-\delta, \delta)$.
One can always relate the non-symmetric semi-martingale local time $L^0(Y)$
to the symmetric semi-martingale local time $\widehat L^0(Y)$ defined by
\begin{equation*}
\widehat{L}_t^0(Y):=\lim_{\delta \downarrow 0}\frac{1}{2 \delta }\int_0^t 1_{(-\delta, \delta )}(Y_s)d\langle Y\rangle_s
= \lim_{\delta \downarrow 0}\frac{1}{2 \delta}\int_0^t 1_{(-\delta, \delta )}(Y_s) d s .
\end{equation*}
\begin{lem}\label{L:4.2}
$\widehat{L}_t^0(Y)= \frac{2\pi \varepsilon + p}{4\pi \varepsilon } L_t^0(Y) = (2\pi \varepsilon + p) L^0_t (X)$.
\end{lem}
\begin{proof} Viewing $|Y_t| = |-Y_t|$ and applying Tanaka's formula to the semimartingale $-Y$, we can derive in a way analogous to the computation leading to \eqref{e:4.4} that
$$
(2\pi\varepsilon+p)L_t^0(X)=- \textrm{sgn}(-Y_t)(2\pi\varepsilon-p) L_t^0(X)+L_t^0(-Y)
= ( 2\pi\varepsilon-p)L_t^0(X)+L_t^0(-Y).
$$
Thus we get $ 2pL_t^0(X)=L_t^0(-Y)$, which yields
$$
\widehat{L}_t^0(Y) =\frac{1}{2}\left(L_t^0(Y)+L_t^0(-Y)\right)
=\frac{1}{2}\left(4\pi\varepsilon L_t^0(X)+2pL_t^0(X)\right) = (2\pi\varepsilon+p)L_t^0(X).
$$
\end{proof}
Lemma \ref{L:4.2} together with \eqref{710901} gives the following SDE characterization for the signed radial
process $Y$, which tells us precisely how $X$ moves after hitting $a^*$.
\begin{prop}\label{P:4.3}
\begin{equation}\label{YsymmSDE}
dY_t=dB_t+\frac{1}{2(Y_t+\varepsilon)}\mathbf{1}_{\{Y_t>0\}}dt+\frac{2\pi\varepsilon-p}{2\pi\varepsilon+p}d\widehat{L}_t^0(Y).
\end{equation}
\end{prop}
\medskip
Let $\beta = \frac{2\pi\varepsilon-p}{2\pi\varepsilon+p}$.
SDE \eqref{YsymmSDE} says that $Y$ is a skew Brownian motion with drift on ${\mathbb{R}}$ with skew parameter $\beta$.
It follows (see \cite{RY}) that
starting from $a^*$, the process $Y$ (respectively, $X$) has probability $(1-\beta)/2= \frac{p}{2\pi \varepsilon +p}$ to enter $(-\infty, 0)$
(respectively, the pole) and probability $(1+\beta )/2=\frac{2\pi \varepsilon}{2\pi \varepsilon +p}$ to enter $(0, \infty)$ (respectively, the plane).
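These entrance probabilities are easy to check numerically. The sketch below (ours,
purely illustrative, with hypothetical parameter values) uses the classical random
walk approximation of the driftless skew Brownian motion: away from $0$ the walk is
simple and symmetric, while from $0$ it steps to $+1$ with probability $(1+\beta)/2$;
one verifies directly that such a walk started at $0$ exits $[-k,k]$ through $+k$
with probability exactly $(1+\beta)/2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

eps, p = 1.0, 2.0    # hypothetical parameters of the BMVD
beta = (2 * np.pi * eps - p) / (2 * np.pi * eps + p)

def exit_side(k=20):
    """Walk from 0 until hitting +k or -k; return True if exit at +k.

    At 0 the walk steps to +1 with probability (1+beta)/2 (the skewness);
    elsewhere it is a simple symmetric random walk."""
    z = 0
    while abs(z) < k:
        if z == 0:
            z = 1 if rng.random() < (1 + beta) / 2 else -1
        else:
            z += 1 if rng.random() < 0.5 else -1
    return z > 0

n_sim = 2_000
est = np.mean([exit_side() for _ in range(n_sim)])
print(f"simulated {est:.3f} vs (1+beta)/2 = {(1 + beta) / 2:.3f}")
print(f"P(enter pole) = {p / (2 * np.pi * eps + p):.3f}")
\end{verbatim}
Exiting through $+k$ corresponds to the signed radial process $Y$ (hence $X$)
wandering off on the plane side; the drift term $1/(2(Y_t+\varepsilon))$ of
\eqref{YsymmSDE} is dropped for simplicity, as it does not enter the skewness at $0$.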
\medskip
SDE \eqref{YsymmSDE} has a unique strong solution; see, e.g., \cite{BC}. So $Y$ is a strong Markov process on ${\mathbb{R}}$.
The following is a key to get the two-sided sharp heat kernel estimate on $p(t, x, y)$ for BMVD $X$.
\begin{prop}\label{P:4.4}
The one-dimensional diffusion process $Y$ has a jointly continuous transition density function
$P^{(Y)}(t, x, y)$ with respect to the Lebesgue measure on ${\mathbb{R}}$. Moreover, for every $T\geq 1$,
there exist constants $C_i>0$, $1\le i \le 4$, such that the following estimate holds:
\begin{equation}\label{e:4.5}
\frac{C_1}{\sqrt{t}} e^{- {C_2|x-y|^2}/{t}} \le p^{(Y)}(t,x,y)\le\frac{C_3}{\sqrt{t}} e^{- {C_4|x-y|^2}/{t}},
\quad (t,x,y)\in (0,T]\times {\mathbb{R}}\times {\mathbb{R}}.
\end{equation}
\end{prop}
\proof Let $\beta:=\frac{2\pi\varepsilon-p}{2\pi\varepsilon+p}$ and $Z$ be the skew Brownian motion
$$ dZ_t = dB_t + \beta \widehat L^0_t (Z) ,
$$
where $\widehat L^0_t (Z)$ is the symmetric local time of $Z$ at $0$.
The diffusion process $Y$ can be obtained from $Z$ through a drift perturbation
(i.e. Girsanov transform).
The transition density function $p_0(t, x, y)$ of $Z$ is explicitly known and
enjoys the two-sided Aronson-type Gaussian estimates \eqref{e:4.5};
see, e.g., \cite{RY}. One can further verify that
$$ | \nabla_x p_0(t, x, y)| \leq c_1 t^{-1} \exp (-c_2 |x-y|^2/t),
$$
from which one can deduce \eqref{e:4.5} by using the same argument as that for Theorem A in Zhang \cite[\S 4]{Z1}.
\qed
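\medskip

Although it will not be needed in the sequel, we record the explicit formula behind the above proof: with the sign convention of this paper, the transition density of the skew Brownian motion $Z$ with skew parameter $\beta$ is (see, e.g., \cite{RY})
\begin{equation*}
p_0(t,x,y)=\varphi_t(y-x)+\beta\, \textrm{sgn}(y)\, \varphi_t(|x|+|y|),
\qquad \hbox{where } \varphi_t(u):=\frac{1}{\sqrt{2\pi t}}\, e^{-u^2/(2t)},
\end{equation*}
from which both the two-sided Gaussian estimate and the gradient bound used above can be read off directly.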
\medskip
Proposition \ref{P:4.4} immediately gives the two-sided estimates on the transition function
$p(t, x, y)$ of $X$ when $x, y\in {\mathbb{R}}_+$ since $X_t=-Y_t$ when $X_t\in {\mathbb{R}}_+$.
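More precisely, since $m_p(dy)=p\,dy$ on ${\mathbb{R}}_+$, we have $p(t,x,y)= p^{(Y)}(t, -|x|, -|y|)/p$ for $x, y\in {\mathbb{R}}_+$, and the constant factor $1/p$ gets absorbed into the constants below.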
\medskip
\begin{thm} \label{T:4.5}
For every $T\geq 1$,
there exist $C_i>0$, $5\le i \le 8$, such that the following estimate holds:
\begin{equation*}
\frac{C_5}{\sqrt{t}} e^{-{C_6|x-y|^2}/{t}} \le p (t,x,y)\le\frac{C_7}{\sqrt{t}} e^{-{C_8|x-y|^2}/{t}}
\qquad \hbox{for } t\in (0, T] \hbox{ and } x, y \in {\mathbb{R}}_+ .
\end{equation*}
\end{thm}
\medskip
Let $A$ be any rotation of the plane around the pole.
Using the fact that starting from $a^*$, $AX_t$ has the same distribution as
$X_t$,
we can derive estimates for $p(t, x, y)$ for other $x, y \in E$.
The next result gives the two-sided estimates on
$p(t,x,y)$ when $x\in {\mathbb{R}}_+$ and $y\in D_0$.
\begin{thm}\label{T:4.6} For every $T\geq 1$,
there exist constants $C_i>0$, $9\le i\le 12$, such that for all $x\in {\mathbb{R}}_+$, $y\in D_0$ and $t\in (0,T]$,
\begin{equation*}
\frac{C_9}{\sqrt{t}}e^{-{C_{10}\rho(x,y)^2}/{t}} \le p(t,x,y)\le
\frac{C_{11}}{\sqrt{t}}e^{-{C_{12}\rho(x,y)^2}/{t}}.
\end{equation*}
\end{thm}
\begin{proof}
We first note that in this case by the symmetry of $p(t, x, y)$,
\begin{equation*}
p (t,x,y) = p(t, y, x) =\int_{0}^t {\mathbb{P}}_y(\sigma_{a^*}\in ds)p(t-s, a^*,x).
\end{equation*}
By the rotational invariance of two-dimensional Brownian motion, ${\mathbb{P}}_y(\sigma_{a^*}\in ds)$ only depends
on $|y|_\rho$; therefore so does $y\mapsto p(t,x,y)$.
For $x\in {\mathbb{R}}_+$ and $y\in D_0$, set $\widetilde {p}(t,x,r):=p(t,x,y)$ for $r=|y|_\rho$. For all $b>a>0$ and $x\in {\mathbb{R}}_+$,
\begin{align*}
\int_a^b p^{(Y)}(t,-|x|,y)dy&= {\mathbb{P}}_{-|x|} ( a\leq Y_t\leq b) = {\mathbb{P}}_x ( X_t \in D_0 \hbox{ with } a\leq |X_t |_\rho\leq b) \\
&= \int_{y\in D_0: a \leq |y|_\rho \leq b}p (t,x,y)m_p(dy) =\int_{y\in D_0: a+\varepsilon \leq |y| \leq b+\varepsilon}p (t,x,y)m_p(dy)
\\
&=\int_{a}^b 2\pi (r+\varepsilon)\widetilde {p}(t,x,r)dr.
\end{align*}
This implies when $x\in {\mathbb{R}}_+$, $y\in D_0$,
\begin{equation}\label{stuff10751}
p^{(Y)}(t,-|x|,|y|_\rho) =2\pi ( |y|_\rho +\varepsilon) \widetilde {p} (t,x, |y|_\rho ) =
2\pi ( |y|_\rho +\varepsilon)p (t,x,y) .
\end{equation}
We thus have by Proposition \ref{P:4.4} that
\begin{equation}
\frac{c_1}{\sqrt{t}}e^{-c_2\rho(x,y)^2/t} \le p(t,x,y)\le
\frac{c_3}{\sqrt{t}}e^{-c_4\rho(x,y)^2/t}
\quad \text{ for } x\in {\mathbb{R}}_+ \hbox{ and }
y\in D_0 \text{ with }|y|_\rho<1.
\end{equation}
When $|y|_\rho>1$, we first have
\begin{equation*}
p(t,x,y)=\frac{1}{2\pi (|y|_\rho+\varepsilon)}p^{(Y)}(t,-|x|, |y|_\rho)\lesssim \frac{1}{(|y|_\rho+\varepsilon)\sqrt{t}}e^{-{c_3\rho(x,y)^2}/{t}}
\leq \frac{1} {\sqrt{t}}e^{-{c_3\rho(x,y)^2}/{t}},
\end{equation*}
while since $\rho(x,y)\ge |y|_\rho>1$,
\begin{align}
p(t,x,y)&=\frac{1}{2\pi (|y|_\rho+\varepsilon)}p^{(Y)}(t,-|x|, |y|_\rho)\gtrsim
\frac{1}{(|y|_\rho+\varepsilon)\sqrt{t}}e^{-{c_4\rho(x,y)^2}/{t}}\nonumber
\\
&\gtrsim \frac{1}{\sqrt{t }} \frac{\sqrt{t}}{\sqrt{T} \rho (x, y)}
e^{-{c_4\rho(x,y)^2}/{t}}
\gtrsim \frac{1}{\sqrt{t}} e^{-{(c_4+1)\rho(x,y)^2}/{t}}.\label{e:4.8}
\end{align}
This completes the proof.
\end{proof}
Theorems \ref{T:4.5} and \ref{T:4.6} establish Theorem \ref{T:smalltime}(i).
We next consider part (ii) of Theorem \ref{T:smalltime}
when both $x$ and $y$ are in
$D_0$.
\begin{thm}\label{T:4.7}
For every $T\geq 1$,
there exist constants $C_i>0$, $13\le i\le 22$, such that for all
$t\in (0, T]$ and $x, y\in D_0$, the following estimates hold.
\\
When $\max\{|x|_\rho, |y|_\rho\} \leq 1$,
\begin{align}\label{stuffstuff}
\nonumber &\frac{C_{13}}{\sqrt{t}}e^{-{C_{14}\rho(x,y)^2}/{t}}+\frac{C_{13}}{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{-{C_{15}|x-y|^2}/{t}} \le p(t,x,y)
\\
&\le \frac{C_{16}}{\sqrt{t}}e^{-{C_{17}\rho(x,y)^2}/{t}}+\frac{C_{16}}{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{-{C_{18}|x-y|^2}/{t}};
\end{align}
and when $\max\{|x|_\rho, |y|_\rho\} >1$,
\begin{equation}\label{stuffstuff*}
\frac{C_{19}}{t}e^{-{C_{20}\rho(x,y)^2}/{t}} \le p(t,x,y) \le \frac{C_{21}}{t}e^{-{C_{22}\rho(x,y)^2}/{t}} .
\end{equation}
Here $|\cdot|$ and $|\cdot|_\rho$ denote the Euclidean metric and
the geodesic metric in $D_0$, respectively.
\end{thm}
\begin{proof}
For $x\in D_0$ and $t \in (0, T]$, note that
\begin{equation}\label{e:4.10a}
p(t,x,y) =\overline{p}_{D_0}(t,x,y)+p_{D_0}(t,x,y),
\end{equation}
where
\begin{equation}\label{e:4.10}
\overline{p}_{D_0}(t,x,y) =
\int_{0}^t p(t-s, a^*,y) {\mathbb{P}}_x (\sigma_{\{a^*\}}\in ds).
\end{equation}
As mentioned in the proof for Theorem \ref{T:4.6}, $ p(t-s, a^*,y)$
is a function in $y$ depending only on $|y|_\rho$. Therefore so is $y\mapsto
\overline{p}_{D_0}(t,x,y)$. Set $\widetilde {p}_{D_0}(t,x, r):=\overline{p}_{D_0}(t,x,y)$ for $r=|y|_\rho$. For any $b>a>0$,
\begin{eqnarray*}
{\mathbb{P}}_x \left( \sigma_{a^*}<t, \, X_t \in D_0 \hbox{ with } a\leq |X_t|_\rho \leq b\right)
& =& \int_{a\le |y|_\rho\le b}\overline{p}_{D_0} (t,x,y)m_p(dy)
\\
&=& 2 \pi \int_a^b(r+\varepsilon)\widetilde {p}_{D_0}(t,x,r)dr.
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
&& {\mathbb{P}}_x \left( \sigma_{a^*}<t, \, X_t \in D_0 \hbox{ with } a\leq |X_t|_\rho \leq b\right) \\
&=& {\mathbb{P}}_{|x|_\rho}^{(Y)} \left( \sigma_{0}<t, \, a\leq Y_t \leq b\right)
= \int_0^t \left(\int_a^b p^{(Y)}(t-s, 0, r) dr \right) {\mathbb{P}}_{|x|_\rho }^{(Y)}
(\sigma_0 \in ds) .
\end{eqnarray*}
It follows that
$$
2\pi (r+\varepsilon) \widetilde {p}_{D_0}(t,x,r) = \int_0^t p^{(Y)}(t-s, 0, r)
{\mathbb{P}}_{|x|_\rho }^{(Y)} (\sigma_{0} \in ds)
= p^{(Y)} (t, -|x|_\rho , r).
$$
In other words,
\begin{equation}\label{e:4.11}
\overline p_{D_0} (t, x, y)= \frac{1}{2\pi ( | y|_\rho +\varepsilon)} p^{(Y)} (t, -|x|_\rho , |y|_\rho ).
\end{equation}
It is known that the Dirichlet heat kernel $p_{D_0} (t, x, y)$
enjoys the following two-sided estimates:
\begin{eqnarray}
\frac{c_1}{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{- {c_2|x-y|^2}/{t}}
\leq p_{D_0} (t, x, y)
\leq \frac{c_3}{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{-{c_4|x-y|^2}/t} \nonumber \\
\label{e:4.12}
\end{eqnarray}
for $t\in (0, T]$ and $x, y\in D_0$.
We now consider two different cases.
\medskip
\noindent{\it Case} (i): $\max\{|x|_\rho , |y|_\rho \} \leq 1$ and $t\in (0, T]$. In this case,
it follows from \eqref{e:4.10a}-\eqref{e:4.12}
and Proposition \ref{P:4.4} that
\begin{align}
\nonumber \frac{c_5}{\sqrt{t}}e^{-{c_6(|x|_\rho+|y|_\rho)^2}/{t}}&+\frac{c_5}{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{-{c_7|x-y|^2}/{t}}\le p(t,x,y)
\\
\label{7181222} &\le \frac{c_8}{\sqrt{t}}e^{-{c_9(|x|_\rho+|y|_\rho)^2}/{t}}+\frac{c_8}{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{-{c_{10}|x-y|^2}/{t}}.
\end{align}
Observe that
\begin{equation}\label{e:4.16}
(|x|_\rho + |y|_\rho) ^2/t \asymp \rho (x, y)^2/t \quad \hbox{ if }
{|x|_\rho} \wedge {|y|_\rho} \leq {\sqrt{t}} .
\end{equation}
When $ {|x|_\rho} \wedge {|y|_\rho} >{\sqrt{t}}$,
for $a>0$, $b>0$,
\begin{eqnarray}
&& \frac{1}{\sqrt{t}}e^{- {a (|x|_\rho + |y|_\rho) ^2}/{t}} +\frac{1 }{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{- {b |x-y|^2}/{t}} \nonumber \\
&\asymp & \frac{1}{\sqrt{t}}e^{- {a (|x|_\rho + |y|_\rho) ^2}/{t}} +\left( \frac1{\sqrt{t}} + \frac{1 }{t} \right)
e^{- {b |x-y|^2} / {t}} \nonumber \\
&=& \frac{1}{\sqrt{t}} \left( e^{- {a (|x|_\rho + |y|_\rho) ^2}/{t}} + e^{- {b |x-y|^2} / {t}} \right)
+ \frac{1 }{t} e^{- {b |x-y|^2} / {t}} . \label{e:4.17}
\end{eqnarray}
The desired estimate \eqref{stuffstuff} now follows from \eqref{7181222}-\eqref{e:4.17} and the
fact \eqref{e:1.1}.
\medskip
\noindent{\it Case} (ii): $\max\{ |x|_\rho, |y|_\rho\} >1$ and $t\in (0, T]$. By the symmetry of $p(t, x, y)$ in $x$ and $y$,
in this case we may and do assume $|y|_\rho >1>\sqrt{t/T}$.
It then follows from \eqref{e:4.11}-\eqref{e:4.12}, Proposition \ref{P:4.4}
and \eqref{e:4.8} that
\begin{align}\label{stuff1242}
\nonumber\frac{c_{11}}{t}e^{- {c_{12}(|x|_\rho+|y|_\rho)^2}/{t}}&+\frac{c_{11}}{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{- {c_{13}|x-y|^2}/{t}} \le p(t,x,y)
\\
&\le \frac{c_{14}}{t}e^{- {c_{15}(|x|_\rho+|y|_\rho)^2}/{t}}+\frac{c_{14}}{t}\left(1\wedge
\frac{|x|_\rho}{\sqrt{t}}\right)\left(1\wedge
\frac{|y|_\rho}{\sqrt{t}}\right)e^{- {c_{16}|x-y|^2}/{t}} .
\end{align}
When $ {|x|_\rho} \wedge {|y|_\rho} \leq {\sqrt{t}}$,
the lower bound estimate \eqref{stuffstuff*} follows from \eqref{stuff1242} and \eqref{e:4.16},
while the upper bound estimate \eqref{stuffstuff*} follows from Proposition \ref{offdiagUBE}.
Whereas when $ {|x|_\rho} \wedge {|y|_\rho} >{\sqrt{t}}$,
the desired estimate \eqref{stuffstuff*} follows from \eqref{stuff1242} and \eqref{e:1.1}.
This completes the proof of the theorem.
\end{proof}
\begin{rmk}\label{rmkonsmalltimeHKE} \rm
\begin{description}
\item{(i)} One cannot expect to replace the estimate in
\eqref{stuffstuff} by $t^{-1}e^{-{c\rho(x,y)^2}/{t}}$. A
counterexample is $x=y=a^*$, in which case $x$ and $y$ can be
viewed as lying either on ${\mathbb{R}}$ or on $D_0$;
both Proposition \ref{P:4.4} and Theorem \ref{T:4.6} then
confirm that $p(t,x,y)\asymp t^{-1/2}$, which is
consistent with \eqref{stuffstuff}.
\item{(ii)} The Euclidean distance appearing in
\eqref{stuffstuff} cannot be replaced with the geodesic distance. To
see this, take
$x=(\varepsilon+t^{1/2},0 )$ and $y=(-\varepsilon-t^{1/2},0 )$ in $D_0$.
The estimate of \eqref{stuffstuff} is
comparable with
$t^{-1/2}+t^{-1}\exp(-{\varepsilon^2}/{t})$, but if
we replaced $|x-y|$ with $\rho(x,y)$, it would be comparable with
$t^{-1/2}+t^{-1}$. For fixed $\varepsilon$, as $t \downarrow 0$,
$t^{-1/2}+t^{-1}\exp(- {\varepsilon^2}/{t})\sim
t^{-1/2}$, but $t^{-1/2}+t^{-1}\sim
t^{-1}$.
\item{(iii)} Theorem \ref{T:smalltime} also shows that the parabolic Harnack inequality fails
for $X$. For a precise statement of the parabolic Harnack inequality, see, for example, \cite{Gr, LSC1, St2}.
For $s\in (0, 1]$, take some $y\in D_0$ such that $|y|_\rho=\sqrt{s}$. Set $Q_+:=(3s/2, 2s)\times B_\rho(y, 2\sqrt{s})$ and $Q_-:=(s/2, s)\times B_\rho(y, 2\sqrt{s})$. Let $u(t,x):=p(t,x,y)$. It follows from Theorem \ref{T:smalltime}
that $ \sup_{Q_+}u \asymp s^{-1/2}+s^{-1}\asymp s^{-1}$ and
$ \inf_{Q_-} u\asymp s^{-1/2}$. Clearly there does not exist any positive constant $C>0$ so that $\sup_{Q_+}u\le C\inf_{Q_-}u$ holds
for all $s\in (0, 1]$. This shows that parabolic Harnack inequality fails for $X$.
\end{description}
\end{rmk}
\section{Large time heat kernel estimates}\label{S:5}
Recall that $X$ denotes the BMVD process on $E$ and its signed radial process
defined by \eqref{710901} is denoted by $Y$.
In this section, unless otherwise stated, it is always assumed
that $T\geq 8$ and $t\in [T, \infty)$. Without loss of generality, we assume
that the radius $\varepsilon$ of the ``hole'' $B(0, \varepsilon)$ satisfies $\varepsilon \leq 1/4$.
We begin with the following estimates for the distribution of
hitting time of a disk by a two-dimensional Brownian motion,
which follow directly from \cite[\S 5.1, Case (a), $\alpha=1$]{GSC}
and a Brownian scaling.
\begin{prop}[Grigor'yan and Saloff-Coste \cite{GSC}]\label{P:5.1}
Let $X$ be a Brownian motion on ${\mathbb{R}}^2$ and $K$ be the closed ball with radius $\varepsilon$
centered at the origin.
\begin{description}
\item{\rm (i)} If $0<t<2|x|^2$ and $|x|\geq 1+\varepsilon$, then
\begin{equation}\label{hitting-Bessel2-PDF1}
\frac{c}{\log |x|}\exp\left(-C {|x|^2}/ {t}\right)\le {\mathbb{P}}_x (\sigma_K \leq t) \le \frac{C}{\log|x|}\exp\left(-c {|x|^2}/{t}\right),
\end{equation}
for some positive constants $C> c>0$.
\item{\rm (ii)} If $t\ge 2|x|^2$ and $|x|\geq 1+\varepsilon$, then
\begin{equation}\label{hitting-Bessel2-PDF2}
{\mathbb{P}}_x (\sigma_K \leq t) \asymp \frac{\log \sqrt{t}-\log |x|}{\log \sqrt{t}},
\end{equation}
and
\begin{equation}\label{hitting-Bessel2-derivative}
\partial_t {\mathbb{P}}_x (\sigma_K \leq t) \asymp \frac{\log |x|}{t(\log t)^2}.
\end{equation}
\end{description}
\end{prop}
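As a quick consistency check between \eqref{hitting-Bessel2-PDF2} and \eqref{hitting-Bessel2-derivative}, note that for, say, $|x|\geq 2$ and $t\geq 4|x|^2$, integrating \eqref{hitting-Bessel2-derivative} in time gives
\begin{equation*}
\int_{2|x|^2}^t \frac{\log |x|}{s(\log s)^2}\, ds
= \log |x| \left( \frac{1}{\log (2|x|^2)}-\frac{1}{\log t}\right)
\asymp \frac{\log \sqrt{t}-\log |x|}{\log \sqrt{t}},
\end{equation*}
which agrees with \eqref{hitting-Bessel2-PDF2}.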
Our first goal is to
establish an upper bound estimate on $\int_0^t p(s, a^*, a^*)\,ds$ in Proposition \ref{1139}.
This will be done through two propositions by using the above hitting time estimates.
\begin{prop}\label{1125}
$p(t,a^*,a^*)$ is decreasing in $t\in (0, \infty)$.
\end{prop}
\begin{proof} This follows from
\begin{align*}
\frac{d}{dt}p(t,a^*,a^*)&=\frac{d}{dt}\int_E p(t/2, a^*,x)^2m_p(dx)
\\
&=\int \left(\frac{\partial }{\partial t}p(t/2, a^*,x)\right)p(t/2, a^*,x)m_p(dx)
\\
&=\int \mathcal{L}_x p(t/2,a^*,x)p(t/2,a^*,x)m_p(dx)
\\
&=-{\mathcal{E}}\left(p(t/2, a^*,\cdot\,), p(t/2,a^*,\cdot\,)\right) \le 0.
\end{align*}
\end{proof}
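Alternatively, the monotonicity also follows from the general fact that for a symmetric Markov semigroup the on-diagonal value $t\mapsto p(t,x,x)=\|p(t/2, x, \cdot)\|_{L^2(E; m_p)}^2$ is non-increasing, since $p((t+h)/2, x, \cdot)=P_{h/2}\, p(t/2, x, \cdot)$ and the semigroup $\{P_s\}$ of $X$ is a contraction on $L^2(E; m_p)$.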
\begin{prop}\label{1126}
There exists some constant $C_1>0$ such that
\begin{equation*}
p(t,a^*,a^*)\le C_1\frac{\log t}{t} \quad \textrm{for } t\in [8, \infty).
\end{equation*}
\end{prop}
\begin{proof}
For $t\geq 8$ and $x\in D_0$ with $1<|x|_\rho <\sqrt{t/3}$, by Proposition \ref{1125},
\begin{align*}
p(t,x,a^*)&= \int_{0}^t {\mathbb{P}}_x(\sigma_{a^*}\in ds)p(t-s, a^*,a^*)
\ge p(t,a^*,a^*){\mathbb{P}}_x(\sigma_{a^*}\le t)
\asymp p(t,a^*,a^*)\left(1-\frac{\log |x| }{ \log \sqrt{t}}\right),
\end{align*}
where the $``\asymp "$ is due to \eqref{hitting-Bessel2-derivative}. Therefore,
\begin{align*}
1\ge {\mathbb{P}}_{a^*}\left(X_t\in D_0 \hbox{ with } 1<|X_t|_\rho <\sqrt{t/4} \right)
&=\int_{D_0\cap \{1<|x|_\rho <\sqrt{t/4}\}}p(t,a^*,x)\,m_p(dx)
\\
&\ge c_1 p(t,a^*,a^*) \int_{1+\varepsilon}^{\sqrt{t/4}+\varepsilon}\left(1-\frac{\log r}{ \log \sqrt{t}}\right)rdr
\\
& \geq c_2 \, p(t,a^*,a^*) \, \frac{t}{\log t}.
\end{align*}
By selecting $C_1$ large enough, the above yields the desired estimate for $p(t,a^*,a^*) $ for $t\geq 8$.
\end{proof}
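For completeness, we note that the last integral above can be computed in closed form. Ignoring the harmless shift by $\varepsilon$ in the limits of integration, a direct computation gives
\begin{equation*}
\int_{1}^{\sqrt{t/4}}\left(1-\frac{\log r}{\log \sqrt{t}}\right) r\, dr
= \frac{2\log 2+1}{8}\cdot \frac{t}{\log t}+O(1)
\asymp \frac{t}{\log t} \qquad \hbox{as } t\to \infty,
\end{equation*}
which is the source of the factor $t/\log t$.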
\begin{prop}\label{1139}
There exists some $C_2>0$ such that
\begin{equation*}
\int_0^t p(s,a^*,a^*)ds \le C_2 \log t, \quad \text{ for all }t\geq 4.
\end{equation*}
\end{prop}
\begin{proof}
For $t\geq 8$ and $x\in D_0$ with $ 1<|x|_\rho<\sqrt{t/2}$, we have by \eqref{hitting-Bessel2-derivative},
\begin{align*}
p(t, x, a^*)&\ge \int_{t/2}^t p(t-s, a^*,a^*) {\mathbb{P}}_x(\sigma_{a^*}\in ds )
\asymp \frac{\log |x|}{t(\log t)^2}\int_{0}^{t/2}p(s,a^*,a^*)ds.
\end{align*}
Thus by using polar coordinates,
\begin{eqnarray*}
1 & \ge & {\mathbb{P}}_{a^*}\left(X_t\in D_0 \hbox{ with } 1<|X_t|_\rho<\sqrt{t/2} \right)
=\int_{\{x\in D_0: 1<|x|_\rho <\sqrt{t/2} \}} p(t, a^*, x)m_p(dx)
\\
&\gtrsim& \int_{1}^{\sqrt{t/2}}\frac{\log (r+\varepsilon)}{t(\log t)^2} (r+\varepsilon)dr \cdot \int_0^{t/2}p(s,a^*,a^*)ds
\\
&= & \frac1{t(\log t)^2} \int_{1+\varepsilon}^{\sqrt{t/2}+\varepsilon} {r\log r} dr \cdot \int_0^{t/2}p(s,a^*,a^*)ds
\\
&\asymp & \frac{t\log t}{t(\log t)^2}\int_0^{t/2}p(s,a^*,a^*)ds
\\
&=& \frac{1}{\log t}\int_0^{t/2}p(s,a^*,a^*)ds.
\end{eqnarray*}
This yields the desired estimate.
\end{proof}
The above proposition suggests that $p(t, a^*, a^*)\leq c/t$ for $t\in [4, \infty)$.
However, in order to prove this rigorously, we first compute the upper bounds for $p(t, a^*, x)$ for different regions of $x$,
and then use the identity $p(t, a^*, a^*)=\int_E p(t/2, a^*, x)^2 m_p(dx)$ to obtain the sharp upper bound estimate for $p(t, a^*, a^*)$.
\begin{prop}\label{313}
There exists $C_3>0$ such that for all $t \geq 8$ and $x\in D_0$ with $1<|x|_\rho<\sqrt{t/2}$,
\begin{equation*}
p(t, a^*, x)\le \frac{C_3}{t}\log \left( \frac{\sqrt{t}}{|x|}\right) .
\end{equation*}
\end{prop}
\begin{proof}
By Proposition \ref{1126}, \eqref{hitting-Bessel2-PDF2} and Proposition \ref{1139},
\begin{align*}
p(t, x, a^*)&= \int_0^{t/2} p(t-s, a^*, a^*) {\mathbb{P}}_x(\sigma_{a^*}\in ds) +
\int_{t/2}^t p(t-s, a^*, a^*) {\mathbb{P}}_{x}(\sigma_{a^*}\in ds)
\\
&\le c_1\left( \frac{\log t}{t}{\mathbb{P}}_x(\sigma_{a^*}\le t/2)+\frac{\log |x|}{t(\log t)^2}\int_{0}^{t/2}p(s,a^*,a^*)ds\right)
\\
&\le c_2\left(\frac{\log t }{t} \, \frac{\log \left( {\sqrt{t}}/{ |x|}\right)}{\log \sqrt{t}} +\frac{\log |x|}{t\log t}\right)
\\
&\le c_3\left(\frac{1}{t}\log \left( \frac{\sqrt{t} }{|x|} \right)+\frac{1}{t}\right)
\leq \frac{c_4}{t}\log \left( \frac{\sqrt{t} }{|x|} \right).
\end{align*}
\end{proof}
The following asymptotic estimate for the distribution of Brownian hitting time of a disk from \cite[Theorem 2]{KU} will be used in the next proposition.
\begin{lem}[Uchiyama \cite{KU}]\label{KUchiyama}
Let $(B_t)_{t\ge 0}$ be the standard two-dimensional Brownian motion and $\sigma_r:=\inf\{t>0: |B_t|\le r\}$.
Denote by $p_{r,x}(t)$ the probability density function of $\sigma_r$ when $B_0=x$.
For every $r_0>0$, uniformly for $|x|\ge r_0$, as $t\rightarrow \infty$,
\begin{equation*}
p_{r_0, x}(t)=\frac{\log \left(\frac{1}{2} e^{c_0}|x|^2 \right)}{t\left( \log t + c_0 \right)^2}
\exp \left( -\frac{|x|^2}{2t} \right) +
\begin{cases}
O\left(\frac{1+(\log(|x|^2/t))^2}{|x|^2(\log t)^3}\right) \quad
&\hbox{for }\,|x|^2\ge t, \medskip \\
\frac{2\gamma \log (t/|x|^2)}{t (\log t)^3}
+O\left(\frac{1 }{t(\log t)^3}\right) \quad
&\hbox{for }\,|x|^2 < t,
\end{cases}
\end{equation*}
where $c_0$ is a positive constant only depending on $r_0$ and
$\gamma = - \int_0^\infty e^{-u} \log u du$ is the Euler constant.
\end{lem}
\begin{prop}\label{157}
There exist $C_4, C_5>0$ such that for all $t\geq 8$ and all $x\in D_0$,
\begin{equation*}
p(t, a^*, x)\le \begin{cases}
C_4\left(\frac{1}{t}e^{- {C_5|x|^2}/{t}}+\frac{\left(\log \left( |x|^2/t \right)\right)^2}{|x|^2 \left( \log t\right)^2}\right)
&\hbox{when } |x|_\rho >\sqrt{t} , \\
\frac{C_4}{t} \left( e^{- {C_5|x|^2}/{t}}+\frac{1}{ (\log t)^2} \right)
&\hbox{when } \sqrt{t}/2 \leq |x|_\rho \leq \sqrt{t}.
\end{cases}
\end{equation*}
\end{prop}
\begin{proof} Note that
\begin{eqnarray}
\nonumber p(t, a^*, x)&=&\int_0^t p(t-s, a^*, a^*) {\mathbb{P}}_x(\sigma_{a^*}\in ds)
\\
\label{1158} &=&\int_{0}^{t/2}p(t-s,a^*,a^*) {\mathbb{P}}_x(\sigma_{a^*}\in ds)
+\int_{t/2}^{t} p(t-s, a^*, a^*) {\mathbb{P}}_x(\sigma_{a^*}\in ds).
\end{eqnarray}
By the monotonicity of $p(t,a^*, a^*)$ established in Proposition \ref{1125},
estimate \eqref{hitting-Bessel2-PDF1} and Proposition \ref{1126},
\begin{eqnarray}
\nonumber \int_{0}^{t/2}p(t-s,a^*,a^*) {\mathbb{P}}_x(\sigma_{a^*}\in ds)
&\le & p(t/2,a^*,a^*){\mathbb{P}}_x(\sigma_{a^*}\le t/2)
\\
\nonumber &\stackrel{\eqref{hitting-Bessel2-PDF1}}{\le} & p(t/2,a^*,a^*)\frac{c_1}{\log |x|}e^{-{c_2|x|^2}/{t}}
\\
\nonumber &\le& \frac{c_1 \log t}{t\log |x| }e^{- {c_2|x|^2}/{t}}
\\
\label{1226}&\le& \frac{c_3}{t}e^{- {c_2|x|^2}/{t}}.
\end{eqnarray}
On the other hand, by Proposition \ref{1139} and Lemma \ref{KUchiyama},
for $|x|_\rho \geq \sqrt{t}$,
\begin{align}
\nonumber \int_{t/2}^t p(t-s, a^*, a^*) {\mathbb{P}}_x(\sigma_{a^*}\in ds)
&\le \sup_{s\in [t/2, t]}p_{\varepsilon, x}(s) \cdot \int_{t/2}^{t}p(t-s, a^*, a^*)ds
\\
\nonumber &\le c_4 \left(\frac{\log |x|}{t(\log t)^2}e^{- |x|^2/{(2t)}}+\frac{\left(\log \left(|x|^2/t\right)\right)^2}{|x|^2(\log t)^3}\right) \, \log t
\\
\nonumber &= c_4\left(\frac{\log |x|}{t\log t}e^{- |x|^2/{(2t)}}+\frac{\left(\log \left(|x|^2/t\right)\right)^2}{|x|^2(\log t)^2}\right)
\\
\label{1249}&\leq \frac{c_4}{t}e^{- c_5|x|^2/t }+c_4 \frac{\left(\log \left(|x|^2/t\right)\right)^2}{|x|^2(\log t)^2},
\end{align}
while for $\sqrt{t}/2\leq |x|_\rho \leq \sqrt{t}$,
\begin{align}
\nonumber \int_{t/2}^t p(t-s, a^*, a^*) {\mathbb{P}}_x(\sigma_{a^*}\in ds)
&\le \sup_{s\in [t/2, t]}p_{\varepsilon, x}(s) \cdot \int_{t/2}^{t}p(t-s, a^*, a^*)ds
\\
\nonumber &\le c_6 \left(\frac{\log |x|}{t(\log t)^2}e^{- |x|^2/{(2t)}}+\frac{1}{t(\log t)^3}\right) \, \log t
\\
\label{e:5.7}&\leq \frac{c_6}{t}e^{- c_5|x|^2/t }+ \frac{c_6}{t(\log t)^2}.
\end{align}
The desired estimate now follows from \eqref{1158}-\eqref{e:5.7}.
\end{proof}
\begin{prop}\label{1253}
There exists $C_6 >0$ such that
\begin{equation*}
p(t, a^*, x)\le \frac{C_6 \log t}{t}e^{-{ x^2}/{(2t)}} \qquad \hbox{for all }t\geq 8
\hbox{ and } x\in {\mathbb{R}}_+.
\end{equation*}
\end{prop}
\begin{proof}
Starting from $x\in {\mathbb{R}}_+$, BMVD $X$ on $E$ runs like a one-dimensional Brownian motion before hitting $a^*$.
Thus by the known formula for the first passage distribution for one-dimensional Brownian motion,
\begin{equation}\label{e:5.8}
{\mathbb{P}}_x(\sigma_{a^*} \in dt)=\frac{1}{\sqrt{2\pi t^3}} x e^{-x^2/(2t)} dt
\quad \hbox{for } x\in (0, \infty).
\end{equation}
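Formula \eqref{e:5.8} is the classical first passage time density of one-dimensional Brownian motion; it follows from the reflection principle ${\mathbb{P}}_x(\sigma_{a^*}\le t)=2\, {\mathbb{P}}_0(B_t\ge x)$ by differentiating in $t$.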
This together with Propositions \ref{1125}--\ref{1139} and a change of variable $s=x^2/r$ gives
\begin{eqnarray*}
\nonumber p(t, a^*, x)&=&
\int_0^{t/2} p(t-s, a^*, a^*) {\mathbb{P}}_x(\sigma_{a^*}\in ds) + \int_{t/2}^t p(t-s, a^*, a^*) {\mathbb{P}}_x(\sigma_{a^*}\in ds)
\\
\nonumber & \leq & p(t/2, a^*, a^*) \int_0^{t/2} \frac{1}{\sqrt{2\pi s^3}} x e^{-x^2/(2s)} ds+\int_{t/2}^t
p(t-s, a^*, a^*) \frac{1}{\sqrt{2\pi s^3}} x e^{-x^2/(2s)} ds
\\ \nonumber
&\lesssim & \frac{ \log t}{t} \int_{2x^2/t}^\infty \frac{1}{\sqrt{r}} e^{-r/2} dr
+\frac{ x}{\sqrt{t^3}}e^{- x^2/t } \int_{t/2}^{t} p(t-s, a^*, a^*)ds
\\ \nonumber
&\lesssim & \frac{ \log t}{t} e^{-x^2/t}
+\frac{1}{t}e^{- x^2/(2t) } \cdot \log t \\
&\lesssim & \frac{ \log t}{t} e^{-x^2/(2t)}.
\end{eqnarray*}
\end{proof}
We are now in a position to establish the following on-diagonal upper bound estimate at $a^*$.
\begin{thm}\label{138}
There exists $C_7>0$ such that
\begin{equation*}
p(t, a^*, a^*)\le {C_7}\left( t^{-1/2} \wedge t^{-1} \right) \quad \hbox{for all } t\in (0, \infty).
\end{equation*}
\end{thm}
\begin{proof}
For $t\geq 8$, we have
\begin{align}
p(t, a^*, a^*)&=\int_E p(t/2,a^*,x)^2m_p(dx) \nonumber
\\
&=\left(\int_{{\mathbb{R}}_+}+\int_{D_0\cap \{0<|x|_\rho<1\}}+\int_{D_0\cap \{1<|x|_\rho<\sqrt{t/2}\}}
+\int_{D_0\cap \{|x|_\rho>\sqrt{t/2}\}}\right)p(t/2,a^*,x)^2m_p(dx). \label{147}
\end{align}
It follows from Proposition \ref{1253} that
\begin{equation}\label{144}
\int_{{\mathbb{R}}_+}p(t/2,a^*,x)^2m_p(dx)\lesssim \left(\frac{\log t}{t}\right)^2 p
\int_0^\infty e^{- {x^2}/{(2t)}} dx \asymp \frac{(\log t)^2}{t^{3/2}}.
\end{equation}
By Proposition \ref{offdiagUBE},
\begin{eqnarray}
\nonumber \int_{D_0\cap \{0<|x|_\rho<1\}}p(t/2,a^*,x)^2m_p(dx) &\le& \left(\sup_{x\in D_0: {0<|x|_\rho<1}}
p(t/2,a^*,x)\right)^2m_p(D_0\cap \{0<|x|_\rho<1\})
\\
&\lesssim& \left(\frac{1}{\sqrt{t}}\right)^2= \frac{1}{t}. \label{e:5.11}
\end{eqnarray}
In view of Proposition \ref{157},
\begin{eqnarray}
&& \int_{D_0\cap \{|x|_\rho>\sqrt{t/2}\}}p(t/2,a^*,x)^2m_p(dx) \nonumber \\
& \le & \int_{ D_0\cap \{|x|_\rho>\sqrt{t/2}\}}
\left(\frac{c_1}{t}e^{- {c_2|x|^2}/{t}}\right)^2m_p(dx)
+\int_{ D_0\cap \{|x|_\rho>\sqrt{t}\}}c_1\left(\frac{\left(\log \left(|x|^2/t \right)\right)^2}{|x|^2 \left( \log t\right)^2}\right)^2m_p(dx)
\nonumber \\
&& + \int_{ D_0\cap \{\sqrt{t/2} \leq |x|_\rho\leq \sqrt{t}\}}c_1\left(\frac{1}{t (\log t)^2} \right)^2m_p(dx) . \label{616201}
\end{eqnarray}
Using polar coordinates,
\begin{align}\label{206}
\int_{ D_0\cap \{|x|_\rho>\sqrt{t/2}\}}\left(\frac{c_1}{t}e^{- {c_2|x|^2}/{t}}\right)^2m_p(dx)
= c_3 \int_{\sqrt{t/2}+\varepsilon }^\infty \frac{r}{t^2}e^{- {c_2r^2}/{t}}dr
\leq \frac{c_4}{t},
\end{align}
while
\begin{eqnarray}
&& \int_{ D_0\cap \{|x|_\rho>\sqrt{t}\}} \left(\frac{\left(\log \left( |x|^2/t \right)\right)^2}{|x|^2 \left( \log t\right)^2}\right)^2m_p(dx) = 2 \pi \int_{\sqrt{t}+\varepsilon }^\infty \frac{r\left(\log (r^2/t)\right)^4}{r^4(\log t)^4}dr
\nonumber \\
&\stackrel{u=r/\sqrt{t}}{\leq }& 2\pi\int_{1}^\infty \frac{(\log u)^4}{u^3t^{3/2}(\log t)^4}\,\sqrt{t}\,du
= \frac{2\pi} {t(\log t)^4} \int_{1}^\infty \frac{(\log u)^4}{u^3}\,du = \frac{c_5}{t(\log t)^4}, \label{212}
\end{eqnarray}
and
\begin{equation}\label{e:5.15}
\int_{ D_0\cap \{\sqrt{t/2} \leq |x|_\rho\leq \sqrt{t}\}} \left(\frac{1}{t (\log t)^2} \right)^2m_p(dx)
\asymp \frac{1}{t (\log t)^4} \lesssim \frac1{t} .
\end{equation}
Hence it follows from \eqref{616201}--\eqref{e:5.15} that
\begin{equation}\label{215}
\int_{ D_0\cap \{|x|_\rho>\sqrt{t/2}\}}p(t/2,a^*,x)^2m_p(dx) \le \frac{c_6}{t}.
\end{equation}
By Proposition \ref{313} and using polar coordinates,
\begin{eqnarray}
&& \int_{D_0\cap \{1<|x|_\rho<\sqrt{t/2}\}}p(t/2,a^*,x)^2m_p(dx) \nonumber \\
&\lesssim & \frac{1}{t^2}\int_{x\in D_0\cap \{1<|x|_\rho<\sqrt{t/2}\}}\left(\log \left(\frac{\sqrt{t}}{|x|}\right)\right)^2m_p(dx) \nonumber \\
&=& \frac{2\pi}{t^2}\int_{1+\varepsilon}^{\sqrt{t/2}+\varepsilon}r\left(\log \left(\frac{\sqrt{t}}{r}\right)\right)^2dr \nonumber \\
&\stackrel{u=r/\sqrt{t}}{=}& \frac{2\pi }{t} \int_{(1+\varepsilon)/\sqrt{t}}^{(\sqrt{t/2}+\varepsilon)/\sqrt{t}} u (\log u)^2 du \leq \frac{c_7}{t}. \label{353}
\end{eqnarray}
Combining \eqref{144},\eqref{e:5.11}, \eqref{215} and \eqref{353}, we conclude that
$p(t, a^*, a^*)\le c_8/t$ for $t\geq 8$.
On the other hand, taking $x=y=0=a^*$ in Theorem \ref{T:4.5} yields that
$ p(t, a^*, a^*)\le c_9 t^{-1/2}$ for $t\in (0, 8]$. This completes the proof of the theorem.
\end{proof}
\begin{thm}\label{T:5.10} There is a constant $C_8\geq 1$ so that
\begin{equation*}
C_8^{-1} \left(t^{-1/2} \wedge t^{-1} \right) \leq p(t, a^*, a^*)
\leq C_8 \left(t^{-1/2} \wedge t^{-1} \right) \quad \hbox{for all } t\in (0, \infty).
\end{equation*}
\end{thm}
\begin{proof} In view of Theorem \ref{138},
it remains to establish the lower bound estimate.
By the Cauchy--Schwarz inequality,
for $M\geq 1$ to be determined later,
\begin{eqnarray} \label{e:5.18}
p(t, a^*, a^*) &=& \int_E p(t/2, a^*, x)^2 m_p (dx)
\geq \int_{\{x\in E: |x|_\rho \leq M\sqrt{t}\}}
p(t/2, a^*, x)^2 m_p (dx)
\nonumber \\
&\geq& \frac1{m_p ( \{x\in E: |x|_\rho \leq M\sqrt{t}\})} \left(
\int_{\{x\in E: |x|_\rho \leq M\sqrt{t}\}} p(t/2, a^*, x) m_p (dx) \right)^2 \nonumber \\
& \gtrsim &\left( t^{-1/2} \wedge t^{-1}\right) {\mathbb{P}}_{a^*} (|X_t|_\rho \leq M\sqrt{t})^2.
\end{eqnarray}
We claim that by taking $M$ large enough,
${\mathbb{P}}_{a^*} (|X_t|_\rho \leq M\sqrt{t})\geq 1/2$ for every $t>0$, which will then give the desired
lower bound estimate on $p(t, a^*, a^*)$. Recall that the signed radial process $Y=u(X)$ of $X$ from
\eqref{848} satisfies the SDE \eqref{YsymmSDE}.
For any $a>0$ and $\delta \in (0, \varepsilon)$, let $Z^{\delta, a}$ and $\widetilde Z^{\delta, -a}$
be the pathwise unique solution of the following SDEs; see \cite[Theorem 4.3]{BC}:
\begin{eqnarray}
Z^{\delta, a}_t &=& a +B_t + \int_0^t \frac{1}{2(Z^{\delta, a}_s + \delta)} {\bf 1}_{\{ Z^{\delta, a}_s>0\}} ds
+ \widehat L^0_t ( Z^{\delta, a}), \label{e:5.19} \\
\widetilde Z^{\delta, -a}_t &=& -a +B_t +\int_0^t \frac{1}{2(\widetilde Z^{\delta, -a}_s -\delta)}
{\bf 1}_{\{ \widetilde Z^{\delta, -a}_s< 0\}} ds
- \widehat L^0_t (\widetilde Z^{\delta, -a}), \label{e:5.20}
\end{eqnarray}
where $B$ is the Brownian motion in \eqref{YsymmSDE}, and
$\widehat L^0 ( Z^{\delta, a})$, $\widehat L^0 ( \widetilde Z^{\delta, -a})$
are the symmetric local times of $ Z^{\delta, a}$ and $\widetilde Z^{\delta, -a}$
at 0, respectively.
Denote by $Y^a$ and $Y^{-a}$ the pathwise solutions of
\eqref{YsymmSDE} with $Y^a_0=a$ and $Y^{-a}_0=-a$, respectively.
By the comparison principle from \cite[Theorem 4.6]{BC}, we have
with probability one that
$Y^a_t \leq Z^{\delta, a}_t$ for all $t\geq 0$ and
$Y^{-a}_t \geq \widetilde Z^{\delta, -a}_t$ for all $t\geq 0$.
On the other hand, there are unique solutions to
\begin{eqnarray*}
Z^a_t = a + B_t+ \int_0^t \frac1{2Z^a_s} ds, \qquad
Z^{-a}_t = -a + B_t+\int_0^t \frac1{2Z^{-a}_s} ds .
\end{eqnarray*}
In fact, $Z^a$ and $-Z^{-a}$ are both two-dimensional Bessel processes
on $(0, \infty)$ starting from $a$. They have infinite lifetimes
and never hit $0$. By \cite[Theorem 4.6]{BC} again, the diffusion process
$Z^{\delta, a}$ is decreasing in $\delta$, and $\widetilde Z^{\delta, -a}$ is increasing in $\delta$.
It is easy to see from the above facts that
$\lim_{\delta \to 0} Z^{\delta, a}_t=Z^a_t$
and $\lim_{\delta \to 0} \widetilde Z^{\delta, -a}_t= Z^{-a}_t$.
Consequently, with probability one,
$$
Y^a_t \leq Z^a_t \quad \hbox{and} \quad Y^{-a}_t \geq Z^{-a}_t
\qquad \hbox{for every } t\geq 0.
$$
In particular, we have for every $t>0$,
$ {\mathbb{P}} (Y^a_t \geq M\sqrt{t}) \leq {\mathbb{P}} (Z^a_t \geq M\sqrt{t})$ and
$$
{\mathbb{P}} (Y^{-a}_t \leq -M\sqrt{t})
\leq {\mathbb{P}} (Z^{-a}_t \leq - M\sqrt{t})= {\mathbb{P}} (Z^a_t \geq M\sqrt{t}).
$$
Let $W$ be two-dimensional Brownian motion. Then we have from the above that for every $t>0$,
$$
{\mathbb{P}}_a (Y_t \geq M\sqrt{t}) + {\mathbb{P}}_{-a} (Y_t \leq -M\sqrt{t})
\leq 2 {\mathbb{P}}_{(a, 0)} (|W_t|\geq M\sqrt{t}).
$$
Letting $a\to 0$ yields, by the Brownian scaling property, that
$$
{\mathbb{P}}_{a^*}(|X_t|_\rho \geq M\sqrt{t})=
{\mathbb{P}}_0 (|Y_t| \geq M\sqrt{t}) \leq 2 {\mathbb{P}}_0 (|W_t|\geq M\sqrt{t})
=2 {\mathbb{P}}_0 (|W_1|\geq M ),
$$
which is less than 1/2 by choosing $M$ large. This completes the proof of the theorem.
\end{proof}
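We remark that the constant $M$ in the above proof can be made explicit: since $|W_1|$ has the Rayleigh distribution, $2\,{\mathbb{P}}_0(|W_1|\geq M)=2e^{-M^2/2}\leq 1/2$ for every $M\geq 2$.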
In the next two propositions, we use the two-sided estimate for $p(t, a^*, a^*)$ as well as Markov property of $X$
to get two-sided bounds for $p(t, a^*, x)$ for different regions of $x$.
We first record an elementary lemma that will be used later.
\begin{lem}\label{L:5.11} For every $x>0$,
$$
\frac{1}{1+x}\, e^{-x^2/2} \leq \int_x^\infty e^{-y^2/2} \, dy
\leq \frac{e\pi}{1+x} e^{-x^2/2}.
$$
\end{lem}
\begin{proof} Define $\phi (x) = \int_x^\infty e^{-y^2/2} dy -\frac{1}{1+x } e^{-x^2/2}$. Then
$ \phi ' (x)= - \frac{x}{(1+x)^2} e^{-x^2/2}<0$. Since $\lim_{x\to \infty} \phi (x)=0$, we have
$\phi (x)>0$ for every $x>0$. This establishes the lower bound estimate of the lemma.
For the upper bound, note that for $x\in (0, 1)$,
$$
\int_x^\infty e^{-y^2/2} \, dy\leq \frac12 \int_{-\infty}^\infty e^{-y^2/2} \, dy =\sqrt{\frac{\pi}{2}} \leq \sqrt{e \pi /2} \, e^{-x^2/2},
$$
while for every $x>0$, using a change of variable $y=x+z$,
$$
\int_x^\infty e^{-y^2/2} \, dy\leq e^{-x^2/2} \int_0^\infty e^{-xz} \, dz =x^{-1} e^{-x^2/2}.
$$
This establishes the upper bound estimate of the lemma.
\end{proof}
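As a numerical illustration of Lemma \ref{L:5.11}, at $x=1$ the lower bound, the integral and the upper bound are approximately $0.30$, $0.40$ and $2.59$, respectively.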
\begin{prop}\label{P:5.12}
There exist constants $C_i>0$, $9\le i\le 10$, so that for all $x\in {\mathbb{R}}_+$
and $t\geq 2$,
\begin{equation*}
\frac{C_9}t \left(1+ \frac{|x| \log t }{\sqrt{t}} \right)
e^{- {2x^2}/{t}} \le p(t, a^*, x)\le \frac{C_{10}}t \left(1+ \frac{|x|\log t }{\sqrt{t}} \right) e^{- {x^2}/{2t}}.
\end{equation*}
\end{prop}
\begin{proof} Observe that for $x\in {\mathbb{R}}_+$, we have by \eqref{e:5.8},
\begin{align}
\nonumber
p(t, a^*, x)&=p(t, x, a^*)=\int_0^t p(t-s, a^*, a^*)
{\mathbb{P}}_x(\sigma_{a^*}\in ds)
\\
&=\int_0^{t/2} p(t-s, a^*, a^*) \frac{x}{\sqrt{2\pi s^3}}
e^{-{x^2}/{2s}} ds +\int_{t/2}^t p(t-s, a^*, a^*)
\frac{x}{\sqrt{2\pi s^3}}e^{-{x^2}/{2s}}ds.
\label{7161042}
\end{align}
By Theorem \ref{T:5.10}, Lemma \ref{L:5.11} and a change of variable
$r=x/\sqrt{s}$,
\begin{eqnarray*}
\int_0^{t/2} p(t-s, a^*, a^*) \frac{x}{\sqrt{2\pi s^3}}e^{-{x^2}/{2s}} ds
&\asymp& \frac1t \int_0^{t/2}
\frac{x}{s^{3/2}}e^{-{x^2}/{2s}} \,ds
= \frac2{t} \int_{x\sqrt{2/t}}^\infty e^{-r^2/2}\, dr \\
&\asymp& \frac1t \frac1{1 +(x/\sqrt{t})} e^{-x^2/t},
\end{eqnarray*}
while
\begin{eqnarray*}
\frac{x}{\sqrt{2\pi t^3}}e^{-{x^2}/{t}}\int_{0}^{t/2} p(r, a^*, a^*) dr
&\le &
\int_{t/2}^{t} p(t-s, a^*, a^*) \frac{x}{\sqrt{2\pi s^3}}e^{-{x^2}/{2s}}\,ds \\
&\le& \frac{2x}{\sqrt{\pi t^3}}e^{-{x^2}/{2t}} \int_{0}^{t/2} p(r, a^*, a^*) dr .
\end{eqnarray*}
This, together with Theorem \ref{T:5.10} and \eqref{7161042}, yields the desired result.
\end{proof}
\begin{prop}\label{P:5.13}
There exist $C_i>0$, $11\le i\le 14$ such that
\begin{equation*}
\frac{C_{11}}{t}e^{-{C_{12}|x|_\rho^2}/{t}} \le p(t, a^*, x)\le \frac{C_{13}}{t}e^{-{C_{14}|x|_\rho^2}/{t}} \qquad \text{for all } t \geq 1 \hbox{ and } x\in D_0.
\end{equation*}
\end{prop}
\begin{proof}
When $t\in [1, 8]$, the estimates follow from Theorem \ref{T:4.7}. So it remains to establish
the estimates for $t>8$.
We do this by considering three cases.
Note that $p(t, a^*, x)=p(t, x, a^*)$.
\noindent {\it Case 1. } $1\le|x|_\rho<2\sqrt{t}$.
We have by Theorem \ref{T:5.10} and Proposition \ref{P:5.1},
\begin{align*}
p(t, x, a^*)&= \int_0^{t/2}p(t-s, a^*, a^*){\mathbb{P}}_x(\sigma_{a^*}\in ds)
+\int_{t/2}^t p(t-s, a^*, a^*) {\mathbb{P}}_{x}(\sigma_{a^*}\in ds)
\\
&\asymp \frac1{t} {\mathbb{P}}_x(\sigma_{a^*}\le t/2)+\frac{\log |x|}{t(\log t)^2}\int_{0}^{t/2}p(s,a^*,a^*)ds
\\
& {\asymp} \frac1t\left(1-\frac{\log |x|}{\log \sqrt{t}}\right)+\frac{\log |x|}{t (\log t)^2}
\int_0^{t/2} \left( \frac1{\sqrt{s}} \wedge \frac1s\right) ds
\\
&\asymp \frac{1}{t}\left(1-\frac{\log |x|}{\log \sqrt{t}}\right)+\frac{\log |x|}{t\log t} \asymp \frac{1}{t}.
\end{align*}
\noindent {\it Case 2. } $|x|_\rho\ge2\sqrt{t}$. In the following computation, $B_\rho(x,r):=\{y\in E: \rho(x, y)< r\}$, and $B_e(x,r) :=\{y\in E: |y-x|<r\}$.
We denote by $\{W; {\mathbb{P}}^0_x, x\in {\mathbb{R}}^2\}$
two-dimensional Brownian motion and $p^0(t, x, y)=(2\pi t)^{-1} \exp (-|x-y|^2/2t)$ its transition density.
Since
\begin{equation*}
p(t,x,a^*)={\mathbb{E}}_x \left[ p(t-\sigma_{B_\rho(a^*,2)} , X_{\sigma_{B_\rho(a^*,2)}}, a^*);
\sigma_{B_\rho(a^*,2)}<t \right],
\end{equation*}
it follows from Case 1, Theorem \ref{T:4.5} and Theorem \ref{T:4.7} that there is $c_1 \geq 1$ so that
\begin{eqnarray*}
p(t,x,a^*) &\lesssim & {\mathbb{E}}_x \left[ \left( t-\sigma_{B_\rho(a^*,2)} \right)^{-1} \, e^{-\frac{(2+2\varepsilon)^2}{2c_1(t-\sigma_{B_\rho(a^*,2)})}};
\sigma_{B_\rho(a^*,2)}<t \right] \\
&= & {\mathbb{E}}^0_x \left[ \left(t-\sigma_{B_e(0,2+\varepsilon)} \right)^{-1} \, e^{-\frac{ (2+2\varepsilon )^2 }{2c_1 (t-\sigma_{B_e(0,2+\varepsilon)})}};
\sigma_{B_e(0,2+\varepsilon)}<t \right] \\
&\lesssim & {\mathbb{E}}^0_x \left[ \left(c_1t-\sigma_{B_e(0,2+\varepsilon)} \right)^{-1} \, e^{-\frac{ (2+ \varepsilon )^2 }{2 (c_1 t-\sigma_{B_e(0,2+\varepsilon)})}};
\sigma_{B_e(0,2+\varepsilon)}<t \right] \\
&\leq & {\mathbb{E}}^0_x \left[ \left(c_1 t-\sigma_{B_e(0,2+\varepsilon)} \right)^{-1} \, e^{-\frac{ (2+\varepsilon )^2 }{2 (c_1 t-\sigma_{B_e(0,2+\varepsilon)})}};
\sigma_{B_e(0,2+\varepsilon)}< c_1 t \right] \\
&\asymp & {\mathbb{E}}^0_x \left[ p^0(c_1 t-\sigma_{B_e(0,2+\varepsilon)}, W_{\sigma_{B_e(0,2+\varepsilon)}}, 0);
\sigma_{B_e(0,2+\varepsilon)}< c_1 t \right] \\
&=& p^0(c_1 t, x, 0) =(2\pi c_1 t)^{-1} \exp (-|x|^2/2c_1t).
\end{eqnarray*}
Similarly, for the lower bound estimate,
it follows from Case 1, Theorem \ref{T:4.5} and Theorem \ref{T:4.7} that there is $c_2 \in (0, 1]$ so that
\begin{eqnarray*}
p(t,x,a^*) &\gtrsim & {\mathbb{E}}_x \left[ \left( t-\sigma_{B_\rho(a^*,2)} \right)^{-1}\, e^{-\frac{2^2}{2c_2(t-\sigma_{B_\rho(a^*,2)})}};
\sigma_{B_\rho(a^*,2)}<t \right] \\
&\geq & {\mathbb{E}}^0_x \left[ \left(t-\sigma_{B_e (0,2+\varepsilon)} \right)^{-1} \, e^{-\frac{ 2^2 }{2c_2 (t-\sigma_{B_e (0,2+\varepsilon)})}};
\sigma_{B_e(0,2+\varepsilon)}<c_2 t \right] \\
& \gtrsim & {\mathbb{E}}^0_x \left[ \left(c_2t-\sigma_{B_e(0,2+\varepsilon)} \right)^{-1} \, e^{-\frac{ (2+\varepsilon )^2 }{2 (c_2 t-\sigma_{B_e(0,2+\varepsilon)})}};
\sigma_{B_e(0,2+\varepsilon)}<c_2 t \right] \\
&\asymp & {\mathbb{E}}^0_x \left[ p^0(c_2 t-\sigma_{B_e(0,2+\varepsilon)}, W_{\sigma_{B_e(0,2+\varepsilon)}}, 0);
\sigma_{B_e(0,2+\varepsilon)}< c_2 t \right] \\
&=& p^0(c_2 t, x, 0) =(2\pi c_2 t)^{-1} \exp (-|x|^2/2c_2t).
\end{eqnarray*}
Realizing that $|x|_\rho>2\sqrt{t}>4\sqrt{2}$ implies that $|x|_\rho\asymp |x|$, we get the desired estimates
in this case.
\noindent {\it Case 3. } $0<|x|_\rho<1$. Note that by Proposition \ref{P:3.2},
\begin{equation}\label{521408}
\int_{y\in D_0\cap B_\rho(a^*, 2)}p(t/2, a^*,y)p(t/2, y, x)m_p(dy) \le \int_{D_0\cap B_\rho(a^*, 2)}\left(\frac{c_1}{\sqrt{t}}\right)^2m_p(dy)
\asymp \frac{1}{t},
\end{equation}
while by Cases 1 and 2 above,
\begin{equation}\label{521409}
\int_{D_0\cap B_\rho(a^*, 2)^c}p(t/2, x,y)p(t/2, a^*, y)m_p(dy)
\le \sup_{y\in D_0\cap B_\rho(a^*, 2)^c}p(t/2,a^*,y)
\lesssim \frac1t .
\end{equation}
On the other hand, by Theorem \ref{T:5.10} again,
\begin{align}
\nonumber \int_{{\mathbb{R}}_+}p(t/2, a^*, y)p(t/2, y,x)m_p(dy)
&={\mathbb{E}}_x \left[ p(t/2, X_{t/2}, a^*); X_{t/2} \in {\mathbb{R}}_+ \right]
\\
\nonumber &={\mathbb{E}}_x \left[ p(t/2, X_{t/2}, a^*); \sigma_{a^*} <t/2 \hbox{ and } X_{t/2} \in {\mathbb{R}}_+ \right]
\\
\nonumber &={\mathbb{E}}_x \left[ {\mathbb{E}}_{a^*} [p(t/2, X_{t/2-s}, a^*) ] |_{s=\sigma_{a^*}}; \sigma_{a^*} <t/2 \hbox{ and } X_{t/2} \in {\mathbb{R}}_+ \right]
\\
\nonumber & = {\mathbb{E}}_x \left[ p(t-\sigma_{a^*}, a^*, a^*) ; \sigma_{a^*} <t/2
\hbox{ and } X_{t/2}\in {\mathbb{R}}_+ \right]
\\
&\asymp \frac{1}{t}\, {\mathbb{P}}_x\left(\sigma_{a^*}\le t/2
\hbox{ and } X_{t/2}\in {\mathbb{R}}_+ \right) \leq \frac{1}{t}. \label{521410}
\end{align}
The estimates \eqref{521408}, \eqref{521409} and \eqref{521410} imply that
\begin{align*}
\nonumber p(t,a^*, x) = & \int_{D_0\cap B_\rho(a^*, 2)}p(t/2, a^*, y)p(t/2, y,x)m_p(dy)
+\int_{D_0\cap B_\rho^c(a^*, 2)}p(t/2, a^*, y)p(t/2, y,x)m_p(dy) \\
&+\int_{{\mathbb{R}}_+}p(t/2, a^*, y)p(t/2, y,x)m_p(dy)
\lesssim \frac1t.
\end{align*}
On the other hand, there is a constant $c_3>0$ so that ${\mathbb{P}}_x (\sigma_{a^*}\leq 1)\geq c_3$
for all $x\in D_0$ with $|x|_\rho \leq 1$. Hence we have by Theorem \ref{T:5.10} that
for $t\geq 2$ and $x\in D_0$ with $|x|_\rho \leq 1$,
$$
p(t, a^*, x)=p(t, x, a^*)\geq \int_0^1 p(t-s, a^*, a^*){\mathbb{P}}_x (\sigma_{a^*}\in ds)
\gtrsim \frac1t {\mathbb{P}}_x (\sigma_{a^*}\leq 1) \gtrsim \frac1t.
$$
In conclusion, we have $p(t, a^*, x)\asymp \frac1t$ for $t> 8$ and $x\in D_0$ with $|x|_\rho \leq 1$.
This completes the proof of the proposition.
\end{proof}
We are now in a position to derive estimates on $p(t,x,y)$ for $(x,y)\in D_0\times D_0$ and $t\in [8, \infty)$, by
using the two-sided estimates of $p(t, x, a^*)$ and the Markov property of $X$.
\begin{thm} \label{154}
There exist constants $C_i>0$, $15 \le i \le 18$, such that the following estimate holds:
\begin{equation*}
\frac{C_{15}}{t}e^{- {C_{16}\rho(x,y)^2}/{t}} \le p(t,x,y)\le \frac{C_{17}}{t}e^{- {C_{18}\rho(x,y)^2}/{t}},
\quad (t,x,y)\in [8, \infty)\times D_0\times D_0.
\end{equation*}
\end{thm}
\begin{proof}
As before, denote by $\{W; {\mathbb{P}}^0_x, x\in {\mathbb{R}}^2\}$
two-dimensional Brownian motion and $p^0(t, x, y)=(2\pi t)^{-1} \exp (-|x-y|^2 /2t)$ its transition density.
We first note that,
as a special case of \cite[Theorem 1.1(a)]{Z3},
there are constants $c_1>c_2>0$ so that
for $t\geq 1$ and $x, y\in D_0$,
$$
\left( |x|_\rho \wedge 1 \right) \left(|y|_\rho \wedge 1 \right) t^{-1} e^{-c_1 |x-y|^2 /t}
\lesssim p^0_{D_0} (t, x, y) \lesssim
\left( |x|_\rho \wedge 1 \right) \left(|y|_\rho \wedge 1 \right)
t^{-1} e^{-c_2 |x-y|^2 /t} .
$$
It follows that there is $c_3\in (0, 1]$ so that
\begin{equation}\label{e:5.25}
p^0_{D_0} (t, x, y) \gtrsim p^0_{D_0} (c_3t, x, y) \quad \hbox{for every } t\geq 1 \hbox{ and } x, y\in D_0.
\end{equation}
We will prove the theorem by considering two different cases.
\\
{\it Case 1. } $|x|_\rho+|y|_\rho >2$.
Without loss of generality, we assume $|y|_\rho>1$.
In this case, it is not hard to verify that
$\rho(x,y)\asymp |x-y|$.
Recall from \eqref{e:1.6}-\eqref{e:1.7} that for $x, y\in D_0$,
$$
\overline{p}_{D_0}(t,x,y):=p(t, x, y)- p_{D_0} (t, x, y)= {\mathbb{E}}_x \left[ p(t-\sigma_{a^*}, a^*, y); \sigma_{a^*}<t \right].
$$
By Proposition \ref{P:5.13} and the assumption that $\varepsilon \in (0, 1/4]$, there is a constant $c_4\geq 1$ so that
for every $x, y\in D_0$ with $|y|_\rho >1$,
\begin{eqnarray}
\overline{p}_{D_0}(t,x,y)
&\lesssim& \int_0^t \frac{1}{t-s}e^{-\frac{ (|y|_\rho +3\varepsilon)^2}{2c_4(t-s)}} \, {\mathbb{P}}_x\left(\sigma_{a^*}\in ds\right)
\nonumber \\
&\leq & \int_0^{c_4t} \frac{1}{c_4t-s}e^{-\frac{ (|y|_\rho +2\varepsilon)^2}{2(c_4t-s)}} \, {\mathbb{P}}^0_x\left(\sigma_{B_e(0, \varepsilon)}\in ds\right)
\nonumber \\
&\lesssim & {\mathbb{E}}_x^0 \left[ p^0(c_4t-\sigma_{B_e(0, \varepsilon)}, W_{\sigma_{B_e(0, \varepsilon)}}, y); \sigma_{B_e(0, \varepsilon)}<c_4 t \right]
\nonumber \\
&\leq & p^0 (c_4 t, x, y) . \label{e:5.26}
\end{eqnarray}
Similarly, there is a constant $c_5 \in (0, c_3]$ so that
\begin{eqnarray}
\overline{p}_{D_0}(t,x,y)
&\gtrsim & \int_0^t \frac{1}{t-s}e^{-\frac{ (|y|_\rho -\varepsilon)^2}{2c_5(t-s)}} \, {\mathbb{P}}_x\left(\sigma_{a^*}\in ds\right) \nonumber \\
&\geq & \int_0^{c_5t} \frac{1}{c_5t-s}e^{-\frac{ |y|_\rho^2}{2(c_5t-s)}} \, {\mathbb{P}}^0_x\left(\sigma_{B_e(0, \varepsilon)}\in ds\right) \nonumber \\
&\gtrsim & {\mathbb{E}}_x^0 \left[ p^0(c_5t-\sigma_{B_e(0, \varepsilon)}, W_{\sigma_{B_e(0, \varepsilon)}}, y); \sigma_{B_e(0, \varepsilon)}<c_5t \right] \nonumber \\
&= & \overline p^0_{D_0} (c_5t, x, y) . \label{e:5.27}
\end{eqnarray}
Since $p_{D_0}(t, x, y)=p^0_{D_0}(t, x, y)$ and $c_4\geq 1$, we have from \eqref{e:5.26} that
$$ p(t, x, y)= p_{D_0}(t, x, y)+ \overline p_{D_0}(t, x, y) \lesssim p^0(t, x, y)+p^0(c_4t, x, y)\lesssim p^0(c_4t, x, y)
\lesssim t^{-1} e^{-\rho (x, y)^2/2c_4t} .
$$
On the other hand, we have by \eqref{e:5.25} and \eqref{e:5.27} that for every $t\geq 1$ and $x, y\in D_0$ satisfying $|x|_\rho+|y|_\rho>2$,
$$
p(t, x, y)
\gtrsim p^0_{D_0} (c_5 t, x, y) + \overline p^0_{D_0}(c_5 t, x, y)
=p^0(c_5t, x, y) \gtrsim t^{-1} e^{-|x-y|^2/2c_5t} \gtrsim t^{-1} e^{-\rho (x, y)^2/c_5t} ,
$$
where the last ``$\gtrsim$" is due to the fact that $|x-y|/\sqrt{2}\le \rho(x,y)$, which can be verified easily from the assumptions that $|x|_\rho+|y|_\rho>2$ and $\varepsilon\le 1/4$. This establishes the desired two-sided estimates in this case.
\noindent {\it Case 2. } $|x|_\rho+|y|_\rho \le 2$.
In this case, it suffices to show that $p(t, x, y)\asymp t^{-1}$ for $t\geq 8$.
The proof is similar to that of Case $3$ of Proposition \ref{P:5.13}.
By Proposition \ref{P:3.2}, for $t\geq 8$,
\begin{equation}\label{e:5.28}
\int_{z\in D_0\cap B_\rho(a^*, 2)}p(t/2, x,z)p(t/2, z, y)m_p(dz) \le \int_{D_0\cap B_\rho(a^*, 2)}\left(\frac{c_6}{\sqrt{t}}\right)^2m_p(dz)
\asymp \frac{1}{t},
\end{equation}
while by Case 1,
\begin{equation} \label{e:5.29}
\int_{D_0\cap B_\rho(a^*, 2)^c}p(t/2, x,z)p(t/2, z, y)m_p(dz) \le \sup_{z\in D_0\cap B_\rho(a^*, 2)^c}p(t/2, y,z)
\lesssim \frac1t.
\end{equation}
On the other hand, by Proposition \ref{P:5.13}, for $t\geq 8$,
\begin{align}
\nonumber \int_{{\mathbb{R}}_+}p(t/2, x, z)p(t/2, z, y)m_p(dz)
&={\mathbb{E}}_x \left[ p(t/2, X_{t/2}, y); X_{t/2} \in {\mathbb{R}}_+ \right]
\\
\nonumber &={\mathbb{E}}_x \left[ p(t/2, X_{t/2}, y); \sigma_{a^*} <t/2 \hbox{ and } X_{t/2} \in {\mathbb{R}}_+ \right]
\\
\nonumber &={\mathbb{E}}_x \left[ {\mathbb{E}}_{a^*} [p(t/2, X_{t/2-s}, y) ] |_{s=\sigma_{a^*}}; \sigma_{a^*} <t/2 \hbox{ and } X_{t/2} \in {\mathbb{R}}_+ \right]
\\
\nonumber & = {\mathbb{E}}_x \left[ p(t-\sigma_{a^*}, a^*, y) ; \sigma_{a^*} <t/2
\hbox{ and } X_{t/2}\in {\mathbb{R}}_+ \right]
\\
&\asymp \frac{1}{t}\, {\mathbb{P}}_x\left(\sigma_{a^*}\le t/2
\hbox{ and } X_{t/2}\in {\mathbb{R}}_+ \right) \leq \frac{1}{t}. \label{e:5.30}
\end{align}
The estimates \eqref{e:5.28}-\eqref{e:5.30} imply that
\begin{align*}
\nonumber p(t,x, y) = & \int_{D_0\cap B_\rho(a^*, 2)}p(t/2, x, z)p(t/2, z,y)m_p(dz)
+\int_{D_0\cap B_\rho^c(a^*, 2)}p(t/2, x, z)p(t/2, z,y)m_p(dz) \\
&+\int_{{\mathbb{R}}_+}p(t/2, x, z)p(t/2, z,y)m_p(dz) \\
\lesssim & \frac1t.
\end{align*}
On the other hand, there is a constant $c_7>0$ so that ${\mathbb{P}}_x (\sigma_{a^*}\leq 1)\geq c_7$
for all $x\in D_0$ with $|x|_\rho \leq 2$. Hence we have by Proposition \ref{P:5.13} that
for $t\geq 2$ and $x, y\in D_0$ with $|x|_\rho+|y|_\rho \leq 2$,
$$
p(t, x, y) \geq \int_0^1 p(t-s, a^*, y){\mathbb{P}}_x (\sigma_{a^*}\in ds)
\gtrsim \frac1t {\mathbb{P}}_x (\sigma_{a^*}\leq 1) \gtrsim \frac1t.
$$
Therefore we have $p(t, x, y)\asymp \frac1t$ for $t\geq 8$ and $x, y\in D_0$ with $|x|_\rho +|y|_\rho \leq 2$.
This completes the proof of the theorem.
\end{proof}
\begin{thm}\label{T:5.15}
There exist constants $C_i>0$, $19 \le i \le 26$, such that the following estimates hold for $(t,x,y)\in [4, \infty)\times {\mathbb{R}}_+\times D_0$:
when $|y|_\rho<1$,
\begin{equation}\label{1233}
\frac{C_{19} }{t} \left( 1 + \frac{|x| \log t}{\sqrt{t}} \right)
e^{- {C_{20}\rho(x,y)^2}/{t}} \le p(t,x,y)\le \frac{C_{21} }{t} \left( 1 + \frac{|x| \log t}{\sqrt{t}} \right)
e^{- {C_{22}\rho(x,y)^2}/{t}};
\end{equation}
while for $|y|_\rho\ge 1$,
\begin{eqnarray}
&& \frac{C_{23}}{t}
\left( 1 + \frac{|x|}{\sqrt{t}} \log \left( 1+\frac{\sqrt{t}}{|y|_\rho} \right) \right)
e^{- {C_{24}\rho(x,y)^2}/{t}} \nonumber \\
&\le & p(t,x,y) \le \frac{C_{25}}{t}\left( 1 + \frac{|x|}{\sqrt{t}} \log \left( 1+\frac{\sqrt{t}}{|y|_\rho} \right) \right)
e^{- {C_{26}\rho(x,y)^2}/{t}}.
\label{e:5.32}
\end{eqnarray}
\end{thm}
\begin{proof}
First, note that by Proposition \ref{P:5.13}
and Theorem \ref{T:4.7},
\begin{equation}\label{e:5.33}
\frac{1}{t}e^{-{c_1|y|_\rho^2}/{t}}\lesssim p(t,x,y)\lesssim \frac{1}{t}e^{-{c_2|y|_\rho^2}/{t}} \quad \hbox{for }
t \geq 1 \hbox{ and } y\in D_0
\end{equation}
and
\begin{equation}\label{e:5.34}
\frac{1}{\sqrt{t}}e^{- {c_3|y|_\rho^2}/{t}} \lesssim p(t,a^*,y)\lesssim \frac{1}{\sqrt{t}}e^{- {c_4|y|_\rho^2}/{t}} \quad
\hbox{for } t\leq 1 \hbox{ and } y\in D_0 \hbox{ with } |y|_\rho \leq 1.
\end{equation}
By \eqref{e:5.8},
\begin{equation}\label{e:5.35}
p(t,x,y) = \int_0^tp(t-s,a^*,y) {\mathbb{P}}_x(\sigma_{a^*}\in ds)
=\int_0^t p(t-s,a^*,y) \frac{|x|}{\sqrt{2\pi s^3}} e^{-x^2/(2s)} ds .
\end{equation}
It follows from \eqref{e:5.33} and Lemma \ref{L:5.11} that for every $y\in D_0$ and $t\geq 4$,
\begin{eqnarray}
&& \int_0^{t/2} p(t-s,a^*,y) \frac{ |x|}{\sqrt{2\pi s^3}} e^{-x^2/(2s)} ds
\lesssim - \frac{1}{t}e^{-{c_{2}|y|_\rho^2}/{t}}\int_{s=0}^{t/2}e^{-\frac{ |x|^2}{2s}}d\left(\frac{|x|}{\sqrt{s}}\right)
\nonumber \\
&\lesssim & \frac1t e^{- c_2 |y|_\rho^2 /t} \frac{1}{1+|x|/\sqrt{t}} e^{-|x|^2/t} \leq \frac 1t
e^{-c_3 \rho (x, y)^2/t}. \label{e:5.36}
\end{eqnarray}
Similarly, we have
\begin{equation}\label{e:5.37}
\int_0^{t/2} p(t-s,a^*,y) \frac{x}{\sqrt{2\pi s^3}} e^{-x^2/(2s)} ds
\gtrsim \frac1t e^{-(|x|^2+c_4 |y|_\rho^2)/t}
\gtrsim \frac1t e^{-c_5 \rho (x, y)^2/t}.
\end{equation}
We now consider two cases depending on the range of the values of $|y|_\rho$.
\noindent {\it Case 1. } $y \in D_0$ with $|y|_\rho<1$.
In this case, we have by \eqref{e:5.33}
\begin{eqnarray*}
&& \int_{t/2}^{t-1} p(t-s,a^*,y) \frac{x}{\sqrt{2\pi s^3}} e^{-x^2/(2s)} ds
\lesssim \int_{t/2}^{t-1} \frac{1}{t-s}e^{-{c_2 |y|_\rho^2}/{(t-s)}} \frac{|x|}{\sqrt{ s^3}} e^{-x^2/(2s)} ds \\
&\lesssim & \frac{|x|}{t^{3/2}} e^{-|x|^2 /2t} \int_{t/2}^{t-1} \frac{1}{t-s} d s
\lesssim \frac{|x| \log t }{t^{3/2}} e^{-|x|^2 /(2t)} .
\end{eqnarray*}
Similarly, we have
$$
\int_{t/2}^{t-1} p(t-s,a^*,y) \frac{x}{\sqrt{2\pi s^3}} e^{-x^2/(2s)} ds
\gtrsim \frac{|x| \log t }{t^{3/2}} e^{- |x|^2 /t}.
$$
On the other hand, by \eqref{e:5.34},
\begin{eqnarray*}
&& \int_{t-1}^t p(t-s,a^*,y) \frac{x}{\sqrt{2\pi s^3}} e^{-x^2/(2s)} ds
\lesssim \frac{|x|}{\sqrt{ t^3}} e^{-x^2/(2t)} \int_{t-1}^t \frac{1}{\sqrt{t-s}}e^{-{c_4 |y|_\rho^2}/{(t-s)}} ds \\
& \lesssim & \frac{|x|}{\sqrt{ t^3}} e^{-x^2/(2t)} \int_{t-1}^t \frac{1}{\sqrt{t-s}} ds
\lesssim \frac{|x|}{\sqrt{ t^3}} e^{-x^2/(2t)} ,
\end{eqnarray*}
and similarly
$$
\int_{t-1}^t p(t-s,a^*,y) \frac{x}{\sqrt{2\pi s^3}} e^{-x^2/(2s)} ds
\gtrsim \frac{|x|}{\sqrt{ t^3}} e^{-x^2/t} .
$$
These estimates together with \eqref{e:5.35}-\eqref{e:5.37} establish \eqref{1233}.
\medskip
\noindent {\it Case 2. } $y\in D_0$ with $|y|_\rho>1$. Note that by \eqref{e:5.8},
\begin{align}
p(t,x,y)&=\int_0^t p(t-s,a^*,y) {\mathbb{P}}_x(\sigma_{a^*}\in ds)\label{e:5.38}
\\
&=\int_0^{t/2} p(t-s,a^*,y) \frac{|x|}{\sqrt{2\pi s^3}}e^{-{x^2}/{2s}} ds
+ \int_{t/2}^{t} p(t-s,a^*,y) \frac{|x|}{\sqrt{2\pi s^3}}e^{-{x^2}/{2s}} ds. \nonumber
\end{align}
By Theorem \ref{T:4.7}, \eqref{e:5.33} and a change of variable $r=|y|_\rho /\sqrt{t-s}$, we have
\begin{eqnarray*}
\int_{t/2}^{t} p(t-s,a^*,y) \frac{|x|}{\sqrt{2\pi s^3}}e^{-{x^2}/{2s}} ds
&\lesssim &\int_{t/2}^{t} \frac{1}{t-s}e^{-{c_6|y|_\rho^2}/{(t-s)}} \frac{|x|}{\sqrt{s^3}}e^{-{x^2}/{2s}} ds \\
&\lesssim & \frac{|x|}{ t^{3/2}}e^{-{x^2}/{2t}}\int_{t/2}^{t} \frac{1}{t-s}e^{-{c_6|y|_\rho^2}/{(t-s)}} ds \\
&= & \frac{|x|}{ t^{3/2}}e^{-{x^2}/{2t}} \int_{ |y|_\rho \sqrt{2/t}}^\infty \frac2{r} e^{-c_6 r^2} dr .
\end{eqnarray*}
Note that for each fixed $a>0$,
$\int_\lambda^\infty r^{-1} e^{-a r^2} dr = \int_\lambda^1 r^{-1} e^{-a r^2} dr + \int_1^\infty r^{-1} e^{-a r^2} dr$
is comparable to $ \log (1/\lambda )$ when $0<\lambda \leq 1/2$. For $\lambda \geq 1/2$, by Lemma \ref{L:5.11}
$$ \int_\lambda^\infty r^{-1} e^{-a r^2} dr \leq 2 \int_\lambda ^\infty e^{-a r^2} dr
=\frac{2}{\sqrt{2a}} \int_{\sqrt{2a}\lambda}^\infty e^{-s^2/2} ds \lesssim \frac1{1+\sqrt{a}\lambda} e^{-a \lambda^2}
\leq e^{-a \lambda^2}
$$
and
$$
\int_\lambda^\infty r^{-1} e^{-a r^2} dr
\gtrsim \int_\lambda ^\infty e^{-2 a r^2} dr
=\frac{1}{2\sqrt{a}} \int_{2\sqrt{ a}\lambda}^\infty e^{-s^2/2} ds \gtrsim \frac1{1+\sqrt{a}\lambda} e^{-2a \lambda^2}
\gtrsim e^{-3a \lambda^2} .
$$
Hence we have
\begin{equation}
\log (1+\lambda^{-1}) e^{-3a \lambda^2} \lesssim \int_\lambda^\infty r^{-1} e^{-ar^2} dr \lesssim \log (1+\lambda^{-1}) e^{- a \lambda^2}
\quad \hbox{for any } \lambda >0.
\end{equation}
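Indeed, for $0<\lambda\leq 1/2$ this follows from the elementary bounds
\begin{equation*}
e^{-a}\log (1/\lambda) \leq \int_\lambda^1 r^{-1} e^{-a r^2}\, dr \leq \log (1/\lambda)
\qquad \hbox{and} \qquad \log (1/\lambda ) \asymp \log (1+\lambda^{-1}),
\end{equation*}
while for $\lambda>1/2$ it is the content of the two displays above.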
Thus we have
\begin{eqnarray*}
\int_{t/2}^{t} p(t-s,a^*,y) \frac{|x|}{\sqrt{2\pi s^3}}e^{-{x^2}/{2s}} ds
&\lesssim & \frac{|x|}{ t^{3/2}}e^{-{x^2}/{2t}} \log \left(1+\frac{\sqrt{t}}{|y|_\rho} \right) e^{-2c_6 |y|_\rho^2/t} \\
&\leq & \frac{|x|}{ t^{3/2}} \log \left(1+\frac{\sqrt{t}}{|y|_\rho} \right) e^{-c_7\rho (x, y)^2/t}.
\end{eqnarray*}
Similarly, we have
$$
\int_{t/2}^{t} p(t-s,a^*,y) \frac{|x|}{\sqrt{2\pi s^3}}e^{-{x^2}/{2s}} ds
\gtrsim \frac{|x|}{ t^{3/2}} \log \left(1+\frac{\sqrt{t}}{|y|_\rho} \right) e^{-c_8 \rho (x, y)^2/t}.
$$
These together with \eqref{e:5.36}-\eqref{e:5.37} establish \eqref{e:5.32}.
\end{proof}
We will need the following elementary lemma.
\begin{lem}\label{L:5.16}
For every $c>0$, there exists $C_{27} \geq 1$ such that for every $t\geq 3$ and $0<y\leq \sqrt{t}$,
\begin{equation*}
C_{27}^{-1} \log t \leq \int_{2}^t \frac1s \left( 1+ \frac{ y\log s}{\sqrt{s} }\right) e^{- {c|y|^2}/{s}}ds\le C_{27}\log t.
\end{equation*}
\end{lem}
\begin{proof}
By a change of variable $r=y/\sqrt{s}$,
$$
\int_{2}^t \frac{ y\log s}{s^{3/2} } e^{-{c|y|^2}/{s}}ds
\le \log t\int_{2}^t\frac{y}{ s^{3/2}}e^{-{c|y|^2}/{s}}ds
= 2 \log t\int_{ {y}/{\sqrt{t}}}^{y/\sqrt{2}}e^{-cr^2}dr \lesssim \log t,
$$
while since $0<y\leq \sqrt{t}$,
$$ \int_{2}^t \frac1s e^{-{c|y|^2}/{s}}ds \asymp \int_{2}^t \frac1s ds \asymp \log t .
$$
This proves the lemma.
\end{proof}
\begin{thm}\label{T:5.17}
There exist constants $C_i>0$, $28\le i\le 33$, such that the following estimate holds for all $(t,x,y)\in [8, \infty)\times {\mathbb{R}}_+\times {\mathbb{R}}_+$:
\begin{align}\label{1241}
\nonumber &\frac{C_{28}}{\sqrt{t}}\left(1\wedge \frac{|x|}{\sqrt{t}}\right)\left(1\wedge \frac{|y|}{\sqrt{t}}\right)
e^{- {C_{29}|x-y|^2}/{t}}+\frac{C_{28}}{t} \left( 1+ \frac{(|x|+|y|)\log t}{\sqrt{t}}\right)e^{- {C_{30}(x^2+y^2)}/{t}} \le p(t,x,y)
\\
&\le \frac{C_{31}}{\sqrt{t}}\left(1\wedge \frac{|x|}{\sqrt{t}}\right)\left(1\wedge \frac{|y|}{\sqrt{t}}\right)e^{- {C_{32}|x-y|^2}/{t}}+\frac{C_{31}}{t} \left( 1+ \frac{(|x|+|y|)\log t}{\sqrt{t}}\right) e^{-{C_{33}(x^2+y^2)}/{t}}.
\end{align}
\end{thm}
\begin{proof} When either $x=a^*$ or $y=a^*$, this has been established in Proposition \ref{P:5.12}
so we assume $|x|\wedge |y|>0 $.
For simplicity, denote ${\mathbb{R}}_+\setminus \{a^*\}$ by $(0, \infty)$.
Since
$$
p_{(0, \infty)}(t,x,y)= (2\pi t)^{-1/2} \left( e^{-|x-y|^2/2t} - e^{-|x+y|^2/2t}\right)
=(2\pi t)^{-1/2} e^{-|x-y|^2/2t} \left( 1- e^{-2 xy/t}\right),
$$
there are constants $c_1>c_2>0$ so that
\begin{equation}\label{e:5.41}
\frac{1}{\sqrt{t}}\left(1\wedge \frac{|x|}{\sqrt{t}}\right)\left(1\wedge \frac{|y|}{\sqrt{t}}\right)e^{- {c_1 |x-y|^2}/{t}}
\lesssim p_{(0, \infty)}(t,x,y) \lesssim
\frac{1}{\sqrt{t}}\left(1\wedge \frac{|x|}{\sqrt{t}}\right)\left(1\wedge \frac{|y|}{\sqrt{t}}\right) e^{- {c_2 |x-y|^2}/{t}}
\end{equation}
for all $t>0$ and $x, y\in {\mathbb{R}}_+$. Note also
\begin{equation}\label{e:5.42}
p(t,x,y)=p_{(0, \infty)}(t,x,y)+\int_0^t p(t-s,a^*, y) {\mathbb{P}}_x(\sigma_{a^*}\in ds).
\end{equation}
We prove this theorem by considering two cases.
\\
{\it Case 1. } $ |x| \wedge |y| \geq \sqrt{t}$. In this case,
$ p(t,x,y)\geq p_{(0, \infty)}(t,x,y)\gtrsim t^{-1/2} e^{- {c_3|x-y|^2}/{t}}$.
Thus we have by Proposition \ref{offdiagUBE},
\begin{equation*}
\frac{1}{\sqrt{t}}e^{- {c_{3}|x-y|^2}/{t}} \lesssim p(t,x,y)\lesssim \frac{1}{\sqrt{t}}e^{- {c_{4}|x-y|^2}/{t}}.
\end{equation*}
\noindent {\it Case 2. } $0< |x| \wedge |y| <\sqrt{t}$.
Without loss of generality, we may and do assume $|y|< \sqrt{t}$.
By \eqref{e:5.8},
\begin{equation}\label{e:5.43}
\int_0^t p(t-s,a^*, y) {\mathbb{P}}_x(\sigma_{a^*}\in ds)
= \int_0^t p(t-s,a^*, y) \frac{|x|}{\sqrt{2\pi s^3}} e^{-x^2/2s} ds.
\end{equation}
By Proposition \ref{P:5.12} and Lemma \ref{L:5.11}
\begin{eqnarray*}
\int_0^{t/2} p(t-s,a^*, y) \frac{|x|}{\sqrt{2\pi s^3}} e^{-x^2/2s} ds
&\lesssim& \int_0^{t/2} \frac{1}{t-s} \left(1+ \frac{|y|\log (t-s) }{\sqrt{t-s}} \right) e^{- {y^2}/{2(t-s)}}
\frac{|x|}{\sqrt{2\pi s^3}} e^{-x^2/2s} ds \\
&\lesssim & \frac{1}{t } \left(1+ \frac{|y|\log t }{\sqrt{t }} \right) \int_0^{t/2}\frac{|x|}{ s^{3/2}} e^{-x^2/2s} ds \\
&\lesssim & \frac{1}{t } \left(1+ \frac{|y|\log t }{\sqrt{t }} \right) \frac{1}{1+|x|/\sqrt{t}} e^{-x^2/t} \\
&\lesssim & \frac{1}{t } \left(1+ \frac{|y|\log t }{\sqrt{t }} \right) e^{-(x^2+y^2)/t},
\end{eqnarray*}
while by Lemma \ref{L:5.16},
\begin{eqnarray*}
\int_{t/2}^{t-2} p(t-s,a^*, y) \frac{|x|}{\sqrt{2\pi s^3}} e^{-x^2/2s} ds
&\lesssim& \int_ {t/2}^{t-2} \frac{1}{t-s} \left(1+ \frac{|y|\log (t-s) }{\sqrt{t-s}} \right) e^{- {y^2}/{2(t-s)}}
\frac{|x|}{\sqrt{2\pi s^3}} e^{-x^2/2s} ds \\
&\lesssim & \frac{|x|}{t^{3/2}} e^{-x^2/2t} \int_{t/2}^{t-2} \frac{1}{t-s}
\left(1+ \frac{|y|\log (t-s) }{\sqrt{t-s}} \right) e^{- {y^2}/{2(t-s)}}ds \\
&\stackrel{r=t-s}{=} & \frac{|x|}{t^{3/2}} e^{-x^2/2t} \int_{2}^{t/2} \frac{1}{r}
\left(1+ \frac{|y|\log r }{\sqrt{r}} \right) e^{- {y^2}/{2r}}dr \\
&\lesssim & \frac{|x|}{t^{3/2}} e^{-x^2/2t} \log t \asymp \frac{|x| \log t }{t^{3/2}} e^{-(x^2+y^2) /2t}.
\end{eqnarray*}
A similar calculation shows
$$
\int_0^{t/2} p(t-s,a^*, y) \frac{|x|}{\sqrt{2\pi s^3}} e^{-x^2/2s} ds
\gtrsim \frac{1}{t } \left(1+ \frac{|y|\log t }{\sqrt{t }} \right) e^{-2(x^2+y^2)/t}
$$
and
$$
\int_{t/2}^{t-2} p(t-s,a^*, y) \frac{|x|}{\sqrt{2\pi s^3}} e^{-x^2/2s} ds
\gtrsim \frac{|x|}{t^{3/2}} e^{-x^2/2t} \log t \asymp \frac{|x| \log t }{t^{3/2}} e^{-2(x^2+y^2) / t}.
$$
By Theorem \ref{T:4.5},
\begin{eqnarray*}
\int_{t-2}^{t} p(t-s,a^*, y) \frac{|x|}{\sqrt{2\pi s^3}} e^{-x^2/2s} ds
&\lesssim & \int_{t-2}^{t} \frac1{\sqrt{t-s}} e^{-c_5 y^2/(t-s)} \frac{|x|}{\sqrt{2\pi s^3}} e^{-x^2/2s} ds \\
&\lesssim & \frac{|x|}{\sqrt{ t^3}} e^{-x^2/ t} \int_{t-2}^{t} \frac1{\sqrt{t-s}} e^{-c_5 y^2/(t-s)} ds \\
&\lesssim & \frac{|x|}{ t^{3/2}} e^{-x^2/2t} \lesssim \frac{|x|}{t^{3/2}} e^{-(x^2+y^2)/2t}.
\end{eqnarray*}
These estimates together with \eqref{e:5.41}-\eqref{e:5.43} establish the theorem.
\end{proof}
Theorem \ref{T:5.17} together with Theorems \ref{154} and \ref{T:5.15}
gives Theorem \ref{largetime}.
\section{H\"{o}lder regularity of Parabolic Functions}\label{S:6}
As we noted in Remark \ref{rmkonsmalltimeHKE}(iii), the parabolic Harnack inequality fails for the BMVD $X$.
However we show in this section that H\"older regularity holds for the parabolic functions of $X$.
In the elliptic case (that is, for harmonic functions instead of parabolic functions),
this kind of phenomenon has been observed for solutions of SDEs driven by multidimensional
L\'evy processes with independent coordinate processes;
see \cite{BC2}.
To show the H\"{o}lder-continuity of parabolic functions of $X$, we begin with the following two lemmas.
\begin{lem}\label{L:6.1}
There exist $C_1>0$ and $0<C_2\leq 1/2$ such that for every $x_0\in E$ and $R>0$,
\begin{equation*}
p_{B(x_0,R)}(t,x, y) \ge \frac{1}{2}\; p(t, x, y)
\qquad \hbox{for } t\in (0, C_1/ (R\vee 1)^2 ] \hbox{ and } x, y\in B(x_0, C_2R) .
\end{equation*}
\end{lem}
\begin{proof}
By Theorem \ref{T:smalltime}, there exist constants $c_i>0$, $1\le i\le 4$, such that for all $t\leq 1$ and $x,y \in E$,
\begin{equation}\label{24105}
\frac{c_3}{\sqrt{t}}e^{- {c_4 \rho (x,y)^2}/{t}} \leq p(t, x,y)\le \frac{c_1}{t}e^{- {c_2\rho (x,y)^2}/{t}} .
\end{equation}
We choose $0<c_5<1/2$ sufficiently small such that
\begin{equation}\label{24119}
\frac{(1-c_5)^2}{(2c_5)^2}\ge \frac{c_2}{c_4}.
\end{equation}
As $t \mapsto t^{-1}e^{-c_0/t}$ is increasing in $t\in (0, c_0]$, we have for $0<t\leq c_2(1-c_5)^2R^2$
and $x, y\in B(x_0, c_5R)$,
\begin{eqnarray*}
\overline p_{B(x_0, c_5R)} (t, x, y)&:=& {\mathbb{E}}_x [ p(t-\tau_{B(x_0, c_5R)}, X_{\tau_{B(x_0, c_5R)}}, y); \tau_{B(x_0, c_5R)}<t ] \\
& \lesssim & {\mathbb{E}}_x [ (t-\tau_{B(x_0, c_5R)})^{-1} e^{- {c_2 ((1-c_5)R)^2}/{(t-\tau_{B(x_0, c_5R)})}} ; \tau_{B(x_0, c_5R)}<t ] \\
&\leq & t^{-1} e^{- {c_2 ((1-c_5)R)^2}/t} \lesssim e^{- {c_2 ( 1-c_5)^2R^2}/2t} ,
\end{eqnarray*}
while
\begin{equation*}
p (t, x, y) \geq \frac{c_3}{\sqrt{t}}e^{- {c_4 (2c_5R)^2}/{t}} \stackrel{\eqref{24119}}{\geq} \frac{c_3}{\sqrt{t}}e^{- {c_2 ( 1-c_5)^2R^2}/2t}.
\end{equation*}
Hence there is a constant $c_6>0$ so that $p (t, x, y) \geq \frac12 \overline p_{B(x_0, c_5R)} (t, x, y)$
for every $R>0$, $x_0\in E$, $0< t \leq c_6 /(R\vee 1)^{2} $ and $x, y\in B(x_0, c_5R)$. This proves the lemma as
$p_{B(x_0, c_5R)}(t, x, y) = p(t, x, y) -
\overline p_{B(x_0, c_5R)} (t, x, y)$.
\end{proof}
Let $Z_s=(V_s, X_s)$ be the space-time process of $X$ where $V_s=V_0+s$.
In the rest of this section,
$$
Q(t,x,R):= (t, t+R^2)\times B_\rho(x,R) .
$$
For any Borel measurable set $A\subset Q(t,x,R)$, we use $|A|$ to denote its measure under the product measure
$dt \times m_p (dx)$.
\begin{lem}\label{L:6.2}
Fix $R_0\geq 1$. There exist constants $0<C_3\leq 1/2$ and $C_4>0$ such that for all $0<R\leq R_0$, $x_0\in E$,
$x\in B_\rho (x_0, C_3R)$ and any $A\subset Q(0,x_0,C_3R)$ with
$\frac{|A|}{|Q(0, x_0, C_3R)|} \ge \frac{1}{3}$,
\begin{equation}
{\mathbb{P}}_{(0,x)}(\sigma_A<\tau_R )\ge C_4,
\end{equation}
where $\tau_R=\tau_{Q(0,x_0, R)}=\inf\{t\geq 0: X_t \notin B_\rho(x_0, R)\} \wedge R^2$
and $\sigma_A:=\inf\{t\geq 0: (V_t, X_t)\in A\}$.
\end{lem}
\begin{proof}
Let $C_1$ and $C_2$ be the constants in Lemma \ref{L:6.1}. Define $C_3= (C_1/R_0^3)^{1/2} \wedge C_2$.
For $x_0\in E$ and $R\in (0, R_0]$, denote by $X^{B_\rho(x_0, R)}$ the subprocess of $X$ killed upon exiting the ball $B_\rho(x_0,R)$ and
$p_{B_\rho(x_0, R)}$ its transition density with respect to the measure $m_p$.
As $ |(0, C_3^2R^2/6)\times B_\rho (x_0, C_3R)|=|Q(0, x_0, C_3R)|/6$ and $|A|\geq |Q(0, x_0, C_3R)|/3$,
we have
$$ | \{(t, x)\in A: t\in [C_3^2R^2/6, C_3^2R^2] \}| \geq |Q(0, x_0, C_3R)|/6.
$$
For $s>0$, let $A_s:=\{x\in E: (s,x)\in A\}$.
Note that
\begin{eqnarray}
{\mathbb{E}}_x \int_0^{\tau_R }\mathbf{1}_A(s,X_s)ds
&= & {\mathbb{E}}_x \int_0^{\tau_R \wedge (C_3R)^2} \mathbf{1}_A(s, X^{B_\rho(x_0,R)}_s)ds \nonumber \\
&=& \int_0^{C_3^2R^2}{\mathbb{P}}_{x}\left(X_s^{B_\rho(x_0,R)}\in A_{s}\right)ds \nonumber \\
& =& \int_{0}^{C_3^2 R^2}\int_{A_{s}}p_{B_\rho(x_0,R)}(s,x,y)m_p(dy)ds \nonumber \\
&\geq &\int_{C_3^2 R^2/6}^{C_3^2 R^2}\int_{A_{s}} p_{B_\rho(x_0,R)}(s,x,y)m_p(dy)ds \label{e:6.4}.
\end{eqnarray}
We now consider two cases.
\noindent {\it Case 1.} $m_p( B_\rho(x_0, R))>pR/6$. In this case, we have by \eqref{e:6.4},
Lemma \ref{L:6.1} and \eqref{24105} that
\begin{align*}
{\mathbb{E}}_x \int_0^{\tau_R }\mathbf{1}_A(s,X_s)ds
& \gtrsim \int_{C_3^2 R^2/6}^{C_3^2 R^2}\int_{A_{s}} \frac1{\sqrt{s}} e^{-c_2 \rho (x, y)^2 /s} m_p(dy)ds \nonumber \\
& \gtrsim \frac1R | \{(t, x)\in A: t\in [C_3^2R^2/6, C_3^2R^2] \}| \nonumber \\
& \gtrsim |Q(0, x_0, C_3R)|/ R \gtrsim R^2.
\end{align*}
\noindent{\it Case 2.} $m_p(B_\rho(x_0, R)) \le pR/6$. In this case, $x_0$ must be in $D_0$ with $\rho(x_0, a^*)\ge \frac{5R}{6}$ and so
$m_p(B_\rho(x_0, R))\ge (5R/6)^2$. Thus we have by \eqref{e:6.4} and Theorem \ref{T:smalltime}(iii),
\begin{eqnarray*}
{\mathbb{E}}_x \int_0^{\tau_R }\mathbf{1}_A(s,X_s)ds
& \gtrsim &\int_{C_3^2 R^2/6}^{C_3^2 R^2}\int_{A_{s}} \frac1s e^{-c_2 \rho (x, y)^2 /s} m_p(dy)ds \nonumber \\
& \gtrsim & \frac1{R^2} | \{(t, x)\in A: t\in [C_3^2R^2/6, C_3^2R^2] \}| \nonumber \\
& \gtrsim & |Q(0, x_0, C_3R)|/ R^2 \gtrsim R^2.
\end{eqnarray*}
Thus in both cases, there is a constant $c_0 >0$ independent of $x_0$ and $R\in (0, R_0]$ so that
\begin{equation}\label{e:6.5}
{\mathbb{E}}_x \int_0^{\tau_R }\mathbf{1}_A(s,X_s)ds \geq c_0 \, R^2.
\end{equation}
On the other hand,
\begin{align*}
{\mathbb{E}}_x\int_0^{\tau_R }\mathbf{1}_A(s,X_s)ds&= \int_0^\infty {\mathbb{P}}_x\left(\int_0^{\tau_R }\mathbf{1}_A(s,X_s)ds>u\right)du\\
&=\int_0^{R^2}
{\mathbb{P}}_x \left(\int_0^{\tau_R}\mathbf{1}_A(s,X_s)ds>u\right)du\\
&\le R^2\, {\mathbb{P}}_x (\sigma_A<\tau_R).
\end{align*}
The desired estimate now follows from this and \eqref{e:6.5}.
\end{proof}
\begin{thm}
For every $R_0>0$, there are constants $C=C(R_0)>0$ and $\beta \in (0, 1)$ such that for every $R\in (0, R_0]$, $x_0\in E$, and every bounded parabolic function $q$ in $Q(0, x_0, 2R)$, it holds that
\begin{equation}\label{Holdercont}
|q(s,x)-q(t,y)|\le C\|q\|_{\infty, R}\,
R^{-\beta}\left(|t-s|^{1/2}+\rho(x,y)\right)^\beta
\end{equation}
for every $(s,x), \, (t,y)\in Q(0,x_0, R/4)$, where $\displaystyle{\|q\|_{\infty, R}:=\sup_{(t,y)\in (0, 4R^2] \times B_\rho(x_0, 2R) }|q(t,y)|}$.
\end{thm}
\begin{proof}
Without loss of generality, assume $0\le q\le \|q\|_{\infty, R}=1$. We first assume $x_0=a^*$ and show that \eqref{Holdercont} holds
for all $(s,x), (t,y)\in Q(0, x_0, R)$ (instead of $Q(0, x_0, R/4)$). Let $C_3\in (0, 1/2]$ and $C_4\in (0, 1)$ be the constants in Lemma \ref{L:6.2}. Let
$$
\eta=1-C_4/4 \geq 3/4\quad \hbox{and} \quad \gamma=C_3/2\leq 1/4.
$$
Note that for every $(s,x)\in Q(0, a^*, R)$, $q$ is parabolic in $Q(s,x,R)\subset Q(0, a^*, 2R)$. We will show by induction that $\displaystyle{\sup_{Q(s,x,\gamma^kR)}|q|-\inf_{Q(s,x,\gamma^kR)}|q|\le \eta^k}$ for every integer $k$. For notational convenience, we denote $Q(s,x,\gamma^kR)$ by $Q_k$.
Define $a_i=\displaystyle{\inf_{Q_i}q}$, $b_i=\displaystyle{\sup_{Q_i}q}$. Clearly, $b_i- a_i\le 1\le \eta^i$ for all $i\le 0$. Now suppose $b_i-a_i\le \eta^i$ for all $i\le k$ and we will show that $b_{k+1}-a_{k+1}\le \eta^{k+1}$. Observe that $Q_{k+1}\subset Q_k$ and so $a_k\le q\le b_k$ on $Q_{k+1}$. Define
$$
A':=\left\{z\in \big( s+(\gamma^{k+1}R)^2, s+ (C_3 \gamma^kR)^2 \big)
\times B_\rho (x, C_3 \gamma^kR): q(z)\le (a_k+b_k)/2\right\},
$$
which is a subset of $Q_k$.
Note that
$$
\left|\big(s+(\gamma^{k+1}R)^2, s+ (C_3 \gamma^kR)^2\big)
\times B_\rho (x, C_3\gamma^kR) \right|= \frac34 (C_3\gamma^kR)^2 \, m_p (B_\rho(x, C_3\gamma^k R)).
$$
We may suppose $|A'|\geq \frac12
(C_3\gamma^kR)^2 \, m_p (B(x, C_3 \gamma^k R))$;
otherwise we consider $1-q$ instead of $q$. Let $A$ be a compact subset of $A'$ such that $|A|\ge \frac13 (C_3\gamma^kR)^2 \, m_p (B_\rho(x, C_3 \gamma^k R))$. For any given $\varepsilon>0$, pick $z_1=(t_1, x_1), z_2\in Q_{k+1}$ so that $q(z_1)\ge b_{k+1}-\varepsilon$ and $q(z_2)\le a_{k+1}+\varepsilon$. Note that $Z_{\tau_{k}}\in \partial Q_{k}$ as
BMVD $X_t$ has continuous sample paths. So by
Lemma \ref{L:6.2},
\begin{eqnarray*}
b_{k+1}-a_{k+1}-2\varepsilon
&\le &q(z_1)-q(z_2) \\
&=& {\mathbb{E}}_{z_1}\left[q(Z_{\sigma_A\wedge \tau_{k}})-q(z_2)\right] \\
&=& {\mathbb{E}}_{z_1}\left[q(Z_{\sigma_A})-q(z_2); \sigma_A< \tau_k\right]+{\mathbb{E}}_{z_1}\left[q(Z_{\tau_k})-q(z_2); \tau_k \le \sigma_A \right]\\
&\le& \left(\frac{a_k+b_k}{2}-a_k\right){\mathbb{P}}_{z_1} \left(\sigma_A<\tau_{k} \right)
+(b_k-a_k){\mathbb{P}}_{z_1}(\sigma_A \ge \tau_k) \\
&=& (b_k-a_k)\left( 1-{\mathbb{P}}_{z_1}(\sigma_A<\tau_{k})/2\right) \\
&\le & \eta^k(1-C_4/2) \\
&\le & \eta^{k+1}.
\end{eqnarray*}
Since $\varepsilon$ is arbitrary, we get $b_{k+1}-a_{k+1}\leq \eta^{k+1}$.
This proves that $b_{k}-a_{k}\leq \eta^{k}$ for all integer $k$.
For $z=(s,x)$ and $w=(t,y)$ in $Q(0,a^*,R)$ with $s\le t$, let $k$ be the largest integer such that $|z-w|:=|t-s|^{1/2}+\rho(x,y)\le \gamma^kR$.
Then
\begin{equation*}
|q(z)-q(w)|\le \eta^k=\gamma^{k\log\eta/\log \gamma}
\le \left(\frac{|z-w|}{\gamma R}\right)^{\log\eta/\log\gamma}.
\end{equation*}
This establishes
\eqref{Holdercont} for $x_0=a^*$ and for every $(s,x), (t, y)\in Q(0,a^*, R)$ with $\beta=\log \eta /\log \gamma$. Note that
$\beta \in (0, 1)$ since $0< \gamma <\eta <1$.
For general $x_0\in E$, we consider two cases based on the distance $\rho(x_0, a^*)$:
\smallskip
\noindent {\it Case 1.} $|x_0|_\rho<R/2$. In this case,
$Q(0, x_0, R/4)\subset Q(0, a^*, 3R/4) \subset Q(0,a^*, 3R/2)\subset Q(0,x_0, 2R)$. By what we have
established above,
\begin{equation*}
|q(s,x)-q(t,y)|\le C(R_0)\|q\|_{\infty, R} \,
R^{-\beta}\left(|t-s|^{1/2}+\rho(x,y)\right)^\beta \quad \hbox{for } (s,x), (t,y)\in Q(0,x_0, R/4).
\end{equation*}
\smallskip
\noindent {\it Case 2. }$|x_0|_\rho\ge R/2$. Since $a^*\notin Q(0, x_0, R/2)$, it follows from the classical results for Brownian motion in ${\mathbb{R}}^d$
with $d=1$ and $d=2$ that for every $(s,x), (t,y)\in Q(0, x_0, R/4)$
\begin{align*}
|q(s,x)-q(t,y)|&\le C(R_0) \| q \|_{\infty, R/2} R^{-\beta}\left(|t-s|^{1/2}+\rho(x,y)\right)^\beta\\
&\le C(R_0)\|q \|_{\infty, R} \,\, R^{-\beta}\left(|t-s|^{1/2}+\rho(x,y)\right)^\beta .
\end{align*}
This completes the proof of the theorem.
\end{proof}
\section{Green Function Estimates}
\indent In this section, we establish two-sided bounds for the Green function of BMVD
$X$ killed upon exiting a bounded connected $C^{1,1}$ open set $D\subset E$.
Recall that the Green function $G_D(x,y)$ is defined as follows:
\begin{equation*}
G_D(x,y)=\int_0^\infty p_D(t,x,y)dt,
\end{equation*}
where $p_D(t, x, y)$ is the transition density function of the subprocess $X^D$ with respect to $m_p$.
We assume $a^*\in D$ throughout this section, as otherwise, due to the connectedness of $D$, either $D\subset {\mathbb{R}}_+$ or $D\subset D_0$. Therefore $G_D(x,y)$ is just the standard Green function of a bounded $C^{1,1}$ domain for Brownian motion
in one-dimensional or two-dimensional spaces, whose two-sided estimates are known, see \cite{CZ}.
It is easy to see from
$$ p_D(t, x, y)=p(t, x, y)-{\mathbb{E}}_x [ p(t-\tau_D, X_{\tau_D}, y); \tau_D <t],
$$
that $p_D(t, x, y)$ is jointly continuous in $(t, x, y)$.
Recall that for any bounded open set $U\subset E$, $\delta_U(\cdot):=\rho(\cdot, \partial U)$ denotes the $\rho$-distance to
the boundary $\partial U$.
For notational convenience, we set $D_1:=D\cap ({\mathbb{R}}_+\setminus \{a^*\})$ and $D_2:=D\cap D_0$.
Note that $a^*\in \partial D_1 \cap \partial D_2$.
\medskip
The following theorem gives two-sided Green function estimates for $X$ in bounded $C^{1,1}$ domains.
\begin{thm} \label{T:7.1}
Let $G_{_D}(x,y)$ be the Green function of $X$ killed upon exiting $D$, where $D$ is a connected bounded $C^{1,1}$ domain of $E$ containing
$a^*$. We have for $x\not= y$ in $D$,
\begin{equation*}
G_{_D}(x,y)\asymp\left\{
\begin{array}{ll}
\delta_D(x)\wedge \delta_D(y),
&
\hbox{$x\in D_1 \cup \{a^*\}$, $y\in D_1 \cup \{a^*\}$;}
\\ \\
\delta_D(x) \delta_D(y)+\ln\left(1+\frac{\delta_{D_2}(x)\delta_{D_2}(y)}{|x-y|^2}\right),
& \hbox{$x\in D_2$, $y\in D_2$;}
\\ \\
\delta_D(x) \delta_D(y),
& \hbox{$x\in D_1\cup \{a^*\}$, $y\in D_2$.}
\end{array}
\right.
\end{equation*}
\end{thm}
\begin{proof} We first show that $G_D(x, a^*)$ is a bounded positive continuous function on $D$.
By Theorem \ref{T:smalltime}, there is a constant $c_1>0$ so that for every $x\in D$,
\begin{eqnarray}
{\mathbb{P}}_x(\tau_D<1)&\ge& {\mathbb{P}}_x(X_1\in E\setminus D) =\int_{{\mathbb{R}}_+ \cap D^c} p(1,x,z)m_p(dz)
+ \int_{D_0 \cap D^c} p(1,x,z)m_p(dz) \nonumber \\
&\ge & c_1. \label{e:7.1}
\end{eqnarray}
Thus ${\mathbb{P}}_x (\tau_D\geq 1) \leq 1-c_1$ for every $x\in D$.
By the strong Markov property of $X$, there are constants $c_2, c_3>0$ so that
${\mathbb{P}}_x (\tau_D\geq t) \leq c_2 e^{-c_3t} $ for every $x\in D$
and $t>0$. For $t\geq 2$ and $x, y\in D$, we thus have by Theorem \ref{T:smalltime},
\begin{eqnarray*}
p_{_D}(t,x,y) =\int_D p_{_D}(t-1, x,z)p_{_D}(1,z,y)m_p(dz) \leq
c_4 \int_D p_{_D}(t-1, x,z) m_p(dz) \leq c_5e^{-c_3t}.
\end{eqnarray*}
By Theorem \ref{T:smalltime} again, we conclude that
$$
G_D(x, a^*)=\int_0^2 p_D(t, x, a^*) dt + \int_2^\infty p_D(t, x, a^*)dt
$$
converges and is a bounded positive continuous function in $x\in D$.
In particular, $G_D(a^*, a^*)<\infty$.
We further note that $x\mapsto G_D(x, a^*)$ is a harmonic function in $D_1$ and so it is a linear function.
As it vanishes
at $b :=\partial D \cap {\mathbb{R}}_+$, we have
\begin{equation}\label{e:7.2}
G_D(x, a^*) = c_6 |b-x|\asymp \delta_D (x) \quad \hbox{for } x\in D_1.
\end{equation}
\medskip
\noindent (i) Assume $x, y \in D_1\cup \{a^*\}$ and $x\not= y$. If $x=a^*$ or $y=a^*$,
the desired estimate holds in view of \eqref{e:7.2}.
Thus we assume neither $x$ nor $y$ is $a^*$. By the strong Markov property of $X$,
$$
G_D(x, y )= G_{D_1}(x, y) + {\mathbb{E}}_x [ G_D(X_{\sigma_{a^*}}, y); \sigma_{a^*}<\tau_D]
=G_{D_1}(x, y) + G_D(a^*, y) {\mathbb{P}}_x ( \sigma_{a^*}<\tau_D).
$$
We know from the one-dimensional Green function estimates that
$$
G_{D_1}(x, y) \asymp \delta_{D_1}(x)\wedge \delta_{D_1}(y) .
$$
On the other hand, $x\mapsto {\mathbb{P}}_x ( \sigma_{a^*}<\tau_D)$ is a harmonic function
in $D_1$ that vanishes at $b$. Thus by the same reasoning as that for \eqref{e:7.2}, we have
\begin{equation}\label{e:7.3}
{\mathbb{P}}_x ( \sigma_{a^*}<\tau_D)= c_7 |x-b|\asymp \delta_D (x) \quad \hbox{for } x\in D_1.
\end{equation}
It follows then
$$
G_D(x, y ) \asymp \delta_{D_1}(x)\wedge \delta_{D_1}(y) + \delta_{D }(x) \delta_{D }(y)
\asymp \delta_{D }(x)\wedge \delta_{D }(y) .
$$
\noindent (ii) Assume that $x, y\in D_2$.
By the strong Markov property of $X$,
$$
G_D(x, y )= G_{D_2}(x, y) + {\mathbb{E}}_x [ G_D(X_{\sigma_{a^*}}, y); \sigma_{a^*}<\tau_D]
=G_{D_2}(x, y) + G_D(a^*, y) {\mathbb{P}}_x ( \sigma_{a^*}<\tau_D).
$$
Since both $y\mapsto G_D(a^*, y)$ and $x\mapsto {\mathbb{P}}_x ( \sigma_{a^*}<\tau_D)$
are bounded positive harmonic functions on $D\cap D_0$ that vanish on $D_0 \cap \partial D$,
it follows from the boundary Harnack inequality for Brownian motion in ${\mathbb{R}}^2$ that
\begin{equation}\label{e:7.4}
G_D(a^*, y) \asymp \delta_D (y) \quad \hbox{and} \quad
{\mathbb{P}}_x ( \sigma_{a^*}<\tau_D) \asymp \delta_D (x)
\end{equation}
This combined with the Green function estimates of $G_{D_2}(x, y)$ (see \cite{CZ}) yields
$$
G_D(x, y) \asymp
\ln \left(1+\frac{\delta_{D_2}(x)\delta_{D_2}(y)}{|x-y|^2}\right) +\delta_D(x)\delta_D (y).
$$
\noindent
(iii) We now consider the last case that $x\in D_1\cup \{a^*\}$ and $y\in D_2$.
When $x=a^*$, the desired estimates follows from \eqref{e:7.4} and so it remains to
consider $x\in D_1$ and $y\in D_2$.
By the strong Markov property of $X$, \eqref{e:7.3} and \eqref{e:7.4},
\begin{eqnarray*}
G_D(x, y) = {\mathbb{E}}_x [ G_D(X_{\sigma_{a^*}} , y); \sigma_{a^*}<\tau_D]
= G_D(a^*, y) {\mathbb{P}}_x (\sigma_{a^*}<\tau_D) \asymp \delta_D(x) \delta_D (y).
\end{eqnarray*}
This completes the proof of the theorem.
\end{proof}
\section{Some other BMVD}\label{S:8}
In this section, we present two more examples of spaces with varying dimension
that are variations of ${\mathbb{R}}^2 \cup {\mathbb{R}}$ considered in previous sections.
The existence and uniqueness of BMVD on these spaces can be established
in a similar way as Theorem \ref{existence-uniqueness} in Section \ref{S:2}. We will concentrate on short-time two-sided
estimates of the transition density functions of these two BMVD.
One can further study their large-time heat kernel estimates;
due to space limitations, we will not pursue
this in this paper.
\subsection{A square with several flag poles}\label{S:8.1}
In this subsection, we study the BMVD on a large square with multiple flag poles of infinite length.
The state space $E$ of the process embedded in ${\mathbb{R}}^3$ is defined as follows.
Let $\{z_j; 1\leq j\leq k\}$ be $k$ points in ${\mathbb{R}}^2$ that have distance
at least 4 between each other. Fix a finite sequence $\{\varepsilon_j; 1\leq j \leq k \} \subset (0,1/2)$ and a sequence of positive constants
$\vec p:=\{ p_j; 1\leq j\leq k\}$.
For $1\le j\le k$, denote by $B_j$ the closed disk on ${\mathbb{R}}^2$ centered at $z_j$ with radius $\varepsilon_j$. Clearly, the distance between two distinct balls is at least 3.
Let $\displaystyle{D_0={\mathbb{R}}^2\setminus \left(\cup_{1\le i\le k}B_i\right)}$. For $1\le j\le k$, denote by $L_j$ the half-line $\{(z_j, w)\in {\mathbb{R}}^3: w>0\}$. By identifying each closed ball $B_j$ with a singleton denoted
by $a_j^*$, we equip the space $E:=D_0\bigcup \{a_1^*, \cdots, a_k^*\}\bigcup (\cup_{i=1}^k L_i)$ with induced topology from ${\mathbb{R}}^2$ and the half-lines $L_j$, $1\leq j\leq k$,
with the endpoint of the half-line $L_i$ identified with $a_i^*$ and
a neighborhood of $a_i^*$ defined as $\{a_i^*\}\cup \left(U_1\cap L_i\right)\cup \left(U_2\cap D_0\right)$ for some neighborhood $U_1$ of $0$ in ${\mathbb{R}}$ and $U_2$ of $B_i $ in ${\mathbb{R}}^2$. Let $m_{\vec p}$ be the measure on $E$ whose restriction on $D_0$ is the two-dimensional Lebesgue measure, and whose restriction on $L_j$ is the one-dimensional Lebesgue measure multiplied by $p_j$
for $1\leq j\leq k$.
So in particular, $m_{\vec p} \left(\{a_i^*\}\right)=0$ for all $1\le i\le k$. We denote the geodesic distance on $E$ by $\rho$.
Similar to Definition \ref{D:1.1}, BMVD on the plane with multiple half lines is defined as follows.
\begin{dfn}\rm Given a finite sequence $\vec \varepsilon:=\{\varepsilon_j; 1\leq j\leq k\}\subset (0,1/2)$ and a sequence of positive constants $\vec p:=\{p_j; 1\leq j\leq k\}$. A Brownian motion with varying dimension with parameters $(\vec \varepsilon, \vec p)$ on $E$ is an $m_{\vec p}$-symmetric diffusion $X$ on $E$ such that
\begin{description}
\item{(i)} its subprocess in $L_i$, $1\le i\le k$, or $D_0$ has the same law as that of standard Brownian motion in ${\mathbb{R}}_+$ or $D_0$;
\item{(ii)} it admits no killings on any $a_i^*$, $1\le i\le k$.
\end{description}
\end{dfn}
Recall that the endpoint of the half-line $L_i$ is identified with $a^*_i$, $1\le i\le k$. Similar to Theorem \ref{existence-uniqueness}, we have the following theorem stating the existence and uniqueness of the planar BMVD $X$ with multiple half-lines.
\begin{thm}\label{existence-uniqueness-planary}
For each $k\geq 2$,
every $\vec\varepsilon :=\{ \varepsilon_j ; 1\leq j\leq k\}\subset (0, 1/2)$ and
$\vec p:=\{p_j; 1\leq j\leq k\}\subset (0, \infty)$,
BMVD $X$ on $E$ with parameter $(\vec \varepsilon, \vec p)$ exists and is unique.
Its associated Dirichlet form $({\mathcal{E}}, {\mathcal{F}})$ on $L^2(E; m_p)$ is given by
\begin{eqnarray*}
{\mathcal{F}} &=& \left\{f: f|_{{\mathbb{R}}^2}\in W^{1,2}({\mathbb{R}}^2); \, f|_{L_i}\in W^{1,2}({\mathbb{R}}),\, f|_{B_i}=f|_{L_i}(a^*_i) \hbox{ for } 1\leq i \leq k \right\},
\\
{\mathcal{E}}(f,g) &=& \frac{1}{2}\int_{D_0} \nabla f(x) \cdot \nabla g(x) dx
+\sum_{i=1}^k\frac{p_i}{2}\int_{L_i}f'(x)g'(x)dx.
\end{eqnarray*}
\end{thm}
It is not difficult to see that BMVD $X$ has a continuous
transition density $p(t,x,y)$ with respect to the measure $m_{\vec p}$.
\begin{prop}\label{P8:3}
There exist constants $C_1, C_2>0$ such that
\begin{displaymath}
p(t,x,y)\le C_1\left(\frac{1}{t}+
\frac{1}{t^{1/2}}\right)e^{-C_2\rho(x,y)^2/t}
\qquad\text{for all } x, y \in
E \hbox{ and } t> 0 .
\end{displaymath}
\end{prop}
\begin{proof}
By exactly the same argument as that for Proposition \ref{NashIneq},
we can establish a Nash-type inequality for $X$. From it,
the off-diagonal upper bound can be derived using Davies' method
as in Propositions \ref{P:3.2} and \ref{offdiagUBE}.
\end{proof}
The following theorem gives two-sided bounds for the transition density function $p(t,x,y)$ when $t\in (0, T]$ for each fixed $T>0$.
\begin{thm}\label{small-time-multi-line}
Let $T\geq 2$ be fixed. There exist positive constants $C_i$, $3\le i\le 16$ so that the transition density $p(t,x,y)$ of BMVD $X$ on $E$ satisfies the following estimates when $t\in (0, T]$.
\\
Case 1. For $x, y \in D_0\cap B_\rho(a^*_i, 1)$ for some $1\leq i\leq k$,
\begin{align*}
\frac{C_3}{\sqrt{t}}e^{-C_4\rho(x,y)^2/t}&+\frac{C_3}{t}\left(1\wedge
\frac{\rho(x, a^*_i)}{\sqrt{t}}\right)\left(1\wedge
\frac{\rho(y, a^*_i)}{\sqrt{t}}\right)e^{-C_5|x-y|^2/t} \le p(t,x,y)
\\
&\le \frac{C_6}{\sqrt{t}}e^{-C_7\rho(x,y)^2/t}+\frac{C_6}{t}\left(1\wedge
\frac{\rho(x, a^*_i)}{\sqrt{t}}\right)\left(1\wedge
\frac{\rho(y, a^*_i)}{\sqrt{t}}\right)e^{-C_{8}|x-y|^2/t};
\end{align*}
Case 2. For some $1\le i\le k$, $x\in L_i,\, y\in L_i\cup B_\rho(a^*_i, 1)$,
\begin{align*}
\frac{C_{9}}{\sqrt{t}}e^{-C_{10}\rho(x,y)^2/t} \le p(t,x,y) \le \frac{C_{11}}{\sqrt{t}}e^{-C_{12}\rho(x,y)^2/t};
\end{align*}
Case 3. For all other cases,
\begin{equation}\label{e:8.1}
\frac{C_{13}}{t}e^{-C_{14}\rho(x,y)^2/t} \le p(t,x,y)\le \frac{C_{15}}{t}e^{-C_{16}\rho(x,y)^2/t}.
\end{equation}
\end{thm}
\begin{proof}
The idea of the proof is to reduce it to the heat kernel estimates for BMVD on the plane
with one vertical half-line.
For an open subset $D$ of $E$, we use $X^D$ to denote the subprocess of
$X$ killed upon leaving $D$ and $p_D(t, x, y)$ the transition density function of $X^D$ with respect to $m_{\vec p}$.
Let $C_1>0$ and $C_2\in (0, 1/2)$ be the constants in Lemma \ref{L:6.1}.
(i) We first show that the desired estimates hold for any $x, y\in E$ with
$\rho (x, y) <2C_2$ and for every $t\in (0, T]$.
In this case, let $z_0\in E$ be so that
$x, y \in B_\rho (z_0, C_2)$.
Since $\rho (a^*_i, a^*_j)>3$ for $i\not= j$,
without loss of generality we may and do assume that $a^*_1$ is the base that is closest to
$z_0$ and so $\min_{2\leq j\leq k} \rho (z_0, a^*_j)>3/2$.
We have by Lemma \ref{L:6.1} that
\begin{equation}\label{e:8.2}
p_{B_\rho (z_0, 1)}(t,w,z) \geq \frac12 p_1 (t, w, z)
\quad \hbox{ for } t\in (0, C_1] \hbox{ and } w, z\in B_\rho (z_0, C_2),
\end{equation}
where $p_1 (t, x, y)$ stands for the transition density function
of BMVD on the plane with the single vertical half-line $L_1$ attached at the base $a_1^*$. This together with Theorem \ref{T:smalltime} in particular implies that
there is a constant $c_1>0$ so that
$$
p_{B_\rho (z_0, 1)}(t,w,z) \geq c_1
\quad \hbox{for every } t\in [C_1/2, C_1] \hbox{ and }
w, z \in B_\rho (z_0, C_2).
$$
It thus follows from the Chapman-Kolmogorov equation
that there is a constant $c_2>0$ so that
\begin{equation}\label{e:8.3}
p_{B_\rho (z_0, 1)}(t,w,z) \geq c_2
\quad \hbox{for every } t\in [C_1, T] \hbox{ and }
w, z \in B_\rho (z_0, C_2).
\end{equation}
This together with \eqref{e:8.2} implies that there is a constant
$c_3>0$ so that
\begin{equation}\label{e:8.4}
p(t, w, z) \geq
p_{B_\rho (z_0, 1)}(t,w,z) \geq c_3 p_1 (t, w, z)
\quad \hbox{ for } t\in (0, T] \hbox{ and } w, z\in B_\rho (z_0, C_2).
\end{equation}
On the other hand, we have by Proposition \ref{offdiagUBE} and the fact that
$s\mapsto s^{-1}e^{-a^2/s}$ is increasing in $(0, a^2)$ and decreasing
in $(a^2, \infty)$ that for $t\in (0, T]$ and $\rho (x, y)<2C_2<1$,
\begin{eqnarray*}
\bar{p}_{B_\rho (z_0, 1)}(t,x,y)&:=&
{\mathbb{E}}_x[p(t-\tau_{B_\rho (z_0, 1)},X_{\tau_{B_\rho (z_0, 1)}},y); \tau_{B_\rho (z_0, 1)}< t] \\
&\lesssim& t^{-1}e^{-c_4/t}\lesssim e^{-c_5/t}\lesssim e^{-c_6\rho(x,y)^2/t}.
\end{eqnarray*}
Consequently,
\begin{equation} \label{e:8.5}
p(t, x, y)= {p}_{B_\rho (z_0, 1)}(t,x,y) + \bar{p}_{B_\rho (z_0, 1)}(t,x,y)
\lesssim p_1 (t, x, y)+ e^{-c_6\rho(x,y)^2/t}.
\end{equation}
This together with \eqref{e:8.4} and Theorem \ref{T:smalltime} establishes
the desired estimate of the theorem for any $x, y\in E$ with $\rho (x, y)< 2C_2$
and $t\in (0, T]$.
\medskip
(ii) We now consider the case when $x\in L_i$ and $ y\in L_i\cup B_\rho(a^*_i, C_2)$ for some $1\le i\le k$ with
$\rho (x, y)\geq 2C_2$.
Without loss of generality, we may and do assume $i=1$ and $\rho(x, a_1^*)<\rho(y, a_1^*)$ if $y\in L_1$.
Let $z_0\in L_1$ with $\rho(z_0, a_1^*)=\rho(y, a^*_1)+C_2.$
By the strong Markov property of $X$, \eqref{e:8.4}-\eqref{e:8.5} and Theorem \ref{T:smalltime}, we have for $t\in (0, T]$,
\begin{eqnarray*}
p(t, x, y) &=& {\mathbb{E}}_x [ p(t-\sigma_{z_0}, z_0, y); \sigma_{z_0}<t] \\
&\geq& c_3 {\mathbb{E}}_x [ p_1(t-\sigma_{z_0}, z_0, y); \sigma_{z_0}<t]
\\
&=& c_3 p_1 (t, x, y) \gtrsim t^{-1/2} \, e^{-c_7\rho (x, y)^2/t},
\end{eqnarray*}
and
\begin{eqnarray*}
p(t, x, y) &= &{\mathbb{E}}_x [ p(t-\sigma_{z_0}, z_0, y); \sigma_{z_0}<t] \\
&\stackrel{\eqref{e:8.5}}{\lesssim}& {\mathbb{E}}_x [ p_1(t-\sigma_{z_0}, z_0, y); \sigma_{z_0}<t] + {\mathbb{E}}_x [ e^{-c_6 C_2^2/(t-\sigma_{z_0})}; \sigma_{z_0}<t]\\
&\leq & p_1 (t, x, y) + e^{-c_6 C_2^2/t} {\mathbb{P}}_x (\sigma_{z_0}<t) \\
&\lesssim & t^{-1/2} \, e^{-c_8\rho (x, y)^2/t} + e^{-c_6 C_2^2/t} e^{-|x-z_0|^2/t} \\
&\lesssim & t^{-1/2} \, e^{-c_9\rho (x, y)^2/t}.
\end{eqnarray*}
In the second to last inequality, we used the crossing estimate for one-dimensional Brownian motion and Lemma \ref{L:5.11}.
The above two estimates give the desired estimates.
\medskip
(iii) Let $D_1= D_0\setminus \cup_{j=1}^k B_\rho (a^*_j, C_2) $.
There are three remaining cases:
\begin{description}
\item{(a)} $x\in L_i \cup B_\rho (a^*_i, C_2)$ and $y\in L_j \cup B_\rho (a^*_j, C_2)$ for $i\not= j$;
\item{(b)} $x\in L_i \cup B_\rho (a^*_i, C_2)$ for some $1\leq i \leq k$ and $y\in D_1 $
with $\rho (x, y)\geq 2C_2$;
\item{(c)} $x, y\in D_1 $ with $\rho (x, y)\geq 2C_2$.
\end{description}
We claim that \eqref{e:8.1} holds for all these three cases. The upper bound in \eqref{e:8.1} holds due to
Proposition \ref{offdiagUBE} so it remains to establish the lower bound.
It follows from the Dirichlet heat kernel estimate for Brownian motion in $C^{1,1}$-domain \cite{Z3, CKP}
that in case (c), for any $t\in (0, T]$,
\begin{equation} \label{e:8.6}
p(t, x, y)\geq p_{D_1}(t, x, y) \gtrsim t^{-1} e^{-c_{10} |x-y|^2/t} \geq t^{-1} e^{-c_{10} \rho (x, y)^2/t}.
\end{equation}
For case (b), without loss of generality, we assume $i=1$. Define $u_1(w)= -\rho (w, a^*_1)$ for $w\in L_1$
and $u_1(w)=\rho (w, a^*_1)$ for $w\in D_0$, analogous to \eqref{848}. Let $Y=u_1(X)$ and $\tau_1:=\inf\{t>0: Y_t \geq 2C_2\}$.
Then $Y^{(-\infty, 2C_2)}_t := Y_t$ for $ t\in [0, \tau_1)$, and $Y^{(-\infty, 2C_2)}_t :=\partial$ for $t\geq \tau_1$
has the same distribution as the killed radial process $Y$ in Section \ref{S:4} for BMVD on the plane with
one vertical half-line. By Proposition \ref{P:4.3} and the arguments similar to Proposition \ref{P:4.4} of this paper
and that of \cite{CKP}, one can show that $Y^{(-\infty, 2C_2)}_t$ has a transition density function $p^0(t, w, z)$
with respect to the Lebesgue measure on ${\mathbb{R}}$
and it has the following two-sided estimates:
$$
t^{-1/2} \left( 1\wedge \frac{|w|}{\sqrt{t}} \right) \left( 1\wedge \frac{|z|}{\sqrt{t}} \right) e^{-c_{11} |w-z|^2/t}
\lesssim p^0(t, w, z)
\lesssim t^{-1/2} \left( 1\wedge \frac{|w|}{\sqrt{t}} \right) \left( 1\wedge \frac{|z|}{\sqrt{t}} \right) e^{-c_{12} |w-z|^2/t}
$$
for $t\in (0, T]$ and $w, z\in (-\infty, 2C_2)$. Thus we have for $t\in (0, T]$,
$x\in L_1 \cup B_\rho (a^*_1, C_2)$ and $y\in D_1 $
with $\rho (x, y)\geq 2C_2$,
\begin{eqnarray}
p(t, x, y) &\geq& \int_{D_0 \cap (B_\rho (a^*_1, 6C_2/4) \setminus B_\rho (a^*_1, 5C_2/4))} p(t/2, x, z) p(t/2, z, y)
m_{\vec p} (dz) \nonumber \\
&\gtrsim & t^{-1} e^{-c_{13} \rho (a^*_1, y)^2/t} \int_{D_0 \cap (B_\rho (a^*_1, 6C_2/4) \setminus B_\rho (a^*_1, 5C_2/4))}
p(t/2, x, z) m_{\vec p} (dz) \nonumber \\
&\geq & t^{-1} e^{-c_{13} \rho (a^*_1, y)^2/t} \int^{ 6C_2/4}_{5C_2/4}
p^0 (t/2, u_1( x) , w) dw \nonumber \\
&\gtrsim & t^{-1} e^{-c_{13} \rho (a^*_1, y)^2/t} \int^{ 6C_2/4}_{5C_2/4} t^{-1/2}
e^{-c_{11} (6C_2/4 - u_1(x))^2/t} dw \nonumber \\
&\gtrsim & t^{-1} e^{-c_{13} \rho (a^*_1, y)^2/t} e^{-c_{14} (6C_2/4 - u_1(x))^2/t} \nonumber \\
&\gtrsim & t^{-1} e^{-c_{15} \rho (x, y)^2/t} , \label{e:8.7}
\end{eqnarray}
where in the second inequality it was used that $\rho(z,y)\le 5\rho(a^*_1, y)/2$, and in the last two inequalities we used the fact that $ |6C_2/4 - u_1(x)|\geq C_2/2$ and $\rho (x, y)\geq 2C_2$.
Now for case (a) when $x\in L_i \cup B_\rho (a^*_i, C_2)$ and $y\in L_j \cup B_\rho (a^*_j, C_2)$ for $i\not= j$,
let $z_0\in D_0$ so that both $\rho (z_0, a^*_i)$ and $\rho (z_0, a^*_j)$ take values within $(\rho (a^*_i, a^*_j)/3, 2\rho (a^*_i, a^*_j)/3)$.
We then have by \eqref{e:8.7} that for all $t\in (0, T]$,
\begin{eqnarray*}
p(t, x, y) &\geq& \int_{D_0\cap B_\rho (z_0, C_2)} p(t/2, x, z) p(t/2, z, y) m_{\vec p} (dz) \nonumber \\
&\gtrsim & t^{-1} e^{-c_{16} \rho (x, z_0)^2/t }\, t^{-1} e^{-c_{16} \rho (y, z_0)^2/t} \, m_{\vec p} (D_0\cap B_{\rho} (z_0, C_2)) \nonumber \\
&\gtrsim & t^{-1}e^{-c_{17} \rho (x, y)^2/t },
\end{eqnarray*}
where in the last inequality, we used the fact that $\rho (x, y)\geq 3$.
This completes the proof that the lower bound in \eqref{e:8.1} holds for all three cases (a)-(c).
The theorem is now proved.
\end{proof}
\subsection{A large square with an arch}
In this subsection, we study Brownian motion on a large square with an arch.
The state space $E$ of the process is defined as follows. Let $z_1, z_2 \in {\mathbb{R}}^2$ with $|z_1-z_2|\geq 6$.
Fix constants $0<\varepsilon_1, \varepsilon_2<1/2$ and $p>0$. For $i=1,2$, denote by $B_i$ the closed disk on ${\mathbb{R}}^2$ centered at $z_i$
with radius $\varepsilon_i$. Let $\displaystyle{D_0={\mathbb{R}}^2\setminus \left(B_1 \cup B_2 \right)}$.
We collapse each $B_i$ into a singleton denoted by $a_i^*$. Denote by $L$ a one-dimensional arch with two endpoints $a^*_1$ and $a^*_2$.
Without loss of generality, we assume $L$ is isometric to a closed interval $[-b, b]$ for some $b\geq 4$.
We equip the space $E:= D_0 \cup \{a^*_1, a^*_2\} \cup L$ with the Riemannian distance $\rho$ induced from $D_0$ and $L$, analogous to the last example of a large square with multiple flag poles.
Let $m_p$ be the measure on $E$ whose restriction on $L$ and $D_0$ is the arch length measure and the Lebesgue measure multiplied by $p$ and $1$, respectively. In particular, we have $m_p\left(\{a^*_1, a^*_2\}\right)=0$.
As before, BMVD on $E$ is defined as follows.
\begin{dfn}\label{def-planary-arch}\rm Given $0<\varepsilon_1, \varepsilon_2<1/2$ and $p>0$,
BMVD on $E$ with parameters $(\varepsilon_1, \varepsilon_2, p)$ on $E$ is an $m_p$-symmetric diffusion $X$ on $E$ such that
\begin{description}
\item{(i)} its subprocess in $L$ or $D_0$ has the same distribution as the one-dimensional or two-dimensional Brownian motion
in $L$ or $D_0$, respectively.
\item{(ii)} it admits no killings at $\{a_1^*, a_2^*\}$.
\end{description}
\end{dfn}
Similar to Theorem \ref{existence-uniqueness}, we have the following.
\begin{thm}\label{T:8.6}
For every $0<\varepsilon_1, \varepsilon_2 < 1/2$ and
$p>0$,
BMVD $X$ on $E$ with parameter $(\varepsilon_1, \varepsilon_2, p)$ exists and is unique.
Its associated Dirichlet form $({\mathcal{E}}, {\mathcal{F}})$ on $L^2(E; m_p)$ is given by
\begin{eqnarray*}
{\mathcal{F}} &=& \left\{f: f|_{{\mathbb{R}}^2}\in W^{1,2}({\mathbb{R}}^2),\, f|_{L}\in W^{1,2}(L),\, f|_{B_i}=f|_L(a^*_i)\,,i=1,2 \right\}, \\
{\mathcal{E}} (f, g) &=& \frac{1}{2}\int_{D_0} \nabla f(x) \cdot \nabla g(x) dx+ \frac{p}{2}\int_{L} f'(x)g'(x)dx.
\end{eqnarray*}
\end{thm}
It is easy to see that BMVD $X$ has a continuous transition density function $p(t, x, y)$ with respect to the measure $m_p$.
Similarly to Propositions \ref{NashIneq}, \ref{P:3.2} and \ref{offdiagUBE},
using the classical Nash inequality for one- and two-dimensional Brownian motion
and Davies' method, one can easily establish the following.
\begin{prop}\label{UB-arch}
Let $T\geq 2$.
There exist $C_1, C_2>0$ such that
\begin{displaymath}
p (t,x,y)\le C_1\left(\frac{1}{t}+
\frac{1}{t^{1/2}}\right)e^{-C_2\rho(x,y)^2/t} \qquad\text{for all } x, y \in
E, t\in (0,T].
\end{displaymath}
\end{prop}
The next theorem gives short time sharp two-sided estimates on $p(t, x, y)$.
\begin{thm}\label{T:smalltime-arch}
Let $T\geq 2$ be fixed. There exist positive constants $C_i$, $3\le i\le 16$,
so that the transition density $p(t,x,y)$ of BMVD $X$ on $E$ satisfies the following estimates when $t\in (0, T]$:
\begin{description}
\item{\rm (i)} For $x \in L$ and $y\in E$,
\begin{equation}\label{eq-arch-1}
\frac{C_3}{\sqrt{t}}e^{-C_4\rho (x, y)^2/t } \le p(t,x,y)\le\frac{C_5}{\sqrt{t}}e^{- C_6\rho (x, y)^2/t}.
\end{equation}
\item{\rm (ii)} For $x,y\in D_0\cup \{a^*_1, a^*_2\}$, when $\rho(x, a^*_i)+\rho(y, a^*_i)<1$ for some $i=1, 2$,
\begin{eqnarray}\label{eq-arch-3}
&& \frac{C_{7}}{\sqrt{t}}e^{- C_{8}\rho(x,y)^2/t}+\frac{C_{7}}{t}\left(1\wedge
\frac{\rho(x, a^*_i)}{\sqrt{t}}\right)\left(1\wedge
\frac{\rho(y, a^*_i)}{\sqrt{t}}\right)e^{- C_{9}|x-y|^2/t} \\
&\le & p(t,x,y)
\le \frac{C_{10}}{\sqrt{t}}e^{-C_{11}\rho(x,y)^2/t}
+\frac{C_{10}}{t}\left(1\wedge
\frac{\rho(x, a^*_i)}{\sqrt{t}}\right)\left(1\wedge
\frac{\rho(y,a^*_i)}{\sqrt{t}}\right)e^{-C_{12}|x-y|^2/t}; \nonumber
\end{eqnarray}
otherwise,
\begin{equation}\label{eq-arch-4}
\frac{C_{13}}{t}e^{- C_{14}\rho(x,y)^2/t} \le p(t,x,y) \le \frac{C_{15}}{t}e^{- C_{16}\rho(x,y)^2/t} .
\end{equation}
\end{description}
\end{thm}
\begin{proof} This theorem can be established by a similar consideration as that for
Theorem \ref{small-time-multi-line}.
Here we only give a brief sketch.
Let $C_1>0$ and $C_2\in (0, 1/2)$ be the constants in Lemma \ref{L:6.1}.
\smallskip
\noindent {\it Case 1.} $\rho(x,y)<2C_2$. The desired estimates can be obtained in a similar way
as in part (i) of the proof of Theorem \ref{small-time-multi-line}, by using Lemma \ref{L:6.1}.
\smallskip
\noindent {\it Case 2. } $\rho(x,y)\geq 2C_2$.
Due to the upper bound estimate in Proposition \ref{UB-arch}, it suffices to
show that the following lower bound estimate holds:
there exist $c_1, c_2 >0$ such that
\begin{equation}\label{e:8.11}
p (t,x,y)\ge c_1 e^{-c_2 \rho(x,y)^2/t} \quad \text{for all }t\in (0, T].
\end{equation}
We divide its proof into three cases.
\begin{description}
\item{(i)} Both $x$ and $y$ are in $ L\cup B_\rho (a^*_1, C_2) \cup
B_\rho (a^*_2, C_2)$. Without loss of generality, we assume $x\in L\cup B_\rho (a^*_1, C_2)$ and $x$ is closer
to $a^*_1$ if both $x$ and $ y$ are on the arch $L$.
Denote by $\ell (w, z)$ the arch length in $L$ between two points $w, z\in L$,
and $D_1:= L\cup B_\rho (a^*_1, 3C_2) \cup
B_\rho (a^*_2, 3C_2)$.
We define a modified signed radial process $Y_t=u(X^{D_1}_t)$,
where
$$
u(z):=\begin{cases}
\rho (z, a^*_1) & \hbox{if } z \in D_0\cap B_\rho (a^*_1, 3C_2) , \cr
- \ell (a^*_1, z) & \hbox{if } z \in L, \cr
-\ell (a^*_1, a^*_2) -\rho (z , a^*_2) & \hbox{if } z \in D_0\cap B_\rho (a^*_2, 3C_2).
\end{cases}
$$
By a similar argument as that for Section \ref{S:4}, we can show that
$Y$ is a subprocess of a skew Brownian motion on ${\mathbb{R}}$ with skewness at the points $0$ and $-\ell (a^*_1, a^*_2)$ and bounded drift
killed upon leaving $I=(- \ell (a^*_1,a^*_2)-2C_2, 2C_2)$. The desired lower bound estimate
for $p(t, x, y)$ can be
derived from the Dirichlet heat kernel estimate for the one-dimensional diffusion $Y$.
\item{(ii)} Both $x, y \in D_2:=D_0\setminus (B_\rho(a^*_1, C_2/2)\cup B_\rho(a^*_2, C_2/2))$.
In this case, the desired lower bound estimate for $p(t, x, y)$ follows from the Dirichlet heat kernel estimate
for two-dimensional Brownian motion in $C^{1,1}$ domain $D_2$.
\item{(iii)} $x\in L \cup B_\rho(a^*_1, C_2/2)\cup B_\rho(a^*_2, C_2/2)$ and $y\in D_0\setminus (B_\rho(a^*_1, 2C_2)\cup B_\rho(a^*_2, 2C_2))$. Without loss of generality, we assume $x$ is closer to $a^*_1$ than to $a_2^*$. Let
$$ D_3:= \{z\in D_0 : C_2/2\le \rho(z, a^*_1)\le C_2\}.
$$
Note that $\rho(x,y)\ge (\rho (x,z)+\rho(y, z))/5$ for $z\in D_3$.
By the Markov property,
$$
p(t,x,y)\ge \int_{D_3} p(t/2, x, z)p(t/2, z,y)m_p(dz).
$$
The desired lower bound for $p(t,x,y)$ follows from the results obtained in (i) and (ii).
\end{description}
\end{proof}
The rotation curve is a major tool for studying the dynamics and structure of the Galaxy (Binney and Merrifield 1998; Sofue and Rubin 2001; Sofue et al. 2009). Once it is determined, the rotation curve is used to obtain the distribution of mass in the Galaxy, and to map the interstellar matter from kinematical observables. The accuracy of the obtained results depends not only on the observational accuracy, but also on the methods as well as on the locations of the observed objects in the galactic disk. However, accuracy analyses have not been obtained systematically in current studies of galactic kinematics.
VLBI measurements of parallaxes and proper motions of galactic maser sources have opened a new era in the study of galactic kinematics, particularly in determining the rotation curve and distances of objects (e.g. Honma et al. 2007; Reid et al. 2009). VERA has played an essential role, and high-accuracy determination of the rotation velocity is now available using trigonometric distance measurements combined with proper motions (Honma et al. 2007).
A more general method to measure the distance, radial velocity, and proper motion at the same time has been applied for determining three-dimensional velocity vectors of maser sources (Oh et al. 2010). These measurements are expected to provide us with a global rotation curve constructed from objects distributed over the galactic disk. This will improve the accuracy not only of the outer rotation curve, but also of the inner rotation curve, where the current observations have been obtained mostly from the tangent point data.
Accordingly, observations of a larger number of objects distributed in the galactic disk have become possible for deriving the rotation curve. We recall that the accuracy of the derived rotation velocity depends not only on the intrinsic observing errors, but also on the locations of the measured objects.
In this paper, we analyze the behaviors of observable kinematical quantities such as the radial velocity and proper motions in the galactic disk. We investigate the dependence of the accuracy of derived rotation velocities on the galactic positions of the observed sources. The results would be useful as a guide for optimizing the selection of objects for determination of the Galactic rotation curve. The present analyses are given on the assumption of circular rotation of the galactic objects. We present the result only for a fixed set of the galactic constants, $R_0=8$ kpc and $V_0=200$ \kms. However, the analysis does not include systematic errors arising from the uncertainties of these quantities. Hence, this paper should be taken as a methodological guide, and the diagrams may be recalculated for other choices of the constants in individual cases and data sets.
\section{Rotation Curve}
The tangent-point method, or the terminal velocity method, has been most often applied to determine the inner rotation curve (Clemens 1985). The outer rotation curve has been determined using spectro-photometric distances combined with radial velocities of interstellar lines (Fich et al. 1989), or by the HI disk thickness method (Merrifield 1991; Honma and Sofue 1997). However, the outer rotation curve is still crude mainly because of the distance uncertainties.
Figure \ref{fig-rc} shows a rotation curve of the Galaxy obtained by compiling measured rotation velocities from the literature over the past decades (Sofue et al. 2009). The galactic constants are taken to be $R_0=8.0$ kpc and $V_0=200$ \kms in this paper. Recent values from VERA observations are also included (see table \ref{tab-veravrot}). Rotation velocities within the solar circle are accurately determined using the tangent-point method, which uses the terminal velocities in the spectral lines of interstellar gases (Clemens 1985). Since this method does not require distances of the emission regions, it gives relatively high accuracy of the rotation velocity. On the other hand, the outer rotation curve is crude, because distance measurements of objects, which usually contain large errors, are inevitable (Blitz 1979; Fich et al. 1989; Merrifield 1992; Honma and Sofue 1997; Binney and Dehnen 1997).
In order to discuss the accuracy of derived rotation velocities in the following sections, we need to use an approximate rotation curve to represent the observations. Since the present study treats the kinematics only in the galactic plane, we here adopt a simple model of three-component Plummer potential.
\be
V(R)=\sqrt{R {\partial \Phi \over \partial R}},
\label{eq-vrot}
\ee
and
\be
\Phi=-\sum_i {G M_i \over \sqrt{R^2+a_i^2}},
\label{eq-phi}
\ee
where $G$ is the gravitational constant, $a_i$ are the scale radii of the individual mass components, and $M_i$ are the masses. Table \ref{tab-plummer} lists the values of the parameters, and figure \ref{fig-rcmodel} shows the model rotation curve, which approximately represents the observations in figure \ref{fig-rc} with $R_0=8$ kpc and $V_0=200$ \kms. We note that the calculated results in the next sections are not strongly dependent on the shape of the rotation curve.
\begin{table}
\caption{Plummer potential parameters for a model galactic rotation curve, mimicking the observed rotation curve in figure \ref{fig-rc}. }
\begin{tabular}{lllllll}
\hline\hline
Component $i$ & $a_i$ (kpc) & $M_i~(\Msun)$ \\
\hline
1 & 0.2 & $0.8\times 10^{10}$ \\
2 & 3.6 & $0.6 \times 10^{11}$ \\
3 & 15 & $2.0 \times 10^{11}$ \\
\hline \\
\end{tabular}
\label{tab-plummer}
\end{table}
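For reference, equations (\ref{eq-vrot}) and (\ref{eq-phi}) reduce to the closed form $V(R)^2=\sum_i G M_i R^2/(R^2+a_i^2)^{3/2}$, which is straightforward to evaluate. The following Python sketch (illustrative only; the function name \texttt{v\_rot} and the value of $G$ in kpc (km s$^{-1}$)$^2$ $\Msun^{-1}$ are our choices) reproduces the model curve of figure \ref{fig-rcmodel}:
\begin{verbatim}
import numpy as np

G = 4.302e-6  # gravitational constant [kpc (km/s)^2 / Msun]

# (a_i [kpc], M_i [Msun]) from the Plummer parameter table
components = [(0.2, 0.8e10), (3.6, 0.6e11), (15.0, 2.0e11)]

def v_rot(R):
    """Model rotation velocity [km/s] at galacto-centric radius R [kpc]."""
    R = np.asarray(R, dtype=float)
    v2 = sum(G * M * R**2 / (R**2 + a**2)**1.5 for a, M in components)
    return np.sqrt(v2)

print(v_rot(8.0))  # ~200 km/s, i.e. V0 at R0 = 8 kpc
\end{verbatim}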
\begin{figure}
\bc
\includegraphics[width=8.5cm]{fig-rc-vera} \\
\ec
\caption{Rotation curve of the Galaxy (Sofue et al. 2009). Inner curve at $r<8$ kpc is obtained mainly by the tangent-point method, while the outer curve is crude because of the ambiguities in distance estimation. Big diamonds are from recent VERA observations (see table \ref{tab-veravrot}).}
\label{fig-rc}
\bc
\includegraphics[width=8cm]{fig-rcmn.eps}
\ec
\caption{A Plummer model rotation curve $V(R)$ for parameters in table \ref{tab-plummer}.}
\label{fig-rcmodel}
\end{figure}
\begin{figure}
\bc
\includegraphics[width=7cm]{fig-circ.eps}
\ec
\caption{Definition of used variables and parameters.}
\label{fig-circ}
\end{figure}
\section{Accuracy Diagrams for Rotation Curve}
We denote the radial velocity by $\vr$, and the velocity perpendicular to the line of sight by $\vp=\mu r$, with $\mu$ being the proper motion and $r$ the distance to the object from the Sun. These quantities are related to the circular rotation velocity $V$ as
\be
\vr= \left({R_0 \over R} V - V_0\right) \sinl,
\label{eq-vr}
\ee
and
\be
\vp=\mu r =-{s \over R} V -V_0 \cosl,
\label{eq-vp}
\ee
where
\be
s=r-R_0 \cosl
\ee
Here $R$ is the galacto-centric distance, and is related to $r$ and galactic longitude $l$ as
\be
R=\sqrt{r^2+R_0^2 - 2 r R_0 \cosl}.
\ee
Figure \ref{fig-circ} illustrates the definition of used variables and parameters in this article.
\subsection{Rotation Velocity $\Vrotr$ from Radial-Velocity, and Accuracy Diagram, $\Delta \Vrotr(X,Y)$}
If we assume that the object's orbit is circular around the Galactic Center, the rotation velocity $V$ can be obtained by measuring the radial velocity $v_{\rm r}$ and its distance $r$, which is expressed by the galacto-centric distance $R$ and longitude $l$:
\be
\Vrotr= {R \over R_0} \left({v_{\rm r} \over \sinl} +V_0 \right).
\label{eq-V}
\ee
Since the observations include errors in $\vr$ and $r$, the resultant rotation velocity $V$ has an error which is expressed by
\be
\Delta \Vrotr=\sqrt{\delta V_{\rm vr}^2+\delta V_{\rm r}^2}.
\ee
Here,
\be
\delta V_{\rm vr}={\partial V \over \partial v_{\rm r}} \delta v_{\rm r}, ~~~~
\delta {V_{\rm r}}={\partial V \over \partial r} \delta r.
\ee
We obtain
\be
\Delta \Vrotr
=\left[{\left(R \over R_0 \sinl \right)^2\delta v_{\rm r}^2
+ \left(s ~V \over R^2 \right)^2 \delta r^2}\right]^{1/2}.
\label{eq-dvrad}
\ee
The uncertainty in the galacto-centric distance $R$ arises from the error in distance measurement as
\be
\delta R = {s \over R} \delta r.
\label{eq-deltaR}
\ee
Note that
\be
\sinl=X/r,~~~ \cosl=-(Y-R_0)/r
\ee
in the Cartesian coordinates centered on the Galactic Center. Since equation \ref{eq-dvrad} includes the rotation velocity $V$, the error distribution depends on the rotation curve.
Figure \ref{fig-vrad} shows the thus calculated distribution of the expected error in rotation velocity, $\Delta \Vrotr$, by a contour map in the Cartesian coordinates $(X,Y)$. We may call this diagram the ``accuracy diagram'' for the rotation velocity. The calculation was made for a combination of $\delta \vr=1$ \kms and $\delta r/r=0.02$, or 2\% error in distance measurement. The regions with higher accuracy or with smaller errors are presented by bright areas, while regions with larger errors are dark.
This figure indicates that the accuracy is highest along the tangent point circle. Along this circle $s= r - R_0 \cosl =0$, and the second term in equation \ref{eq-dvrad} is equal to zero.
For different parameters, the diagram may change quantitatively, but the overall characteristics remain unchanged, and hence, this diagram represents the general behavior of the accuracy distribution.
This is obviously the reason why the tangent-point method has resulted in a higher-accuracy rotation curve inside the solar circle as in figure \ref{fig-rc}. Thus, the tangent-point circle is a special region for accurate rotation curve determination from radial velocity observations. Outside the tangent-point circle, the error is smoothly minimized in broad ``butterfly'' regions around $l\sim 100-135\deg$ and $l\sim 225-280\deg$.
On the other hand, this method yields the largest error near the Sun-GC line, where the direction of the circular rotation is perpendicular to the line-of-sight velocity, so that small observational error in the radial velocity largely affects the resultant rotating velocity. The Sun-GC line is, thus, the singular line in this method.
The uncertainty in $R$ is calculated as in equation (\ref{eq-deltaR}), and propagates to the uncertainty in $\Vrotr$ through equation (\ref{eq-dvrad}). Therefore, equation (\ref{eq-dvrad}) already includes the uncertainty $\delta R$ caused by $\delta r$. The uncertainty in $R$ also causes an uncertainty in the model $V(R)$. This affects the result only to second order, because the rotation curve is assumed to be nearly flat in most regions. However, $\delta R$ becomes very large at small $R$, as equation (\ref{eq-deltaR}) indicates. Therefore, the accuracy diagram should not be taken seriously near the Galactic Center. These arguments apply similarly to the following sections.
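The accuracy diagram of figure \ref{fig-vrad} can be generated directly from equation (\ref{eq-dvrad}). A minimal Python sketch follows (illustrative only; the grid size is our choice, and the flat-curve fallback \texttt{v\_rot} is an assumption that may be replaced by the Plummer model sketched in section 2):
\begin{verbatim}
import numpy as np

R0, dvr, drel = 8.0, 1.0, 0.02              # kpc, km/s, distance error dr/r
v_rot = lambda R: 200.0 * np.ones_like(R)   # flat-curve fallback (assumption)

# Cartesian grid: the GC is at the origin and the Sun at (X, Y) = (0, R0)
X, Y = np.meshgrid(np.linspace(-20, 20, 401), np.linspace(-20, 20, 401))
r = np.hypot(X, Y - R0)                     # heliocentric distance
R = np.hypot(X, Y)                          # galacto-centric distance
sinl = X / r                                # sin(l); zero on the Sun-GC line
s = r - R0 * (R0 - Y) / r                   # s = r - R0 cos(l)

V = v_rot(R)
with np.errstate(divide='ignore', invalid='ignore'):
    # error of the radial-velocity method (the Delta V^vr formula above)
    dV = np.sqrt((R / (R0 * sinl))**2 * dvr**2
                 + (s * V / R**2)**2 * (drel * r)**2)
# dV diverges on the Sun-GC line and is smallest where s = 0 (tangent points)
\end{verbatim}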
\begin{figure}
\bc
\includegraphics[width=7.5cm]{fig-vrotr1_proof.eps} \\
\ec
\caption{Accuracy diagram $\Delta \Vrotr(X,Y)$ for $\delta \vr=1$ \kms, $\delta r/r=0.02 $ (2\% distance error). Dashed circles represent $R=8$ kpc (solar circle), $R=15$, 20 and 25 kpc. Contours are drawn from white to black at 1.2, 1.5, 2, 2.5, ..., 5, 6, 7, .... with 3 and 5 \kms by thick lines. Sources near the Sun-GC line yield the largest error. }
\label{fig-vrad}
\bc
\includegraphics[width=7.5cm]{fig-vrotp1.eps} \\
\ec
\caption{Accuracy diagram $\Delta \Vrotp(X,Y)$ for $\delta \mu= 0.21 {\rm mas~y^{-1}} $ and $\delta r/r=0.02$ for 2\% distance error. Contours are drawn at 2, 4, ..., 20, 25, 30, ... \kms with 10 \kms by thick line. The error becomes largest along the tangent point circle.}
\label{fig-vprop}
\end{figure}
\begin{figure}
\bc
\includegraphics[width=7.5cm]{fig-vvec1.eps} \\
\ec
\caption{Accuracy diagram $\Delta \Vrotv(X,Y)$ for $\delta \mu=0.21$ mas y$^{-1}$, $\delta \vr =1$ \kms, and $ \delta r/r=0.02$.. Contours are drawn at 2, 4, 6, .... 20, 25, 30, ... \kms with 10 and 20 \kms by thick lines. }
\label{fig-vvec}
\end{figure}
\subsection{Rotation Velocity $\Vrotp$ from Proper Motion, and Accuracy Diagram, $\Delta \Vrotp(X,Y)$}
If we assume circular motion, the rotation velocity is also determined by measuring the proper motion $\mu$ as
\be
\Vrotp = - {R \over s} (r \mu + V_0 \cosl).
\label{eq-vrotprop}
\ee
In the same way as in the previous section and remembering that $R^2-s^2=R_0^2{\rm sin}^2 l$, we have
\be
\Delta \Vrotp = \sqrt{\left(\partial V \over \partial r \right)^2 \delta r^2 + \left(\partial V \over \partial \mu \right)^2 \delta \mu^2}
\ee
\be
={R \over |s| } \left[ r^2 \delta \mu^2 + \left( {R_0^2 V {\rm sin}^2 l \over R^3} + {\vp \over r} \right)^2 \delta r ^2 \right]^{1/2}.
\label{eq-dvprop}
\ee
The errors in $v_{\rm p}$ and $r$ may be assumed to be proportional to the distance. We here calculate an accuracy diagram for $\delta \mu=0.21$ mas y$^{-1}$ and $\delta r/r= 0.02$. Figure \ref{fig-vprop} shows the thus calculated accuracy diagram $\Delta \Vrotp$ for the same rotation curve as in figure \ref{fig-rcmodel}.
Figure \ref{fig-vprop} shows that the error becomes smallest along the Sun-GC line, and is still small in a large area in the anti-center direction. On the other hand, the error is largest around the tangent point circle, and the error equation \ref{eq-dvprop} diverges on the tangent-point circle, where $s=0$. Thus, the tangent-point circle is a singular region in this method.
These behaviors are just in the opposite sense to the case for equation \ref{eq-dvrad} and figure \ref{fig-vrad}. In this context the radial-velocity method, including the tangent-point (terminal-velocity) method, and the proper-motion method are complementary to each other, in so far as the circular rotation assumption is made.
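This complementarity can also be checked numerically by evaluating equations (\ref{eq-dvrad}) and (\ref{eq-dvprop}) at sample positions. A sketch (illustrative only; the conversion factor of 4.74 \kms\ per mas y$^{-1}$ at a distance of 1 kpc is standard, while the flat-curve fallback \texttt{v\_rot} is our simplification):
\begin{verbatim}
import math

R0, V0 = 8.0, 200.0
dvr, dmu, drel = 1.0, 0.21, 0.02   # km/s, mas/yr, fractional distance error
K = 4.74                           # km/s per (mas/yr at 1 kpc)
v_rot = lambda R: 200.0            # flat-curve fallback (assumption)

def errors(l_deg, r):
    """(Delta V^vr, Delta V^p) [km/s] at longitude l [deg], distance r [kpc]."""
    l = math.radians(l_deg)
    R = math.sqrt(r**2 + R0**2 - 2*r*R0*math.cos(l))
    s = r - R0*math.cos(l)
    V, dr = v_rot(R), drel*r
    dV_rad = math.hypot(R/(R0*math.sin(l)) * dvr, s*V/R**2 * dr)
    vp = -(s/R)*V - V0*math.cos(l)          # perpendicular velocity
    dV_prop = (R/abs(s)) * math.hypot(r*dmu*K,
               (R0**2*V*math.sin(l)**2/R**3 + vp/r) * dr)
    return dV_rad, dV_prop

print(errors(45.0, 5.66))  # near a tangent point: dV_rad ~ 1, dV_prop blows up
print(errors(2.0, 4.0))    # near the Sun-GC line: the opposite behavior
\end{verbatim}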
\subsection{Rotation Velocity $\Vrotv$ from Velocity Vector, and Accuracy Diagram $\Delta \Vrotv(X,Y)$}
If the radial velocity and proper motion as well as the distance are known at the same time for the same object, its three-dimensional velocity vector is determined without assuming circular orbit. The absolute value of the velocity vector is calculated by
\be
V=\sqrt{\Up^2+ \Ur^2},
\label{eq-Vvpvr}
\ee
where
\be
\Up=r \mu+\V0 \cosl
\ee
and
\be
\Ur=\vr + \V0 \sinl.
\ee
Since the deviation of the velocity vector from the circular orbit may be assumed to be small, we here neglect the error arising from the non-circular components, and define the rotation velocity by the velocity given here as $\Vrotv=V$. We now calculate the error included in the absolute value of the velocity vector by
\be
\Delta \Vrotv= \left[
\left({\partial V \over \partial \mu} \right)^2\delta \mu^2
+\left({\partial V \over \partial \vr} \right)^2 \delta \vr^2
+\left({\partial V \over \partial r}\right)^2 \delta r ^2
\right]^{1/2}
\ee
\be
={1 \over V} \left[\Up^2 r^2 \delta \mu^2+\Ur^2 \delta \vr^2 + \Up^2 \mu^2 \delta r^2 \right]^{1/2}.
\label{eq-dvvec}
\ee
Figure \ref{fig-vvec} shows an accuracy diagram for the velocity vector, or the distribution of $\Delta \Vrotv$, calculated for $\delta \vr=1$ \kms, $\delta \mu=0.21$ mas y$^{-1}$, and $\delta r/r=0.02$. This diagram shows a milder variation of the error in $V$ near the tangent-point circle compared with that for the radial-velocity method in figure \ref{fig-vrad} and that for the proper-motion method in figure \ref{fig-vprop}. Also, figure \ref{fig-vvec} shows milder error variations around the Sun-GC line. Thus, we see that the velocity-vector method has no singular regions for determining the rotation velocity, and provides us with more general information from the entire galactic disk.
\begin{table*}
\caption{Distances, proper motions, and radial velocities for star forming regions observed with VERA. }
\bc
\begin{tabular}{llllllllllll}
\hline\hline
Source& $l$ &$b$&$r \pm \delta r$&$\vr \pm \delta \vr$ & $\mu \pm \delta \mu^\dagger$ &Reference\\
&(deg)&(deg)&(kpc) &(\kms) & (mas y$^{-1}$)\\
\hline
IRAS 06058+2138 &188.9&0.88 &$1.76\pm 0.11$ & $3\pm 3$ &$2.57\pm 0.40$& (1)\\
IRAS 19213+1723&52.10&1.04 &$3.98\pm 0.57$&$41.7\pm 2$&$-14.66\pm 4.41$&(1) \\
AFGL 2789&94.60& -1.79& $3.07\pm 0.29$ & $-44\pm 2$ &$-4.03\pm 0.40$&(1)\\
G48.61+0.02&48.61& 0.02& $5.03\pm 0.19$& $ 19\pm 1$&$ -5.86\pm 0.20$&(2)\\
ON1& 69.54& -0.98& $2.47\pm 0.11$& $12\pm 1$ & $-7.50\pm 0.01$ &(3)\\
ON2N & 75.78& -0.34& $3.83\pm 0.13$ & $0\pm 1$& $ -7.98\pm 0.03$&(4)\\
\hline \\
\end{tabular}\\
{\small (1) Oh et al. (2009); (2) Nagayama et al. (2011a); (3) Nagayama et al. (2011b); (4) Ando et al. (2011).
\\
$\dagger$ Recalculated from $\vp$ given in the original papers.}
\label{tab-veradata}
\ec
\caption{Values of $\Vrotr$, $\Vrotp$ and $\Vrotv$ and their errors for sources in table \ref{tab-veradata}. }
\bc
\begin{tabular}{llllllllllll}
\hline\hline
Source& $R \pm \delta R$& $\Vrotr \pm \Delta \Vrotr$ & ${\Vrotp} \pm \Delta \Vrotp$&$\Vrotv \pm \Delta \Vrotv$ \\
&(kpc)&(\kms) & (\kms) & (\kms)& \\
\hline
IRAS 06058&$9.74\pm 0.11$ &$219.9\pm 23.9$ &$177.6\pm 3.6$ &$178.4\pm 3.6$\\
IRAS 19213&$ 7.05\pm 0.25$& $223.0\pm 12.9$ &--- $^\ddagger$ &$199.5\pm 2.0$&\\
AFGL 2789 &$8.80\pm 0.12$& $171.4\pm 16.7$ & $177.1\pm 16.7$ &$172.4\pm 4.0$ \\
G48.61&$6.01\pm 0.01$ & $169.2\pm 1.7$ & --- & $169.2\pm 0.9$ \\
ON1& $7.50\pm 0.01$ & $199.6\pm 1.0$ & --- & $199.4\pm 0.10$ &\\
ON2N & $7.98\pm 0.03$ & $199.4\pm 3.1$ & --- & $200.0\pm 1.5$ &\\
\hline
\end{tabular}\\
$\ddagger$ Near tangent point.\\
\label{tab-veravrot}
\ec
\end{table*}
\subsection{Observed examples}
Recently, trigonometric parallax and proper motion measurements of maser sources have provided reliable data for galactic rotation determinations (Oh et al. 2009; Ando et al. 2011; Nagayama et al. 2011a, b). Combined with radial velocities from the associated CO lines, the data can be used to calculate the rotation velocity. Table \ref{tab-veradata} shows the observed parameters from the VERA observations, where the errors for the systemic CO velocity were read from the original papers cited therein. We calculate $\Vrotr$, $\Vrotp$, and $\Vrotv$ and their errors from the data in this table. Table \ref{tab-veravrot} shows the calculated results.
The rotation velocities thus derived are consistent with each other as well as with those derived by Oh et al. (2009). However, some peculiar results are obtained by the proper-motion method for sources near the tangent points, whose values are not listed. In general, the velocity-vector method gives more reliable results. The slight differences between the rotation velocities $\Vrotv$ derived here and those by Oh et al. are due to neglecting motions perpendicular to the galactic disk in this paper, since we aimed at examining the accuracy diagrams within the galactic disk.
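As an arithmetic illustration of equations (\ref{eq-V}), (\ref{eq-vrotprop}), and (\ref{eq-Vvpvr}), the following sketch recomputes $R$, $\Vrotr$, $\Vrotp$, and $\Vrotv$ for AFGL 2789 from the values in table \ref{tab-veradata} (illustrative only; the error propagation is omitted, and the factor 4.74 is the standard conversion of \kms\ per mas y$^{-1}$ at 1 kpc):
\begin{verbatim}
import math

R0, V0 = 8.0, 200.0     # adopted galactic constants [kpc, km/s]
K = 4.74                # km/s per (mas/yr at 1 kpc)

# AFGL 2789: l [deg], r [kpc], v_r [km/s], mu [mas/yr] from the data table
l, r, vr, mu = math.radians(94.60), 3.07, -44.0, -4.03

R = math.sqrt(r**2 + R0**2 - 2*r*R0*math.cos(l))   # galacto-centric radius
s = r - R0*math.cos(l)
vp = K*mu*r                                        # perpendicular velocity [km/s]

V_rad  = (R/R0)*(vr/math.sin(l) + V0)              # radial-velocity method
V_prop = -(R/s)*(vp + V0*math.cos(l))              # proper-motion method
Up, Ur = vp + V0*math.cos(l), vr + V0*math.sin(l)
V_vec  = math.hypot(Up, Ur)                        # velocity-vector method

# ~8.8 kpc; ~171, ~177, ~172 km/s, matching the derived-values table
print(R, V_rad, V_prop, V_vec)
\end{verbatim}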
\section{Radial-Velocity and Proper-Motion Fields and Kinematical Distances}
Once the rotation curve is determined, and if we assume circular rotation, the rotation curve may be in turn used to measure kinematical distances of objects by applying the velocity-space transformation (Oort et al. 1958; Nakanishi and Sofue 2003, 2006). The kinematical distance is obtained either from radial velocity or from proper motion using equations (\ref{eq-vr}) and (\ref{eq-vp}).
\subsection{Radial-Velocity Field $\vr(X,Y)$ }
If we assume circular motion, the velocity field can be used to derive the kinematical distance $r_{\vr}$ by measuring the radial velocity $\vr$. This velocity-to-space transformation is useful to map the density distribution of interstellar gases from HI and/or CO emission lines (e.g. Nakanishi and Sofue 2003).
The kinematical distance $r$ is given by
\be
r=R_0 \cosl \pm \sqrt{R^2-R_0^2 {\rm sin}^2 l}
\label{eq-r}.
\ee
Here the galacto-centric distance $R$ is related to the radial velocity through equation (\ref{eq-V}), where $V_{\rm rot}^{\vr}$ is replaced with the determined rotation velocity $V(R)$. Then, we obtain
\be
R=R_0 V(R) \left({\vr \over \sinl}+V_0 \right)^{-1}.
\label{eq-R}
\ee
Figure \ref{fig-vr-r} shows the variation of $\vr$ as a function of $r$ at $l=30^\circ$. Figure \ref{fig-vfi} (a) shows a radial velocity field, i.e. the distribution of $\vr$ on the galactic plane, calculated for the model rotation curve in figure \ref{fig-rcmodel}. Such velocity fields have often been used to obtain the distribution of interstellar gases from radial velocities of the HI line and CO molecular lines (e.g. Nakanishi and Sofue 2003; 2006).
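Since $V(R)$ varies slowly, equation (\ref{eq-R}) can be solved by a simple fixed-point iteration, after which equation (\ref{eq-r}) gives the near- and far-side distances. A minimal sketch (our illustrative implementation; the flat-curve fallback \texttt{v\_rot} may be replaced by the Plummer model of section 2):
\begin{verbatim}
import math

R0, V0 = 8.0, 200.0
v_rot = lambda R: 200.0   # flat-curve fallback (assumption)

def kinematic_distance(vr, l_deg):
    """Near/far kinematical distances [kpc] from a radial velocity [km/s]."""
    l = math.radians(l_deg)
    R = R0                # fixed-point iteration of the relation R(v_r)
    for _ in range(20):
        R = R0 * v_rot(R) / (vr/math.sin(l) + V0)
    disc = R**2 - (R0*math.sin(l))**2
    if disc < 0:
        return None       # velocity beyond the terminal value: no solution
    root = math.sqrt(disc)
    return R0*math.cos(l) - root, R0*math.cos(l) + root  # near, far

print(kinematic_distance(60.0, 30.0))  # near/far pair about the tangent point
\end{verbatim}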
\subsection{Accuracy diagram $\Delta r_{\vr}(X,Y)$}
The accuracy of the velocity-to-space transformation depends on the accuracy of the kinematical distance $r_{\vr}$. We now construct an accuracy diagram for $r_{\vr}$ by differentiating equation (\ref{eq-r}) with respect to $\vr$ to obtain the error $\Delta r_{\vr}$ of $r_{\vr}$ in terms of the error $\delta \vr$ of $\vr$. Since $V(R)$ is a slowly varying function of $R$ and $r$, we may neglect the term $(\partial V(R)/\partial r)\, \delta r$. Note that this approximation does not hold in the Galactic Center at $R\lesssim 0.5$ kpc, so that the following result is not precise enough in the central region. We, then, obtain
\be
\Delta r_{\vr} ={R^3 \over R_0}
{1 \over \sqrt{ {\rm sin}^2 l (R^2- R_0^2 {\rm sin}^2 l ) } }
{\delta \vr \over V(R)}
\label{eq-dr}
\ee
Figure \ref{fig-adr} shows the accuracy diagram of the kinematical distance $r$, or the distribution of $\Delta r_{\vr}$ on the galactic plane, calculated for $\delta \vr=1$ \kms using the model rotation curve shown in figure \ref{fig-rcmodel}. As expected, the distance error is largest along the Sun-GC line, where the motion by galactic rotation is perpendicular to the line of sight. The figure also shows that the tangent-point circle is a singular region, where the distance determination cannot be applied. This is closely related to the near-far ambiguity problem in solving for the kinematical distance inside the solar circle.
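The accuracy diagram can be reproduced numerically. The short sketch below, again assuming a flat rotation curve for simplicity, evaluates equation (\ref{eq-dr}) along the line of sight at $l=30^\circ$ for $\delta \vr=1$ \kms; the error grows steeply as the tangent point at $r=R_0\cosl \simeq 6.93$ kpc is approached, where the square root in the denominator vanishes.
\begin{verbatim}
import numpy as np

R0, V0, dvr = 8.0, 200.0, 1.0    # kpc, km/s, km/s

def delta_r_vr(r, l_deg):
    """Distance error (kpc) from eq. (dr), for a flat curve V(R) = V0."""
    l = np.radians(l_deg)
    R = np.sqrt(r**2 + R0**2 - 2.0*r*R0*np.cos(l))
    disc = np.sin(l)**2*(R**2 - (R0*np.sin(l))**2)
    return R**3/R0/np.sqrt(disc)*dvr/V0

for r in (2.0, 4.0, 6.0, 6.9):   # approaching the tangent point
    print(r, delta_r_vr(r, 30.0))
\end{verbatim}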
\subsection{Proper-Motion Field $\mu(X,Y)$}
Given a rotation curve $V(R)$, and if we assume circular rotation, the proper motion of an object is given by
\be
\mu=-{1 \over r}\left({s \over R}V(R) +V_0 \cosl \right).
\label{eq-mu}
\ee
Figure \ref{fig-mu-r} shows the variation of $\mu$ as a function of distance $r$ in the direction of $l=30^\circ$. Figure \ref{fig-mufield} shows the $\mu$ field, or the distribution of proper motion on the galactic plane, calculated for the model rotation curve in figure \ref{fig-rcmodel}. Obviously, an object on the solar circle has a proper motion of
\be
\mu_\odot=-{V_0 \over R_0} =\Omega_0 .
\ee
For our present values $R_0=8$ kpc and $V_0=$ 200 \kms, we have $\mu_{\rm \odot}=-5.26$ \masy.
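Both the $\mu$ field and the solar-circle value can be checked with the following short script, which assumes a flat rotation curve and converts units with $1$ \kms kpc$^{-1}=1/4.74$ \masy; it returns $\mu_\odot \simeq -5.3$ \masy, in agreement with the value quoted above up to the rounding of the conversion constant.
\begin{verbatim}
import numpy as np

R0, V0 = 8.0, 200.0        # kpc, km/s
TO_MASYR = 1.0/4.74        # 1 km/s/kpc expressed in mas/yr

def mu_field(r, l_deg):
    """Proper motion (mas/yr) at distance r (kpc), eq. (mu)."""
    l = np.radians(l_deg)
    s = r - R0*np.cos(l)
    R = np.sqrt(r**2 + R0**2 - 2.0*r*R0*np.cos(l))
    return -(s/R*V0 + V0*np.cos(l))/r*TO_MASYR   # flat curve V = V0

print(mu_field(2.0*R0*np.cos(np.radians(30.0)), 30.0))  # solar circle
print(-V0/R0*TO_MASYR)                                  # mu_sun
\end{verbatim}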
This kind of $\mu$ field has not been used much in current studies of galactic dynamics because of the lack of a sufficient number of objects with measured proper motions. However, the progress in VLBI trigonometric measurements has made it possible to apply such a diagram to the determination of the kinematical distance $r_\mu$. It must be noted that there is no singular region beyond the Galactic Center.
Remembering $R=\sqrt{r^2+R_0^2-2rR_0 \cosl}$ and $s=r-R_0 \cosl$, we may iteratively solve the above equations to obtain the kinematical distance $r$ in terms of $\mu$. We first start with an arbitrary initial value of $r=r_1$ to calculate the corresponding proper motion $\mu_1$.
Then the difference $\delta \mu=\mu_{\rm ob}-\mu_1$ is related to a correction $\delta r$ to $r$ as
\be
\delta r_\mu={- r \delta \mu \over \mu + (1/R - s^2/R^3)V(R)}.
\label{eq-muiter}
\ee
Now, the initial value $r_\mu=r_1$ is corrected by the thus-obtained $\delta r_1$ to yield the second approximate value $r_2=r_1+\delta r_1$.
Equation (\ref{eq-mu}) is then used to find the next approximate value $\mu_2$, which is further used to get the second correction $\delta r_2$ to obtain $r_3$ and $\mu_3$. In this way, the iteration may be repeated until the difference $\delta \mu_{i+1}=\mu_{\rm ob}-\mu_i$ becomes sufficiently small compared to the observational error. Since the $\mu$ field varies mildly over the galactic disk, as shown in figure \ref{fig-mufield}, the iteration usually converges to a sufficiently stable value within several steps, typically $i=4$ to 5. If the $\mu$ value exceeds the maximum value plotted in figure \ref{fig-mufield}, there is no solution and the iteration diverges.
Table \ref{tab-muiteration} shows an example of the convergence for an assumed object at $l=30^\circ$ and $\mu=-4$ \masy, starting from arbitrary initial distances of $r=15$ and 2 kpc, corresponding to the far- and near-side solutions. The table shows the rapid convergence of the iteration on both sides.
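The iteration of equation (\ref{eq-muiter}), which is simply a Newton-type correction, can be implemented as follows; since the sketch assumes a flat rotation curve, the converged distances differ slightly from those in table \ref{tab-muiteration}, which were computed with the model rotation curve of figure \ref{fig-rcmodel}.
\begin{verbatim}
import numpy as np

R0, V0 = 8.0, 200.0        # kpc, km/s
TO_MASYR = 1.0/4.74        # 1 km/s/kpc expressed in mas/yr

def mu_model(r, l):
    """Model proper motion (km/s/kpc) at distance r (kpc), eq. (mu)."""
    s = r - R0*np.cos(l)
    R = np.sqrt(r**2 + R0**2 - 2.0*r*R0*np.cos(l))
    return -(s/R*V0 + V0*np.cos(l))/r, s, R

def r_from_mu(mu_ob, l_deg, r_init, tol=1e-6, itmax=20):
    """Proper-motion distance (kpc) from mu_ob (mas/yr), eq. (muiter)."""
    l, mu_target = np.radians(l_deg), mu_ob/TO_MASYR
    r = r_init
    for _ in range(itmax):
        mu, s, R = mu_model(r, l)
        dmu = mu_target - mu
        r += -r*dmu/(mu + (1.0/R - s**2/R**3)*V0)   # eq. (muiter)
        if abs(dmu)*TO_MASYR < tol:
            return r
    raise RuntimeError("no convergence: mu exceeds the model maximum")

print(r_from_mu(-4.0, 30.0, 15.0))   # far-side solution
print(r_from_mu(-4.0, 30.0, 2.0))    # near-side solution
\end{verbatim}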
\begin{table*}
\caption{Example of convergence of $\mu$ and $r_\mu$ for an assumed object, for the far- and near-side solutions.}
\bc
\begin{tabular}{llllllllllll}
\hline\hline
& & Far solution & &Near solution& \\
$i$ &$l$& $\mu$ & $r_\mu$ & $\mu$& $r_\mu$\\
& (deg)& (\masy) & (kpc) & (\masy)& (kpc)\\
\hline
0 & 30 & $-4.0 \pm 0.1$ & 15.0 & $-4.0 \pm 0.1$ & 2.0\\
1 &&-4.93949 &18.5053& -1.83192 &7.34575\\
2 && -4.09845 &18.993 & -5.55054 &4.90354\\
3 && -3.99913 &18.9885& -3.57916 &5.43869\\
4 && -4.00003 &18.9886& -4.02638 &5.40723\\
5 && -4. &18.9886& -3.99951& 5.40782\\
6 && -4. &$18.9886 \pm 0.44$ & -4.00001& $5.40781\pm 0.043 $ \\
\hline \\
\end{tabular}
\\
\label{tab-muiteration}
\ec
\caption{Proper-motion distances, $r_\mu$, determined for the VERA data in table \ref{tab-veradata}.}
\bc
\begin{tabular}{llllllllllll}
\hline\hline
Source& $l$ & $r_\pi \pm \delta r_\pi$ & $\mu \pm \delta \mu$ & $r_\mu \pm \delta r_\mu$ \\
&(deg)&(kpc) &(\masy)&(kpc)\\
\hline
IR06058 &188.9 & $1.76\pm 0.11$ & $ 2.6 \pm 0.4$ & --- (no solution)\\
IR19213 &52.10 & $3.98\pm 0.57$ & $ -6.5 \pm 0.8$ & --- \\
AFGL2789 &94.60 & $3.07\pm 0.29$ & $ -4.0 \pm 0.5 $ & $5.5 \pm 0.4$\\
G48.61 &48.61 & $5.03\pm 0.19$ & $ -5.8 \pm 0.1$ & $6.5\pm 0.1$ \\
ON1 &69.54 & $2.47\pm 0.11$& $ -6.0 \pm 0.7$ & ---\\
ON2N &75.78 & $3.83\pm 0.13$ & $ -5.4 \pm 0.2$ & ---\\
\hline \\
\end{tabular}
\label{tab-mudistance}
\ec
\end{table*}
\subsection{Accuracy diagram $\Delta r_\mu(X,Y)$}
The error propagation is estimated from equation (\ref{eq-mu}) by giving small perturbations to $r$ and $\mu$. We again neglect the term including $(\partial V(R)/\partial r) \Delta r$. Then $\Delta r_\mu$, or the error in $r_\mu$, may be expressed in terms of $\delta \mu$ as
\be
\Delta r_\mu= {r^2 \delta \mu \over V(R) \left[r s^2 / R^3- (r+s)/ R \right]-V_0 \cosl}.
\label{eq-drmu}
\ee
Figure \ref{fig-admufi} shows the accuracy diagram for the kinematical $\mu$ distance $r_\mu$, or the distribution of $\Delta r_\mu$ on the galactic plane, calculated for $\delta \mu=0.2$ \masy. The figure indicates that the distance error is largest along the Sun-GC line on the near side of the Galactic Center, where the objects have proper motions $\mu \sim \left[V(R)-V_0\right]/r$, which is usually small because of the flat rotation curve.
On the other hand, the error is drastically reduced in the region beyond the Galactic Center, where an object moves in the direction opposite to the solar motion, perpendicular to the line of sight, at about twice the rotation velocity, yielding a large proper motion
\be
\mu \sim -{V(R)+V_0 \over r},
\ee
yielding $\mu=-5.26$ \masy for a solar circle object beyond the Galactic Center.
Hence, the distances of Sun-GC line objects beyond the GC may be determined with relatively good accuracy by the proper-motion method. This is particularly important because such distances cannot be measured from radial velocities. Also, trigonometric (parallax) measurements would still be difficult for such distant objects beyond the GC, whereas the proper motion is sufficiently large.
We now apply the proper-motion method for the kinematical distance $r_\mu$ to the observed $\mu$ values given in table \ref{tab-veradata}. Table \ref{tab-mudistance} lists the thus derived distances. We obtained converged solutions for two sources, AFGL2789 and G48.61+0.02. The iteration diverged for the other sources, whose proper motions exceed the values expected from the model rotation curve. This may arise either from intrinsically non-circular motions of the sources (e.g. Oh et al. 2009), or because the adopted rotation curve may not represent the true galactic rotation well.
\begin{figure}
\bc
\includegraphics[width=7.5cm]{fig-vr-r.eps} \\
\ec
\caption{Variation of radial-velocity $\vr$ as a function of the line of sight distance $r$ at $l=30^\circ$. }
\label{fig-vr-r}
\bc
\includegraphics[width=7.5cm]{fig-vrfield.eps} \\
\ec
\caption{Radial-velocity field, $\vr(X,Y)$, for the rotation curve in figure \ref{fig-rcmodel}. Contours are drawn at 20 \kms intervals, with bright regions being positive and dark regions negative velocities. }
\label{fig-vfi}
\bc
\includegraphics[width=7.5cm]{fig-dr.eps} \\
\ec
\caption{Accuracy diagram, $\Delta r_{\vr} (X,Y)$, for $\delta \vr=1$ \kms. Contours are drawn at $\Delta r_{\vr}=$0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, ... kpc from white to dark.}
\label{fig-adr}
\end{figure}
\begin{figure}
\bc
\includegraphics[width=7.5cm]{fig-mu-r.eps} \\
\ec
\caption{Variation of proper motion $\mu$ as a function of the line of sight distance $r$ at $l=30^\circ$.}
\label{fig-mu-r}
\bc
\includegraphics[width=7.5cm]{fig-mufield.eps} \\
\ec
\caption{Proper-motion field, $\mu(X,Y)$, for the rotation curve in figure \ref{fig-rcmodel}. Contours are drawn at 1 \masy intervals, with the thick line near the solar circle corresponding to $-5$ \masy. }
\label{fig-mufield}
\bc
\includegraphics[width=7.5cm]{fig-drp.eps} \\
\ec
\caption{Accuracy diagram, $\Delta r_\mu(X,Y)$, for $\delta \mu =0.2$ \masy. Contours are drawn at $\Delta r_\mu=$0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, ... kpc from white to dark.}
\label{fig-admufi}
\end{figure}
\section{Discussion}
We have analyzed the errors in rotation curve determinations expected from observational errors in the distance, radial velocity and proper motion of individual objects, depending on their galactic positions. We displayed the error distributions of the rotation velocities derived by the various methods as accuracy diagrams. We presented the accuracy diagrams for particular combinations of assumed errors in $r$, $v_r$ and $v_p$, but they represent the general properties of the error distributions. If we adopt different combinations of intrinsic observational errors, the diagrams will change quantitatively, while their general characteristics will remain similar to those presented here.
The radial-velocity method assuming circular motion has been the one most often used over the past decades. The tangent-point method for the inner rotation curve using the HI and CO line emissions is an extreme case, selecting objects on the loci of minimum error in figure \ref{fig-vrad}. The accuracy diagram, $\Delta \Vrotr(X,Y)$, explains well why the observed rotation curve in figure \ref{fig-rc} is determined so much better at $R<8$ kpc by the tangent-point method than the outer rotation curve. The tangent-point circle is a special region where the radial-velocity method gives the highest-accuracy rotation curve. Furthermore, this diagram suggests that the butterfly areas at $l\sim 100-135\deg$ and $l\sim 225-280\deg$ are suitable regions for selecting sources to determine the outer rotation curve with this method. It should be mentioned that similar accuracy is expected for sources within the tangent-point circle for determining the inner rotation curve. Sources near the Sun-GC line are, of course, not appropriate for this method, since the accuracy diagram is singular there.
In the proper-motion method assuming circular motion, the most accurate measurement of the rotation velocity is obtained for objects near the Sun-GC line, as shown by the accuracy diagram, $\Delta \Vrotp(X,Y)$, in figure \ref{fig-vprop}, and as indeed realized by Honma et al. (2007). It must also be emphasized that the minimum-error area is widely spread over $l \sim 120 - 250\deg$ in the anti-center region, as well as in the central region inside the tangent-point circle. The largest error occurs for objects lying near the tangent-point circle. Thus, the tangent-point circle is the singular circle for this method.
In the 3-D velocity-vector measurement, where no assumption of circular motion is made, the three independent values of the radial velocity, proper motion and distance are required at the same time. The accuracy diagram, $\Delta \Vrotv(X,Y)$, in figure \ref{fig-vvec} indicates a milder dependence of the accuracy on the location of the sources compared to the previous two methods. Optimization of source selection is, therefore, easier in the 3-D method.
The accuracy diagrams presented in this paper account for the behaviors observed in the galactic rotation curves obtained over the past decades. In current observations, source selection has been optimized by the authors a priori, though not necessarily in a systematic way. The present analysis may be helpful for future source selections in order to optimize observations for higher-accuracy rotation curves with limited resources of observing time and facilities.
Once a rotation curve is obtained, it may in turn be used to map the ISM and stellar objects in the galactic plane from their radial velocities and proper motions. The former method has often been applied to map the Galaxy. The latter method, however, has not been used often because of the lack of proper-motion measurements; the recent VLBI trigonometric measurements of proper motions make it a promising tool for mapping the Galaxy. We have presented the radial-velocity field, $\vr(X,Y)$, and the proper-motion field, $\mu(X,Y)$, for an assumed circular rotation curve as in figure \ref{fig-rcmodel}. We constructed accuracy diagrams, $\Delta r_{\vr}(X,Y)$ and $\Delta r_\mu(X,Y)$, for distance measurements using the velocity and proper-motion fields. The $\Delta r_{\vr}(X,Y)$ field confirms the behaviors seen in the galactic maps published over the past decades. The $\Delta r_\mu(X,Y)$ diagram also confirms that proper-motion measurements are a powerful tool for solving the distance ambiguity in the Galaxy along the Sun-GC line, particularly for sources beyond the Galactic Center.
Finally, we comment on possible systematic errors. Non-circular motions due to the bar, spiral arms, random motions, tidal effects of companion galaxies, etc., would be superposed on the rotation curve. Such effects will cause systematic errors in the determined quantities and affect the error analyses. The uncertainties in the adopted galactic constants $R_0$ and $V_0$ also affect the results, and moreover the adopted rotation curve used to calculate the errors. Hence, the present error analyses involve a circularity among the evaluated quantities and errors. This circularity means that the results should not be interpreted too rigorously, but should be used as a guide to galactic dynamics based on the adopted galactic constants and the assumption of circular rotation.
\section{Introduction}
\label{sec:intro}
Cosmic inflation~\cite{Starobinsky:1980te, Guth:1980zm, Linde:1981mu,
Albrecht:1982wi, Linde:1983gd} is presently the most promising
paradigm to describe the physical conditions that prevailed in the
very early universe. It consists of two stages. First, there is a
phase of accelerated expansion. In the simplest models, it is driven
by a scalar field, the inflaton, slowly rolling down its potential,
and the background spacetime almost exponentially expands. Second,
there is the reheating epoch~\cite{Albrecht:1982mp, Dolgov:1982th,
Abbott:1982hn, Turner:1983he, Shtanov:1994ce, Kofman:1994rk,
Kofman:1997yn} (see \Refs{Bassett:2005xm,Amin:2014eta} for reviews)
during which the inflaton field oscillates around the minimum of its
potential and decays into other degrees of freedom it couples
to. Then, after thermalisation of these decay products, the
radiation-dominated era of the hot big-bang phase starts.
One of the main successes of the inflationary scenario is that it
provides a convincing mechanism for the origin of the structures in
our universe~\cite{Mukhanov:1981xt, Kodama:1985bj}. According to the
inflationary paradigm, they stem from quantum fluctuations born on
sub-Hubble scales and subsequently amplified by gravitational
instability and stretched to super-Hubble distances by cosmic
expansion. During this process, which occurs in the slow-roll phase,
cosmological perturbations acquire an almost scale-invariant power
spectrum, which is known to provide an excellent fit to the
astrophysical data at our disposal~\cite{Akrami:2018vks,
Akrami:2018odb}.
In the simplest models where inflation is driven by a single scalar
field with canonical kinetic term, on large scales, the curvature
perturbation is conserved~\cite{Mukhanov:1981xt, Kodama:1985bj}, which
implies that the details of the reheating process do not affect the
inflationary predictions or, in other words, that ``metric
preheating'' is inefficient on those scales. Since these models are
well compatible with the data~\cite{Martin:2013tda, Martin:2013nzq,
Martin:2015dha, Chowdhury:2019otk}, the stage of reheating is
usually not considered as playing an important role in the evolution
of cosmological perturbations~\cite{Finelli:1998bu}. For the scales
observed in the Cosmic Microwave Background (CMB), the only effect of
reheating is through the amount of expansion that proceeds during this
epoch, which relates physical scales as we observe today to the time
during inflation when they emerge. This thus determines the part of
the inflationary potential that we probe with the CMB. In practice,
there is a single combination~\cite{Martin:2006rs} of the reheating
temperature and of the mean equation-of-state parameter, that sets the
location of the observational window along the inflationary
potential. Given the restrictions on the shape of the potential now
available~\cite{Martin:2013nzq, Vennin:2015eaa}, this can be used to
constrain the kinematics of reheating~\cite{Martin:2010kz,
Martin:2014nya, Martin:2016oyk, Hardwick:2016whe}. In multiple-field
scenarios, on the contrary, large-scale curvature perturbations can be
strongly distorted by the reheating
process~\cite{Bassett:1999cg,Bassett:1999mt,
Jedamzik:1999um,Finelli:2000ya,Allahverdi:2010xz}, which means that
metric preheating can be important and, thus, can have more impact on
CMB observations.
The situation is very different for scales smaller than those observed
in the CMB, more precisely for scales crossing back in the Hubble
radius during reheating (or never crossing out the Hubble radius
during inflation). In particular, it was shown in
\Ref{Jedamzik:2010dq} (see also \Ref{Easther:2010mr}) that the density
contrast of the scalar field fluctuations can grow on small scales
during preheating, due to a parametric instability sourced by the
oscillations of the inflaton at the bottom of its potential. This
mechanism demonstrates that metric preheating can be important even in
single-field inflation, although not on large scales. It can give rise
to different interesting phenomena such as early structure
formation~\cite{Jedamzik:2010dq}, gravitational waves
production~\cite{Jedamzik:2010hq} and even Primordial Black Holes
(PBHs)~\cite{Carr:1974nx,Carr:1975qj} formation~\cite{Martin:2019nuw}
(PBHs formation from scalar fields was considered in
\Ref{Khlopov:1985jw}, in the case of two-fields models in
\Ref{Bassett:2000ha} and in the case of single-field tachyonic
preheating in \Ref{Suyama:2006sr}).
These phenomena can lead to radical shifts in the standard picture of
how reheating proceeds. Indeed, in \Ref{Martin:2019nuw}, it was shown
that the production of light PBHs from metric preheating is so
efficient that they can quickly come to dominate the universe content,
such that reheating no longer occurs because of the inflaton decay, as
previously described, but rather through PBHs Hawking
evaporation. This conclusion, however, was reached by neglecting the
decay products of the inflaton throughout the instability phase, and
by simply assuming that they would terminate the instability abruptly
at the time when they dominate the energy budget (if PBHs have not
come to dominate the universe before then). However, as will be made
explicit below, the instability of metric preheating proceeds in the
narrow resonance regime. One may therefore be concerned that it
requires a delicate balance in the dynamics of the system, and that
even a small amount of produced radiation could be enough to distort
or jeopardise the instability mechanism. The goal of this paper is
therefore to investigate how the presence of inflaton decay products
(modelled as a perfect fluid), produced by perturbative reheating,
affects the metric preheating instability.
The paper is organised as follows. In \Sec{sec:reheating}, we briefly
review metric preheating, which leads to the growth of the inflaton
density contrast at small scales. Then, in \Sec{sec:radiativedecay},
we study whether a small amount of radiation, originating from the
inflaton decay, can modify this growth. For this purpose, we introduce
a covariant coupling model between the inflaton scalar field and a
perfect fluid, leading to equations of motion at the background (see
\Sec{subsec:back}) and perturbative (see \Sec{subsec:pert}) levels
that feature no substantial change in the instability structure until
the fluid dominates. In \Sec{subsec:pbh}, we discuss the application of
the previous results to the production of PBHs during reheating,
which, we stress, cannot be described as originating from perfect
fluid inhomogeneities, contrary to what is sometimes argued. Finally,
in \Sec{sec:conclusions}, we briefly summarise our main results and
present our conclusions.
\section{Preheating in single-field inflation}
\label{sec:reheating}
In this work, we consider single scalar field models of inflation,
with a canonical kinetic term. In these models, a homogeneous inflaton
field $\phi(t)$ drives the expansion of a flat
Friedmann-Lema\^itre-Robertson-Walker (FLRW) space-time, described by
the metric $\mathrm{d} s^2=-\mathrm{d} t^2+a^2(t)\mathrm{d} {\bm x}^2$, where $a(t)$ is the
FLRW scale factor. The corresponding equations of motion are the
Friedmann and Klein-Gordon equations, namely
\begin{align}
\label{eq:KG&F}
H^2=\frac{1}{3M_\usssPl^2}\left[\frac{\dot{\phi}^2}{2}+V(\phi)\right], \quad
\ddot{\phi}+3H\dot{\phi}+V_\phi\left(\phi\right) =0\, ,
\end{align}
where $H=\dot{a}/a$ is the Hubble parameter, $V_\phi$ the derivative
of the potential with respect to $\phi$, $M_\usssPl$ the reduced Planck mass
and a dot denotes a derivative with respect to cosmic time $t$. The
inflaton field potential $V(\phi)$ must be such that the potential
energy dominates over the kinetic energy of the inflaton, and
inflation ($\ddot{a}>0$) ends when they become comparable, that is to
say when the first slow-roll parameter $\epsilon_1 \equiv
-\dot{H}/H^2$ reaches one. This usually happens in the vicinity of a
local minimum of the potential. There, most potentials can be
approximated by a quadratic function, $V(\phi)\sim m^2\phi^2/2$, where
$m$ is the curvature of the potential at its minimum. In fact, this
expression can be seen as a leading-order Taylor expansion of the
potential around its minimum, and it fails only for potentials
having an exactly vanishing mass at their minimum, for which the
leading term is of higher order. When the inflaton reaches this region
of the potential, it oscillates according to $\phi(t)\propto a^{-3/2}
\sin\left(mt\right)$, the expansion becomes, on average, decelerated,
and similar to that of a matter-dominated
universe~\cite{Turner:1983he}, {i.e.~} $\langle \rho \rangle \propto
a^{-3}$ (where $\langle \cdot \rangle$ denotes averaging over one
oscillation).
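This oscillatory, matter-like behaviour is straightforward to check numerically. The following minimal Python sketch integrates the Friedmann and Klein-Gordon equations given above for the quadratic potential with $m=10^{-5}M_\usssPl$ (in units where $M_\usssPl=1$), starting on the slow-roll attractor; the initial field value and the time span are illustrative choices. Deep in the oscillatory phase, the printed combination $\rho a^3$ is approximately constant, up to small ripples at the oscillation frequency.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m = 1e-5                        # inflaton mass in reduced Planck units

def rhs(t, y):
    phi, dphi, lna = y
    H = np.sqrt((0.5*dphi**2 + 0.5*m**2*phi**2)/3.0)  # Friedmann
    return [dphi, -3.0*H*dphi - m**2*phi, H]          # Klein-Gordon

y0 = [5.0, -np.sqrt(2.0/3.0)*m, 0.0]  # slow-roll attractor, ln(a) = 0
sol = solve_ivp(rhs, (0.0, 2e7), y0, rtol=1e-10, atol=1e-15,
                dense_output=True)

t = np.linspace(1e7, 2e7, 5)          # deep in the oscillatory phase
phi, dphi, lna = sol.sol(t)
rho = 0.5*dphi**2 + 0.5*m**2*phi**2
print(rho*np.exp(3.0*lna))            # roughly constant: <rho> ~ a^-3
\end{verbatim}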
\subsection{Perturbative reheating}
\label{sec:perturbative:reheating}
These considerations however ignore the possible coupling of the
inflaton with other degrees of freedom. In order to incorporate it,
several descriptions are possible. A simple way, which corresponds to
``perturbative reheating'', consists in introducing a term ``$\Gamma
\dot{\phi}$'' (where $\Gamma$ is a decay rate) in the Klein-Gordon
equation to account for the decay of the inflaton into a perfect fluid
(typically
radiation)~\cite{Albrecht:1982mp,Dolgov:1982th,Abbott:1982hn,Kofman:1997yn}.
In this case, the friction term becomes
$(3H+\Gamma/2)\dot{\phi}$. Initially, $H\gg \Gamma$ and the effect of
the inflaton decay is negligible, until $H$ crosses down $\Gamma$, at
a time around which most of the decay of the inflaton occurs. In the
next section, we explain how to introduce $\Gamma$ covariantly, thus
allowing us to perform a consistent treatment both at the background
and perturbative levels. Microscopically, if one considers for
instance that $\phi$ is coupled to another scalar field $\chi $
through the interaction Lagrangian ${\cal L}_\mathrm{int}=-2g^2\sigma
\phi \chi^2$, where $g$ is a dimensionless coupling constant and
$\sigma$ a new mass scale, the corresponding decay rate can be
calculated within perturbation theory and one finds
$\Gamma=g^4\sigma^2/(4\pi m)$~\cite{Kofman:1997yn}. If this process
occurs at sufficiently high energy, the mass of the $\chi$-particles
is small compared to the Hubble parameter at decay and, effectively,
the inflaton field decays into relativistic matter or radiation.
\subsection{Non-perturbative preheating}
\label{sec:NonPerturbativePreheating}
The above perturbative description is however not sufficient since
non-perturbative effects can also play an important
role~\cite{Shtanov:1994ce, Kofman:1994rk, Kofman:1997yn}. This can be
simply illustrated if one considers the case where the interaction
Lagrangian reads ${\cal L}_\mathrm{int}=-g^2\phi^2\chi^2/2$. If one
denotes the monotonously decreasing amplitude of the inflaton
oscillations as $\phi_0(t)$, such that $\phi\simeq\phi_0(t)\sin(mt)$,
then the equation of motion of the Fourier transform $\chi_{\bm k}$ of
the field $\chi$ reads
\begin{align}
\label{eq:eom:chik:1}
\ddot{\chi}_{\bm k}+ 3 H \dot{\chi}_{\bm k}+\left[
\frac{k^2}{a^2(t)}+m_\chi^2+ g^2\phi_0^2(t)\sin^2(m
t)\right]\chi_{\bm k}=0\, ,
\end{align}
where $m_\chi$ is the mass of $\chi$ and ${\bm k}$ the wavenumber of
the mode under consideration. Writing $X_{\bm k}=a^{3/2}\chi_{\bm k}$
and using the variable $z\equiv mt$, the above equation can also be
written under the following form
\begin{align}
\label{eq:mathieu}
\frac{\mathrm{d} ^2 X_{\bm k}}{\mathrm{d} z^2}
+\left[A_{\bm k}-2q\cos\left(2z\right)\right]
X_{\bm k}=0,
\end{align}
where the quantities $A_{\bm k}$ and $q$ are defined by
\begin{align}
\label{eq:A:q:def}
A_{\bm k} &=\frac{k^2}{a^2m^2}+\frac{m_\chi^2}{m^2}
-\frac32 \frac{H^2}{m^2}\left(\frac32-\epsilon_1\right)
+2q, \quad
q=\frac{g^2\phi_0^2}{4 m^2}.
\end{align}
As a first step, in order to gain intuition about the behaviour of the
solutions, it is convenient to analyse the above equation in
Minkowski space-time (for simplicity, we also consider the massless
case $m_\chi=0$). In that situation, the coefficients $A_{\bm
k}=k^2/m^2+2q$ and $q$ are constant and \Eq{eq:mathieu} is a Mathieu
equation~\cite{Abramovitz:1970aa:Mathieu}. This equation possesses
unstable, exponentially growing solutions $\chi_{\bm {k}} \propto
\exp(\mu_{\bm{k}} z)$. In \Fig{fig:mapmathieu}, known as the Mathieu
instability chart, we display the value of $\mu_{\bm{k}}$, the
so-called Floquet index of the unstable mode (namely the maximum of
the two Floquet indices), as a function of $A_{\bm k}$ and
$q$. Unstable regions correspond to where $\mu_{\bm k}>0$, and are
organised in several ``bands'', which can be identified as the non
dark-blue regions in \Fig{fig:mapmathieu}. Since $A_{\bm
k}=k^2/m^2+2q$, the parameter space of interest is such that $A_{\bm
k}> 2q$, which corresponds to the region above the white line in
\Fig{fig:mapmathieu}. At a given $q$, one can see in
\Fig{fig:mapmathieu} that there are several ranges of values of
$A_{\bm k}$, hence several ranges of wavenumbers $k$, where an
instability develops. One also notices that the band with the smallest
value of $A_{\bm k}$ is the most pronounced one. When $q\gg 1$, the
range of excited modes is large, which corresponds to being in the
``broad-resonance'' regime. When $q\ll 1$, on the contrary, there is
only a small range of values of $k$ being excited, which corresponds to
the ``narrow-resonance'' regime. In that limit, the boundaries of the
first band correspond to $1-q\lesssim A_{\bm k}\lesssim 1+q$.
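The band structure of \Fig{fig:mapmathieu} can be recovered numerically by integrating \Eq{eq:mathieu} (with constant coefficients, {i.e.~} in Minkowski space-time) over one period $\pi$ and extracting the Floquet index from the monodromy matrix. The sketch below is a minimal implementation of this standard technique; in the narrow-resonance regime it returns $\mu_{\bm k}\simeq q/2$ at the centre of the first band, $A_{\bm k}=1$, and $\mu_{\bm k}\simeq 0$ between the bands.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def floquet(A, q):
    """Floquet index of X'' + [A - 2 q cos(2z)] X = 0 (period pi)."""
    def rhs(z, y):
        return [y[1], -(A - 2.0*q*np.cos(2.0*z))*y[0]]
    M = np.empty((2, 2))
    for i, y0 in enumerate(([1.0, 0.0], [0.0, 1.0])):
        sol = solve_ivp(rhs, (0.0, np.pi), y0, rtol=1e-10, atol=1e-12)
        M[:, i] = sol.y[:, -1]          # column of the monodromy matrix
    return np.max(np.log(np.abs(np.linalg.eigvals(M))))/np.pi

print(floquet(1.0, 0.1))   # centre of the first band: ~ q/2 = 0.05
print(floquet(2.0, 0.1))   # between the bands: ~ 0 (stable)
\end{verbatim}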
Then, space-time dynamics must be restored and its impact on the
previous considerations discussed. In that case, three time scales
play a role in \Eq{eq:eom:chik:1}: the inflaton oscillation period
$m^{-1}$, the Hubble time $H^{-1}$, and the $\bm{k}$-mode period,
$a/k$. The quantities $A_{\bm k}$ and $q$ now become functions of time
[notice that the oscillating phase starts when $m\sim H$, and since
$H$ decreases afterwards, one quickly reaches the regime where $H\ll
m$ and, as a consequence, the term $\propto H^2/m^2$ in the
definition~(\ref{eq:A:q:def}) of $A_{\bm k}$ can be neglected]. This
means that \Eq{eq:mathieu} is no longer a Mathieu equation: a given
mode $\bm{k}$ now follows a certain path in the map of
\Fig{fig:mapmathieu}. What is then the fate of the two regimes (narrow
and broad resonance) identified before? Since more time is being spent
in the wide bands than in the narrow ones, the broad resonance regime
is the most important one to amplify the $\chi$ field. However, this
regime is also crucially modified by space-time expansion and gives
rise to the so-called ``stochastic-resonance regime'', discovered in
\Ref{Kofman:1997yn}. Preheating effects have also been studied in
other contexts, for instance when the curvature of (some region of)
the inflationary potential is negative, as it is the case, for
instance, in small-field inflation, leading to tachyonic
preheating~\cite{Felder:2000hj, Desroche:2005yt}.\\
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\textwidth]{mapmathieu.pdf}
\caption{Instability chart of the Mathieu equation. The colour code
(see the colour bar on the right hand side of the plot) represents
the value of the Floquet exponent $\mu_{\bm k}$ of the unstable
mode. In the present case, stable solutions corresponds to $\mu_{\bm
k}=0$ and are represented by the dark blue regions. The other
regions, structured in different bands, correspond to unstable
solutions.}
\label{fig:mapmathieu}
\end{center}
\end{figure}
\subsection{Metric preheating}
So far we have discussed preheating at the background level only,
without including the inflaton and metric perturbations. They however
play an important role, in a mechanism known as ``metric
preheating''~\cite{Finelli:1998bu,Bassett:1999cg,Bassett:1999mt,
Jedamzik:1999um,Finelli:2000ya,Allahverdi:2010xz}. Including scalar
fluctuations only, in the longitudinal gauge, the perturbed metric can
be written as $\mathrm{d} s^2=a^2(\eta)\left[-\left(1+2\Phi\right) \mathrm{d}
\eta^2+\left(1-2\Phi\right)\delta _{ij}\mathrm{d} x^i\mathrm{d} x^j\right]$, where
$\eta $ is the conformal time related to the cosmic time by $\mathrm{d}
t=a\mathrm{d} \eta$. As is apparent in the previous expression, the scalar
perturbations are described by a single quantity, namely the Bardeen
potential $\Phi$. Matter perturbations, which, in the context of
inflation, boil down to scalar field perturbations, are also
characterised by a single quantity, the perturbed scalar field $\delta
\phi^{\mathrm{(gi)}}$, where the ``gi'' indicates that this is a
gauge-invariant quantity ($\delta \phi^{\mathrm{(gi)}}=\delta\phi$ in
the longitudinal gauge and is mapped by gauge transformations
otherwise). Using the perturbed Einstein equations, the whole scalar
sector can in fact be described by a single quantity, which is a
combination of metric and matter perturbations. This single quantity
is the Mukhanov-Sasaki variable~\cite{Mukhanov:1981xt,Kodama:1985bj}
$v\equiv a\left[\delta \phi^{\mathrm{(gi)}}+\phi'\Phi/{\cal
H}\right]$, where ${\cal H}=a'/a$ (a prime denotes a derivative with
respect to conformal time) is the conformal Hubble parameter, and is
directly related to the comoving curvature perturbation $\mathcal{R}$
by $v=Z\mathcal{R}$, where $Z\equiv \mathpalette\DHLhksqrt{2\epsilon_1}aM_\usssPl$. The
Fourier component $v_{\bm k}$ evolves according to the equation of a
parametric oscillator where the time dependence of the frequency is
determined by the dynamics of the background~\cite{Mukhanov:1990me}
\begin{align}
\label{eq:eomv}
v_{\bm k}''+\left(k^2-\frac{Z''}{Z}\right)v_{\bm k} = 0,
\end{align}
with
\begin{align}
\label{eq:MS}
\frac{Z''}{Z}= a^2 H^2 \left[ \left(1+\frac{\epsilon_2}{2}\right)
\left(2-\epsilon_1+\frac{\epsilon_2}{2}\right)
+\frac{\epsilon_2\epsilon_3}{2}\right]\, ,
\end{align}
where $\epsilon_2 \equiv \mathrm{d}\ln\epsilon_1/\mathrm{d} N$ and $\epsilon_3
\equiv \mathrm{d}\ln\epsilon_2/\mathrm{d} N$ are the second and the third slow-roll
parameters respectively.
The question is then whether \Eq{eq:eomv} allows for parametric
resonance when the inflaton field oscillates at the bottom of its
potential. One might indeed expect that the oscillations in $\phi(t)$
induce oscillations in the Hubble parameter $H$, hence in the
slow-roll parameters, hence in $Z''/Z$. In this case, \Eq{eq:eomv}
could be of the Mathieu type, or more generally of the Hill type, and
could lead to parametric resonance. This was first thought not to be
the case, the main argument being that, despite the oscillations in
$Z''/Z$, the curvature perturbation has to remain constant and, as a
consequence, there cannot be any growth of scalar
perturbations~\cite{Finelli:1998bu}. It has also been stressed that
the situation can be drastically different in multiple-field
inflation~\cite{Finelli:2000ya}, where entropy fluctuations source the
evolution of curvature perturbations. If the entropy fluctuations are
parametrically amplified, they can also cause a parametric
amplification of adiabatic perturbations. This is the reason why
metric preheating was first mostly studied in the context of
multiple-field (and in practice, mostly two-field) inflation, see for
instance \Ref{Finelli:2000ya}.
It was then realised in \Ref{Jedamzik:2010dq} (see also
\Ref{Easther:2010mr}) that \Eq{eq:eomv} can be put under the form
\begin{align}
\label{eq:Mathieu:v}
\frac{\mathrm{d}^2}{\mathrm{d} z^2}\left(\mathpalette\DHLhksqrt{a} v_{\bm k}\right)
+\left[A_{\bm k}-2q\cos(2z)\right]\left(\mathpalette\DHLhksqrt{a} v_{\bm k}\right)=0,
\end{align}
with
\begin{align}
\label{eq:A:q:def:metric:preheating}
A_{\bm k}=1+\frac{k^2}{m^2a^2}, \quad
q=\frac{\mathpalette\DHLhksqrt{6}}{2}\frac{\phi_\mathrm{end}}{M_\usssPl}\left(\frac{a_\mathrm{end}}{a}\right)^{3/2},
\end{align}
where $a_\mathrm{end}$ is the scale factor at the end of inflation and
$z\equiv mt+\pi/4$. Although, strictly speaking, this equation is not
of the Mathieu type because of the time dependence of the parameters
$A_{\bm k}$ and $q$, it was shown in \Ref{Jedamzik:2010dq} that this
time dependence is sufficiently slow so that \Eq{eq:Mathieu:v} can be
analysed using Mathieu equations techniques. At the end of inflation
and at the onset of the oscillations,
$\phi_0(t_\mathrm{end})=\phi_\mathrm{end}$ is of the order of the
Planck mass, so \Eq{eq:A:q:def:metric:preheating} indicates that $q$
starts out being of order one and quickly decreases afterwards. In
contrast to the situation of non-perturbative preheating discussed in
\Sec{sec:NonPerturbativePreheating}, the narrow-resonance regime $q\ll
1$ is therefore always the relevant one for metric preheating. In that
regime, and contrary to the case of broad resonance, space-time
expansion does not blur the resonance but, on the contrary, reinforces
its effectiveness, in a sense that we will explain below. As mentioned
above, in the $q\ll 1$ limit, the boundaries of the first instability
band are given by $1-q < A_{\bm k} < 1+q$, which here translates into
\begin{align}
\label{eq:instability:band:1}
k < a \mathpalette\DHLhksqrt{3 H m}.
\end{align}
One notices the appearance of a new scale in the problem, namely
$\mathpalette\DHLhksqrt{3Hm}$. Since the universe behaves as matter dominated during
the oscillations of the inflaton, $a\mathpalette\DHLhksqrt{H} \propto a^{1/4}$, and the
upper bound~\eqref{eq:instability:band:1} increases with time. This
means that the range of modes subject to the instability widens up as
time proceeds, hence the above statement that space-time expansion
strengthens the resonance effect.
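This widening can be made explicit with a short estimate: during the matter-like oscillatory phase one has $H=H_\mathrm{end}\, e^{-3(N-N_\mathrm{end})/2}$ with $H_\mathrm{end}\simeq m/\mathpalette\DHLhksqrt{2}$ in slow roll, so the comoving Hubble scale $aH$ decreases while the comoving band edge $a\mathpalette\DHLhksqrt{3Hm}$ grows as $a^{1/4}$, as the following snippet illustrates (Planck units, $a_\mathrm{end}=1$).
\begin{verbatim}
import numpy as np

m = 1e-5                      # inflaton mass (Planck units)
H_end = m/np.sqrt(2.0)        # Hubble rate at the end of inflation
N = np.linspace(0.0, 5.0, 6)  # e-folds after the end of inflation
a = np.exp(N)                 # scale factor, a_end = 1
H = H_end*a**-1.5             # matter-like expansion on average
print(a*H)                    # comoving Hubble scale: decreases
print(a*np.sqrt(3.0*H*m))     # comoving band edge: grows as a^(1/4)
\end{verbatim}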
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{Scales.pdf}
\caption{Evolution of the physical scales appearing in
\Eq{eq:instability:band:2}, with time parameterised by the number of
$e$-folds~(counted from the end of inflation). The orange solid line
represents the Hubble radius $1/H$, the solid green line the new
length scale $1/\mathpalette\DHLhksqrt{3Hm}$ and the solid blue line the physical
wavelength of a mode of interest, which enters the instability band
from above. In all figures of this work, we study the comoving scale
$k/a_\mathrm{ini}=0.002 M_\usssPl$, where the initial time of integration
is set 6 $e$-folds~before the end of inflation, in a quadratic
potential model $V(\phi)=m^2\phi^2/2$ with $m=10^{-5}M_\usssPl$. The
inflaton decay constant (the definition of which is detailed in
\Sec{subsec:back}) is given by $\Gamma=10^{-7}M_\usssPl$. Here, we
consider the case where the inflaton decays into a radiation fluid,
with equation-of-state parameter $w_{\mathrm{f}}=1/3$.}
\label{fig:scale}
\end{center}
\end{figure}
Inside the first instability band, the Floquet index of the unstable
mode is given by $\mu_{\bm{k}} \simeq q/2$, so $v_{\bm{k}}\propto
a^{-1/2} \exp(\int \mu_{\bm{k}} \mathrm{d} z)\propto
a$~\cite{Finelli:1998bu,Jedamzik:2010dq}. The comoving curvature
perturbation, $\mathcal{R}_{\bm k}= v_{\bm k}/(M_\usssPl a \mathpalette\DHLhksqrt{2
\epsilon_1})$, is thus conserved for modes satisfying
\Eq{eq:instability:band:1}. Notice that, since $H\ll m$ during the
oscillatory phase, this comprises super-Hubble modes, $k<a H$, for
which the conservation of $\mathcal{R}$ is a well-known
result~\cite{Mukhanov:1981xt, Kodama:1985bj}. However, the
conservation of $\mathcal{R}$ also applies for those sub-Hubble modes
having $k < a \mathpalette\DHLhksqrt{3 H m}$, and for which this leads to an increase
of the density contrast. Indeed, if $\mathcal{R}$ is constant, and
given that the pressure vanishes on average, the fractional energy
density perturbation $\delta_{\bm k} = \delta\rho_{\bm k}/\rho$ (where
$\rho$ is the background energy density of the scalar field) in the
Newtonian gauge is related to the curvature perturbation
via~\cite{Jedamzik:2010dq}
\begin{align}
\delta_{\bm k} = -\frac{2}{5}
\left(\frac{k^2}{a^2 H^2}+3\right) \mathcal{R}_{\bm{k}}\, .
\end{align}
On super-Hubble scales, the first term in the parentheses can be neglected,
hence $\delta_{\bm{k}}$ is constant as $\mathcal{R}_{\bm {k}}$. On
sub-Hubble scales however, the first term becomes the dominant one,
and since $a^2 H^2\propto a^{-1}$, the density contrast grows like
\begin{align}
\label{eq:delta:a}
\delta_{\bm{k}}\propto a\, .
\end{align}
This corresponds to a physical instability (notice that, at sub-Hubble
scales, there are no gauge ambiguities in the definition of the
density contrast), which therefore operates at scales
\begin{align}
\label{eq:instability:band:2}
aH<k<a\mathpalette\DHLhksqrt{3 H m}.
\end{align}
The scales appearing in this relation are displayed in
\Fig{fig:scale}. An instability is triggered if the physical
wavelength of a mode (blue line) is smaller than the Hubble radius
(orange line) during the oscillatory phase and larger than the new
scale $1/\mathpalette\DHLhksqrt{3Hm}$ (green line). This implies that the instability
only concerns modes that are inside the Hubble radius at the end of
the oscillatory phase, which is not the case for the scales probed in
the CMB. It is therefore true that metric preheating does not operate
at CMB scales, although it plays a crucial role at smaller scales
(typically those crossing out the Hubble radius a few $e$-folds~before
the end of inflation) where it triggers an instability in the
narrow-resonance regime. The growth of the density contrast along
\Eq{eq:delta:a} may have several important consequences such as early
structure formation~\cite{Jedamzik:2010dq}, emission of gravitational
waves~\cite{Jedamzik:2010hq} and, as recently studied in
\Ref{Martin:2019nuw}, formation of PBHs that may themselves contribute
to the reheating process, via Hawking evaporation.
As already mentioned, preheating effects cannot by themselves ensure a
complete transition to the hot big-bang phase~\cite{Kofman:1997yn,
Bassett:2005xm,Lozanov:2017hjm,Maity:2018qhi} (except if reheating
occurs by Hawking evaporation of the very light primordial black holes
produced from the instability if they come to dominate the universe
content~\cite{Martin:2019nuw}), which also requires perturbative decay
of the inflaton to complete. Metric preheating has however been
investigated only in the context of purely single-field setups, and it
is not clear whether or not the narrow resonant structure of metric
preheating is immune to the decay of the inflaton into other degrees
of freedom. This is why in the next sections, we study metric
preheating and perturbative reheating altogether, in order to
determine if and how the latter can spoil the former.
\section{Metric preheating and radiative decay}
\label{sec:radiativedecay}
We have seen before that perturbations entering the instability
band~\eqref{eq:instability:band:2} undergo a growth of their density
contrast proportional to the FLRW scale factor, see \Eq{eq:delta:a},
and that this can lead to a variety of interesting phenomena. At some
stage, however, the inflaton field decays and the growth of the
density contrast, sourced by the oscillations of the inflaton
condensate, should come to an end. In \Ref{Martin:2019nuw} this was
simply modelled by abruptly stopping the oscillating phase at a
certain time (e.g., when $H$ becomes smaller than a certain value that
can be identified with the decay rate $\Gamma$) and by assuming
instantaneous production of radiation at that time. However, clearly,
the inflaton decay should proceed continuously. Although it is true
that the production of radiation becomes sizeable when the Hubble
parameter becomes of the order of the decay rate, tiny amounts of
radiation are present before and one may wonder whether or not they
can destroy the delicate balance which is responsible for the presence
of the modes in the instability band. Indeed, the instability proceeds
in the narrow resonance regime, which means that the instability band
spans a small, fine-tuned volume of parameter space. In this section,
we investigate these questions.
\subsection{Setup and background}
\label{subsec:back}
In order to study the influence of fluid production, we must first
modify the equations of motion of the system and introduce an explicit
coupling between the inflaton field and a perfect fluid, both at the
background and perturbative levels. This poses non-trivial problems at
the technical level and we now review the formalism that can be used
in order to tackle them. Let us consider a collection of fluids in
interaction. The presence of interactions breaks energy-momentum
conservation for each individual fluid. On very general grounds, this
non-conservation can be described non-perturbatively by detailed
balance equations of the form \cite{Malik:2004tf, Choi:2008et,
Malik_2009, Leung:2012ve, Leung:2013rza, Huston:2013kgl,
Visinelli:2014qla}
\begin{align}
\label{eq:conservationT}
\nabla _{\nu}T^{\mu \nu}_{(\alpha)}=\sum_{\beta}\left[Q^{\mu}_{(\alpha)\rightarrow (\beta)}
-Q^{\mu}_{(\beta)\rightarrow (\alpha)}\right],
\end{align}
where the transfer coefficients $Q^{\mu}_{(\alpha)\rightarrow
(\beta)}$ are responsible for the non-conservation of energy-momentum
originating from the interaction between the fluids. The indices
between parenthesis [such as ``$(\alpha)$''] label the different fluid
components. The term $Q^{\mu}_{(\alpha)\rightarrow (\beta)}$ describes
a loss due to the decay of the fluid $\alpha$ into the fluids $\beta$
while, on the contrary, the term $Q^{\mu}_{(\beta)\rightarrow
(\alpha)}$ corresponds to a gain originating from the decay of the
fluids $\beta$ into $\alpha$. The evolution of the stress-energy
tensor of the fluid $\alpha$, which is denoted $T^{\mu
\nu}_{(\alpha)}$, is then controlled by the detailed balance between
those two effects. The transfer coefficient
$Q^{\mu}_{(\alpha)\rightarrow (\beta)}$ can always be decomposed as
\begin{align}
\label{eq:defQ}
Q^{\mu}_{(\alpha)\rightarrow (\beta)}=Q_{(\alpha)\rightarrow (\beta)}u^{\mu}
+f^{\mu}_{(\alpha)\rightarrow (\beta)},
\end{align}
where $Q_{(\alpha)\rightarrow (\beta)}$ is a scalar quantity and
$f^{\mu}_{(\alpha)\rightarrow (\beta)}$ a vector orthogonal to the
matter flow, that is to say $f^{\mu}_{(\alpha)\rightarrow
(\beta)}u_{\mu}=0$ where $u^{\mu}$ is the total velocity of matter.
In an FLRW universe it is given by $u^{\mu}=(1/a,{\bm 0})$,
$u_\mu=(-a,{\bm 0})$, which immediately implies that
$f^0_{(\alpha)\rightarrow (\beta)}=0$ at the background
level. Furthermore, in a homogeneous and isotropic background, one
must have $f^i_{(\alpha)\rightarrow (\beta)}=0$ to respect the
symmetries of space-time, hence $f^{\mu}_{(\alpha)\rightarrow
(\beta)}=0$. This allows us to write $Q^0_{(\alpha)\rightarrow
(\beta)}=Q_{(\alpha)\rightarrow (\beta)}/a$ and
$Q^i_{(\alpha)\rightarrow (\beta)}=0$. At the background level, the
energy transfer is therefore entirely specified by the scalar
$Q_{(\alpha)\rightarrow (\beta)}$.
Let us now apply these considerations to a system made of one scalar
field (the inflaton field) and a perfect fluid assumed to be the
inflaton decay product. In order to consistently couple the scalar
field $\phi$ with the fluid, one must view the scalar field as a
collection of two fictitious fluids, the ``kinetic'' one, with energy
density and pressure given by $\rho_K=p_K=\phi'^2/(2a^2)$, and the
``potential'' one, with $\rho_V=-p_V=V(\phi)$, each of them having a
constant equation-of-state, one and minus one, respectively. The
energy density and pressure of the scalar field are just the sums of
the energy densities and pressures of the two fluids, namely
$\rho_\phi=\rho_K+\rho_V$ and $p_\phi=p_K+p_V$. In order to recover
the standard equations for a scalar field, one must also consider that
the fictitious kinetic and potential fluids are coupled, the coupling
being described by~\cite{Malik:2004tf}
\begin{align}
aQ_{K\rightarrow V}&=-\phi'V_{\phi}, \quad aQ_{V\rightarrow
K}=0.
\end{align}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{Omegas.pdf}
\caption{Evolution of the different energy density contributions as
a function of the number of $e$-folds. The solid orange line
represents the contribution of the fictitious kinetic fluid, the
solid blue line the contribution of the fictitious potential fluid
and the solid green line the contribution of the physical scalar
field which is the sum of those two. The solid red line
corresponds to the contribution of radiation. Before the end of
inflation, the scalar field dominates the energy budget and, then,
when its decay becomes effective, radiation takes over. The
parameter values are the same as in \Fig{fig:scale}.}
\label{fig:backgroundOmega}
\end{center}
\end{figure}
Then, we consider the ``real'' interaction between the scalar field
and the perfect fluid (in practice radiation). The crucial
idea~\cite{Malik:2004tf, Jerome_Lucas_in_prep} is that it is obtained by
coupling the fluid only to the fictitious kinetic fluid related to
$\phi$ and introduced above (and not to the potential fluid). This
implies that $Q^{\mu}_{V\rightarrow
\mathrm{f}}=Q^{\mu}_{\mathrm{f}\rightarrow V}=0$. In practice, we
consider the case where the covariant interaction between ``$K$'' and
``f'' can be described non-perturbatively by the following energy
four-momentum transfer:
\begin{align}
\label{eq:QmuK:to:f}
Q^{\mu}_{K \rightarrow \mathrm{f}}=\Gamma
\, T^{\mu \nu}_{K}u_{\nu}^{K}, \quad Q^{\mu}_{\mathrm{f}\rightarrow K}=0,
\end{align}
where $\Gamma$ is the decay rate and is the only new parameter
introduced in order to account for the interaction. Note that this
description should be understood as a phenomenological
parametrisation of the decays of scalar fields in cosmological
fluids, and not as a concrete microphysical model. At the
background level, one recovers the picture described in
\Sec{sec:perturbative:reheating}, since the equations of
motion~\eqref{eq:conservationT} of the system (namely the energy
conservation equation, since the momentum conservation equation is
trivial) can be written as
\begin{align}
\label{eq:eomscalargamma}
\phi''+2{\cal H}\phi'+\frac{a\Gamma}{2}\phi'+a^2V_\phi&=0, \\
\label{eq:eomfluidgamma}
\rho_\mathrm{f}'+3{\cal H}(1+w_\mathrm{f})\rho_\mathrm{f} -\frac{\Gamma}{2a}\phi'^2&=0.
\end{align}
The first equation is the modified Klein-Gordon equation while the
second one is the modified conservation equation for the fluid with
equation-of-state parameter $w_\mathrm{f}$ (in practical applications,
unless stated otherwise, we take $w_\mathrm{f}=1/3$). These equations
are usually introduced in a phenomenological way. The fact that we are
able to derive them from a covariant formulation,
\Eq{eq:conservationT}, will allow us to describe perturbations in the
same framework, by assuming that \Eq{eq:conservationT} also holds at
the perturbative (and in principle, even non-perturbative) level, see
\Sec{subsec:pert}.
We have numerically integrated \Eqs{eq:eomscalargamma}
and~\eqref{eq:eomfluidgamma} for $V(\phi)=m^2\phi^2/2$ with
$m=10^{-5}M_\usssPl$, $w_\mathrm{f}=1/3$ and $\Gamma=10^{-7}M_\usssPl$. For a
quadratic potential, inflation stops when $\phi_\mathrm{end}/M_\usssPl\simeq
\mathpalette\DHLhksqrt{2}$ and the (slow-roll) trajectory reads $\phi(N)/M_\usssPl\simeq
\mathpalette\DHLhksqrt{2-4(N-N_\mathrm{end})}$ where $N$ is the number of $e$-folds. Here, we
want to focus on the last $e$-folds~of inflation and, therefore, the
initial conditions are chosen such that the evolution is started on
the slow-roll attractor at $\phi_\mathrm{ini}/M_\usssPl\simeq 5$, corresponding to
$N_\mathrm{end}-N_\mathrm{ini}\simeq 6$, and $\rho_\mathrm{f}^\mathrm{ini}=0$ (as we will
show below, the precise choice of the time at which we set
$\rho_\mathrm{f}=0$ does not matter since $\rho_\mathrm{f}$ quickly
reaches an attractor during inflation). The result is represented in
\Figs{fig:backgroundOmega} and~\ref{fig:backgroundw}, where inflation
ends when $N-N_\mathrm{end}=0$. Then starts the oscillation phase. In
\Fig{fig:backgroundOmega}, $\Omega_K\equiv
\rho_K/(\rho_\phi+\rho_\mathrm{f})$, $\Omega_V\equiv
\rho_V/(\rho_\phi+\rho_\mathrm{f})$, $\Omega _\phi\equiv
\Omega_K+\Omega_V$ and $\Omega_\mathrm{f}\equiv
\rho_\mathrm{f}/(\rho_\phi+\rho_\mathrm{f})$ are displayed as a
function of time. Initially, we have $\Omega_\phi\simeq \Omega_V\simeq
1$ and $\Omega_\mathrm{f}\simeq\Omega_K\simeq 0$. Indeed, in the
slow-roll phase, the potential energy largely dominates over the
kinetic energy, since the first slow-roll parameter can be expressed
as $\epsilon_1 = 3[(1+w_\mathrm{f})\Omega_\mathrm{f}+2\Omega_K]/2$,
hence both $\Omega_\mathrm{f}$ and $\Omega_K$ need to be very
small. In this regime, we also have $H\gg \Gamma$ and the amount of
radiation being produced is very small. Then, inflation stops and
$\Omega_K$ and $\Omega_V$ become of comparable magnitude and start
oscillating. During that phase, radiation still provides a small,
though non-vanishing, contribution. Finally, when $H\simeq \Gamma$, at
the time $N\equiv N_\Gamma$, radiation starts to be produced in a
sizeable amount and cannot be neglected anymore. After the end of
inflation, the universe expands, on average, as in a matter-dominated
era, for which $H\propto a^{-3/2}$, that is to say $H\propto H_\mathrm{end}
\exp[-3(N-N_\mathrm{end})/2]$. Writing the condition $H=\Gamma$ thus leads to
an estimate of $N_\Gamma$, namely
\begin{align}
N_\Gamma- N_\mathrm{end} \simeq \frac{2}{3}\ln \left(\frac{\mathpalette\DHLhksqrt{2}}{2}
\frac{m}{\Gamma}\right) .
\end{align}
With the values used in \Fig{fig:backgroundOmega}, one obtains
$N_\Gamma-N_\mathrm{end}\simeq 2.8$, which is in good agreement with what can
be observed in this figure. Then, within a few $e$-folds, radiation
takes over and the radiation-dominated era starts.
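These background results can be reproduced with a short numerical integration of \Eqs{eq:eomscalargamma} and~\eqref{eq:eomfluidgamma}, rewritten in cosmic time. The minimal sketch below uses the same parameters as in the figures ($m=10^{-5}M_\usssPl$, $\Gamma=10^{-7}M_\usssPl$, $w_\mathrm{f}=1/3$, in units where $M_\usssPl=1$), detects the end of inflation from the total equation-of-state parameter crossing $-1/3$, and locates $N_\Gamma$ where $H$ drops below $\Gamma$; it returns $N_\Gamma-N_\mathrm{end}\simeq 2.8$, in agreement with the estimate above.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m, Gamma, wf = 1e-5, 1e-7, 1.0/3.0      # reduced Planck units

def rhs(t, y):
    phi, dphi, rho_f, lna = y
    rho = 0.5*dphi**2 + 0.5*m**2*phi**2 + rho_f
    H = np.sqrt(rho/3.0)
    return [dphi,
            -(3.0*H + 0.5*Gamma)*dphi - m**2*phi,         # modified KG
            -3.0*H*(1.0 + wf)*rho_f + 0.5*Gamma*dphi**2,  # fluid equation
            H]

y0 = [5.0, -np.sqrt(2.0/3.0)*m, 0.0, 0.0]  # slow roll, no fluid initially
sol = solve_ivp(rhs, (0.0, 3e7), y0, rtol=1e-9, atol=1e-20,
                dense_output=True)

t = np.linspace(1.0, 3e7, 300000)
phi, dphi, rho_f, lna = sol.sol(t)
rho = 0.5*dphi**2 + 0.5*m**2*phi**2 + rho_f
p = 0.5*dphi**2 - 0.5*m**2*phi**2 + wf*rho_f
H = np.sqrt(rho/3.0)
N_end = lna[np.argmax(p/rho > -1.0/3.0)]   # end of inflation
N_Gamma = lna[np.argmax(H < Gamma)]        # H drops below Gamma
print(N_Gamma - N_end)                     # ~2.8 for these parameters
\end{verbatim}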
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{EquationsOfStateBackground.pdf}
\caption{Evolution of the instantaneous (transparent blue line) and
time-averaged equation-of-state parameters as a function of the
number of $e$-folds, in the same setup as in
\Fig{fig:backgroundOmega}. The averaging procedure consists in
convolving the instantaneous signal with a Gaussian kernel of
constant standard deviation given by $0.2$ $e$-folds~(orange line)
and $0.1$ $e$-folds~(green line), such that oscillations on shorter
time scales are averaged out. The analytical approximation
\Eq{eq:eos_anal} is also displayed (red line), and the inset plot
zooms in its regime of validity ({i.e.~} at the onset of the
oscillating phase).}
\label{fig:backgroundw}
\end{center}
\end{figure}
In \Fig{fig:backgroundw}, the transparent blue line displays the total
equation-of-state parameter for the background, namely
$w_\mathrm{bg}=(p_\phi+p_\mathrm{f})/(\rho_\phi+\rho_\mathrm{f})$. The
same remarks as in \Fig{fig:backgroundOmega} apply. Initially,
$w_\mathrm{bg}\simeq -1$ and inflation proceeds in the slow-roll
regime, until $w_\mathrm{bg}$ crosses $-1/3$ and inflation
stops. After inflation, $w_\mathrm{bg}$ oscillates, and finally
asymptotes $1/3$ when the transition towards the radiation-dominated
era is completed. In order to factor out the effect of oscillations
and only study their envelope, we also display the averaged value of
$w_\mathrm{bg}$, {i.e.~} $\langle w_\mathrm{bg}\rangle$, for two different
time scales of averaging. The orange curve corresponds to
$w_\mathrm{bg}$ convolved with a Gaussian kernel of standard deviation
given by $0.2$ $e$-fold, while the green one follows the same procedure
but with standard deviation $0.1$ $e$-fold. Interestingly, right after
the onset of the oscillatory phase, $\langle w_\mathrm{bg}\rangle$ is
close to zero, which confirms that the background expands on average
as in a matter-dominated era, until the production of radiation
becomes effective.
The behaviour of $\langle w_\mathrm{bg}\rangle$ when radiation is
still subdominant ({i.e.~} during inflation and during the first stage of
the oscillating phase) can be described analytically as follows. A
first remark is that \Eq{eq:eomfluidgamma} can be solved exactly,
\begin{align}
\label{eq:rhof:exact}
\rho_{\mathrm{f}}(t)=\frac{\Gamma}{2}\int_{t_\mathrm{in}}^t \dot{\phi}^2(\tilde{t})
\left[\frac{a(\tilde{t})}{a(t)}\right]^{3(1+w_{\mathrm{f}})} \mathrm{d}\tilde{t}\, .
\end{align}
This expression can be cast as a perturbative expansion in
$\Gamma$. At leading order, the integrand should be evaluated with
$\Gamma=0$, {i.e.~} using the background dynamics in the absence of the
radiation fluid, which we know how to describe.
During inflation, using the formula given above for the slow-roll
trajectory, one can compute \Eq{eq:rhof:exact} explicitly in terms of
error functions. The resulting expression is not particularly
illuminating so we do not reproduce it here, but we note that if the
initial time is chosen sufficiently far in the past, it converges
to\footnote{This convergence proves that, as mentioned above, after a
few $e$-folds~in slow-roll inflation, $\rho_{\mathrm{f}}$ reaches an
attractor, which implies that our results do not depend on our
choice of initial time of integration.}
\begin{align}
\label{eq:rhof:inflation:approx}
\rho_{\mathrm{f}} \simeq \frac{mM_\usssPl^2\Gamma}{3}
\mathpalette\DHLhksqrt{\frac{\pi}{2\left(1+w_\mathrm{f}\right)}}
e^{\frac{3}{2}\left(1+w_\mathrm{f}\right)\left[1-2\left(N-N_\mathrm{end}\right)\right]}
\mathrm{erfc}\left\lbrace \mathpalette\DHLhksqrt{\frac{3}{2}\left(1+w_\mathrm{f}\right)
\left[1-2\left(N-N_\mathrm{end}\right)\right]} \right\rbrace\, .
\end{align}
At the end of inflation, $\rho_{\mathrm{f}}$ is therefore of order
$mM_\usssPl^2\Gamma$, hence $\Omega_{\mathrm{f}}$ is of order
$\Gamma/H_\mathrm{end}$, so radiation can indeed be neglected when the decay
rate is much smaller than the Hubble scale during inflation. For
instance, with the parameter values used in \Fig{fig:backgroundOmega},
\Eq{eq:rhof:inflation:approx} gives $\Omega_\mathrm{f}(t_\mathrm{end})\simeq
8.1\times 10^{-4}$ while the numerical integration performed in
\Fig{fig:backgroundOmega} gives $\Omega_\mathrm{f}(t_\mathrm{end})\simeq
9.8\times 10^{-4}$, which allows us to check the validity of our
approach (the small difference between these two values is explained
by the fact that the slow-roll approximation breaks down towards the
end of inflation).
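This estimate is easy to verify: evaluating \Eq{eq:rhof:inflation:approx} at $N=N_\mathrm{end}$ and dividing by $3M_\usssPl^2H_\mathrm{end}^2$, with the slow-roll value $H_\mathrm{end}\simeq m/\mathpalette\DHLhksqrt{2}$, indeed returns $\Omega_\mathrm{f}(t_\mathrm{end})\simeq 8.1\times 10^{-4}$.
\begin{verbatim}
import numpy as np
from scipy.special import erfc

m, Gamma, wf = 1e-5, 1e-7, 1.0/3.0   # reduced Planck units, Mpl = 1
x = 1.5*(1.0 + wf)                   # exponent evaluated at N = N_end
rho_f = (m*Gamma/3.0)*np.sqrt(np.pi/(2.0*(1.0 + wf))) \
        *np.exp(x)*erfc(np.sqrt(x))
H_end = m/np.sqrt(2.0)               # slow-roll estimate, V = m^2 phi^2/2
print(rho_f/(3.0*H_end**2))          # ~8.1e-4, vs 9.8e-4 numerically
\end{verbatim}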
During the oscillating phase, in the absence of fluid, as explained
above $\phi(t)\propto \sin(m t) a^{-3/2}$. Plugging this formula into
\Eq{eq:rhof:exact}, and after averaging over the oscillating term, one
obtains
\begin{align} \Omega_{\mathrm{f}} &\simeq
\Omega_{\mathrm{f}}(t_\mathrm{end})e^{-3
w_\mathrm{f}\left(N-N_\mathrm{end}\right)}+ \frac{\Gamma}{12
H_\mathrm{end}}\frac{\phi_\mathrm{end}^2}{M_\usssPl^2} \left\lbrace
\frac{3}{4\left(w_\mathrm{f}-\frac{1}{2}\right)}
\left[e^{-\frac{3}{2}\left(N-N_\mathrm{end}\right)}-e^{-3
w_\mathrm{f}\left(N-N_\mathrm{end}\right)}\right] \right. \nonumber \\ &
\quad\quad\quad\quad\quad\quad \left.
+\frac{m^2}{3\left(w_\mathrm{f}+\frac{1}{2}\right)H_\mathrm{end}^2}
\left[e^{\frac{3}{2}\left(N-N_\mathrm{end}\right)}-e^{-3
w_\mathrm{f}\left(N-N_\mathrm{end}\right)}\right]\right\rbrace\, .
\end{align}
After a few $e$-folds, if $w_\mathrm{f}>-1/2$, the first term on the
second line is the dominant one, which leads to
\begin{align}
\Omega_{\mathrm{f}} \simeq \frac{1}{18\left(2 w_\mathrm{f}+1\right)}
\frac{\phi_\mathrm{end}^2 m^2}{M_\usssPl^2 H_\mathrm{end}^2} \frac{\Gamma}{H}\, .
\end{align}
In a quadratic potential, using the slow-roll formula
$\phi_\mathrm{end}\simeq \mathpalette\DHLhksqrt{2}M_\usssPl$, one has $H_\mathrm{end}\simeq m/\mathpalette\DHLhksqrt{2}$,
hence $\phi_\mathrm{end}^2 m^2/(M_\usssPl^2 H_\mathrm{end}^2)\simeq 4$ and
$\Omega_\mathrm{f}\simeq 2\Gamma/\left[9\left(2w_\mathrm{f}+1\right)H\right]$, so
the equation-of-state parameter $w_\mathrm{bg}\simeq w_\mathrm{f}
\Omega_\mathrm{f}$ is given by
\begin{align}
\label{eq:eos_anal}
w_\mathrm{bg} \simeq
\frac{2w_\mathrm{f}}{9\left(2 w_\mathrm{f}+1\right)}\frac{\Gamma}{H}\, .
\end{align}
Because of the slow-roll violation at the end of inflation, this
formula is expected to provide an accurate description only up to an
overall factor of order one (for instance in $m/H_\mathrm{end}$), and in
\Fig{fig:backgroundw} one can check that this is indeed the case, see
the inset in particular (the agreement in the case of other fluid
equation-of-state parameters can be checked in \Fig{fig:otherfluids}
below). When $\Gamma$ becomes of order $H$, {i.e.~} when $N\sim N_\Gamma$,
the approximation breaks down and \Eq{eq:eos_anal} cannot be trusted
anymore.
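For the reader interested in reproducing this behaviour, a minimal
numerical sketch is given below (in Python; this is not the code used
to produce the figures of this paper, and the parameter values are
hypothetical, chosen for illustration only). It integrates the
background Klein-Gordon equation with friction term $\Gamma/2$
together with the fluid equation sourced by $\Gamma\dot{\phi}^2/2$, as
can be read off from \Eq{eq:rhof:exact}, and reconstructs
$w_\mathrm{bg}$ for comparison with \Eq{eq:eos_anal}.
\begin{verbatim}
# Minimal sketch: background dynamics of a decaying, oscillating inflaton.
import numpy as np
from scipy.integrate import solve_ivp

Mpl = 1.0
m = 1e-5 * Mpl        # hypothetical inflaton mass (Planck units)
Gamma = 1e-2 * m      # hypothetical decay rate, Gamma << H during inflation
wf = 1.0 / 3.0        # fluid equation-of-state parameter (radiation)

def hubble(phi, dphi, rhof):
    # Friedmann equation: 3 Mpl^2 H^2 = dphi^2/2 + m^2 phi^2/2 + rho_f
    rho = 0.5 * dphi**2 + 0.5 * m**2 * phi**2 + rhof
    return np.sqrt(rho / (3.0 * Mpl**2))

def rhs(t, y):
    phi, dphi, rhof, N = y
    H = hubble(phi, dphi, rhof)
    ddphi = -(3.0 * H + 0.5 * Gamma) * dphi - m**2 * phi
    drhof = -3.0 * H * (1.0 + wf) * rhof + 0.5 * Gamma * dphi**2
    return [dphi, ddphi, drhof, H]   # dN/dt = H

# start on the slow-roll attractor, dphi = -sqrt(2/3) m Mpl
y0 = [3.0 * Mpl, -np.sqrt(2.0 / 3.0) * m * Mpl, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 50.0 / m), y0, rtol=1e-10, atol=1e-12)

phi, dphi, rhof, N = sol.y
w_bg = (0.5 * dphi**2 - 0.5 * m**2 * phi**2 + wf * rhof) / \
       (0.5 * dphi**2 + 0.5 * m**2 * phi**2 + rhof)
# After smoothing over the oscillations, w_bg can be compared with
# 2 wf Gamma / [9 (2 wf + 1) H], cf. the analytical approximation above.
\end{verbatim}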
\subsection{Perturbations}
\label{subsec:pert}
Having established how the background evolves, we now turn to the
behaviour of the perturbations. Since the equations we started from,
\Eqs{eq:conservationT} and~\eqref{eq:QmuK:to:f}, have a covariant
form, they can be perturbed. As stressed above, this is not the case
of the background equations of motion, \Eqs{eq:eomscalargamma}
and~\eqref{eq:eomfluidgamma}, which explains why these two equations
cannot be used as a starting point, and why it was necessary to
re-derive them from a covariant principle. For more explanations about
this formalism, we refer the interested reader to \Refs{Malik:2004tf,
Jerome_Lucas_in_prep}. By perturbing \Eq{eq:conservationT}, one
obtains
\begin{align}
\delta\left[\nabla _{\nu}T^{\mu \nu}_{(\alpha)}\right]=\sum_{\beta}
\left[\delta Q^{\mu}_{(\alpha)\rightarrow (\beta)}
-\delta Q^{\mu}_{(\beta)\rightarrow (\alpha)}\right].
\end{align}
In this formula, the perturbed energy transfer $\delta
Q^{\mu}_{(\alpha)\rightarrow (\beta)}$, using Eq.~(\ref{eq:defQ}), can
be written as
\begin{align}
\delta Q^{\mu}_{(\alpha)\rightarrow (\beta)}=\delta Q_{(\alpha)\rightarrow (\beta)}u^{\mu}
+Q_{(\alpha)\rightarrow (\beta)}\delta u^{\mu}
+\delta f^{\mu}_{(\alpha)\rightarrow (\beta)}.
\end{align}
The constraint that the four-vector $f^{\mu}_{(\alpha)\rightarrow
(\beta)}$ is orthogonal to the Hubble flow must also be satisfied at
the perturbed level, and this leads to $\delta
[f^{\mu}_{(\alpha)\rightarrow (\beta)} u_{\mu}]=0$. As a consequence,
$\delta f^{0}_{(\alpha)\rightarrow (\beta)}=0$ and only $\delta
f_i^{(\alpha)\rightarrow (\beta)} \neq 0$. Since we consider scalar
perturbations, we write $\delta f_i^{(\alpha)\rightarrow
(\beta)}=\partial_i \delta f_{(\alpha )\rightarrow (\beta)}$ and
work in terms of the function $\delta f_{(\alpha )\rightarrow
(\beta)}$.
Let us now perturb the gradient of the stress energy tensor for a
scalar field in interaction with a perfect fluid. At the perturbed
level, the kinetic and potential fictitious fluids associated with
$\phi$ have perturbed energy density and pressure given by
\begin{align}
\delta \rho_{K}^{\rm (gi)}&=\delta p_{K}^{\rm (gi)}
=\frac{\phi'}{a^2}\delta \phi^{\rm (gi)}{}'-
\frac{\phi'{}^2}{a^2}\Phi,
\\
\delta \rho_V^{\rm (gi)}&=-\delta p_V^{\rm (gi)}
=V_{\phi}\delta\phi^{\rm (gi)},
\end{align}
and the perturbed gradient of the stress energy tensor also involves
the velocity potential $v^{\mathrm{(gi)}}_{(\alpha)}$, related to the
spatial component of the perturbed velocity by
$v_{(\alpha),i}^{\mathrm{(gi)}}=\partial_i
v^{\mathrm{(gi)}}_{(\alpha)}$, and the rescaled velocity
$\varsigma_{(\alpha)}^{\mathrm{(gi)}}$ defined by
$\varsigma_{(\alpha)}^{\mathrm{(gi)}}\equiv
[\rho_{(\alpha)}+p_{(\alpha)}]v_{(\alpha)}^{\rm (gi)}$, \begin{equation}\begin{aligned}
v_{K}^{\rm (gi)}=-\frac{\delta \phi^{\rm (gi)}}{\phi'}, \quad
\varsigma_{K}^{\rm (gi)}=-\frac{\phi'}{a^2}\delta \phi^{\rm (gi)}.
\end{aligned}\end{equation} Notice that we do not need to specify $v_{V}^{\rm (gi)}$ since it
does not appear in the equations. In these expressions, as already
mentioned, the superscript ``$(\mathrm{gi})$'' means that the
corresponding quantity is gauge-invariant and coincides with its value
in the longitudinal gauge. At the perturbed level, the
energy-momentum transfer coefficients between the kinetic and
potential fluids are given by
\begin{align}
a\delta Q_{K\rightarrow V}&=-V_{\phi}\delta \phi^{\rm (gi)}{}'
+V_{\phi}\phi'\Phi-V_{\phi\phi}\phi'
\delta\phi^{\rm (gi)}, \quad
a\delta Q_{V\rightarrow K}=0, \\
\delta f_{K\rightarrow V}&=
\delta f_{V\rightarrow K}=0.
\end{align}
As will be shown below, these formulas are indeed needed to recover
the standard equation of motion for the scalar field fluctuation ({i.e.~}
the equation of motion in the absence of coupling to a fluid, for which
a Lagrangian formulation of the theory exists and the equation of
motion is well prescribed). Regarding the interaction between the
kinetic and potential fluids on one hand, and the perfect fluid on the
other hand, we have from perturbing \Eq{eq:QmuK:to:f}
\begin{align}
\label{eq:deltaQ:K:f}
\delta Q_{K\rightarrow \mathrm{f}}& =-\Gamma\delta \rho_{K}^{\mathrm{(gi)}}, \quad
\delta Q_{\mathrm{f}\rightarrow K}=
\delta Q_{V\rightarrow \mathrm{f}} =\delta Q_{\mathrm{f}\rightarrow V}=0,
\end{align}
and
\begin{align}
\label{eq:delta:f:K:f}
\delta f_{K\rightarrow \mathrm{f}}&=a\Gamma
\left[v_\mathrm{tot}^{\mathrm{(gi)}}
-v_K^{\mathrm{(gi)}}\right]\rho_K, \quad \delta f_{\mathrm{f}\rightarrow K}=
\delta f_{V\rightarrow \mathrm{f}}=\delta f_{\mathrm{f}\rightarrow V}=0,
\end{align}
where the total velocity $v^{\rm (gi)}_\mathrm{tot}$ is defined by the
following expression
\begin{align}
v^{\rm (gi)}_\mathrm{tot}=\frac{1}{\rho+p}\sum _{\alpha}
\left[\rho_{(\alpha)}+p_{(\alpha)}\right]v_{(\alpha)}^{\rm (gi)},
\end{align}
with $\rho$ and $p$ the total energy density and pressure.
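For the system at hand, since the potential fluid does not contribute
to $\rho+p$, this definition reduces to
\begin{align}
v^{\rm (gi)}_\mathrm{tot}=\frac{-\dfrac{\phi'}{a^2}\delta\phi^{\rm (gi)}
+\varsigma^{\rm (gi)}}{\dfrac{\phi'^2}{a^2}
+\left(1+w_\mathrm{f}\right)\rho_\mathrm{f}}\, ,
\end{align}
where $\varsigma^{\rm (gi)}$ denotes the rescaled velocity of the
perfect fluid.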
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{Perturbations.pdf}
\caption{Evolution of the modulus of the gauge invariant perturbative
quantities $\delta_k^\phi$ (inflaton density contrast, blue line),
$\delta_k^\mathrm{f}$ (radiation density contrast, orange line),
$\delta_k^\mathrm{tot}$ (total, {i.e.~} scalar field plus radiation,
density contrast, green line) and $\mathcal{R}_k$ (comoving
curvature perturbation, red line) as a function of the number of
$e$-folds, in the same setup as the one displayed in the previous
figures. Soon after perturbative reheating becomes effective (which,
according to the discussion around \Fig{fig:backgroundOmega}, occurs
when $N_\Gamma-N_\mathrm{end}\simeq 2.8$), the scalar field density
contrast decreases, the curvature perturbation stops being constant
and decreases as well, hence the total density contrast stops
increasing, which signals the end of the instability.}
\label{fig:perturbations}
\end{center}
\end{figure}
Endowed with these definitions and assumptions, one can then derive the
perturbed equations of motion. For the scalar field, one obtains the
perturbed Klein-Gordon equation
\begin{align}
\delta \phi^{\mathrm{(gi)}}{}''+2{\cal H}\delta \phi^{\mathrm{(gi)}}{}'
+\frac{a\Gamma}{2}\delta \phi^{\mathrm{(gi)}}{}'
-\nabla^2\delta \phi^{\mathrm{(gi)}}{}+a^2 V_{\phi \phi}
\delta \phi^{\mathrm{(gi)}}=4\phi'\Phi'-2a^2V_\phi\Phi
-\frac{a\Gamma}{2}\phi'\Phi.
\end{align}
For the perfect fluid, one has two equations, namely the time and space
components of the conservation equation, yielding an equation for the
perturbed energy density and the perturbed velocity respectively, which read
\begin{align}
& \delta \rho^{\mathrm{(gi)}}{}'+3{\cal H}(1+w_\mathrm{f})\delta \rho^{\mathrm{(gi)}}
-3(1+w_\mathrm{f})\rho \Phi'
+(1+w_\mathrm{f})\rho \nabla^2 v^{\mathrm{(gi)}}
\nonumber \\ &
-\frac{\Gamma}{a}\left[\phi'\delta \phi^{\mathrm{(gi)}}{}'
-\frac12\phi'^2\Phi\right]=0, \\
&\varsigma^{\mathrm{(gi)}}{}'+4{\cal H}\varsigma^{\mathrm{(gi)}}+\rho(1+w_\mathrm{f})\Phi
+w_\mathrm{f}\delta \rho^{\mathrm{(gi)}}{}+\frac{\Gamma}{2a}
\phi'\delta \phi^{\mathrm{(gi)}}=0.
\end{align}
One also needs an equation to track the evolution of the
Bardeen potential and this is provided by the perturbed Einstein
equations,
\begin{align}
\Phi'=-{\cal H}\Phi-\frac{a^2}{2M_\usssPl^2}\left[-\frac{1}{a^2}
\phi'\delta \phi^{\mathrm{(gi)}}+\varsigma^{\mathrm{(gi)}}{}\right].
\end{align}
In \Fig{fig:perturbations}, we have numerically integrated the above
equations using the same parameters as in \Figs{fig:backgroundOmega}
and~\ref{fig:backgroundw} and for the mode $k/a_\mathrm{ini}=0.002 M_\usssPl$, the
physical wavelength of which is displayed in \Fig{fig:scale}. The
solid blue line in \Fig{fig:perturbations} represents the scalar field
density contrast $k^3\vert \delta_{\bm k}^{\phi}\vert^2=k^3\vert
\delta \rho_{\phi,\bm k}^{\mathrm{(gi)}}/\rho_\phi\vert^2$, the solid
orange line corresponds to the radiation fluid density contrast
$k^3\vert \delta_{\bm k}^{\mathrm{f}}\vert^2=k^3\vert \delta
\rho_{\mathrm{f},\bm k}^{\mathrm{(gi)}}/\rho_\mathrm{f}\vert^2$, while
the green line is the total density contrast $k^3\vert \delta
^\mathrm{tot}_{\bm k}\vert^2=k^3\vert [\delta \rho_{\phi,\bm
k}^{\mathrm{(gi)}}+\delta \rho_{\mathrm{f},\bm
k}^{\mathrm{(gi)}}{}]/(\rho_\phi+\rho_\mathrm{f})\vert^2$. When
the mode enters the instability band around $N-N_\mathrm{end}\simeq
0.5$ $e$-fold, we see that the scalar field density contrast grows and
one can check that this growth is proportional to the scale factor
$a(t)$. This is a first consistency check. Originally, this growth was
derived from an analysis based on the Mathieu-like equation for the
Mukhanov-Sasaki variable, see \Ref{Jedamzik:2010dq}. Here, we recover
it using the conservation equations. We also notice that, initially,
the total density contrast is equal to the scalar field density
contrast which is of course expected since the production of radiation
has not yet started in a sizeable way. When the amount of radiation
starts being substantial, the two density contrasts become different
as revealed by the fact that the green and blue curves separate. Then,
the scalar field density contrast strongly decreases and becomes
quickly negligible. This means that the total density contrast is
given by the radiation density contrast and we see that, when the
transition is completed, it stays constant. In
\Fig{fig:perturbations}, we have also represented the comoving
curvature perturbation ${\cal R}_{\bm k}=\Psi_{\bm k}-aH v^{\rm
(gi)}_{\mathrm{tot},\bm k}$ with the red line. At the onset of the
instability phase, it is, as expected from the above analysis,
constant, and then it decreases as expected for sub-sonic
perturbations in a radiation-dominated universe.
The main conclusion of this analysis is a confirmation that
perturbative reheating effects do not destroy the metric preheating
instability, since the instability stops only when, at the background
level, the radiation fluid dominates the energy budget of the
universe. The tiny amount of radiation that is initially present is
not sufficient to blur the narrow-resonance regime and to remove the
system from the first, and very thin, instability band of the Mathieu
equation chart. Notice that this supports the treatment of
\Ref{Martin:2019nuw} where the instability was simply stopped at the
time when the universe becomes radiation dominated. This also
demonstrates the robustness of the results obtained in
\Ref{Jedamzik:2010dq} and the generic, unavoidable presence of an
instability in single-field models of inflation at small scales.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{EquationsOfStateBackground_matter.pdf}
\includegraphics[width=0.49\textwidth]{EquationsOfStateBackground_stiff.pdf}
\includegraphics[width=0.49\textwidth]{Perturbations_matter.pdf}
\includegraphics[width=0.49\textwidth]{Perturbations_stiff.pdf}
\caption{Time evolution of the background equations-of-state
parameters (upper panels), with the insets zooming in on the regime of
validity of the analytical approximation~\eqref{eq:eos_anal}, as
well as scalar perturbations (lower panels), as a function of the
number of $e$-folds, in the cases of decay into pressureless matter
$w_\mathrm{f}=0$ (left panels) and into a stiff fluid
$w_\mathrm{f}=1$ (right panels). Apart from the value of
$w_\mathrm{f}$, the setup and parameter values are the same as in
all previous figures.}
\label{fig:otherfluids}
\end{center}
\end{figure}
Another way to test this robustness is to study whether the above
conclusion is still valid when the inflaton decays into a fluid with
an equation of state that differs from that of radiation. We have
therefore considered two additional cases corresponding to a decay
into a fluid with $w_\mathrm{f}=0$ (pressureless matter) and a decay
into a fluid with $w_\mathrm{f}=1$ (stiff matter). The results are
displayed in \Fig{fig:otherfluids} and confirm that our description of
the instability generalises to arbitrary equation-of-state parameters
$w_\mathrm{f}$. On the upper panels, we show the total equations of
state $w_{\mathrm{bg}}$ and their averaged values, as well as the
analytical approximation \Eq{eq:eos_anal}, as a function of the number
of $e$-folds. One verifies that the equation-of-state parameter indeed
asymptotes to $w_{\mathrm{bg}}=0$ (left panel) and $w_{\mathrm{bg}}=1$
(right panel) at late times. On the lower panels, we have displayed the
time evolution of the density contrasts and of the curvature
perturbation. The growth $\delta_k\propto a$ is still observed until
the universe is dominated by the fluid,\footnote{In the case where the
decay product is a pressureless fluid, the growth $\delta_k\propto
a$ still continues afterwards for all scales. In the case
where $w_\mathrm{f}=1$, stiff fluid density fluctuations also grow
like $\delta_k\propto a$ on sub-Hubble scales, see the relation
above \Eq{eq:w:eff:sub_sonic}.} regardless of its equation of state.
\subsection{Radiative decay and PBH formation from metric preheating}
\label{subsec:pbh}
In the covariant description developed in \Sec{sec:radiativedecay},
two fluids were necessary to fully describe the scalar field
fluctuations. This shows that cosmological inhomogeneities of a scalar
field and of a perfect fluid are a priori two very different physical
systems, featuring different properties. It is therefore rather
intriguing that, during the oscillatory phase, the averaged equation
of state is the one of pressureless matter, and that inside the
instability band, the density contrast behaves as the one of
pressureless matter too, since it grows linearly with the scale
factor.
The formation of primordial black holes has mostly been studied in the
context of perfect fluids, so if this correspondence between an
oscillating scalar field and a pressureless perfect fluid does hold
(and even in the presence of additional radiation), it would have
important practical consequences~\cite{Carr:2018nkm} for studying the
production of PBHs from the metric preheating instability. This is
why, in this section, we compare more carefully the behaviour of the
cosmological perturbations of the system at hand with those of a
single perfect fluid sharing the same equation-of-state parameter.
A key concept in this comparison is that of the equation of state
``felt'' by the perturbations, if they are interpreted as
perturbations of a single perfect fluid. We start by recalling the
behaviour of the density contrast for a perfect fluid with a given
equation-of-state parameter $w$. This will allow us to extract the
equation-of-state parameter from the time dependence of the density
contrast, and to apply this formula to the system studied in
\Sec{sec:radiativedecay} in order to derive the effective ``equation
of state'' felt by the density perturbations. We will then compare it
with the equation of state of the background.
In order to implement this program, a remark is in order regarding the
definition of the density contrast. So far, we have worked in terms of
the density contrast $\delta^{\mathrm{(gi)}}$ (noted $\delta
_{\mathrm{g}}$ in \Ref{Bardeen:1980kt}),
which consists in measuring the
energy density relative to the hypersurface which is as close as
possible to a ``Newtonian'' time slicing. However, for a single
perfect fluid, this density contrast usually stays constant at large
scales and, as a consequence, cannot be used as a tracer of the
equation-of-state parameter. Fortunately, as is well-known, there are
other possible definitions, in particular $\delta_\mathrm{com}$ (noted
$\delta_{\mathrm{m}}$ in \Ref{Bardeen:1980kt}), which measures the
amplitude of energy density from the point of view of matter, and
corresponds to the density contrast in the comoving-orthogonal
gauge. The behaviour of $\delta_\mathrm{com}$ does depend on $w$ on
large scales and, therefore, it is a useful quantity for our
purpose. The relationship between $\delta ^\mathrm{(gi)}$ and
$\delta_\mathrm{com}$ is given by
\begin{align}
\delta^\mathrm{(gi)}=\delta
_\mathrm{com}-\frac{\rho'}{\rho}v^{(\mathrm{gi})}=\delta
_\mathrm{com} \left[1+ 3 \frac{a^2 H^2 }{k^2}
\left(1+ \frac{\Phi^\prime}{ a H \Phi} \right) \right],
\end{align}
which shows that, although they behave differently on super-Hubble scales, their
evolution is identical on small scales. To prove this relation, we have
used that the density contrast $\delta_\mathrm{com}$ is related to
the Bardeen potential through the Poisson
equation~\cite{Bardeen:1980kt}
\begin{align}
\delta _\mathrm{com}=-\frac{2k^2M_\usssPl^2}{a^2\rho}\Phi\, .
\end{align}
If the space-time expansion is driven by a perfect fluid with constant
equation-of-state parameter $w$, the energy density scales as
$\rho=\rho_\mathrm{end}(a_\mathrm{end}/a)^{3(1+w)}$, which leads to
\begin{align}
\label{eq:deltam:Phi:a}
\delta_\mathrm{com}=\delta_\mathrm{com}^\mathrm{end}
\left(\frac{a}{a_\mathrm{end}}\right)^{1+3w}\frac{\Phi}{\Phi_\mathrm{end}},
\end{align}
where the Bardeen potential follows the equation of
motion~\cite{Mukhanov:1990me}
\begin{align}
\frac{\mathrm{d}^2 }{\mathrm{d} (k\eta)^2}\left[(k\eta)^\nu\Phi\right]
+\frac{2}{k\eta}\frac{\mathrm{d} }{\mathrm{d} k\eta}\left[(k\eta)^\nu\Phi\right]
+\left[w- \frac{\nu(\nu+1)}{\left(k\eta\right)^2}\right](k\eta)^\nu\Phi=0\, ,
\end{align}
with $\nu=2/(1+3w)$. The solution to this equation is given by
\begin{align}
\label{eq:Phi:perfect:fluid}
\Phi_{\bm k}=(\mathpalette\DHLhksqrt{w}k\eta)^\alpha\left[A_{\bm
k}J_{\alpha}(\mathpalette\DHLhksqrt{w}k\eta)+B_{\bm k}J_{-\alpha}(\mathpalette\DHLhksqrt{w}k\eta)\right],
\end{align}
with $\alpha=-(5+3w)/[2(1+3w)]$, $J_{\alpha}$ being a Bessel function
and $A_{\bm k}$, $B_{\bm k}$ two integration constants fixed by the
initial conditions. The behaviour of this solution depends on whether
$\vert \mathpalette\DHLhksqrt{w}k\eta\vert \ll 1$ or $\vert \mathpalette\DHLhksqrt{w}k\eta\vert \gg 1$, {i.e.~} on whether
the mode wavelength is larger or smaller than the sound horizon $\mathpalette\DHLhksqrt{w}/H$.
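One way to see this is to set $(k\eta)^{\nu}\Phi=(k\eta)^{-1/2}Z(z)$
with $z\equiv\mathpalette\DHLhksqrt{w}\,k\eta$, which turns the equation of motion
given above into Bessel's equation of order $\nu+1/2=-\alpha$,
\begin{align}
z^2\frac{\mathrm{d}^2 Z}{\mathrm{d}z^2}+z\frac{\mathrm{d}Z}{\mathrm{d}z}
+\left[z^2-\left(\nu+\frac{1}{2}\right)^2\right]Z=0\, ,
\end{align}
hence the argument $\mathpalette\DHLhksqrt{w}k\eta$ of the Bessel functions in
\Eq{eq:Phi:perfect:fluid}.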
On super-sonic scales, $\vert \mathpalette\DHLhksqrt{w}k\eta\vert \ll 1$, the Bessel functions
can be expanded according to $J_\alpha(z)\propto z^\alpha$. Since
$\alpha<0$ for $w>-1/3$, \Eq{eq:Phi:perfect:fluid} features a constant
mode and a decaying mode. The Bardeen potential thus asymptotes to a
constant, and $\delta_\mathrm{com} \propto a^{1+3w}$, see
\Eq{eq:deltam:Phi:a}. If $w=0$, then $\delta_\mathrm{com}\propto a$,
which is a well-known result.
On sub-sonic scales, $\vert \mathpalette\DHLhksqrt{w}k\eta\vert \gg 1$, the Bessel functions
can be expanded according to $J_\alpha(z) \simeq \mathpalette\DHLhksqrt{2/(\pi
z)}\cos[z-\pi(1+2\alpha)/4]$. This leads to $\delta_\mathrm{com}\simeq
a^{-1/2+3w/2}\cos[\mathpalette\DHLhksqrt{w}k\eta-\pi(1+2\alpha)/4]$. The density contrast thus
oscillates as a result of the competition between gravity and
pressure, and compared to the super-sonic case, the overall amplitude
also scales differently with the scale factor. One also notices that
this formula cannot be applied if $w=0$. Indeed, in that case, the
argument of the Bessel functions vanishes. Physically, if $w=0$, there
is no sound horizon anymore (since the pressure vanishes), and all
scales are ``super-sonic'' by definition.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{EquationsOfStatePerturbationsNew}
\caption{Effective equation-of-state parameters for the
perturbations as a function of the number of
$e$-folds.
When the mode $k$ is super-sonic, its effective equation-of-state parameter is given by $w_\mathrm{eff}^\mathrm{super}$ (red line), and when it is sub-sonic by $w_\mathrm{eff}^\mathrm{sub}$ (green line). In order to facilitate the reading of the
figure, the effective equation-of-state parameters are displayed with dashed lines in the regimes where they are not relevant.
The instantaneous background equation of state (transparent blue line)
and its averaged value (orange line) are also represented for
comparison. When $\left\langle w_\mathrm{bg} \right\rangle\approx
0$, the sound horizon is very small and the super-sonic effective
equation of state (red line) is the relevant one. As expected, it
is close to $0$. However when $\left\langle w_\mathrm{bg}
\right\rangle$ starts to depart from zero, since the physical mode
$k/a$ is within the Hubble radius (see \Fig{fig:scale}), the
sub-sonic equation of state (green line) becomes the relevant one
and, as expected, it quickly converges to $1/3$. Note however that
between these two asymptotic regimes, the effective equation of
state for the perturbation does not match the one of the
background. }
\label{fig:effw}
\end{center}
\end{figure}
These two limiting expressions of the density contrast can be used to
define an effective equation-of-state parameter ``felt'' by the
perturbations. Since $\delta_\mathrm{com} \propto a^{1+3w}$ on
super-sonic scales, we define
\begin{align}
w_\mathrm{eff}^\mathrm{super}\equiv \frac{1}{6}\frac{\mathrm{d}
\ln \left(k^3\left\langle \delta_\mathrm{com}^2 \right\rangle\right)}{\mathrm{d}\ln a}
-\frac{1}{3}\, ,
\end{align}
where $\langle \cdot \rangle$ stands for time averaging over possible
background oscillations. On sub-sonic scales, $\delta_\mathrm{com}\simeq
a^{-1/2+3w/2}\cos[\mathpalette\DHLhksqrt{w}k\eta-\pi(1+2\alpha)/4]$, so we introduce
\begin{align}
\label{eq:w:eff:sub_sonic}
w_\mathrm{eff}^\mathrm{sub}\equiv\frac{1}{3} \frac{\mathrm{d} \ln \left(k^3
\left\langle \delta_\mathrm{com}^2 \right\rangle\right)}{\mathrm{d} \ln a }+\frac{1}{3}\, .
\end{align}
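In practice, these two estimators can be extracted from a numerical
time series for $\delta_\mathrm{com}$. A minimal sketch (a hypothetical
helper function, assuming uniform sampling in $e$-folds) is:
\begin{verbatim}
# Sketch: effective equation of state felt by the perturbations,
# estimated from a numerical time series delta_com(N).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def effective_eos(N, delta_com, sigma_efolds=0.2):
    dN = N[1] - N[0]
    # Gaussian time averaging over the background oscillations
    smoothed = gaussian_filter1d(delta_com**2, sigma=sigma_efolds / dN)
    dlog = np.gradient(np.log(smoothed), dN)  # d ln <delta^2> / d ln a
    w_super = dlog / 6.0 - 1.0 / 3.0          # super-sonic estimator
    w_sub = dlog / 3.0 + 1.0 / 3.0            # sub-sonic estimator
    return w_super, w_sub
\end{verbatim}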
Which of these two effective equations of state is relevant depends on
whether the mode $k$ is sub-sonic or super-sonic. In \Fig{fig:effw},
we display these two quantities, $w_\mathrm{eff}^\mathrm{super}$ and
$w_\mathrm{eff}^\mathrm{sub}$, from the value of $\delta_\mathrm{com}$
numerically obtained as in the previous figures, and compare them with
the (averaged) equation-of-state parameter of the background. In all
cases, the time averaging is performed with a Gaussian kernel of
constant standard deviation given by $0.2$ $e$-folds. Let us also stress
again that, on sub-Hubble scales, the density contrast in the
comoving-orthogonal gauge, $\delta_\mathrm{com}$, coincides with the
one in the longitudinal gauge displayed in \Figs{fig:perturbations}
and~\ref{fig:otherfluids}.
During the first oscillations, the equation-of-state parameter
vanishes (on average) in the background, and recalling that all modes
are super-sonic for a vanishing equation-of-state parameter, one can
check that the relevant equation of state,
$w_\mathrm{eff}^\mathrm{super}$, indeed vanishes, and that the red and
orange curves in \Fig{fig:effw} are indeed close. This however lasts
for a few $e$-folds~only, after which neither of the effective equations
of state correctly reproduces the behaviour of the (averaged) equation
of state of the background. In addition, for the sub-sonic scales that
lie inside the instability band, $\langle w_\mathrm{bg}\rangle$ does
not coincide at all with $w_\mathrm{eff}^\mathrm{sub}$ during the
oscillating phase until radiation strongly dominates the universe
content and both converge to $1/3$.
Therefore, despite the fact that
the inflaton background effectively behaves as pressureless matter on
average, and that its decay product is a perfect fluid, the
perturbations of the system are not those of perfect fluids. This
confirms that the system made of a decaying, oscillating scalar field
has different behaviour from a pure perfect fluid, and cannot be
simply modelled as such.
Let us note that this fundamental difference is even more striking in
the case where the inflaton potential is quartic close to its minimum,
since in that case the correspondence between the inflaton
perturbations and those of a perfect fluid with the same background
equation of state breaks down even in the absence of inflaton
radiative decay. As shown in \Ref{Jedamzik:2010dq} indeed, while
$\langle w_\mathrm{bg} \rangle =1/3$ in such a case, the instability
of metric preheating is still present, and the density contrast grows
even faster than that of pressureless matter (namely, exponentially
with the scale factor) in the instability band, while the density
contrast for a perfect fluid having $w=1/3$ is constant on sub-sonic
scales.
As mentioned above, this implies that, in order to study the
production of PBHs that arises from the increase of the density
contrast in the instability band, one cannot rely on techniques
developed for perfect fluids. In \Ref{Carr:2018nkm} for instance, it
was assumed that an overdensity of a perfect fluid with constant
equation-of-state parameter $w$ collapses into a black hole if it
exceeds the critical density contrast\footnote{The criterion for
PBH formation is expressed in terms of the density contrast rather
than curvature perturbation, the latter being affected
by environmental effects~\cite{Yoo:2018kvb}, see also
\Ref{Young:2014ana}.}~\cite{Harada:2013epa, Harada:2017fjm}
\begin{align}
\label{eq:deltac:perfect:fluid}
\delta _\mathrm{c}=\frac{3(1+w)}{5+3w}
\sin^2\left(\frac{\pi \mathpalette\DHLhksqrt{w}}{1+3w}\right)\, ,
\end{align}
in which $w$ was replaced with $w \sim \Gamma/H$ [see
\Eq{eq:eos_anal}]. If $w=0$, \Eq{eq:deltac:perfect:fluid} indicates
that any local overdensity ends up forming a black hole, which is
indeed the case in the absence of any pressure force, while for
radiation ($w=1/3$) it gives
$\delta_\mathrm{c}=\frac{2}{3}\sin^2[\pi/(2\mathpalette\DHLhksqrt{3})]\simeq 0.41$. The analysis of
\Ref{Carr:2018nkm} thus suggests that what limits the formation of
PBHs from the instability of metric preheating is the presence of
(even small amounts of) radiation, which provide a non-vanishing value
to the equation-of-state parameter, and hence to
$\delta_\mathrm{c}$. However, the results of the present work cast some doubt
on such a treatment since we showed that an oscillating scalar field
decaying into a radiation fluid cannot be treated as a collection of
perfect fluids at the perturbed level [furthermore, the background
equation of state for such a system is strongly time dependent, see
\Eq{eq:eos_anal}, while \Eq{eq:deltac:perfect:fluid} only applies to
constant equation-of-state parameters].
In \Ref{Martin:2019nuw}, the formation of PBHs from the overdensities
of an oscillating scalar field was studied in the context of metric
preheating, and it was found that what limits the formation of PBHs is
rather the fact that the instability does not last forever, since it
stops when radiation takes over. Indeed, although it is true that any
overdensity inside the instability band develops towards forming a
black hole, the amount of time needed for a black hole to form depends
on (and decreases with) the initial value of the density contrast. By
requiring that it takes less time than what is available before the
complete inflaton decay (which, as we have established in
\Sec{subsec:pert}, signals the end of the instability phase that is
otherwise not affected by the presence of radiation being produced),
one obtains a lower bound on the density contrast, which however has
nothing to do with \Eq{eq:deltac:perfect:fluid}.
\section{Conclusions}
\label{sec:conclusions}
Preheating effects are often believed to be observationally irrelevant
in single-field models of inflation. Although this is true at large
scales, where the curvature perturbation is merely conserved, the
situation is different at small scales, namely those leaving the
Hubble radius a few $e$-folds~before the end of inflation. Such scales
are subject to a persistent instability proceeding in the
narrow-resonance regime~\cite{Jedamzik:2010dq}, which causes the
density contrast to grow, leading to various possible effects such as
early structure formation or even PBH
formation~\cite{Martin:2019nuw}.
In contrast to the case of background preheating, where the
narrow-resonance regime is irrelevant since, in a time-dependent
background, the system spends very little time in the thin instability
band and the resonance effects are wiped out, in the metric preheating
case, the presence of the instability is actually caused by cosmic
expansion itself (see \Fig{fig:scale}). This is the reason why this mechanism
is both atypical and very efficient.
This fact was known to be true~\cite{Jedamzik:2010dq} only if the
inflaton is uncoupled to other degrees of freedom. However, in order
for reheating to proceed, the inflaton field must decay into
radiation, and the goal of this paper was to determine whether this
decay could spoil the instability. Using the formalism of cosmological
perturbations in the presence of interactions between fluids, we have
shown that this is not the case, and that the growth of the density
contrast inside the instability band remains unaffected until the
radiation fluid dominates the universe content.
We have also stressed that there is a fundamental difference between
the cosmological perturbations of an oscillating scalar field and
those of a perfect fluid, and that techniques developed to study the
formation of PBHs from perfect fluid overdensities cannot be applied
to the present context. Instead, a dedicated analysis such as the one
of \Ref{Martin:2019nuw} must be performed. Our results have confirmed
that the presence of radiation can simply be ignored until it comes to
dominate the energy budget, thus stopping the instability.
The results of this work therefore confirm that the instability of
metric preheating is unavoidable in single-field models of inflation,
since it only requires an oscillating scalar field in a cosmological
background, which is the state of the universe at the end of most
inflationary models, and since it is robust against the perturbative decay of this field.
\begin{acknowledgments}
T.~P. acknowledges support from a grant from the Fondation CFM pour la
Recherche in France as well as funding from the Onassis Foundation -
Scholarship ID: FZO 059-1/2018-2019, from the Foundation for Education
and European Culture in Greece and the A.G. Leventis Foundation.
\end{acknowledgments}
\bibliographystyle{JHEP}
Topological sigma models and string theories were introduced and mainly developed by Edward Witten by the end of 1991 ~\cite{witten1,witten2,vafamir,vafarev}. Although the main motivation for their introduction was to find a way to describe topological invariants of certain manifolds by correlation functions of some kinds of field theories, gradually more physicists became interested in studying and applying them. The first physical application of topological string theories was found in 1994 by Antoniadis, Gava, Narain, and Taylor ~\cite{antoni}. They could successfully show that the g-loop scattering amplitude of 2 gravitons and 2g-2 graviphotons in type IIA (IIB) superstring theory compactified on a Calabi-Yau manifold $M$ is equal to the g-loop partition function of the A (B) model topological string theory which lives on the same Calabi-Yau manifold $M$.
\begin{equation}
\langle {R^2 T^{2g-2}}\rangle ^{IIA/IIB}_{g-loop}=F_g^{topA/topB}(t^{\alpha})
\end{equation}
In this expression, $R$ and $T$ are the field strengths of the graviton and the graviphoton, respectively; $F_g^{top}(t)$ is the g-loop topological string partition function. For the A (B) model topological string theory, $t^\alpha$ are the K\"ahler (complex) structure moduli of the Calabi-Yau manifold which is used in the compactification of the corresponding superstring theory. Moreover, Bershadsky, Cecotti, Ooguri, and Vafa could prove that
\begin{equation}
{{\langle} {R^2}{\rangle}}_{one-loop}^{IIA}=F_{g=1}^{top A}(t^{\alpha})
\end{equation}
On the other hand, Cardoso, de Wit, and Mohaupt applied Wald's formula and computed the entropy of 4D N=2 BPS black holes in these compactifications ~\cite{dewit1,devia,moha}. According to (1), the g-loop free energies of topological string theories clearly appear in the low energy effective action of type II superstring theories. In other words, one has terms such as
\begin{equation}
S_{effective}^{IIA/IIB}=\sum_{g=1}{g{F_g}^{top A/top B}}[R^2 T^{2g-2}+2(g-1)(RT)^2 T^{2g-2}]
\end{equation}
where $R^2$ means $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$, $(RT)^2=(RT)_{\rho\sigma}(RT)^{\rho\sigma}$, and $(RT)_{\rho\sigma}={R_{\mu\nu\rho\sigma}T^{\mu\nu}}$.
Therefore, it looks reasonable that one can express the entropy or partition function of the BPS black hole in terms of an observable in the corresponding topological string theory. Finally, in 2004 Ooguri, Strominger, and Vafa conjectured that one can define a mixed ensemble for 4D N=2 BPS black holes in type II superstring theories, and write a partition function for them ~\cite{osv}. These black holes are described by three observables comprising their mass $M$, electric charges $q_I$, and magnetic charges $p^I$. More precisely, they considered electric and magnetic charges as two different degrees of freedom and utilized a microcanonical ensemble for the magnetic charges and a canonical ensemble for the electric charges. In this manner, one can write the partition function of the black hole and compute its free energy.
\begin{eqnarray}
Z_{mixed}= \sum_{q_{I}} \Omega(p^I, q_{I})\, \exp (-q_I \phi^I)
\end{eqnarray}
in which $\Omega(p^I, q_{I})$ indicates the degeneracy of states with magnetic charges $p^I$ and electric charges $q_I$, and $\phi^I$ specify by how much the energy of the system changes if one adds or subtracts a unit of electric charge. Since the meaning of ${\phi}^I$ is analogous to that of the chemical potential in statistical physics, Ooguri, Strominger, and Vafa (OSV) called them electric chemical potentials. Similarly, for every magnetic charge there is a magnetic chemical potential, which in the literature is denoted by $\chi_I$.
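In this mixed ensemble, the usual thermodynamic identities apply: defining the free energy $\mathcal{F}(p^I,\phi^I)\equiv\ln Z_{mixed}$, in the saddle-point (large charge) approximation the electric charges and the black hole entropy follow from a Legendre transformation,
\begin{equation}
q_I=-\frac{\partial \mathcal{F}}{\partial \phi^I}\, , \qquad S_{BH}(p,q)=\mathcal{F}(p,\phi)+q_I\phi^I\, ,
\end{equation}
where the second relation is evaluated on the solution of the first.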
\\ When one compactifies type IIA or IIB superstring theory on a Calabi-Yau manifold, whose holonomy group is $SU(3)$, one obtains some massless scalar fields $X^I$ from components of different tensor fields. Some of them sit in N=2 vector multiplets, others sit in N=2 hypermultiplets. These massless scalar fields, called moduli, are K\"ahler or complex structure moduli of the Calabi-Yau manifold.
Here we denote K\"ahler moduli by $X^I$, and complex structure moduli by $\acute{X^I}$. The space of K\"ahler moduli is itself a K\"ahler manifold; this manifold has a real nowhere-vanishing function named the K\"ahler potential $K(X)$, whose second order derivatives give the components of the K\"ahler metric $g_{i\bar{j}}=\partial_i\partial_{\bar{j}} K$ of the K\"ahler moduli space.
\\There is a function (or more precisely a section) $F(X^I)$ of the moduli whose first order derivatives determine the K\"ahler potential $K(X)$; for this reason it is called the prepotential ~\cite{moha}.
\begin{equation}
K(X)=-\ln (-i[X^I\bar{F_I}-F_I \bar{X^I}])
\end{equation}
It is useful to introduce the inhomogeneous coordinates $t^{\alpha}=\dfrac{X^{\alpha}}{X^0}$ (with $\alpha\neq0$) on the moduli spaces.
The prepotential receives loop corrections in the closed string coupling; after taking these loop corrections into account, it is called the generalized prepotential. For type IIA supergravity and up to one loop order in the string coupling it is given by ~\cite{devia}
\begin{equation}
F^{IIA}(t^\alpha)=(CX^0)^2 D_{\alpha\beta\gamma}t^{\alpha}t^{\beta}t^{\gamma}-\dfrac{1}{6}c_{2\alpha}t^{\alpha}+....
\end{equation}
the first term is computed at tree level in the string coupling and $D_{\alpha\beta\gamma}$ gives the triple intersection numbers of four-cycles of the Calabi-Yau manifold. If one chooses a basis $\lbrace\omega_{\alpha}\rbrace$ for the cohomology group $H^{(1,1)}(M)$, it can be written as
\begin{equation}
D_{\alpha\beta\gamma}=\int_M \omega_{\alpha}\wedge\omega_{\beta}\wedge\omega_{\gamma}
\end{equation}
The second term in (6) occurs at one-loop order in string coupling, and is related to the second Chern class $ c_2 $ of the Calabi-Yau manifold.
\begin{equation}
c_{2\alpha}=\int_M c_2 \wedge \omega_\alpha
\end{equation}
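As a concrete illustration (this example is not used in what follows), for the quintic hypersurface in $\mathbb{CP}^4$, for which $h^{(1,1)}=1$, one has
\begin{equation}
D_{111}=\int_M \omega\wedge\omega\wedge\omega=5\, ,\qquad c_{2\alpha}\big|_{\alpha=1}=\int_M c_2\wedge\omega=50\, .
\end{equation}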
On the other hand, the free energy of the A-model topological string theory has a perturbative expansion in the closed topological string coupling $g_{top}$ as follows [7,11]
\begin{equation}
F^{top,A}(t^\alpha)=-\dfrac{{(2\pi)}^3 i}{g_{top,A}^2} D_{\alpha\beta\gamma}t^{\alpha}t^{\beta}t^{\gamma}-\dfrac{\pi i}{12}c_{2\alpha}t^{\alpha}+...
\end{equation}
By comparing (6) and (9) term by term, OSV concluded that
\begin{equation}
F^{IIA}(t^{\alpha})=-\dfrac{2i}{\pi}F^{top,A}(t^\alpha)
\end{equation}
Moreover, by utilizing the mixed ensemble, OSV could prove that the free energy of N=2 extremal black holes is obtained by taking the imaginary part of the generalized prepotential.
Therefore, they obtained the following astonishing formula, which relates the partition function $Z^{IIA}_{Mixed}$ of 4D N=2 extremal type IIA black holes to the partition function $Z^{top,A}$ of the A-model topological string theory.
\begin{equation}
Z^{IIA}_{Mixed} =\vert{ Z^{top,A} }\vert^2
\end{equation}
Here we should emphasize a few points. First, the corresponding topological string theory lives on the same Calabi-Yau manifold on which type IIA superstring theory is compactified. Second, two-loop and higher order corrections in the generalized prepotential and in the topological string theory have been ignored. It can be shown that by choosing a special charge configuration, in which some of the electric and magnetic charges are very large, such higher order corrections can be omitted ~\cite{osv}. Third, though in ~\cite{osv} the authors used a mixed ensemble to derive equation (11), shortly after that it was shown that one can use any kind of ensemble to describe the black hole ~\cite{ensemble}. This situation is analogous to what we have in ordinary statistical mechanics: all ensembles are equivalent to each other in an appropriate limit. In other words, no matter which ensemble one chooses to compute the thermodynamical quantities of the system, the results derived by different ensembles should be equal. In this context, the appropriate limit is the limit of large electric and magnetic charges, i.e. in this limit the microcanonical and canonical ensembles become equivalent to each other.
\\The importance of this discovery is that calculations in topological string theories are much easier than in ordinary superstring theories. The reasons are as follows. First, topological string theories can only live on Euclidean, i.e. Calabi-Yau, manifolds; hence they are time independent. Second, their Hilbert spaces are much smaller. Third, their vertex operators are in a one-to-one correspondence with the members of the Dolbeault cohomology groups $H^{(1,1)}(M)$ or $H^{(2,1)}(M)$ in the A-model or B-model topological string theories, respectively. Consequently, computing correlation functions is reduced to the calculation of the integrals of wedge products of these forms on the Calabi-Yau manifold. Due to these facts, this conjecture has excited a lot of interest during the past six years.
\\ Having reviewed the original conjecture, we are now ready to study this intriguing idea in the case of 4D N=2 extremal black holes in the low energy limit of type IIB superstring theory.
\newpage
We know that if one compactifies type IIA superstring theory on a circle of radius $R$ and applies T-duality on the circle, then type IIA theory turns into type IIB theory on a circle whose radius is $\dfrac{1}{R}$. In 1996, Strominger, Yau, and Zaslow ~\cite{zaslow} proved that T-duality is a special case of a symmetry named mirror symmetry. Mirror symmetry turns a Calabi-Yau threefold $M$ into another Calabi-Yau threefold $\acute{M}$ (called the mirror manifold) such that there is an isomorphism between certain Dolbeault cohomology groups ~\cite{vafamir,mir1},
\begin{equation}
H^{(1,1)}(M) \cong H^{(2,1)}(\acute{M})
\end{equation}
and their Euler characteristics are related to each other by:
\begin{equation}
\chi =- \acute{\chi}
\end{equation}
These formulae only hold when the complex dimension of $M$ is three. Under mirror symmetry, type IIA superstring theory compactified on $M$ is exchanged with type IIB superstring theory compactified on $\acute{M}$.
\\On the other hand, since mirror symmetry exchanges the K\"ahler structure moduli $t^{\alpha}$ of the manifold $M$ and the complex structure moduli $\acute{t}^{\alpha}$ of the manifold $\acute{M}$, through the isomorphism (12), it turns the type A topological string theory on $M$ into the type B topological string theory on $\acute{M}$ ~\cite{vafamir, vafarev}. By putting everything together we can draw figure 1. Hence it seems probable that one can write the partition function of 4D N=2 extremal black holes of type IIB in terms of the partition function of B-model topological string theory. Our aim in this paper is to verify this claim explicitly.
\begin{figure}
\centering
\includegraphics[scale=.6]{1}
\caption{Mirror symmetry supports the presence of the OSV conjecture for N=2 extremal black holes of type IIB}
\label{fig:1}
\end{figure}
The paper is organized as follows. In section 2 we derive the prepotential of type IIB supergravity up to one loop order in the string coupling. In section 3 we calculate the free energy of type B topological string theory up to one loop order, and explain why one needs a field redefinition in order to calculate the one-loop free energy. In section 4 we compare the prepotential and the free energy obtained in the previous sections. At the end, by utilizing a mixed ensemble for the black hole, we verify that the OSV conjecture holds for 4D N=2 extremal black holes of type IIB string theory. To start, we first need to know the generalized prepotential of the low energy limit of 4D type IIB string theory.
\section{Prepotential of Type IIB Supergravity}
To find the prepotential we will use mirror symmetry. According to (12) we have
\begin{equation}
\omega \in H^{(1,1)}(M)\longrightarrow \xi \in H^{(2,1)}(\acute{M})
\end{equation}
On the other hand, there is another isomorphism
\begin{equation}
H^{(2,1)}(\acute{M})\cong H^{(0,1)}(\acute{M},T\acute{M})
\end{equation}
A member $\acute{\mu}$ of $H^{(0,1)}(\acute{M},T \acute{M})$ is a closed anti-holomorphic 1-form taking values in the holomorphic tangent bundle, i.e. $\acute{\mu}=\acute{\mu}^l_{\bar{k}} d \acute{\bar{z}}^{\bar{k}} \acute{\partial_l}$. Consequently, these isomorphisms take the K\"ahler form $\omega \in H^{(1,1)}(M)$ to $\acute{\mu} \in H^{(0,1)}(\acute{M},T\acute{M})$ :
\begin{equation}
\omega \longrightarrow \acute{\mu}
\end{equation}
One can write both sides of the above isomorphism in terms of the components of the forms,
\begin{equation}
\omega=iG_{k\bar{k}}dz^k \wedge d\bar{z}^{\bar{k}} \longrightarrow \acute{\mu}= \acute{G}_{k\bar{k}} \acute{\mu}^k _{\bar{l}} {\acute{G}^{l\bar{l}} d \acute{\bar{z}}^{\bar{k}} \acute{\partial_l}}
\end{equation}
in which $ \acute{z} $ and $ \acute{G} $ are the coordinates and the K\"ahler metric of the mirror manifold $ \acute{M} $, respectively. On the other hand, it has been proven that under this symmetry the K\"ahler metric and anti-symmetric tensor of the manifold $M$ transform as follows ~\cite{Giveon, mir2}:
\begin{eqnarray}
G &\longrightarrow& \acute{G}(G,B)
\\ \nonumber B &\longrightarrow& \acute{B}(G,B)
\end{eqnarray}
In other words, the new metric $\acute{G}$ is a function of both $G$ and $B$. By using (17) and (18) one can write:
\begin{equation}
dz^k \wedge d{\bar{z}}^{\bar{k}}\longrightarrow -i \acute{\mu}^k_{\bar{l}} \acute{G}^{l \bar{l}} d{\acute{\bar{z}}}^{\bar{k}} \acute{{\partial}}_{\bar{l}}
\end{equation}
Thus the Ricci form $R_{j}^{i} $ and the second Chern class $c_{2} $ of the manifold $M$ transform in the following way
\begin{equation}
R_{j}^{i}\longrightarrow \acute{R}_{j}^{i}:=-i \acute{R}^i_{jk\bar{k}} \acute{\mu}^k_{\bar{l}} \acute{G}^{l \bar{l}} d{\acute{\bar{z}}}^{\bar{k}} \acute{{\partial}}_{\bar{l}}
\end{equation}
\begin{equation}
c_2 \longrightarrow \acute{c_2}:=\dfrac{1}{8{\pi}^2} {\delta}_{i_1i_2}^{j_1j_2} \acute{R}_{j_1}^{i_1} \wedge \acute{R}_{j_2}^{i_2}
\end{equation}
Putting all these together we have
\begin{equation}
D_{\alpha\beta\gamma} \longrightarrow \acute{D}_{\alpha\beta\gamma}:= \int_{\acute{M}} (\acute{\mu}_{\alpha})^i \wedge (\acute{\mu}_{\beta})^j \wedge (\acute{\mu}_{\gamma})^k \, \acute{\Omega}_{ijk} \wedge \acute{\Omega}
\end{equation}
and
\begin{equation}
c_{2\alpha} \longrightarrow \acute{c_{2\alpha}}:=\int_{\acute{M}} \acute{c}_2 \wedge {\acute{\mu}}_{\alpha}
\end{equation}
Here ${\acute{\mu}}_{\alpha}$ are basis elements of the cohomology group $H^{(0,1)}(\acute{M},T \acute{M})$ and are called Beltrami differentials. Note also that $ \int_{\acute{M}} {\acute{\mu}}_{\alpha} \wedge {\acute{\mu}}_{\beta} $
\\Generally, the map which relates the K\"ahler and complex structure moduli of the two Calabi-Yau manifolds is very complicated ~\cite{mir1}. But if one considers the case in which the volumes of $M$ and $\acute{M}$ are both large, then this map is linear.
\begin{equation}
\acute{t}^{\alpha}=\sum_{\beta}L_{\alpha\beta}t^{\beta}
\end{equation}
in which the coefficients $L_{\alpha\beta}$ are independent of the moduli.
Finally, we will find
\begin{equation}
F^{IIB}(\acute{t}^{\alpha}) ={(\acute{C}\acute{X}^0)}^2 \acute{D}_{\alpha\beta\gamma}\acute{t}^{\alpha}\acute{t}^{\beta}\acute{t}^{\gamma}-\dfrac{1}{6}\acute{c}_{2\alpha} \acute{t}^{\alpha}+...
\end{equation}
\section{Free Energy of Type B Topological String Theory}
Now we want to examine the free energy of type B topological string theory. The tree-level term has already been derived ~\cite{vafamir}. According to ~\cite{kod,vonk}, the tree-level path integral without insertions vanishes, and one has to insert three vertex operators in the path integral to get a non-zero result. In other words, at tree level in the topological string coupling the first non-vanishing observable is the three-point function, which is ~\cite{vafamir}:
\begin{equation}
F^{top,B}_{tree-level} (\acute{t}^\alpha)= \int_{\acute{M}} (\acute{\mu})^i \wedge (\acute{\mu})^j \wedge (\acute{\mu})^k {\Omega}_{ijk} \wedge \Omega
\end{equation}
Here $(\acute{\mu})^i$ belongs to $H^{(0,1)}(\acute {M},T\acute{M})$. By choosing a basis $\lbrace{(\acute{\mu}_\alpha)^i}\rbrace$ for $H^{(0,1)}(\acute {M},T\acute{M})$ and expanding $(\acute{\mu})^i$ in terms of this basis,
\begin{equation}
(\acute{\mu})^i=(\acute{\mu}_{\alpha})^i\acute{t}^\alpha,
\end{equation}
we can rewrite (27) as
\begin{equation}
F_{tree-level}^{top,B}(\acute{t}^\alpha)=\acute{D}_{\alpha\beta\gamma} \acute{t}^{\alpha} \acute{t}^{\beta} \acute{t}^{\gamma}
\end{equation}
where $ \acute{D}_{\alpha\beta\gamma} $ is defined by (23). The next step is to calculate the one-loop free energy $ F_{one-loop}^{top,B} $ of the topological string theory. To do so we should calculate the following index ~\cite{one}
\begin{equation}
F_{one-loop}^{top,B}=\dfrac{1}{2} \int_F \dfrac{d^2\tau}{{\tau}_2} Tr[(-1)^F F_L F_R q^{H_L} {\bar{q}}^{H_R}]
\end{equation}
where the integral is taken over the fundamental domain of the worldsheet, which is a torus. $F_L$ and $F_R$ are the charges corresponding to the $U(1)$ R-symmetries in the left-moving and right-moving N=(2,2) supersymmetry algebra. $H_L$ and $H_R$ are the left-moving and right-moving parts of the Hamiltonian of B-model topological string theory. To compute it we proceed as in ~\cite{one} and turn it into a path integral over permitted field configurations. We know from ~\cite{witten1,vafamir} that, as a result of a fixed point theorem, only constant bosonic fields have a non-zero contribution in the path integral. Furthermore, we know that observables of this theory are independent of the K\"ahler structure moduli (or the volume) of $\acute{M}$ and depend on the complex structure moduli (or the shape) of $\acute{M}$. Hence, we can work in the limit where $\acute{M}$ is very large. The Lagrangian density is as follows ~\cite{witten1}
\begin{eqnarray}
L_{top,B}&=&G_{IJ} \partial_z {\phi}^I \partial_{\bar{z}} {\phi}^J+iG_{i\bar{j}} {\eta}^{\bar{j}}(D_{\bar{z}}{\rho}_z^i+D_z{\rho}_{\bar{z}}^i)
\\ \nonumber &+& i{\theta}_i(D_{\bar{z}}{\rho}_z^i-D_z{\rho}_{\bar{z}}^i)+R_{i\bar{j} k}^l {\rho}_z^i {\rho}_{\bar{z}}^k {\eta}^{\bar{j}} {\theta}_l
\end{eqnarray}
Here $G_{i \bar{j}}$ and $R_{i\bar{j} k}^l$ are the metric and the Riemann tensor of the Calabi-Yau three-fold $\acute{M}$. ${\phi}^{I=i,\bar{i}}$ are bosonic fields on the topological string world-sheet. Moreover, ${\rho}_z^i$ and ${\rho}_{\bar{z}}^i$ are the anti-commuting spin $1$ and spin $-1$ fields, and $\theta_i$ and $\eta^{\bar{j}}$ the anti-commuting spin $0$ fields on the world-sheet. $D_z$ and $D_{\bar{z}}$ are covariant derivatives:
\begin{equation}
D_z \rho_{\bar{z}}^i=\partial_z \rho_{\bar{z}}^i+\Gamma_{jk}^i \partial_z \phi^j \rho_{\bar{z}}^k
\end{equation}
By looking at the Lagrangian density (31), we see that in order to prevent the action from being infinite, and hence getting a zero partition function, all the bosonic and fermionic fields must be constant. Therefore, one should only consider the four-fermion interaction term in the action.
\\By applying Noether's theorem, one can easily obtain the R-symmetry charges
\begin{eqnarray}
F_L&=& -2\pi (G_{i\bar{i}} {\eta}^{\bar{i}}+{\theta}_i) {\rho}^i_{\bar{z}}
\\ \nonumber F_R &=& -2\pi (G_{i\bar{i}} {\eta}^{\bar{i}}-{\theta}_i) {\rho}^i_z
\end{eqnarray}
In analogy with ~\cite{one} we have
\begin{eqnarray}
& F_{one-loop}^{top,B} = \int_{F_0} \dfrac{d^2\tau}{4 \pi{{\tau}_2}^4} \int_{\acute{M}} dV \int {\prod}_r d{\rho}^r_z d{\rho}^r_{\bar{z}}
d{\eta}^{\bar{r}} d{\theta}_{r}&
\\ \nonumber &\times {\rho}_z^k {\rho}_{\bar{z}}^l (G_{l\bar{l}} {\eta}^{\bar{l}}+{\theta}_l)(G_{k\bar{k}} {\eta}^{\bar{k}}-{\theta}_k)
\exp(\pi {\tau}_2 R_{i\bar{i} j \bar{j}} {\rho}^i_{\bar{z}} {\rho}^j_z {\eta}^{\bar{i}} {\theta}_k G^{k\bar{j}})&
\end{eqnarray}
In this expression $dV$ is the volume element of $\acute{M}$. By Taylor expanding the exponential and keeping in mind that we are dealing with a Berezin integration, only the second-order term in this expansion has a non-zero contribution to the path integral.
\begin{equation}
\dfrac{1}{32\pi} \int_{F_0} \dfrac{d^2{\tau}}{({\tau}_2)^2} \int_{\acute{M}} dV {\epsilon}^{l i_2 i_3} {\epsilon}^{k j_2 j_3} {\epsilon}^{\bar{l} \bar{i}_2 \bar{i}_3} {\epsilon}^ {\bar{n} \bar{j}_2 \bar{j}_3} G_{l\bar{l}} \,G_{k \bar{n}} R_{i_2 \bar{i}_2 j_2 \bar{j}_2} R_{i_3 \bar{i}_3 j_3 \bar{j}_3}
\end{equation}
Taking the integral over the fundamental domain $F_0$ of the genus one
worldsheet, which gives $\int_{F_0} d^2\tau/({\tau}_2)^2=\pi/3$, and using the following relations
{\epsilon}^{i_1 i_2 i_3}{\epsilon}_{i_1 j_2 j_3}={\delta}_{j_2 j_3}^{i_2 i_3}
\end{equation}
\begin{equation}
\acute{c}_2=-\dfrac{1}{8{\pi}^2}{\delta}_{i_1 i_2}^{j_1 j_2} \acute{R}_{j_1}^{i_1} \wedge \acute{R}_{j_2}^{i_2}
\end{equation}
\begin{equation}
\int_{\acute{M}} \acute{\omega} \wedge \acute{c}_2 =\dfrac{1}{8{\pi}^2} \int dV {\epsilon}^{i_1 i_2 i_3} {\epsilon}^{j_1 j_2 j_3} {\epsilon}^{\bar{i}_1 \bar{i}_2 \bar{i}_3} {\epsilon}^ {\bar{j}_1 \bar{j}_2 \bar{j}_3} \acute{G}_{i_1\bar{i}_1} \, \acute{G}_{j_1 \bar{j}_1} \acute{R}_{i_2 \bar{i}_2 j_2 \bar{j}_2} \acute{R}_{i_3 \bar{i}_3 j_3 \bar{j}_3}
\end{equation}
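In these relations, ${\delta}_{j_2 j_3}^{i_2 i_3}$ denotes the generalized Kronecker delta,
\begin{equation}
{\delta}_{j_2 j_3}^{i_2 i_3}=\delta_{j_2}^{i_2}\delta_{j_3}^{i_3}-\delta_{j_3}^{i_2}\delta_{j_2}^{i_3}\, .
\end{equation}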
We can write
\begin{equation}
F_{one-loop}^{top,B}=\dfrac{\pi}{4}\int_{\acute{M}} \acute{\omega} \wedge \acute{c}_2
\end{equation}
It is obvious that this expression depends linearly on the K\"ahler form $\acute{\omega}$, and hence on the K\"ahler structure moduli of $\acute{M}$. Since it must depend on the complex structure moduli of $\acute{M}$, we certainly made a mistake! To understand what has led us to this wrong result, we should look at (35). In this expression there are two metrics: one is absorbed in the contraction of two Levi-Civita symbols (or more precisely in the definition of $\acute{c}_2$), and the other one is used in the K\"ahler form $\acute{\omega}$. In other words, the only dependence of (40) on the K\"ahler moduli comes from the K\"ahler form $\acute{\omega}$ or equivalently from the metric $G_{l\bar{l}}$.
\\To reach the right result, we have to replace $\acute{\omega}$ with another form which depends on the complex structure moduli $\acute{t}^\alpha$ of $\acute{M}$. We know that for compact Calabi-Yau manifolds the components of the holomorphic three-form $\acute{\Omega}$ and its complex conjugate are constant ~\cite{vafamir}. Furthermore, (1,1)-forms and (2,2)-forms, which are related by Poincar\'e duality, depend on the K\"ahler moduli. Thus, only the members of the cohomology group $H^{(2,1)}(\acute{M})\cong H^{(0,1)}(\acute{M}, T\acute{M})$ (or the Beltrami differentials $\mu$) depend on the complex structure moduli. Since $G_{l\bar{l}}$ results from the product $F_L F_R$, we should change $F_L$ and $F_R$ such that their product contains the components of $\mu$. We can accomplish this by a field redefinition of one of the fermionic fields $\lbrace {\rho}_z^i, {\rho}_{\bar{z}}^i, {\eta}^{\bar{i}}, {\theta}_i \rbrace$, so that in the new definition the components of the Beltrami differentials $\mu$ are used.
\\ Recall that in the paper ~\cite{witten1} in which Witten introduced the type B topological sigma model, he applied the following field redefinition in order to make manifest the new spins of the fermionic fields after twisting, and to simplify the appearance of the BRST-like transformations. These transformations act on the fields in the following way:
\begin{eqnarray}
\nonumber \delta {\phi}^i&=&0,
\\ \nonumber \delta {\phi}^{\bar{i}}&=&\epsilon{\eta}^{\bar{i}},
\\ \delta {\eta}^{\bar{i}}&=&\delta{\theta}_i =0,
\\ \nonumber \delta {\rho}_z^i &=&2i\epsilon {\partial}_z {\phi}^i,
\\ \nonumber \delta {\rho}_{\bar{z}}^i &=&-2i\epsilon \bar{{\partial}}_{\bar{z}} {\phi}^i
\end{eqnarray}
One can see that if the field ${\eta}^{\bar{i}}$ is contracted with the components of $\acute{\mu}$,
\begin{equation}
{\eta}^{\bar{i}} \longrightarrow {\alpha}^l:=\acute{\mu}_{\bar{i}}^l {\eta}^{\bar{i}},
\end{equation}
then the variation of ${\alpha}^l$, and hence of the new vertex operators,
\begin{equation}
O_B^{(0)}=B_{i_1...i_p}^{j_1...j_q} {\alpha}^{i_1}...{\alpha}^{i_p} {\theta}_{j_1}....{\theta}_{j_q}
\end{equation}
would not be zero under the transformations (40). In (41), $B_{i_1...i_p}^{j_1...j_q}$ are the components of forms which belong to $H^{(p,0)} ({\wedge}^q T^{(1,0)} \acute{M})$. We could do the same redefinition for ${\theta}_i$ or for a linear combination of them. They all lead to the same incorrect vertex operators, since we know that the correct ones belong to $H^{(0,p)}({\wedge}^q T^{(1,0)}\acute{M})$. Therefore, we can only change the definition of the field ${\rho}_z^i$ or ${\rho}_{\bar{z}}^i$ in order to obtain the correct vertex operators.
\begin{equation}
{\rho}_z^j \longrightarrow {\rho}_z^{\bar{i}}:={\mu}_j^{\bar{i}} {\rho}_z^j
\end{equation}
or
\begin{equation}
{\rho}_{\bar{z}}^j \longrightarrow {\rho}_{\bar{z}}^{\bar{i}}:={\mu}_j^{\bar{i}} {\rho}_{\bar{z}}^j
\end{equation}
If one considers the matrix ${\mu}_{\bar{i}}^j$, then ${\mu}_j^{\bar{i}}$ is its inverse matrix. One can show that (43) and (44) will give the desired result. Since they have no advantage over each other and we cannot use both of them simultaneously, we choose the following redefinitions.
\begin{eqnarray}
{\gamma}^i&:=&{\rho}_z^i-{\rho}_{\bar{z}}^i,
\\ \nonumber {\beta}^{\bar{j}}&:=&{\mu}_i^{\bar{j}} ({\rho}_z^i+{\rho}_{\bar{z}}^i),
\end{eqnarray}
With this field redefinition the charges become
\begin{eqnarray}
F_L&=&-\pi (G_{i \bar{i}} {\eta}^{\bar{i}}+\theta_i)(\mu_{\bar{j}}^i{\beta}^{\bar{j}}-{\gamma}^i)
\\ \nonumber F_R&=&-\pi (G_{i \bar{i}} {\eta}^{\bar{i}}-{\theta}_i)({\mu}_{\bar{j}}^i{\beta}^{\bar{j}}+{\gamma}^i)
\end{eqnarray}
Moreover the interaction term in the action changes
\begin{equation}
\dfrac{1}{8}R_{i_2\bar{i}_2 j_2\bar{j}_2}\mu_{\bar{k}_2}^{i_2} \beta^{\bar{k}_2} \gamma^{j_2} \eta^{\bar{i}_2} \theta_{k_2} G^{k_2\bar{j}_2}
\end{equation}
Plugging (46) and (47) into (30) and performing the Grassmann integrals, we obtain the following expression:
\begin{eqnarray}
&\dfrac{\pi}{64} \int_{F_0} \dfrac{d^2{\tau}}{({\tau}_2)^2} \int_{\acute{M}} dV \mu_{\bar{i}}^e [\epsilon ^{\bar{i} \bar{i}_2 \bar{i}_3} \epsilon _l^{\bar{j}_2 \bar{j}_3}+\epsilon^{\bar{i} \bar{j}_2 \bar{j}_3} \epsilon _l^{\bar{i}_2 \bar{i}_3}]&
\\ \nonumber &\times \epsilon_{el_{2}l_{3}} \epsilon^{lj_2j_3}(R_{i_2\bar{i}_2j_2\bar{j}_2} \mu_{\bar{k}_2}^{i_2} G^{\bar{k}_2l_2})(R_{i_3\bar{i}_3j_3\bar{j}_3} \mu_{\bar{k}_3}^{i_3} G^{\bar{k}_3l_3})&
\end{eqnarray}
If we change the indices as
\begin{equation}
\bar{i}_a\longleftrightarrow \bar{j}_a \qquad a=2,3
\end{equation}
and utilize the equality $R_{i\bar{i}j\bar{j}}=R_{i\bar{j}j\bar{i}}$, we see that the second term in the brackets is equal to the first one. Thus, with this field redefinition, $ G_{l\bar{l}} $ in (34) is replaced by the components of $\mu$ in (48). Now, we apply the definitions (21) and (22) for $\acute{c}_2$ and take advantage of the fact that for a compact Calabi-Yau manifold $\Omega_{nl_2l_3}=\epsilon_{nl_2l_3}$ is constant with respect to the coordinates on the manifold. Hence we can rewrite (48) as follows:
\begin{equation}
F_{one-loop}^{top,B}=-\dfrac{{(\pi)}^4}{12}i\acute{t}^\alpha
\int_{\acute{M}} \Omega_{rr_2r_3}(\mu_\alpha)^r \wedge (\acute{c}_2)^{r_2r_3} \wedge \Omega
\end{equation}
By using the definition (24) we have
\begin{equation}
F_{one-loop}^{top,B}=-\dfrac{{(\pi)}^4}{12} i\acute{t}^\alpha \acute{c}_{2\alpha}
\end{equation}
On the other hand, one can write the total free energy $ F^{top,B}$ of the type B topological string theory as
\begin{equation}
F^{top,B}=\sum_{g=0}^{\infty} (g_{top,B})^{2g-2} F_g^{top,B}(\acute{t}^{\alpha})
\end{equation}
Since the coupling $ g_{top,B}$ is very small in this case and the complex structure moduli $\acute{t}^{\alpha}$ are very large, only the first two terms are important, and we merely consider the contribution of these terms. Plugging (29) and (51) into (52), we obtain the free energy of B-model topological string theory up to one-loop order in the closed topological string coupling.
\begin{equation}
F^{top,B}=-\dfrac{6i}{{(g_{top,B})^2}}\acute{D}_{\alpha\beta\gamma} \acute{t}^{\alpha} \acute{t}^{\beta} \acute{t}^{\gamma}-\dfrac{{(\pi)}^4 i}{12} \acute{c}_{2\alpha} \acute{t}^\alpha +...
\end{equation}
One can change the normalization of the worldsheet fields so that $F^{top,B} $ can be written as
\begin{equation}
F^{top,B}=-\dfrac{(2\pi)^3 i}{{(g_{top,B})^2}}\acute{D}_{\alpha\beta\gamma} \acute{t}^{\alpha} \acute{t}^{\beta} \acute{t}^{\gamma}-\dfrac{{\pi i}}{12} \acute{c}_{2\alpha} \acute{t}^\alpha+...
\end{equation}
\section{OSV Conjecture for Type IIB N=2 Extremal Black Holes}
In the low-energy limit of type IIB superstring theory compactified to four dimensions, one can build N=2 extremal black holes by forming bound states of D1-, D3-, and D5-branes wrapped on cycles of the Calabi-Yau three-fold. Since these branes have mass as well as electric and magnetic charges, the final result is an extremal black hole in four dimensions.
With the same reasoning as before, one can attribute a mixed ensemble to it and write its free energy as the imaginary part of the generalized prepotential.
\begin{equation}
F_{OSV}^{IIB}=-\pi Im(F^{IIB}(\acute{t}))
\end{equation}
One can see that the type IIB prepotential
\begin{equation}
F^{IIB}(\acute{t}^{\alpha}) =(\acute{C}\acute{X^0})^2\acute{D}_{\alpha\beta\gamma}\acute{t}^{\alpha}\acute{t}^{\beta}\acute{t}^{\gamma}
-\dfrac{1}{6}\acute{c}_{2\alpha} \acute{t}^{\alpha}+...
\end{equation}
and the free energy of the type B topological string
\begin{equation}
F^{top,B}(\acute{t}^{\alpha})=-\dfrac{(2 \pi)^3 i}{{(g_{top,B})^2}}\acute{D}_{\alpha\beta\gamma} \acute{t}^{\alpha} \acute{t}^{\beta} \acute{t}^{\gamma}-\dfrac{\pi i}{12} \acute{c}_{2\alpha} \acute{t}^\alpha+...
\end{equation}
are very similar to each other, and must be equal up to a coefficient. As in ~\cite{osv}, we first compare their one-loop terms and obtain
\begin{equation}
F^{IIB}(\acute{t}^{\alpha})=-\dfrac{2i}{\pi} F^{top,B}(\acute{t}^{\alpha})
\end{equation}
Then, comparing the tree-level terms, we obtain the closed topological string coupling $g_{top,B}$:
\begin{equation}
g_{top,B}=\pm\dfrac{4\pi i}{\acute{C}\acute{X^0}}
\end{equation}
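For completeness, we note that these two matchings are consistent; a short computation (ours) gives
\begin{align*}
\displaystyle
-\frac{2i}{\pi} \left( -\frac{\pi i}{12} \right) = \frac{2 i^2}{12} = -\frac{1}{6},
\qquad
(\acute{C}\acute{X^0})^2 = -\frac{2i}{\pi} \left( -\frac{(2\pi)^3 i}{(g_{top,B})^2} \right) = -\frac{16 {\pi}^2}{(g_{top,B})^2},
\end{align*}
so the one-loop coefficients of (56) and (57) agree under (58), and solving the tree-level relation for $g_{top,B}$ indeed yields (59).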
As a result of (55) and (58) one can write
\begin{eqnarray}
\ln Z_{mixed}^{IIB}=-\pi Im(F^{IIB})=F^{top,B}+\bar{F}^{top,B}
\end{eqnarray}
and hence
\begin{equation}
Z_{mixed}^{IIB}=\vert Z^{top,B} \vert^2
\end{equation}
\section{Conclusion}
\label{sec:outlook}
We have argued that, due to the presence of T-duality, the original OSV conjecture, which was formulated for type IIA N=2 extremal black holes and the type A closed topological string theory, holds in exactly the same way for type IIB N=2 extremal black holes. The only difference is that one should replace the partition function of the type A closed topological string theory by that of the type B closed topological string theory.
\section*{Acknowledgement}
\label{sec:ack}
We would like to thank Professor H. Arfaei, A. Tavanfar, and C. Vafa for their valuable suggestions on the manuscript. This work is part of Farzad Omidi's M.S. thesis at Sharif University of Technology, and he would like to thank the Department for giving him this opportunity.
\newpage
\section{Introduction}
\label{intro}
Let $\Omega \subset \mathbb{R}^d$, $d \in \{2,3\}$, be a bounded polyhedral domain. Furthermore, we assume that $\Omega$ is convex if necessary. We consider the Poisson problem as follows. Find $u: \Omega \to \mathbb{R}$ such that
\begin{align}
\displaystyle
- \varDelta u = f \quad \text{in $\Omega$}, \quad u = 0 \quad \text{on $\partial \Omega$}, \label{intro1}
\end{align}
where $f \in L^2(\Omega)$ is a given function. This paper gives error estimates for the first-order Crouzeix--Raviart (CR) finite-element approximation on anisotropic meshes in three dimensions. Anisotropic meshes have different mesh sizes in different directions, and the shape-regularity assumption on the triangulations $\mathbb{T}_h$ is no longer valid on such meshes; see for example \cite{Ape99}. Furthermore, we do not impose the maximum-angle condition proposed in \cite{BabAzi76} during mesh partitioning. Much of the discussion applies equally in two dimensions, and we therefore present the arguments so that they remain valid in an arbitrary number of dimensions wherever possible.
CR finite-element error estimates for the Poisson problem are known. Let $CR_{h0}^1$ be the CR finite-element space, to be defined in Section \ref{FEspaces}. Let $u \in H_0^1(\Omega)$ and $u_h^{CR} \in CR_{h0}^1$ be the exact and CR finite-element solutions, respectively. In \cite[Corollary 2.2]{Gud10}, adopting medius analysis, the estimate
\begin{align}
\displaystyle
| u - u_h^{CR} |_{H^1(\mathbb{T}_h)}
\leq c_0 \left( \inf_{v_h \in CR_{h0}^1} | u - v_h |_{H^1(\mathbb{T}_h)} + Osc_1(f) \right), \label{intro2}
\end{align}
is given, where $| \cdot |_{H^1(\mathbb{T}_h)}$ denotes the broken (piecewise) $H^1$ seminorm defined in Section \ref{regularmesh}, and $c_0$ is a positive constant independent of $h$. Here, the oscillation $Osc_1(f)$ is expressed as
\begin{align*}
\displaystyle
Osc_1(f) := \left( \sum_{T \in \mathbb{T}_h} h_T^2
\left [ \inf_{\bar{f} \in \mathcal{P}^0(T)} \| f - \bar{f} \|^2_{L^2(T)} \right] \right)^{1/2},
\end{align*}
where $\mathcal{P}^0(T)$ denotes the space of constant functions on $T$. Suppose that $u \in H^2(\Omega)$ and oscillation $Osc_1(f)$ vanishes. Let $I_h u \in CR_{h0}^1$ be the nodal interpolation of $u$ at the midpoints of the faces. Then, from the standard interpolation error estimate (see for example \cite[Corollary 1.109]{ErnGue04}), we have
\begin{align*}
\displaystyle
| u - u_h^{CR} |_{H^1(\mathbb{T}_h)} \leq c_0 | u - I_h u |_{H^1(\mathbb{T}_h)}
\leq c_1 h |u|_{H^2(\Omega)},
\end{align*}
where $c_1$ represents a positive constant independent of $h$ and $u$ but depending on the parameter of the simplicial mesh; see for example \cite[Definition 1.107]{ErnGue04}. This parameter is bounded if the simplicial mesh sequence is shape regular. However, the situation is different without the shape-regular condition. The aim of the present paper is to deduce an analogous error estimate on anisotropic finite-element meshes. Note that very flat elements might be included in the mesh sequence. In many papers reporting on such investigations, the maximum-angle condition instead of the shape-regular condition is imposed. However, the maximum-angle condition is not necessarily needed to obtain error estimates. Recently, in the two-dimensional case, the CR finite-element analysis of the non-homogeneous Dirichlet Poisson problem has been investigated under a more relaxed mesh condition \cite{KobTsu18b}. The present paper extends previous research to a three-dimensional setting.
However, it may not be easy to use the estimate \eqref{intro2} on anisotropic finite-element meshes. To overcome this difficulty, we use the interpolation error estimates obtained in \cite{IshKobTsu}. In that paper, the CR and Raviart--Thomas (RT) interpolation errors are bounded in terms of $h$ and the new parameter $H$; see Corollaries \ref{modify=coro2} and \ref{modify=coro3}.
The CR finite-element space is not contained in $H_0^1(\Omega)$. Hence, the error between the exact solution and the CR finite-element approximation in the broken $H^1$ seminorm is split into two parts (\cite{Bre15,ErnGue04}). One is an approximation error that measures how well the exact solution is approximated by CR finite-element functions; the other is a nonconformity error term. For the former, the CR interpolation error estimates (Corollary \ref{modify=coro2}) are used. For the latter, the standard scaling argument is often used to obtain the error estimates; however, in this way we are unable to derive the correct order on anisotropic meshes. To overcome this difficulty, we use the lowest-order RT interpolation error estimates on anisotropic meshes (Corollary \ref{modify=coro3}). With this technique, we consequently obtain the error estimates in the broken $H^1$ seminorm (Theorem \ref{CR=thr4}) and in the $L^2$ norm (Theorem \ref{CR=thr5}) on anisotropic meshes.
Furthermore, we present an error estimate for the first-order RT finite-element approximation of the Poisson problem \eqref{intro1} based on the dual mixed formulation (Theorem \ref{mix=thr7}). In the proof, we again use Corollary \ref{modify=coro3}. We again emphasise that we do not impose either the shape-regular or the maximum-angle condition during mesh partitioning.
We next present the equivalence of the enriched piecewise linear CR finite-element method introduced by \cite{HuMa15} and the first-order RT finite-element method. In two dimensions, the work \cite{ArnBre85} represents pioneering research. Marini \cite{Mar85} further found an expression relating RT and CR finite-element methods:
\begin{align}
\displaystyle
\bar{\sigma}_h^{RT}|_T &= \nabla \bar{u}_h^{CR} - \frac{f_T^0}{2} (x - x_T) \quad \text{on $T$}, \label{intro3}
\end{align}
where $T$ denotes a mesh element, $x_i$ ($i=1,2,3$) the vertices of triangle $T$, $x_T$ the barycentre of $T$ such that $x_T := \frac{1}{3}(x_1 + x_2 + x_3)$, and $\bar{\sigma}_h^{RT}$ and $ \bar{u}_h^{CR}$ respectively denote the RT and CR finite-element solutions with a given external piecewise-constant function $f_T^0$. It was recently proved \cite{HuMa15} that the enriched piecewise-linear CR finite-element method is identical to the first-order RT finite-element method for both the Poisson and Stokes problems in any number of dimensions. In the present paper, we extend Marini's results to three dimensions (Lemma \ref{RTCR=lem12}).
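As a quick consistency check of \eqref{intro3} (our remark, not taken from ~\cite{Mar85}), taking the divergence of both sides on a triangle $T$ gives
\begin{align*}
\displaystyle
\mathop{\mathrm{div}} \bar{\sigma}_h^{RT}|_T = \varDelta \bar{u}_h^{CR}|_T - \frac{f_T^0}{2} \mathop{\mathrm{div}} (x - x_T) = - f_T^0,
\end{align*}
since $\bar{u}_h^{CR}$ is piecewise linear (so $\varDelta \bar{u}_h^{CR}|_T = 0$) and $\mathop{\mathrm{div}} (x - x_T) = 2$ in two dimensions; this is exactly the elementwise conservation property expected of the mixed solution.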
The remainder of the present paper is organised as follows. Section 2 introduces the weak form of the continuous problem \eqref{intro1}, the finite-element meshes, and finite-element spaces. Furthermore, we propose a parameter $H$. Section 3 introduces discrete settings of the CR finite-element method for \eqref{intro1} and proposes error estimates. Section 4 proves error estimates for the first-order RT finite-element method based on the dual mixed formulation of the Poisson problem. Section 5 gives the equivalence of the RT and CR finite-element problems. Finally, Section 6 presents numerical results obtained using the Lagrange P1 element and the first-order CR element.
\section{Preliminaries}
\subsection{Weak formulation}
The variational formulation of the Poisson problem \eqref{intro1} reads as follows. Find $u \in H_0^1(\Omega)$ such that
\begin{align}
\displaystyle
a_0(u,\varphi) = (f , \varphi) \quad \forall \varphi \in H_0^1(\Omega), \label{pre1}
\end{align}
where $a_0: H^1(\Omega) \times H^1(\Omega) \to \mathbb{R}$ denotes a bilinear form defined by
\begin{align*}
\displaystyle
a_0(u,\varphi) := (\nabla u , \nabla \varphi).
\end{align*}
Here, we define $H_0^1(\Omega)$ as the closure of $C_0^{\infty}(\Omega)$ in the semi-norm $| \cdot |_{H^1(\Omega)}$. By the Lax--Milgram lemma, there exists a unique solution $u \in H_0^1(\Omega)$ for any $f \in L^2(\Omega)$ and it holds that
\begin{align*}
\displaystyle
| u |_{H^1(\Omega)} \leq C_P(\Omega) \| f \|,
\end{align*}
where $C_P(\Omega)$ is the Poincar\'e constant depending on $\Omega$.
Furthermore, if $\Omega$ is convex, then $u \in H^2(\Omega)$ and
\begin{align}
\displaystyle
| u |_{H^2(\Omega)} \leq \| \varDelta u \|. \label{pre2}
\end{align}
The proof can be found in, for example, \cite[Theorem 3.1.1.2, Theorem 3.2.1.2]{Gri11}.
\subsection{Meshes, Mesh faces, Averages and Jumps} \label{regularmesh}
Let $\mathbb{T}_h = \{ T \}$ be a simplicial mesh of $\overline{\Omega}$, made up of closed $d$-simplices, such as
\begin{align*}
\displaystyle
\overline{\Omega} = \bigcup_{T \in \mathbb{T}_h} T,
\end{align*}
with $h := \max_{T \in \mathbb{T}_h} h_{T}$, where $ h_{T} := \mathop{\mathrm{diam}}(T)$. We assume that each face of any $d$-simplex $T_1$ in $\mathbb{T}_h$ is either a subset of the boundary $\partial \Omega$ or a face of another $d$-simplex $T_2$ in $\mathbb{T}_h$. That is, $\mathbb{T}_h$ is a simplicial mesh of $\overline{\Omega}$ without hanging nodes.
\begin{defi} \label{pre=defi1}
For any $T \in \mathbb{T}_h$, we define the parameter $H_{T}$ as
\begin{align*}
\displaystyle
H_{T} := \frac{h_{T}^2}{|T|} \min_{1 \leq i \leq 3} |L_i| \quad \text{if $d=2$},
\end{align*}
where the $L_i$ $(i=1,2,3)$ denote the edges of the triangle $T$. Further, we define the parameter $H_{T}$ as
\begin{align*}
\displaystyle
H_{T} := \frac{h_{T}^2}{|T|} \min_{1 \leq i , j \leq 6, i \neq j} |L_i| |L_j| \quad \text{if $d=3$},
\end{align*}
where the $L_i$ $(i=1,\ldots,6)$ denote the edges of the tetrahedron $T$. Here, $|T|$ denotes the measure of $T$. Furthermore, we set
\begin{align*}
\displaystyle
H := H(h) := \max_{T \in \mathbb{T}_h} H_{T}.
\end{align*}
\end{defi}
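To make the definition concrete, the following minimal Python sketch (our illustration; the function name and the sample element are ours) evaluates $h_{T}$, $|T|$, and $H_{T}$ for a tetrahedron directly from Definition \ref{pre=defi1}:
\begin{verbatim}
import itertools
import numpy as np

def param_H(vertices):
    # h_T, |T| and H_T of the definition above, for a tetrahedron (d = 3)
    V = np.asarray(vertices, dtype=float)          # shape (4, 3)
    edges = [np.linalg.norm(V[i] - V[j])           # |L_1|, ..., |L_6|
             for i, j in itertools.combinations(range(4), 2)]
    h = max(edges)                                 # h_T = diam(T)
    vol = abs(np.linalg.det(V[1:] - V[0])) / 6.0   # |T|
    m = min(a * b for a, b in itertools.combinations(edges, 2))
    return h, vol, h**2 * m / vol                  # (h_T, |T|, H_T)

# A flat ("sliver") element: H_T blows up although h_T stays moderate.
print(param_H([(0, 0, 0), (1, 0, 0), (0.5, 1e-3, 0), (0.5, 0, 1e-3)]))
\end{verbatim}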
We impose the following assumption.
\begin{ass} \label{pre=ass1}
We assume that $\{ \mathbb{T}_h \}_{h > 0}$ is a sequence of triangulations of $\Omega$ such that
\begin{align*}
\displaystyle
\lim_{h \to 0} H(h) = 0.
\end{align*}
\end{ass}
\begin{rem}
The parameter $H_T$ was introduced in \cite{IshKobTsu}, where the interpolation errors are bounded (locally) in terms of $h_T$ and $H_T$ on anisotropic meshes without any geometric conditions. In the two-dimensional case, the parameter $H_T$ is equivalent to the circumradius of $T$. Hence, the maximum-angle condition or the semiregular condition holds if and only if there exists a constant $\sigma_0$ such that $H_T / h_T \leq \sigma_0$. In the three-dimensional case, it is conjectured that the maximum-angle condition holds if and only if the quantity $H_T / h_T$ is bounded.
\end{rem}
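This two-dimensional equivalence is easy to observe numerically. The following sketch (our illustration) compares $H_T$ with the circumradius $R = |L_1||L_2||L_3|/(4|T|)$ on progressively flatter triangles:
\begin{verbatim}
import itertools
import numpy as np

def H_and_R(vertices):
    # H_T of the definition above (d = 2) and the circumradius R
    V = np.asarray(vertices, dtype=float)          # shape (3, 2)
    a, b, c = (np.linalg.norm(V[i] - V[j])
               for i, j in itertools.combinations(range(3), 2))
    area = 0.5 * abs(np.linalg.det(np.array([V[1] - V[0], V[2] - V[0]])))
    H = max(a, b, c)**2 * min(a, b, c) / area
    return H, a * b * c / (4.0 * area)

for eps in (1.0, 1e-2, 1e-4):                      # flatter and flatter triangles
    H, R = H_and_R([(0, 0), (1, 0), (0.5, eps)])
    print(f"eps={eps:8.0e}  H_T={H:10.3e}  R={R:10.3e}  ratio={H / R:.3f}")
\end{verbatim}
Writing the edge lengths in decreasing order, one finds $H_T/R = 4\,\ell_{\max}/\ell_{\mathrm{mid}} \in [4,8)$ by the triangle inequality, which illustrates the equivalence.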
We adopt the concepts of mesh faces, averages, and jumps in the analysis of the RT and CR finite element methods. Let $\mathcal{F}_h^i$ be the set of interior faces and $\mathcal{F}_h^{\partial}$ the set of faces on the boundary $\partial \Omega$. Let $\mathcal{F}_h := \mathcal{F}_h^i \cup \mathcal{F}_h^{\partial}$. For any $F \in \mathcal{F}_h$, we define the unit normal $n_F$ to $F$ as follows: (i) if $F \in \mathcal{F}_h^i$ with $F = T_1 \cap T_2$, $T_1,T_2 \in \mathbb{T}_h$, let $n_1$ and $n_2$ be the outward unit normals of $T_1$ and $T_2$, respectively; then $n_F$ is either of $\{ n_1 , n_2\}$; (ii) if $F \in \mathcal{F}_h^{\partial}$, $n_F$ is the unit outward normal $n$ to $\partial \Omega$.
Let $k$ be a positive integer. We then define the broken (piecewise) Sobolev space as
\begin{align*}
\displaystyle
H^k(\mathbb{T}_h) &:= \left\{ \varphi \in L^2(\Omega); \ \varphi|_T \in H^k(T) \ \forall T \in \mathbb{T}_h \right\}
\end{align*}
with the norm
\begin{align*}
\displaystyle
| \varphi |_{H^1(\mathbb{T}_h)} &:= \left( \sum_{T \in \mathbb{T}_h}\| \nabla \varphi \|^2_{L^2(T)^d} \right)^{1/2} \quad \varphi \in H^1(\mathbb{T}_h).
\end{align*}
Let $\varphi \in H^k(\mathbb{T}_h)$. Suppose that $F \in \mathcal{F}_h^i$ with $F = T_1 \cap T_2$, $T_1,T_2 \in \mathbb{T}_h$. Set $\varphi_1 := \varphi{|_{T_1}}$ and $\varphi_2 := \varphi{|_{T_2}}$. The jump and the average of $\varphi$ across $F$ are then defined as
\begin{align*}
\displaystyle
[[ \varphi ]]_F := (\varphi_1 n_1 + \varphi_2 n_2) \cdot n_F, \quad \{ \! \{ \varphi \} \! \}_F := \frac{1}{2} (\varphi_1 + \varphi_2).
\end{align*}
For a boundary face $F \in \mathcal{F}_h^{\partial}$ with $F = \partial T \cap \partial \Omega$, $[[\varphi]]_F := \varphi|_T$ and $\{ \! \{ \varphi \} \! \}_F := \varphi |_T$. When $v$ is an $\mathbb{R}^d$-valued function, we use the notation
\begin{align*}
\displaystyle
[[ v \cdot n ]]_F := (v_1 - v_2) \cdot n_F, \quad \{ \! \{ v\} \! \}_F := \frac{1}{2} (v_1 + v_2 )
\end{align*}
for the jump of the normal component of $v$. For a boundary face $F \in \mathcal{F}_h^{\partial}$ with $F = \partial T \cap \partial \Omega$, $[[v \cdot n]]_F := v|_T \cdot n $ and $\{\! \{ v \}\! \}_F := v|_T $. Whenever no confusion can arise, we simply write $[[\varphi]]$, $\{\! \{ \varphi \}\! \}$, $[[v \cdot n]]$ and $\{\! \{ v \} \! \}$, respectively.
Suppose that $F \in \mathcal{F}_h^i$ with $F = T_1 \cap T_2$, $T_1,T_2 \in \mathbb{T}_h$. For $v \in H^1(\mathbb{T}_h)^d$ and $\varphi \in H^1(\mathbb{T}_h)$, it holds that
\begin{align*}
\displaystyle
[[ (v \varphi) \cdot n ]]_F = \{ \! \{ v \} \! \}_F \cdot n_F [[ \varphi ]]_F + [[ v \cdot n]]_F \{ \! \{ \varphi \} \! \}_F.
\end{align*}
We here define a broken gradient operator as follows.
\begin{defi} \label{pre=defi2}
For $\varphi \in H^1(\mathbb{T}_h)$, the broken gradient $\nabla_h:H^1(\mathbb{T}_h) \to L^2(\Omega)^d$ is defined by
\begin{align*}
\displaystyle
(\nabla_h \varphi)|_T &:= \nabla (\varphi|_T) \quad \forall T \in \mathbb{T}_h.
\end{align*}
\end{defi}
\noindent
Note that $H^1(\Omega )\subset H^1(\mathbb{T}_h)$ and the broken gradient coincides with the distributional gradient in $H^1(\Omega)$.
\subsection{Finite Element Spaces and Interpolations Error Estimates} \label{FEspaces}
This section introduces the piecewise-constant, CR, and RT finite element spaces.
Let $T \in \mathbb{T}_h$. For any $k \in \mathbb{N}_0$, let $\mathcal{P}^k(T)$ be the space of polynomials with degree at most $k$ in $T$.
\begin{thr}[Poincar\'e inequality] \label{modify=thr1}
Let $D \subset \mathbb{R}^d$ be a convex domain with diameter $\mathop{\mathrm{diam}}(D)$. It then holds that, for $\varphi \in H^1(D)$ with $\int_D \varphi dx = 0$,
\begin{align}
\displaystyle
\| \varphi \|_{L^2(D)} \leq \frac{\mathop{\mathrm{diam}} (D)}{\pi} |\varphi|_{H^1(D)}.\label{modify3}
\end{align}
\end{thr}
\begin{pf*}
The proof is found in \cite[Theorem 3.2]{Mar03}; see also \cite{PayWei60}.
\qed
\end{pf*}
\subsubsection{Piecewise-constant finite element space}
We define the standard piecewise constant space as
\begin{align*}
\displaystyle
M_h^0 := \left\{ q_h \in L^2(\Omega); \ q_h|_T \in \mathcal{P}^0(T) \ \forall T \in \mathbb{T}_h \right\}.
\end{align*}
The local $L^2$-projection $\Pi_{T}^0$ from $L^2(T)$ onto the space $\mathcal{P}^0(T)$ is defined by
\begin{align*}
\displaystyle
\int_{T} (\Pi_{T}^0 q - q) dx = 0 \quad \forall q \in L^2(T).
\end{align*}
Note that $\Pi_{T}^0 q$ is the constant function equal to $\frac{1}{|T|} \int_T q dx$. We also define the global $L^2$-projection $\Pi_h^0$ onto the space $M_{h}^0$ by
\begin{align*}
\displaystyle
(\Pi_h^0 q)|_T = \Pi_T^0 (q|_T) \quad \forall T \in \mathbb{T}_h, \quad \forall q \in L^2(\Omega).
\end{align*}
The Poincar\'e inequality \eqref{modify3} directly yields the following error estimate of the local $L^2$-projection $\Pi^0_T$.
\begin{thr} \label{modify=thr2}
We have the error estimate of the local $L^2$-projection such that
\begin{align}
\displaystyle
\| \Pi^0_T q - q \|_{L^2(T)} &\leq \frac{h_T}{\pi} |q|_{H^1(T)} \quad \forall T \in \mathbb{T}_h, \quad \forall q \in H^1(T). \label{modify6}
\end{align}
\end{thr}
\begin{pf*}
For any $q \in H^1(T)$, we set $w := \Pi^0_T q - q$. It then holds that
\begin{align*}
\displaystyle
\int_T w dx = \int_T ( \Pi^0_T q - q ) dx = \frac{1}{|T|} \int_T q dx |T| - \int_T q dx = 0.
\end{align*}
Therefore, using the Poincar\'e inequality \eqref{modify3}, we conclude \eqref{modify6}.
\qed
\end{pf*}
The global error estimate of the $L^2$-projection is obtained as follows.
\begin{coro} \label{modify=coro1}
Let $\{ \mathbb{T}_h \}$ be a family of conformal meshes satisfying Assumption \ref{pre=ass1}. It then holds that
\begin{align}
\displaystyle
\| \Pi^0_h q - q \|_{L^2(\Omega)} &\leq \frac{h}{\pi} |q|_{H^{1}(\Omega)} \quad \forall q \in H^{1}(\Omega). \label{modify7}
\end{align}
\end{coro}
\subsubsection{CR finite element space}
We define the following CR finite element space as
\begin{align*}
\displaystyle
CR_{h0}^1 &:= \left \{ \varphi_h \in L^2(\Omega); \ \varphi_h|_T \in \mathcal{P}^1(T) \ \forall T \in \mathbb{T}_h, \ \int_F [[ \varphi_h ]]_F ds = 0 \ \forall F \in \mathcal{F}_h \right \}.
\end{align*}
Using the barycentric coordinates $ \lambda_i: \mathbb{R}^d \to \mathbb{R}$, $i=1,\ldots,d+1$, we define the local basis functions as
\begin{align*}
\displaystyle
\theta_i(x) := d \left( \frac{1}{d} - \lambda_i(x) \right), \quad 1 \leq i \leq d+1.
\end{align*}
For $i=1,\ldots,d+1$, let $F_i$ be the face of $T$ and $x_{F_i}$ the barycentre of the face $F_i$. We then define the local CR interpolation operator as
\begin{align*}
\displaystyle
I_{T}^{CR}: H^{1}(T) \ni \varphi \mapsto I_{T}^{CR} \varphi := \sum_{i=1}^{d+1} \left( \frac{1}{|F_i|} \int_{F_i} \varphi ds \right) \theta_i \in \mathcal{P}^1.
\end{align*}
Furthermore, it holds that
\begin{align*}
\displaystyle
\frac{1}{|F_i|} \int_{F_i} \left( I_{T}^{CR} \varphi - \varphi \right) ds = 0, \quad i=1,\ldots,d+1, \quad \forall \varphi \in H^1(T).
\end{align*}
We define the global CR interpolation $I_h^{CR}:H_0^{1}(\Omega) \to CR_{h0}^1$ by
\begin{align*}
\displaystyle
(I_h^{CR} \varphi )|_T = I_T^{CR} (\varphi|_T) \quad \forall T \in \mathbb{T}_h, \quad \forall \varphi \in H_0^{1}(\Omega).
\end{align*}
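Before estimating the interpolation error, we illustrate $I_{T}^{CR}$ computationally. The following minimal Python sketch (ours; the quadratic test function is arbitrary) evaluates the interpolant on a single tetrahedron, computing each face average with the edge-midpoint rule, which is exact for quadratic integrands:
\begin{verbatim}
import itertools
import numpy as np

V = np.array([(0., 0., 0.), (1., 0., 0.), (0., 1., 0.), (0., 0., 1.)])

def barycentric(x):
    # lambda_i(x): solve sum_i lam_i V[i] = x together with sum_i lam_i = 1
    M = np.vstack([V.T, np.ones(4)])
    return np.linalg.solve(M, np.append(x, 1.0))

def cr_interpolate(phi, x):
    # evaluate I_T^CR(phi) at x via face-average degrees of freedom
    lam = barycentric(x)
    val = 0.0
    for i in range(4):                    # face F_i is opposite vertex V[i]
        F = np.delete(V, i, axis=0)
        mids = [(F[a] + F[b]) / 2
                for a, b in itertools.combinations(range(3), 2)]
        avg = np.mean([phi(m) for m in mids])  # exact average for quadratics
        val += avg * (1.0 - 3.0 * lam[i])      # theta_i = 1 - 3 * lambda_i
    return val

phi = lambda p: p[0]**2 + p[1] * p[2] + p[2]   # a quadratic test function
x = np.array([0.2, 0.3, 0.1])
print(cr_interpolate(phi, x), phi(x))          # interpolant vs. exact value
\end{verbatim}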
We give the local CR interpolation error estimate.
\begin{thr} \label{modify=thr3}
We have the following estimates such that for $m \in \{ 0 , 1 \}$,
\begin{align}
\displaystyle
|\varphi - {I}^{CR}_{T} \varphi|_{H^{m}(T)}
&\leq C_I^{CR,m} \left( \frac{H_{T}}{h_{T}} \right)^m h_{T}^{2-m} | \varphi |_{H^{2}(T)} \ \forall T \in \mathbb{T}_h, \ \forall \varphi \in H^{2}(T). \label{modify10}
\end{align}
Here, $C_I^{CR,m}$ is a positive constant independent of $h_T$ and $H_T$.
\end{thr}
\begin{pf*}
The proof is found in \cite[Theorem 2]{IshKobTsu}.
\qed
\end{pf*}
The global CR interpolation error estimates are obtained as follows.
\begin{coro} \label{modify=coro2}
Let $\{ \mathbb{T}_h \}$ be a family of conformal meshes satisfying Assumption \ref{pre=ass1}. Then, there exist constants $C_G^{CR,0}, C_G^{CR,1} > 0$, independent of $H$ and $h$, such that
\begin{align}
\displaystyle
\|\varphi - {I}^{CR}_{h} \varphi \|_{L^{2}(\Omega)}
&\leq C_G^{CR,0} h^2 | \varphi |_{H^{2}(\Omega)} \quad \forall \varphi \in H_0^1(\Omega) \cap H^{2}(\Omega),\label{modify11} \\
|\varphi - {I}^{CR}_{h} \varphi|_{H^{1}(\mathbb{T}_h)}
&\leq C_G^{CR,1} H | \varphi |_{H^{2}(\Omega)} \quad \forall \varphi \in H_0^1(\Omega) \cap H^{2}(\Omega). \label{modify12}
\end{align}
\end{coro}
The inequality \eqref{modify10} with $m=1$ can be improved by replacing $H_{T}$ with $h_{T}$. To this end, we use the Poincar\'e inequality \eqref{modify3}.
\begin{thr} \label{modify=thr4}
It then holds that
\begin{align}
\displaystyle
| I_{T}^{CR} \varphi - \varphi |_{H^1(T)} &\leq \frac{h_{T}}{\pi} |\varphi|_{H^2(T)} \quad \forall T \in \mathbb{T}_h, \quad \forall \varphi \in H^2(T). \label{modify13}
\end{align}
\end{thr}
\begin{pf*}
Let $F_i$, $i = 1,\ldots,d+1$ be the faces of the element $T$.
We set $\psi := I_{T}^{CR} \varphi - \varphi \in H^2(T)$. From Green's formula and the property of the CR interpolation, we have
\begin{align*}
\displaystyle
\int_{T} \frac{\partial \psi}{\partial x_j} dx = \int_{\partial T} \psi n_{T}^{(j)} ds = \sum_{i=1}^{d+1} n_{F_i}^{(j)} \int_{F_i} \psi ds = 0,
\end{align*}
where $ n_{F_i}^{(j)}$ denotes the $j$th component of the outer unit normal vector $n_{F_i}$ of $T$ along $F_i$. From the Poincar\'e inequality \eqref{modify3}, we have
\begin{align*}
\displaystyle
| I_{T}^{CR} \varphi - \varphi |_{H^1(T)}^2
&= \sum_{j=1}^d \left \| \frac{\partial}{\partial x_j} ( I_{T}^{CR} \varphi - \varphi) \right\|^2_{L^2(T)} \\
&\leq \left( \frac{h_{T}}{\pi} \right)^2 \sum_{j=1}^d \left | \frac{\partial}{\partial x_j} ( I_{T}^{CR} \varphi - \varphi) \right |^2_{H^1(T)} \\
&= \left( \frac{h_{T}}{\pi} \right)^2 \sum_{j,k=1}^d \left \| \frac{\partial^2}{\partial x_j \partial x_k} ( I_{T}^{CR} \varphi - \varphi) \right \|^2_{L^2(T)} \\
&= \left( \frac{h_{T}}{\pi} \right)^2 |\varphi|^2_{H^2(T)},
\end{align*}
where the last equality holds because the second derivatives of the affine function $I_{T}^{CR} \varphi$ vanish; this concludes the proof of \eqref{modify13}.
\qed
\end{pf*}
\begin{rem} \label{modify=rem1}
For $i=1,\ldots,d+1$, let $x_{F_i}$ be the barycentre of the face $F_i$. If we choose the domain of the local CR interpolation operator as $W^{\ell,p}(T) \subset \mathcal{C}^0(T)$ with $1 \leq p < \infty$ and $d < \ell p$, it is possible to define
\begin{align*}
\displaystyle
I_{T}^{CR,S}:W^{\ell,p}(T) \ni \varphi \mapsto I_{T}^{CR,S} \varphi := \sum_{i=1}^{d+1} \varphi (x_{F_i}) \theta_i \in \mathcal{P}^1.
\end{align*}
However, the estimate \cite[Theorem 2]{IshKobTsu}
\begin{align*}
\displaystyle
| I_{T}^{CR,S} \varphi - \varphi |_{H^{1}(T)} &\leq C_I^{CR,1} H_{T} |\varphi|_{H^{2}(T)} \quad \forall \varphi \in H^{2}(T)
\end{align*}
cannot be improved by replacing $H_{T}$ with $h_{T}$.
As a counterexample, let us consider $T$ with vertices $x_1 := (0,0,0)^T$, $x_2 := (h,0,0)^T$, $x_3 := (\frac{h}{2}, h^{\gamma},0)^T$ and $x_4 := (\frac{h}{2}, 0, \frac{h}{2})^T$, where $h := \frac{1}{N}$, $N \in \mathbb{N}$, and $\gamma \in \mathbb{R}$ with $1 < \gamma \leq 2$. Let $\varphi$ be the function
\begin{align*}
\displaystyle
\varphi(x,y,z) := x^2 + y^2 + z ^2.
\end{align*}
Because the exact function $\varphi$ is known, the errors $e_h := \varphi - \varphi_h$ and $e_{h/2} := \varphi - \varphi_{h/2}$ can be computed numerically for the two mesh sizes $h$ and $h/2$, where $\varphi_h := I_{T}^{CR,S} \varphi$. The convergence indicator $r$ is defined by
\begin{align*}
\displaystyle
r = \frac{1}{\log(2)} \log \left( \frac{\| e_h \|_X}{\| e_{h/2} \|_X} \right).
\end{align*}
The parameter $H_{T}$ then satisfies $H_{T} = \mathcal{O}( h^{2 - \gamma} )$. We compute the convergence order of the relative error in the $H^1$ seminorm, defined by
\begin{align*}
\displaystyle
&Err_h^{CR,S}(H^1) := \frac{| \varphi - I_{T}^{CR,S} \varphi |_{H^1(T)}}{| \varphi |_{H^2(T)}},
\end{align*}
for the case $\gamma = 1.5$ (Table \ref{modify=table1}).
\begin{table}[htb]
\caption{Error of the local CR interpolation operator ($\gamma = 1.5$)}
\centering
\begin{tabular}{l | l | l | l | l } \hline
$N$ & $h$ & $H_{T}$ & $Err_h^{CR,S}(H^1)$ & $r$ \\ \hline \hline
128 & 7.8125e-03 & 3.8081e-01 & 2.8183e-03 & \\
256 & 3.9062e-03 & 2.6723e-01 & 1.7641e-03 & 0.68 \\
512 & 1.9531e-03 & 1.8823e-01 & 1.1587e-03 & 0.61 \\
1024 & 9.7656e-04 & 1.3284e-01 & 7.8625e-04 & 0.60 \\
2048 & 4.8828e-04 & 9.3842e-02 & 5.4390e-04 & 0.53 \\
4096 & 2.4414e-04 & 6.6324e-02 & 3.8026e-04 & 0.52 \\
\hline
\end{tabular}
\label{modify=table1}
\end{table}
\end{rem}
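The experiment of Table \ref{modify=table1} can be reproduced with a short script. The following Python sketch (ours) computes $Err_h^{CR,S}(H^1)$ and the indicator $r$ for $\gamma = 1.5$; all integrals are evaluated with the four-point, degree-two quadrature rule on tetrahedra, which is exact here because the integrands are quadratic polynomials. Up to rounding, the output should match the table (the indicator $r$ is printed alongside the coarser of the two meshes it compares):
\begin{verbatim}
import numpy as np

def tetra(h, gamma=1.5):
    return np.array([(0., 0., 0.), (h, 0., 0.),
                     (h / 2, h**gamma, 0.), (h / 2, 0., h / 2)])

# degree-2 quadrature on a tetrahedron: 4 points, equal weights |T|/4
A, B = (5 + 3 * 5**0.5) / 20, (5 - 5**0.5) / 20
BARY = [np.roll(np.array([A, B, B, B]), k) for k in range(4)]

def err_indicator(h, gamma=1.5):
    V = tetra(h, gamma)
    vol = abs(np.linalg.det(V[1:] - V[0])) / 6.0
    M = np.vstack([V.T, np.ones(4)])         # lambda(x) solves M lam = (x, 1)
    grad_lam = np.linalg.inv(M)[:, :3]       # row i: grad of lambda_i
    fc = np.array([np.delete(V, i, 0).mean(0) for i in range(4)])
    dofs = (fc**2).sum(1)                    # phi(x_{F_i}) for phi = |x|^2
    grad_I = -3.0 * dofs @ grad_lam          # constant gradient of I^{CR,S} phi
    e2 = sum(0.25 * vol * np.sum((2.0 * (lam @ V) - grad_I)**2)
             for lam in BARY)                # |phi - I^{CR,S} phi|_{H^1(T)}^2
    return np.sqrt(e2) / np.sqrt(12.0 * vol) # divide by |phi|_{H^2(T)}

for N in (128, 256, 512, 1024, 2048, 4096):
    e1, e2 = err_indicator(1.0 / N), err_indicator(0.5 / N)
    print(N, f"{e1:.4e}", f"r = {np.log2(e1 / e2):.2f}")
\end{verbatim}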
\subsubsection{RT finite element space}
The lowest order RT finite element space is defined by
\begin{align*}
\displaystyle
RT^0(T) := \{ v ; \ v(x) = p + x q, \ p \in \mathcal{P}^0(T)^d, \ q \in \mathcal{P}^0(T), \ x \in \mathbb{R}^d \}.
\end{align*}
The functionals are defined by, for any $v \in RT^0(T)$,
\begin{align*}
\displaystyle
{\chi}_{i} (v) := \frac{1}{|F_i|} \int_{F_i} v \cdot n_{i} ds, \quad F_i \subset \partial T, \quad 1 \leq i \leq d+1,
\end{align*}
where $n_i$ denotes the outer unit normal vector of $T$ along $F_i$. We set $\Sigma := \{ {\chi}_{i} \}_{i=1}^{d+1}$. Note that $\dim RT^0(T) = d+1$. The triple $\{ T , RT^0(T) , \Sigma \}$ is then a finite element. We define the RT finite element space by
\begin{align*}
\displaystyle
RT^0_{h} &:= \{ v_h \in L^2(\Omega)^d; \ v_h |_T \in RT^0(T), \ \forall T \in \mathbb{T}_h, \ [[ v_h \cdot n ]]_F = 0, \ \forall F \in \mathcal{F}_h^i \}.
\end{align*}
Note that $RT^0_{h} \subset H(\mathop{\mathrm{div}};\Omega) := \left\{ v \in L^2(\Omega)^d ; \ \mathop{\mathrm{div}} v \in L^2(\Omega) \right\}$.
We next define the local RT interpolation as
\begin{align*}
\displaystyle
I_T^{RT}: H^1(T)^d \to RT^0(T),
\end{align*}
using
\begin{align*}
\displaystyle
\int_{F_i} (v - I_T^{RT} v) \cdot n_i ds = 0, \quad F_i \subset \partial T, \ i \in \{ 1 , \ldots, d+1\} \quad \forall v \in H^1(T)^d.
\end{align*}
Further, we define the global RT interpolation $I_h^{RT} : H^1(\Omega)^d \to RT^0_{h}$ by
\begin{align*}
\displaystyle
(I_h^{RT} v )|_T = I_T^{RT} (v|_T) \quad \forall T \in \mathbb{T}_h, \quad \forall v \in H^1(\Omega)^d.
\end{align*}
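The following minimal Python sketch (ours; the smooth test field is arbitrary) computes the local interpolant $I_T^{RT} v = p + q x$ on a single tetrahedron. Since the integrand in each degree of freedom is affine for functions in $RT^0(T)$, we have $\chi_i(p + q x) = p \cdot n_i + q \, (x_{F_i} \cdot n_i)$ with $x_{F_i}$ the barycentre of $F_i$, which yields a $4 \times 4$ linear system:
\begin{verbatim}
import itertools
import numpy as np

V = np.array([(0., 0., 0.), (1., 0., 0.), (0., 1., 0.), (0., 0., 1.)])

def rt_interpolate(v):
    # coefficients (p, q) of I_T^RT v = p + q * x on the tetrahedron V
    rows, rhs = [], []
    for i in range(4):                       # face F_i opposite vertex V[i]
        F = np.delete(V, i, axis=0)
        n = np.cross(F[1] - F[0], F[2] - F[0])
        if np.dot(n, F.mean(0) - V[i]) < 0:  # orient the normal outward
            n = -n
        n = n / np.linalg.norm(n)
        mids = [(F[a] + F[b]) / 2
                for a, b in itertools.combinations(range(3), 2)]
        chi = np.mean([np.dot(v(m), n) for m in mids])  # average flux over F_i
        rows.append(np.append(n, np.dot(F.mean(0), n)))
        rhs.append(chi)
    sol = np.linalg.solve(np.array(rows), np.array(rhs))
    return sol[:3], sol[3]

v = lambda x: np.array([x[0]**2, x[1] * x[2], x[2]])    # smooth test field
p, q = rt_interpolate(v)
print("p =", p, "q =", q)  # note: div(I_T^RT v) = 3q equals the mean of div v
\end{verbatim}
The edge-midpoint rule used for the fluxes is exact for quadratic components of $v$, so the degrees of freedom above are computed exactly for this test field.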
The local RT interpolation error estimate is as follows.
\begin{thr} \label{modify=thr5}
We have the following estimates such that
\begin{align}
\displaystyle
\| I_{T}^{RT} v - v \|_{L^2(T)^d}
&\leq C_I^{RT} H_{T} |v|_{H^{1}(T)^d} \quad \forall T \in \mathbb{T}_h, \quad \forall v \in H^{1}(T)^d. \label{modify14}
\end{align}
Here, $C_I^{RT}$ is a positive constant independent of $H_T$.
\end{thr}
\begin{pf*}
The proof is found in \cite[Theorem 3]{IshKobTsu}.
\qed
\end{pf*}
The global RT interpolation error estimates are obtained as follows.
\begin{coro} \label{modify=coro3}
Let $\{ \mathbb{T}_h \}$ be a family of conformal meshes satisfying Assumption \ref{pre=ass1}. Then, there exists a constant $C_G^{RT} > 0$, independent of $H$, such that
\begin{align}
\displaystyle
\| I_{h}^{RT} v - v \|_{L^2(\Omega)^d}
&\leq C_G^{RT} H |v|_{H^{1}(\Omega)^d} \quad \forall v \in H^{1}(\Omega)^d. \label{modify15}
\end{align}
\end{coro}
Between the RT interpolation $I_h^{RT}$ and the $L^2$-projection $\Pi_h^0$, the following relation holds:
\begin{lem} \label{pre=lem1}
For any $v \in H^1(\Omega)^d$, it holds that
\begin{align*}
\displaystyle
\mathop{\mathrm{div}} (I_h^{RT} v) = \Pi_h^0 (\mathop{\mathrm{div}} v).
\end{align*}
That is to say, the diagram
\[
\begin{CD}
H^1(\Omega)^d @>{\mathop{\mathrm{div}}}>> L^2(\Omega) \\
@V{I_h^{RT}}VV @VV{\Pi_h^0}V \\
RT^0_{h} @>{\mathop{\mathrm{div}}}>> M_{h}^0
\end{CD}
\]
commutes.
\end{lem}
\begin{pf*}
The proof of this lemma is found in \cite{Bra07}.
\qed
\end{pf*}
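For the reader's convenience, we sketch the (standard) local argument. For any $v \in H^1(T)^d$, the definition of the RT degrees of freedom gives
\begin{align*}
\displaystyle
\int_T \mathop{\mathrm{div}} ( I_T^{RT} v - v ) dx
= \int_{\partial T} ( I_T^{RT} v - v ) \cdot n_T ds
= \sum_{i=1}^{d+1} \int_{F_i} ( I_T^{RT} v - v ) \cdot n_i ds = 0,
\end{align*}
and, since $\mathop{\mathrm{div}} (I_T^{RT} v) \in \mathcal{P}^0(T)$, this means precisely that $\mathop{\mathrm{div}} (I_T^{RT} v) = \Pi_T^0 (\mathop{\mathrm{div}} v)$ on each $T \in \mathbb{T}_h$.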
The following relation plays an important role in the CR finite element analysis on anisotropic meshes.
\begin{lem} \label{pre=lem2}
It holds that
\begin{align}
\displaystyle
(v_h , \nabla_h \psi_h) + (\mathop{\mathrm{div}} v_h , \psi_h) = 0 \quad \forall v_h \in RT_h^0, \quad \forall \psi_h \in H_0^1(\Omega) + CR_{h0}^1. \label{pre21}
\end{align}
\end{lem}
\begin{pf*}
For any $v_h \in RT_h^0$ and $\psi_h \in H_0^1(\Omega) + CR_{h0}^1$, using Green's formula and the fact that $v_h \cdot n_F \in \mathcal{P}^0(F)$ for any $F \in \mathcal{F}_h$, we can derive
\begin{align*}
\displaystyle
(v_h , \nabla_h \psi_h) + (\mathop{\mathrm{div}} v_h , \psi_h)
&= \sum_{T \in \mathbb{T}_h} \int_{\partial T} (v_h \cdot n_T) \psi_h ds \notag \\
&= \sum_{F \in \mathcal{F}_h} \int_{F} [[ (v_h \psi_h ) \cdot n_F ]] ds \notag \\
&= \sum_{F \in \mathcal{F}_h} \int_{F} \left( [[ v_h \cdot n_F]] \{ \! \{ \psi_h \} \!\} + \{ \! \{ v_h \} \!\} \cdot n_F [[ \psi_h ]] \right) ds \notag \\
&= 0.
\end{align*}
\qed
\end{pf*}
\subsection{Discrete Poincar\'e Inequality on Anisotropic Meshes}
We propose the discrete Poincar\'e inequality on anisotropic meshes.
\begin{lem}[Discrete Poincar\'e inequality on anisotropic meshes] \label{pre=lem3}
Assume that $\Omega$ is convex. If $H \leq 1$, there exists $C(\Omega)$, independent of $h$, $H$, and the geometry of meshes, such that
\begin{align}
\displaystyle
\| \varphi_h \| \leq C(\Omega) |\varphi_h|_{H^1(\mathbb{T}_h)} \quad \forall \varphi_h \in CR_{h0}^1. \label{pre22}
\end{align}
\end{lem}
\begin{pf*}
Let $\varphi_h \in CR_{h0}^1$. We consider the dual problem. Find $z \in H^{2}(\Omega) \cap H_0^1(\Omega)$ such that
\begin{align*}
\displaystyle
- \varDelta z = \frac{\varphi_h}{\| \varphi_h \|} \quad \text{in $\Omega$}, \quad z = 0 \quad \text{on $\partial \Omega$}.
\end{align*}
We then have a priori estimates:
\begin{align*}
\displaystyle
|z|_{H^1(\Omega)} \leq C_P, \quad |z|_{H^2(\Omega)} \leq 1,
\end{align*}
where $C_P$ is the Poincar\'e constant. We use the duality argument to show the target inequality. That is to say, we have
\begin{align*}
\displaystyle
\| \varphi_h \| &= \frac{1}{\| \varphi_h \|} (\varphi_h , \varphi_h) = (- \varDelta z , \varphi_h) = (- \mathop{\mathrm{div}} \nabla z , \varphi_h) \\
&= (- \mathop{\mathrm{div}} \nabla z , \varphi_h - \Pi_h^0 \varphi_h) - (\nabla z - I_h^{RT}(\nabla z) , \nabla_h \varphi_h) + (\nabla z , \nabla_h \varphi_h) \\
&\leq \| \varDelta z\| \| \varphi_h - \Pi_h^0 \varphi_h \| + \| \nabla z - I_h^{RT}(\nabla z) \| |\varphi_h|_{H^1(\mathbb{T}_h)} + |z|_{H^1(\Omega)} |\varphi_h|_{H^1(\mathbb{T}_h)} \\
&\leq c \left( h+ H |\nabla z|_{H^1(\Omega)}+ C_P \right) |\varphi_h|_{H^1(\mathbb{T}_h)},
\end{align*}
which leads to
\begin{align*}
\displaystyle
\| \varphi_h \|
\leq c (2 + C_P) |\varphi_h|_{H^1(\mathbb{T}_h)} \quad \text{if $H \leq 1$}.
\end{align*}
Here, we used
\begin{align*}
\displaystyle
- \int_{\Omega} \mathop{\mathrm{div}} (\nabla z) \varphi_h dx
&= \int_{\Omega} ( \Pi_h^0 \mathop{\mathrm{div}} (\nabla z) - \mathop{\mathrm{div}} (\nabla z) ) \varphi_h dx - \int_{\Omega} ( \Pi_h^0 \mathop{\mathrm{div}} (\nabla z) ) \varphi_h dx\\
&= \int_{\Omega} ( \Pi_h^0 \mathop{\mathrm{div}} (\nabla z) - \mathop{\mathrm{div}} (\nabla z) ) ( \varphi_h - \Pi_h^0 \varphi_h) dx \\
&\quad - \int_{\Omega} ( \mathop{\mathrm{div}} I_h^{RT} (\nabla z) ) \varphi_h dx\\
&= - \int_{\Omega} \mathop{\mathrm{div}} (\nabla z) \left( \varphi_h - \Pi_h^0 \varphi_h \right) dx \\
&\quad - \int_{\Omega} (\nabla z - I_h^{RT} (\nabla z) ) \cdot \nabla_h \varphi_h dx + \int_{\Omega} \nabla z \cdot \nabla_h \varphi_h dx,
\end{align*}
where
\begin{align*}
\displaystyle
\int_{\Omega} ( \mathop{\mathrm{div}} I_h^{RT} (\nabla z) ) \varphi_h dx
&= \sum_{T \in \mathbb{T}_h} \int_{\partial T} n_T \cdot I_h^{RT} (\nabla z) \varphi_h ds - \int_{\Omega} I_h^{RT} (\nabla z) \cdot \nabla_h \varphi_h dx \\
&= \int_{\Omega} ( \nabla z - I_h^{RT} (\nabla z) ) \cdot \nabla_h \varphi_h dx - \int_{\Omega} \nabla z \cdot \nabla_h \varphi_h dx.
\end{align*}
\qed
\end{pf*}
\section{CR Finite Element Approximation}
\subsection{Finite Element Approximation}
The CR finite element problem is to find $u_h^{CR} \in CR_{h0}^1$ such that
\begin{align}
\displaystyle
a_{0h}(u_h^{CR} , \varphi_h) = (f , \varphi_h) \quad \forall \varphi_h \in CR_{h0}^1, \label{CR1}
\end{align}
where $a_{0h}: ( CR_{h0}^1 + H_0^1(\Omega) ) \times ( CR_{h0}^1 + H_0^1(\Omega) ) \to \mathbb{R}$ is defined by
\begin{align*}
\displaystyle
a_{0h}(\psi_h,\varphi_h) := \sum_{T \in \mathbb{T}_h} \int_T \nabla \psi_h \cdot \nabla \varphi_h dx = (\nabla_h \psi_h , \nabla_h \varphi_h).
\end{align*}
This problem is nonconforming because $CR_{h0}^1 \not\subset H_0^1(\Omega)$.
For the CR approximate solution $u_h^{CR} \in CR_{h0}^1$ of \eqref{CR1}, we have the a priori estimate, using \eqref{pre22},
\begin{align*}
\displaystyle
| u_h^{CR} |_{H^1(\mathbb{T}_h)}^2
\leq \| f \| \| u_h^{CR} \|
\leq C(\Omega) \| f \| | u_h^{CR} |_{H^1(\mathbb{T}_h)}.
\end{align*}
By the Lax--Milgram lemma, there exists a unique solution $u_h^{CR} \in CR_{h0}^1$ for any $f \in L^2(\Omega)$.
\subsection{Classical Error Analysis}
The starting point for the error analysis is the second Strang lemma (see, for example, \cite[Lemma 2.25]{ErnGue04}):
\begin{align}
\displaystyle
| u - u_h^{CR} |_{H^1(\mathbb{T}_h)} \leq 2 \inf_{v_h \in CR_{h0}^1} | u -v_h |_{H^1(\mathbb{T}_h)} + \sup_{\varphi_h \in CR_{h0}^1} \frac{a_{0h}(u,\varphi_h) - (f,\varphi_h)}{| \varphi_h|_{H^1(\mathbb{T}_h)}}. \label{CR2}
\end{align}
The first term of the inequality \eqref{CR2} is estimated as follows. Using the CR interpolation error estimate \eqref{modify12}, we have, for any $u \in H^2(\Omega)$,
\begin{align}
\displaystyle
\inf_{v_h \in CR_{h0}^1} | u -v_h |_{H^1(\mathbb{T}_h)}
&\leq | u -I_h^{CR} u |_{H^1(\mathbb{T}_h)} \leq c H |u|_{H^2(\Omega)}. \label{CR3}
\end{align}
\begin{comment}
In order to estimate the nonconforming term, we first introduce the trace inequality in \cite[Lemma 1.49]{PieErn12}.
\begin{lem} \label{CR=lem4}
Let $T \in \mathbb{T}_h$ and let $F$ be a face of $T$. We then have the trace inequality
\begin{align}
\displaystyle
\| w \|^2_{L^2(F)}
\leq \frac{h_T}{\ell_F} \| w \|_{L^2(T)} \left( 2 \| \nabla w \|_{L^2(T)^d} + d h_T^{-1} \| w \|_{L^2(T)} \right) \quad \forall w \in H^1(\mathbb{T}_h). \label{CR3}
\end{align}
Here, $\ell_{F} := \min_{x \in F} |x - x_F|$, where $x_F$ is the vertex of $T$ opposite to $F$.
\end{lem}
\begin{pf*}
Let $w \in H^1(\mathbb{T}_h)$, let $T \in \mathbb{T}_h$, and let $F$ be a face of $T$. We set
\begin{align*}
\displaystyle
\sigma_F(x) := \frac{|F|}{d |T|} (x - x_F),
\end{align*}
The normal component of $\sigma_F$ is constant and equal to one on $F$, and it is zero on all the other faces of $T$. From the divergence theorem, we have
\begin{align*}
\displaystyle
\| w \|^2_{L^2(F)}
&= \int_F |w|^2 ds
= \int_{\partial T} |w|^2 \sigma_F(x) \cdot n_T ds
= \int_T \mathop{\mathrm{div}} (|w|^2 \sigma_F(x)) dx \\
&= \int_T 2 w \sigma_F(x) \cdot \nabla w dx + \int_T |w|^2 \mathop{\mathrm{div}} \sigma_F(x) dx.
\end{align*}
Using
\begin{align*}
\displaystyle
\| \sigma_F \|_{L^{\infty}(T)^d} \leq \frac{|F|}{d |T|} h_T, \quad \mathop{\mathrm{div}} \sigma_F = \frac{|F|}{|T|},
\end{align*}
and H\"{o}lder's inequality, we obtain
\begin{align*}
\displaystyle
\| w \|^2_{L^2(F)}
\leq \frac{|F|}{d |T|} h_T \| w \|_{L^2(T)} \left( 2 \| \nabla w \|_{L^2(T)^d} + d h_T^{-1} \| w \|_{L^2(T)} \right).
\end{align*}
Since $\frac{|F|}{|T|} = \frac{d}{\ell_F}$, we have
\begin{align*}
\displaystyle
\| w \|^2_{L^2(F)}
\leq \frac{h_T}{\ell_F} \| w \|_{L^2(T)} \left( 2 \| \nabla w \|_{L^2(T)^d} + d h_T^{-1} \| w \|_{L^2(T)} \right).
\end{align*}
\qed
\end{pf*}
Using the trace inequality, we have the estimates,\cite[Lemma 3.32]{ErnGue04}.
\begin{lem} \label{CR=lem5}
For $T \in \mathbb{T}_h$, $\varphi \in H^1(T)$, and a face $F \in \partial T$, set $\overline{\varphi} := \frac{1}{|F|} \int_F \varphi ds$. Then, there exists $c$ such that
\begin{align}
\displaystyle
\| \varphi - \overline{\varphi} \|_{L^2(F)} \leq c \left( \frac{d}{\ell_F} \right)^{1/2} h_T |\varphi|_{H^1(T)} \ \forall h, \ \forall T \in \mathbb{T}_h, \ \forall F \in \partial T, \ \forall \varphi \in H^1(T). \label{CR4}
\end{align}
\end{lem}
\end{comment}
From the standard scaling argument, we have a consistency error inequality, e.g., see \cite[Lemma 3.36]{ErnGue04}.
\begin{lem}[Asymptotic Consistency] \label{CR=lem6}
Let $u \in H^1_0(\Omega) \cap H^2(\Omega)$ be the solution of the homogeneous Dirichlet Poisson problem \eqref{intro1}. It then holds that
\begin{align}
\displaystyle
\frac{a_{0h}(u,\varphi_h) - (f,\varphi_h)}{| \varphi_h|_{H^1(\mathbb{T}_h)}} \leq c \left( \sum_{T \in \mathbb{T}_h} \frac{h_T^4}{ (\min_{F \in \partial \mathbb{T}_h} \ell_F)^2} |u|^2_{H^2(T)} \right)^{1/2} \ \forall h, \ \forall \varphi_h \in CR_{h0}^1, \label{CR4}
\end{align}
where $\partial \mathbb{T}_h$ denotes the set of all faces $F$ of the elements $T \in \mathbb{T}_h$. Here, $\ell_F$ denotes the distance from the vertex of $T$ opposite to $F$ to the face $F$.
\end{lem}
\begin{pf*}
We follow \cite[Lemma 3.36]{ErnGue04}.
Let $\varphi_h \in CR_{h0}^1$. Because $- \varDelta u = f$, we have
\begin{align*}
\displaystyle
a_{0h}(u,\varphi_h) - (f,\varphi_h)
&= \sum_{T \in \mathbb{T}_h} \int_T ( \nabla u \cdot \nabla \varphi_h - f \varphi_h ) dx \\
&= \sum_{T \in \mathbb{T}_h} \sum_{F \subset \partial T} \int_F (n_T \cdot \nabla ) u \varphi_h ds.
\end{align*}
Because each interior face $F$ appears twice in the above sum with opposite unit normals, while the face mean value of $\varphi_h$ is single-valued across interior faces and vanishes on boundary faces, the contributions of the face averages cancel and we have
\begin{align*}
\displaystyle
a_{0h}(u,\varphi_h) - (f,\varphi_h)
&= \sum_{T \in \mathbb{T}_h} \sum_{F \subset \partial T} \int_F (n_T \cdot \nabla ) u \left( \varphi_h - \overline{\varphi_h} \right) ds
\end{align*}
with the mean value
\begin{align*}
\displaystyle
\overline{\varphi_h} := \frac{1}{|F|} \int_F \varphi_h ds.
\end{align*}
Furthermore, we get
\begin{align*}
\displaystyle
a_{0h}(u,\varphi_h) - (f,\varphi_h)
&= \sum_{T \in \mathbb{T}_h} \sum_{F \subset \partial T} \int_F n_T \cdot \left( \nabla u - \overline{\nabla u} \right)\left( \varphi_h - \overline{\varphi_h} \right) ds
\end{align*}
with the mean value
\begin{align*}
\displaystyle
n_T \cdot \overline{\nabla u} := \frac{1}{|F|} \int_F (n_T \cdot \nabla ) u ds.
\end{align*}
The Cauchy--Schwarz inequality yields
\begin{align*}
\displaystyle
a_{0h}(u,\varphi_h) - (f,\varphi_h)
&\leq \sum_{T \in \mathbb{T}_h} \sum_{F \subset \partial T} \| \nabla u -\overline{\nabla u} \|_{L^2(F)^d} \| \varphi_h - \overline{\varphi_h} \|_{L^2(F)}.
\end{align*}
For $F \in \partial \mathbb{T}_h$, let $\widehat{T} \subset \mathbb{R}^d$ be the reference simplex and let $\Phi_T: \widehat{T} \to T$ be the corresponding affine transformation with Jacobian matrix $A_T$. Let $\widehat{F} = \Phi_T^{-1} ( F )$. Using the standard scaling argument and the trace theorem on the reference element, we have
\begin{align*}
\displaystyle
\| \varphi_h - \overline{\varphi_h} \|_{L^2(F)}
&\leq \left( \frac{|F|}{|\widehat{F}|} \right)^{1/2} \| \hat{\varphi}_h - \overline{\hat{\varphi}_h} \|_{L^2(\widehat{F})}
\leq c \left( \frac{|F|}{|\widehat{F}|} \right)^{1/2} \| \hat{\varphi}_h - \overline{\hat{\varphi}_h} \|_{H^1(\widehat{T})}.
\end{align*}
The Deny--Lions Lemma (see \cite[Lemma B.67]{ErnGue04}) implies
\begin{align*}
\displaystyle
\| \hat{\varphi}_h - \overline{\hat{\varphi}_h} \|_{H^1(\widehat{T})}
\leq c | \hat{\varphi}_h |_{H^1(\widehat{T})}.
\end{align*}
Using the standard scaling argument again, we obtain
\begin{align*}
\displaystyle
\| \varphi_h - \overline{\varphi_h} \|_{L^2(F)}
&\leq c \left( \frac{|F|}{|\widehat{F}|} \right)^{1/2} | \hat{\varphi}_h |_{H^1(\widehat{T})} \\
&\leq c \left( \frac{|F|}{|\widehat{F}|} \right)^{1/2} \| A_T \|_2 \left( \frac{|\widehat{T}|}{|{T}|} \right)^{1/2} | {\varphi}_h |_{H^1({T})} \\
&\leq c \left( \frac{|F|}{|{T}|} \right)^{1/2} h_T | {\varphi}_h |_{H^1({T})} = c \left( \frac{d}{\ell_F} \right)^{1/2} h_T | {\varphi}_h |_{H^1({T})}.
\end{align*}
Here, $\| A_T \|_2$ denotes the matrix $2$-norm as
\begin{align*}
\displaystyle
\| A_T \|_2 := \sup_{0 \neq x \in \mathbb{R}^d} \frac{|A_T x|}{|x|},
\end{align*}
where $| x | := ( \sum_{i=1}^d |x_i|^2)^{1/2}$ for $x \in \mathbb{R}^d$.
By an analogous argument, we have
\begin{align*}
\displaystyle
\| \nabla u -\overline{\nabla u} \|_{L^2(F)^d}
&\leq c \left( \frac{d}{\ell_F} \right)^{1/2} h_T |u|_{H^2(T)}.
\end{align*}
We consequently get
\begin{align*}
\displaystyle
a_{0h}(u,\varphi_h) - (f,\varphi_h)
&\leq c \sum_{T \in \mathbb{T}_h} \sum_{F \subset \partial T} \frac{h_T^2}{\ell_F} |u|_{H^2(T)} | {\varphi}_h |_{H^1({T})} \\
&\leq c \sum_{T \in \mathbb{T}_h} \frac{h_T^2}{\min_{F \in \partial \mathbb{T}_h} \ell_F} |u|_{H^2(T)} | {\varphi}_h |_{H^1({T})} \\
&\leq c \left( \sum_{T \in \mathbb{T}_h} \frac{h_T^4}{(\min_{F \in \partial \mathbb{T}_h} \ell_F)^2} |u|_{H^2(T)}^2 \sum_{T \in \mathbb{T}_h} | {\varphi}_h |_{H^1({T})}^2 \right)^{1/2},
\end{align*}
which leads to \eqref{CR4}.
\qed
\end{pf*}
From \eqref{CR2}, \eqref{CR3} and \eqref{CR4}, we have
\begin{align*}
\displaystyle
| u - u_h^{CR} |_{H^1(\mathbb{T}_h)}
\leq c H |u|_{H^2(\Omega)} + c \left( \sum_{T \in \mathbb{T}_h} \frac{h_T^4}{ (\min_{F \in \partial \mathbb{T}_h} \ell_F)^2} |u|^2_{H^2(T)} \right)^{1/2}.
\end{align*}
Since the nonconforming term is not necessarily of order $H$, this inequality may overestimate the error.
\begin{ex*}
Let $0 < h_T \leq 1$. As examples, we consider two cases.
\begin{description}
\item[(\Roman{lone})] When we use meshes including the tetrahedra $T$ with vertices $(0,0,0)^T$, $(h_T,0,0)^T$, $(0,h_T,0)^T$, and $(0,0,h_T^{\varepsilon})^T$, we have
\begin{align*}
\displaystyle
\frac{h_T^4}{(\min_{F \in \partial \mathbb{T}_h} \ell_F)^2} |u|^2_{H^2(T)}\leq c h_T^{2(2-\varepsilon)} |u|^2_{H^2(T)},
\end{align*}
where $1 < \varepsilon \leq 2$. Since $H = \mathcal{O}(h)$, we get
\begin{align*}
\displaystyle
| u - u_h^{CR} |_{H^1(\mathbb{T}_h)}
\leq c (h + h^{2 - \varepsilon}) |u|_{H^2(\Omega)}.
\end{align*}
\item[(\Roman{ltwo})] When we use meshes including the tetrahedra $T$ with vertices $(0,0,0)^T$, $(h_T,0,0)^T$, $(0,h_T,0)^T$, and $(h_T^{\gamma},0,h_T^{\varepsilon})^T$, we have
\begin{align*}
\displaystyle
\frac{h_T^4}{(\min_{F \in \partial \mathbb{T}_h} \ell_F)^2} |u|^2_{H^2(T)} \leq c h_T^{2(2-\varepsilon)} |u|^2_{H^2(T)},
\end{align*}
where $1 < \gamma < \varepsilon \leq 1 + \gamma$ and $1 < \varepsilon \leq 2$. Since $H = \mathcal{O}(h^{1 + \gamma - \varepsilon})$, we get
\begin{align*}
\displaystyle
| u - u_h^{CR} |_{H^1(\mathbb{T}_h)}
\leq c (h^{1 + \gamma - \varepsilon} + h^{2 - \varepsilon}) |u|_{H^2(\Omega)}.
\end{align*}
\end{description}
\end{ex*}
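As a brief check of the first case above, observe that for small $h_T$ the smallest vertex--face distance of $T$ is attained between the base face $\{ z = 0 \}$ and the opposite vertex $(0,0,h_T^{\varepsilon})^T$, so that $\min_{F \in \partial \mathbb{T}_h} \ell_F = \mathcal{O}(h_T^{\varepsilon})$ on meshes whose most degenerate elements are of this type; hence
\begin{align*}
\displaystyle
\frac{h_T^4}{(\min_{F \in \partial \mathbb{T}_h} \ell_F)^2} = \mathcal{O} \left( h_T^{4 - 2 \varepsilon} \right) = \mathcal{O} \left( h_T^{2(2 - \varepsilon)} \right).
\end{align*}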
\begin{comment}
In general, the classical estimate is not valid on anisotropic meshes including, for example, the triangle $T_1$ with vertices $(0,0)^T$, $(h,0)^T$, and $(0,h^{\varepsilon})^T$, where $0 < h \leq 1$ and $1 \leq \varepsilon$, and/or the triangle $T_2$ with vertices $(0,0)^T$, $(h,0)^T$, and $(h^{\gamma},h^{\varepsilon})^T$, where $0 < h \leq 1$ and $1 \leq \gamma < \varepsilon \leq 1 + \gamma$.
\begin{pf*}
Let $\varphi_h \in CR_{h0}^1$. Since $f = - \varDelta u$,
\begin{align*}
\displaystyle
a_{0h}(u,\varphi_h) - (f,\varphi_h) = \sum_{T \in \mathbb{T}_h} \int_{\partial T} \nabla u \cdot n_T \varphi_h ds = \frac{1}{2} \sum_{F \in \mathcal{F}_h} \int_{F} \nabla u \cdot n_F [[\varphi_h]] ds.
\end{align*}
Here, each side $F \in \mathcal{F}_h$ is either a common side of two elements $T_1, T_2 \in \mathbb{T}_h$ or part of $\partial \Omega$. In view of the continuity of $\nabla u$, the function $\nabla u \cdot n_{T_1}$ on $F \subset \partial T_1$ has the opposite sign of $\nabla u \cdot n_{T_2}$ on $F \subset \partial T_2$. On each side $F \in \mathcal{F}_h$, by the special properties of $\varphi_h \in CR_{h0}^1$, the jump $[[ \varphi_h ]]$ has mean value zero. We consequently have
\begin{align*}
\displaystyle
\sum_{F \in \mathcal{F}_h} \int_{F} \nabla u \cdot n_F [[\varphi_h]] ds
&= \sum_{F \in \mathcal{F}_h} \int_{F} \nabla u \cdot n_F \left( [[\varphi_h]] - \overline{[[\varphi_h]]} \right) ds \\
&= \sum_{F \in \mathcal{F}_h} \int_{F} \left( \nabla u - \overline{\nabla u} \right) \cdot n_F \left( [[\varphi_h]] - \overline{[[\varphi_h]]} \right) ds,
\end{align*}
with the mean values
\begin{align*}
\displaystyle
\overline{\nabla u} \cdot n_F := \frac{1}{|F|} \int_F \nabla u \cdot n_F ds, \quad \overline{[[\varphi_h]]} := \frac{1}{|F|} \int_F {[[\varphi_h]]} ds.
\end{align*}
Splitting $[[ \varphi_h ]]$ into the contributions of the two neighbouring elements $T_1$, $T_2$ results in
\begin{align*}
\displaystyle
& \int_{F} \left( \nabla u - \overline{\nabla u} \right) \cdot n_F \left( [[\varphi_h]] - \overline{[[\varphi_h]]} \right) ds \\
&\quad = \int_{F} \left( \nabla u - \overline{\nabla u} \right) \cdot n_F \left( \varphi_h|_{T_1} - \overline{\varphi_h}_F \right) ds \\
&\quad \quad - \int_{F} \left( \nabla u - \overline{\nabla u} \right) \cdot n_F \left( \varphi_h|_{T_2} - \overline{\varphi_h}_F \right) ds.
\end{align*}
From Lemma \ref{CR=lem5}, we have
\begin{align*}
\displaystyle
\left \| \left( \nabla u - \overline{\nabla u} \right) \cdot n_F \right \|_{L^2(F)}
&\leq c \left( \frac{d}{\ell_F} \right)^{1/2} h_{T_1} |u|_{H^2(T_1)}, \\
\left \| \varphi_h|_{T_1} - \overline{\varphi_h}_F \right \|_{L^2(F)}
&\leq c \left( \frac{d}{\ell_F} \right)^{1/2} h_{T_1} |\varphi_h|_{H^1(T_1)},
\end{align*}
and analogously for the element $T_2$. From the foregoing results, it holds that for $F = T_1 \cap T_2$,
\begin{align*}
\displaystyle
&\left| \int_{F} \nabla u \cdot n_F [[\varphi_h]] ds \right| \\
&\quad \leq \left \| \left( \nabla u - \overline{\nabla u} \right) \cdot n_F \right \|_{L^2(F)} \left( \left \| \varphi_h|_{T_1} - \overline{\varphi_h}_F \right \|_{L^2(F)} + \left \| \varphi_h|_{T_2} - \overline{\varphi_h}_F \right \|_{L^2(F)} \right) \\
&\quad \leq c \frac{h_{T_1}^2}{\ell_F} |u|_{H^2(T_1)} \left( |\varphi_h|_{H^1(T_1)} + |\varphi_h|_{H^1(T_2)} \right) \\
&\quad \leq c \frac{h_{T_1}^2}{\ell_F} |u|_{H^2(\Omega)} |\varphi_h|_{H^1(\mathbb{T}_h)},
\end{align*}
and correspondingly for $F \subset \partial \Omega$. From this, we conclude
\begin{align*}
\displaystyle
&\left| \sum_{F \in \mathcal{F}_h} \int_{F} \nabla u \cdot n_F [[\varphi_h]] ds \right|
\leq c |u|_{H^2(\Omega)} |\varphi_h|_{H^1(\mathbb{T}_h)} \frac{h^2}{\min_{F \in \mathcal{F}_h} \ell_F}.
\end{align*}
\qed
\end{pf*}
\end{comment}
\begin{comment}
The nonconforming term is not necessarily of order $H$. In general, the classical estimate is not true on anisotropic meshes, e.g., on meshes containing degenerate right-angled triangles. We give examples in the two-dimensional case.
\begin{ex*}
Let us consider the triangle $T$ with vertices $(0,0)^T$, $(2h,0)^T$, and $(h,h^{\varepsilon})^T$, where $0 < h \leq 1$ and $1 \leq \varepsilon \leq 2$. We then have
\begin{align*}
\displaystyle
\frac{h^2}{\ell_F} = h^{2 - \varepsilon}, \quad H = \frac{2h (h^2 + h^{2 \varepsilon})}{|T|} \leq c h^{2 - \varepsilon} \quad \text{with $|T| = h^{1+\varepsilon}$}.
\end{align*}
This case is asymptotically consistent.
\end{ex*}
\begin{ex*}
Let us consider the triangle $T$ with vertices $(0,0)^T$, $(h,0)^T$, and $(0,h^{\varepsilon})^T$, where $0 < h \leq 1$ and $1 \leq \varepsilon$. We then have
\begin{align*}
\displaystyle
\frac{h^2}{\ell_F} = h^{2 - \varepsilon}, \quad H = \frac{h^{1 + \varepsilon} \sqrt{h^2 + h^{2 \varepsilon}}}{|T|} \leq c h \quad \text{with $|T| = \frac{1}{2} h^{1+\varepsilon}$}.
\end{align*}
This case is not asymptotically consistent.
\end{ex*}
\begin{ex*}
Let us consider the triangle $T$ with vertices $(0,0)^T$, $(h,0)^T$, and $(h^{\gamma},h^{\varepsilon})^T$, where $0 < h \leq 1$ and $1 \leq \gamma < \varepsilon \leq 1 + \gamma$. We then have
\begin{align*}
\displaystyle
\frac{h^2}{\ell_F} = h^{2 - \varepsilon}, \ H = \frac{h \sqrt{h^{2 \gamma} + h^{2 \varepsilon}} \sqrt{( h - h^{\gamma})^2 + h^{2 \varepsilon}}}{|T|} \leq c h^{1 + \gamma - \varepsilon} \ \text{with $|T| = \frac{1}{2} h^{1+\varepsilon}$}.
\end{align*}
This case is not asymptotically consistent.
\end{ex*}
\end{comment}
\subsection{Argument via the RT Interpolation Error}
To overcome the difficulty, we use the relation \eqref{pre21} in Lemma \ref{pre=lem2}; see also \cite{AcoDur99,LiuKik18}.
\begin{lem}[Asymptotic Consistency] \label{CR=lem7}
We assume that $\Omega$ is convex. Let $\{ \mathbb{T}_h \}$ be a family of conformal meshes satisfying Assumption \ref{pre=ass1}. Let $u \in H^1_0(\Omega) \cap H^2(\Omega)$ be the solution of the homogeneous Dirichlet Poisson problem \eqref{intro1}. Then, there exists $c$, independent of $H$, such that
\begin{align}
\displaystyle
& \sup_{\varphi_h \in CR_{h0}^1} \frac{a_{0h}(u,\varphi_h) - (f,\varphi_h)}{| \varphi_h|_{H^1(\mathbb{T}_h)}}
\leq c H \| f \|. \label{CR5}
\end{align}
\end{lem}
\begin{pf*}
Using \eqref{pre21}, we have, for any $w_h \in RT_h^0$,
\begin{align*}
\displaystyle
\sup_{\varphi_h \in CR_{h0}^1} \frac{a_{0h}(u,\varphi_h) - (f,\varphi_h)}{| \varphi_h|_{H^1(\mathbb{T}_h)}} =
\sup_{\varphi_h \in CR_{h0}^1} \frac{ (\nabla u - w_h , \nabla_h \varphi_h) - ( \mathop{\mathrm{div}} w_h + f, \varphi_h ) }{|\varphi_h|_{H^1(\mathbb{T}_h)}}.
\end{align*}
We set $w_h := I_h^{RT} \nabla u$. From Lemma \ref{pre=lem1}, we get
\begin{align*}
\displaystyle
\mathop{\mathrm{div}} ( I_h^{RT} \nabla u ) = \Pi_h^0 \mathop{\mathrm{div}} (\nabla u) = - \Pi_h^0 f.
\end{align*}
Furthermore, we have, for any $\varphi_h \in CR_{h0}^1$,
\begin{align*}
\displaystyle
( - \Pi_h^0 f + f, \Pi_h^0 \varphi_h) &= 0.
\end{align*}
We thus obtain
\begin{align*}
\displaystyle
&(\nabla u - I_h^{RT} \nabla u, \nabla_h \varphi_h) - ( - \Pi_h^0 f + f, \varphi_h ) \\
&= (\nabla u - I_h^{RT} \nabla u, \nabla_h \varphi_h) - ( - \Pi_h^0 f + f, \varphi_h - \Pi_h^0 \varphi_h) \\
&\leq \| \nabla u - I_h^{RT} \nabla u \|_{L^2(\Omega)^d} |\varphi_h|_{H^1(\mathbb{T}_h)} + \| f - \Pi_h^0 f \| \| \varphi_h - \Pi_h^0 \varphi_h \| \\
&\leq c H |u|_{H^2(\Omega)} |\varphi_h|_{H^1(\mathbb{T}_h)} + c h \| f \| |\varphi_h|_{H^1(\mathbb{T}_h)}.
\end{align*}
Dividing by $| \varphi_h |_{H^1(\mathbb{T}_h)}$ and using $|u|_{H^2(\Omega)} \leq \| f \|$, which follows from the convexity of $\Omega$, together with $h \leq H$, we obtain \eqref{CR5}.
\qed
\end{pf*}
We consequently obtain the error estimate of the CR finite element method on anisotropic meshes.
\begin{thr} \label{CR=thr4}
We assume that $\Omega$ is convex. Let $\{ \mathbb{T}_h \}$ be a family of conformal meshes satisfying Assumption \ref{pre=ass1}. Let $u \in H^1_0(\Omega) \cap H^2(\Omega)$ be the solution of the homogeneous Dirichlet Poisson problem \eqref{intro1} with data $f \in L^2(\Omega)$. Let $u_h^{CR} \in CR_{h0}^1$ be the approximate solution of \eqref{CR1}. Then, there exists $c$, independent of $H$, such that
\begin{align}
\displaystyle
| u - u_h^{CR} |_{H^1(\mathbb{T}_h)} \leq c H \| f \|. \label{CR6}
\end{align}
\end{thr}
\begin{pf*}
Using \eqref{CR2}, \eqref{modify12} and \eqref{CR5}, we have
\begin{align*}
\displaystyle
| u - u_h^{CR} |_{H^1(\mathbb{T}_h)}
&\leq 2 \inf_{v_h \in CR_{h0}^1} | u -v_h |_{H^1(\mathbb{T}_h)} + \sup_{\varphi_h \in CR_{h0}^1} \frac{a_{0h}(u,\varphi_h) - (f,\varphi_h)}{| \varphi_h|_{H^1(\mathbb{T}_h)}} \\
&\leq 2 | u -I_h^{CR} u |_{H^1(\mathbb{T}_h)} + c H \| f \| \leq c H \| f \|,
\end{align*}
which leads to the estimate \eqref{CR6}.
\qed
\end{pf*}
We next give the $L^2$ error estimate of the CR finite element method on anisotropic meshes, see also \cite{LiuKik07,LiuKik18,BreSco08}.
\begin{thr} \label{CR=thr5}
We assume that $\Omega$ is convex. Let $\{ \mathbb{T}_h \}$ be a family of conformal meshes satisfying Assumption \ref{pre=ass1}. Let $u \in H^1_0(\Omega) \cap H^2(\Omega)$ be the solution of the homogeneous Dirichlet Poisson problem \eqref{intro1} with data $f \in L^2(\Omega)$. Let $u_h^{CR} \in CR_{h0}^1$ be the approximate solution of \eqref{CR1}. Then, there exists $c$, independent of $H$, such that
\begin{align}
\displaystyle
\| u - u_h^{CR} \| \leq c H^2 \| f \|. \label{CR7}
\end{align}
\end{thr}
\begin{pf*}
We set $e_h := u - u_h^{CR} $. Let $z \in H^2(\Omega) \cap H_0^1(\Omega)$ satisfy
\begin{align}
\displaystyle
a_0(\varphi , z ) = ( \varphi , e_h) \quad \forall \varphi \in H_0^1(\Omega) \label{CR8}
\end{align}
and $z_h^{CR} \in CR_{h0}^1$ satisfy
\begin{align}
\displaystyle
a_{0h}(\varphi_h , z_h^{CR}) = (\varphi_h , e_h) \quad \forall \varphi_h \in CR_{h0}^1. \label{CR9}
\end{align}
We then have
\begin{align}
\displaystyle
\| e_h \|^2 &= (e_h , e_h ) = a_{0h}(u , z ) - a_{0h}(u_h^{CR} , z_h^{CR}) \notag \\
&= a_{0h}(u - u_h^{CR} , z - z_h^{CR} ) + a_{0h}( u - u_h^{CR} , z_h^{CR}) + a_{0h}( u_h^{CR} , z - z_h^{CR} ) \notag \\
&= a_{0h}(u - u_h^{CR} , z - z_h^{CR} ) \notag \\
&\quad + a_{0h}( u - u_h^{CR} , z_h^{CR} - I_h^{CR} z) + a_{0h}( u - u_h^{CR} , I_h^{CR} z) \notag\\
&\quad + a_{0h}( u_h^{CR} - I_h^{CR} u , z - z_h^{CR} ) +a_{0h}( I_h^{CR} u , z - z_h^{CR} ). \label{CR10}
\end{align}
Using Theorem \ref{CR=thr4}, the first term on the right hand side of \eqref{CR10} can be estimated as
\begin{align}
\displaystyle
a_{0h}( u - u_h^{CR} , z- z_h^{CR})
&\leq | u - u_h^{CR} |_{H^1(\mathbb{T}_h)} | z- z_h^{CR} |_{H^1(\mathbb{T}_h)} \notag \\
&\leq c H^2 \| f \| \| e_h\|. \label{CR11}
\end{align}
For the second and fourth terms on the right hand side of \eqref{CR10}, we have
\begin{align}
\displaystyle
& a_{0h}( u - u_h^{CR} , z_h^{CR} - I_h^{CR} z) \notag \\
&\quad = a_{0h}( u - u_h^{CR} , z_h^{CR} - z) + a_{0h}( u - u_h^{CR} , z - I_h^{CR} z) \notag \\
&\quad \leq | u - u_h^{CR} |_{H^1(\mathbb{T}_h)} \left( | z_h^{CR} - z |_{H^1(\mathbb{T}_h)} + | z- I_h^{CR} z |_{H^1(\mathbb{T}_h)} \right) \notag \\
&\quad \leq c H^2 \| f \| \| e_h\|, \label{CR12}
\end{align}
and, analogously,
\begin{align}
\displaystyle
a_{0h}( u_h^{CR} - I_h^{CR} u , z - z_h^{CR} )
&\leq c H^2 \| f \| \| e_h\|. \label{CR13}
\end{align}
From \eqref{CR8}, \eqref{CR9} and \eqref{pre21}, we have
\begin{align*}
\displaystyle
&a_{0h}( u - u_h^{CR} , I_h^{CR} z) \\
&\quad = a_{0h}( u , I_h^{CR} z) - a_{0h}( u_h^{CR} , I_h^{CR} z) = (\nabla u , \nabla_h I_h^{CR} z) - (f , I_h^{CR} z) \\
&\quad = (\nabla u , \nabla_h I_h^{CR} z - \nabla z) - (f , I_h^{CR} z - z) + (\nabla u , \nabla z) - (f , z) \\
&\quad = (\nabla u - I_h^{RT} \nabla u , \nabla_h I_h^{CR} z - \nabla z) - (f + \mathop{\mathrm{div}} ( I_h^{RT} \nabla u ), I_h^{CR} z - z).
\end{align*}
From Lemma \ref{pre=lem1} and $ \mathop{\mathrm{div}} ( I_h^{RT} \nabla u ) = - \Pi_h^0 f$, we have
\begin{align}
\displaystyle
&a_{0h}( u - u_h^{CR} , I_h^{CR} z) \notag \\
&\quad = (\nabla u - I_h^{RT} \nabla u , \nabla_h I_h^{CR} z - \nabla z) - (f - \Pi_h^0 f, I_h^{CR} z - z) \notag\\
&\quad \leq \| \nabla u - I_h^{RT} \nabla u \|_{L^2(\Omega)^d} | I_h^{CR} z - z |_{H^1(\mathbb{T}_h)} + \| f - \Pi_h^0 f \| \| I_h^{CR} z - z \| \notag\\
&\quad \leq c H^2 \| f \| \| e_h\|. \label{CR14}
\end{align}
Analogously, from $ \mathop{\mathrm{div}} ( I_h^{RT} \nabla z ) = - \Pi_h^0 e_h$, we have
\begin{align}
\displaystyle
&a_{0h}( I_h^{CR} u , z - z_h^{CR} ) \notag \\
&\quad= (\nabla_h I_h^{CR} u - \nabla u , \nabla z - I_h^{RT} \nabla z ) - (I_h^{CR} u - u , e_h + \mathop{\mathrm{div}} ( I_h^{RT} \nabla z)) \notag\\
&\quad \leq | I_h^{CR} u - u |_{H^1(\mathbb{T}_h)} \| \nabla z - I_h^{RT} \nabla z \|_{L^2(\Omega)^d} + \| I_h^{CR} u - u \| \| e_h - \Pi_h^0 e_h \| \notag \\
&\quad \leq c H^2 \| f \| \| e_h\|. \label{CR15}
\end{align}
Combining \eqref{CR10}, \eqref{CR11}, \eqref{CR12}, \eqref{CR13}, \eqref{CR14}, and \eqref{CR15}, we finally get
\begin{align*}
\displaystyle
\| e_h \|^2 \leq c H^2 \| f \| \| e_h\|,
\end{align*}
which leads to the target estimate.
\qed
\end{pf*}
\section{RT Finite Element Error Estimates}
\subsection{Dual mixed formulation of the Poisson problem}
The Poisson equation \eqref{intro1} $- \varDelta u = - \mathop{\mathrm{div}} \nabla u = f$ can be written as the following system. Find $(\sigma,u):\Omega \to \mathbb{R}^{d} \times \mathbb{R}$ such that
\begin{subequations} \label{mix1}
\begin{align}
\displaystyle
\sigma - \nabla u &= 0 \quad \text{in $\Omega$}, \label{mix1a}\\
\mathop{\mathrm{div}} \sigma &= - f \quad \text{in $\Omega$}, \label{mix1b}\\
u &= 0 \quad \text{on $\partial \Omega$}. \label{mix1c}
\end{align}
\end{subequations}
We consider the following dual mixed formulation: Find $(\sigma , u) \in H(\mathop{\mathrm{div}};\Omega) \times L^2(\Omega)$ such that
\begin{subequations} \label{mix2}
\begin{align}
\displaystyle
a(\sigma,v) + b(v,u) &= 0 \quad \forall v \in H(\mathop{\mathrm{div}};\Omega), \label{mix2a}\\
b(\sigma,q) &= - (f ,q) \quad \forall q \in L^2(\Omega), \label{mix2b}
\end{align}
\end{subequations}
where bilinear forms $a: H(\mathop{\mathrm{div}};\Omega) \times H(\mathop{\mathrm{div}};\Omega) \to \mathbb{R}$ and $b:H(\mathop{\mathrm{div}};\Omega) \times L^2(\Omega) \to \mathbb{R}$ are defined by
\begin{align*}
\displaystyle
&a(\sigma,v) := (\sigma , v), \quad b(v,q) := (\mathop{\mathrm{div}} v , q).
\end{align*}
We set $X_0 := \{ v \in H(\mathop{\mathrm{div}};\Omega) ; \ b(v,q) = 0 \ \forall q \in L^2(\Omega) \}$. Because there exists a constant $c > 0$ such that
\begin{align*}
\displaystyle
a(v,v) \geq c \| v \|^2_{H(\mathop{\mathrm{div}};\Omega) } \quad \forall v \in X_0
\end{align*}
and the bilinear form $b(\cdot , \cdot)$ satisfies the inf--sup condition
\begin{align}
\displaystyle
\inf_{0 \neq q \in L^2(\Omega)} \sup_{0 \neq v \in H(\mathop{\mathrm{div}};\Omega) } \frac{b(v,q)}{\| v \|_{H(\mathop{\mathrm{div}};\Omega) } \| q \|}
\geq \beta_* > 0, \label{mix3}
\end{align}
\eqref{mix2} is uniquely solvable; e.g., see \cite{GirRav86,BofBreFor13}.
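For orientation, we sketch why \eqref{mix2} is consistent with \eqref{mix1}, assuming sufficient regularity. Testing \eqref{mix2a} with $v \in C_0^{\infty}(\Omega)^d$ gives
\begin{align*}
\displaystyle
0 = (\sigma , v) + (\mathop{\mathrm{div}} v , u) \quad \forall v \in C_0^{\infty}(\Omega)^d,
\end{align*}
that is, $\sigma = \nabla u$ in the distributional sense, so that $u \in H^1(\Omega)$; \eqref{mix2b} gives $\mathop{\mathrm{div}} \sigma = -f$ in $L^2(\Omega)$, and testing \eqref{mix2a} with general $v \in H(\mathop{\mathrm{div}};\Omega)$ enforces the boundary condition $u = 0$ on $\partial \Omega$ weakly.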
\subsection{RT Approximate Problem}
We consider the following RT approximate problem. Find $(\sigma_h^{RT} , u_h^{RT}) \in RT^0_{h} \times M_{h}^0$ such that
\begin{subequations} \label{mix4}
\begin{align}
\displaystyle
a(\sigma_h^{RT},v_h) + b(v_h,u_h^{RT}) &= 0, \quad \forall v_h \in RT^0_{h}, \label{mix4a}\\
b(\sigma_h^{RT},q_h) &= - (f ,q_h), \quad \forall q_h \in M_{h}^0.\label{mix4b}
\end{align}
\end{subequations}
This setting is conforming because $RT^0_{h} \times M_{h}^0 \subset H(\mathop{\mathrm{div}};\Omega) \times L^2(\Omega)$. It will be shown later (Lemma \ref{mix=lem9}) that the discrete inf--sup condition
\begin{align*}
\displaystyle
\inf_{q_h \in M_{h}^0} \sup_{v_h \in RT^0_{h}} \frac{b(v_h , q_h)}{\| v_h \|_{H(\mathop{\mathrm{div}};\Omega)} \| q_h \| } \geq c_* > 0
\end{align*}
holds, where $c_*$ is a constant independent of $h$.
\subsection{Error Estimates of the RT Finite Element Approximation}
This section gives error estimates of the mixed finite element approximation \eqref{mix4}. We emphasise that we impose neither the shape-regularity condition nor the maximum-angle condition on the mesh partition. That is, we assume that $\{ \mathbb{T}_h \}$ is a family of conformal meshes satisfying Assumption \ref{pre=ass1}.
\begin{lem} \label{mix=lem8}
Let $D \subset \mathbb{R}^d$ be a bounded domain. For any $g \in L^2(D)$, there exists $v \in H^1(D)^d$ such that
\begin{align}
\displaystyle
\mathop{\mathrm{div}} v = g \quad \text{in $D$} \label{mix5}
\end{align}
and
\begin{align}
\displaystyle
| v |_{H^1(D)^d} \leq \| g \|_{L^2(D)}, \quad \| v \|_{L^2(D)^d} \leq C_P(D) \| g \|_{L^2(D)}, \label{mix6}
\end{align}
where $C_P(D)$ is the Poincar\'e constant.
\end{lem}
\begin{pf*}
The proof can be found in \cite[Lemma 2.2]{BofBre08}.
\qed
\end{pf*}
\begin{comment}
\begin{pf*}
Let $B \subset \mathbb{R}^d$ be a ball containing $D$. We set
\begin{align*}
\displaystyle
\tilde{g} :=
\begin{cases}
g \quad \text{in $D$},\\
0 \quad \text{in $B \backslash D$}.
\end{cases}
\end{align*}
There then exists a unique solution ${u} \in H_0^1(B) \cap H^2(B)$ such that
\begin{align*}
\displaystyle
\varDelta {u} = \tilde{g} \quad \text{in $B$}, \quad {u} = 0 \quad \text{on $\partial B$}.
\end{align*}
It is known that $u$ satisfies the a priori estimate
\begin{align*}
\displaystyle
| {u} |_{H^2(D)} \leq | {u} |_{H^2(B)} \leq \| \varDelta {u} \|_{L^2(B)} = \| \tilde{g} \|_{L^2(B)} = \| {g} \|_{L^2(D)}.
\end{align*}
We also get the a priori estimate
\begin{align*}
\displaystyle
| {u} |_{H^1(D)} \leq | {u} |_{H^1(B)} \leq C_P(D) \| \tilde{g} \|_{L^2(B)} = C_P(D) \| {g} \|_{L^2(D)}.
\end{align*}
Therefore, setting $v := \nabla {u} \in H^1(D)^d$, \eqref{mix5} and \eqref{mix6} hold.
\qed
\end{pf*}
\end{comment}
We next give the discrete inf--sup condition.
\begin{lem}[Discrete inf--sup condition] \label{mix=lem9}
If $ C_{G}^{RT} H \leq 1$, there exists a constant $c_*$, depending only on the Poincar\'e constant, such that
\begin{align}
\displaystyle
\inf_{q_h \in M_{h}^0} \sup_{v_h \in RT^0_{h}} \frac{b(v_h , q_h)}{\| v_h \|_{H(\mathop{\mathrm{div}};\Omega)} \| q_h \|} \geq c_* > 0,\label{mix7}
\end{align}
where $ C_{G}^{RT}$ is the constant appearing in Corollary \ref{modify=coro3}.
\end{lem}
\begin{pf*}
Let $q_h \in M_{h}^0$. From Lemma \ref{mix=lem8}, there exists $v \in H^1(\Omega)^d$ such that $\mathop{\mathrm{div}} v = q_h$ in $\Omega$, $| v |_{H^1(\Omega)^d} \leq \| q_h \|$, and $\| v \|_{L^2(\Omega)^d} \leq C_P(\Omega) \| q_h \|$.
By the Gauss theorem, we have
\begin{align*}
\displaystyle
\sum_{T \in \mathbb{T}_h} \int_{\partial T} v \cdot n_T ds = \sum_{T \in \mathbb{T}_h} \int_{T} \mathop{\mathrm{div}} v dx = \int_{\Omega} q_h dx.
\end{align*}
From the definition of the Raviart--Thomas interpolation, we conclude that
\begin{align*}
\displaystyle
\int_{\Omega} \mathop{\mathrm{div}} ( I_h^{RT} v) p_h dx &= \sum_{T \in \mathbb{T}_h} p_h \int_T \mathop{\mathrm{div}} ( I_T^{RT} v) dx = \sum_{T \in \mathbb{T}_h} p_h \int_{\partial T} n_T \cdot (I_T^{RT} v) ds \\
&= \sum_{T \in \mathbb{T}_h} p_h \int_{\partial T} v \cdot n_T ds = \int_{\Omega} q_h p_h dx \quad \forall p_h \in M_h^0.
\end{align*}
Therefore, it follows that $\mathop{\mathrm{div}} (I_h^{RT} v) = q_h$.
From the definitions, we have
\begin{align*}
\displaystyle
\| I_h^{RT} v \|^2_{H(\mathop{\mathrm{div}};\Omega)}
&= \| I_h^{RT} v \|_{L^2(\Omega)^d}^2 + \| \mathop{\mathrm{div}} (I_h^{RT} v) \|^2 \\
&\leq 2 \| I_h^{RT} v - v \|_{L^2(\Omega)^d}^2 + 2 \| v \|_{L^2(\Omega)^d}^2 + \| q_h \|^2 \\
&\leq 2 ( C_{G}^{RT})^2 H^2 |v|^2_{H^1(\Omega)^d} + 2 C_P(\Omega)^2 \| q_h \|^2 + \| q_h \|^2 \\
&\leq \left( 3 + 2 C_P(\Omega)^2 \right) \| q_h \|^2.
\end{align*}
In the last inequality, we used $C_{G}^{RT} H \leq 1$ and $|v|_{H^1(\Omega)^d} \leq \| q_h \|$.
We thus have
\begin{align*}
\displaystyle
\sup_{v_h \in RT^0_{h}} \frac{b(v_h , q_h)}{\| v_h \|_{H(\mathop{\mathrm{div}};\Omega)}}
& \geq \frac{b(I_h^{RT} v , q_h)}{\| I_h^{RT} v \|_{H(\mathop{\mathrm{div}};\Omega)}}
\geq \frac{1}{\left( 3 + 2 C_P(\Omega)^2 \right)^{1/2}} \frac{(q_h,q_h)}{\| q_h \|},
\end{align*}
and the proof of \eqref{mix7} is completed with $c_* := \left( 3 + 2 C_P(\Omega)^2 \right)^{- 1/2}$.
\qed
\end{pf*}
From the discrete equations \eqref{mix4} and their continuous counterpart \eqref{mix2}, we obtain the Galerkin orthogonality
\begin{subequations} \label{mix8}
\begin{align}
\displaystyle
a(\sigma - \sigma_h^{RT} , v_h) + b(v_h , u - u_h^{RT}) &= 0 \quad \forall v_h \in RT^0_{h}, \label{mix8a} \\
b(\sigma - \sigma_h^{RT} , q_h) &= 0 \quad \forall q_h \in M_{h}^0. \label{mix8b}
\end{align}
\end{subequations}
We then get the following C\'ea-lemma-type estimates with the help of \eqref{mix8} and the inf--sup condition \eqref{mix7}.
\begin{thr} \label{mix=thr6}
Let $\sigma \in H^1(\Omega)^d$ and $\sigma_h^{RT} \in RT_h^0$ be the solutions of \eqref{mix1} and \eqref{mix4}, respectively. We then have
\begin{align}
\displaystyle
\| \sigma - \sigma_h^{RT}\|_{L^2(\Omega)^d} \leq \| \sigma - I_h^{RT} \sigma \|_{L^2(\Omega)^d}. \label{mix9}
\end{align}
Furthermore, let $(\sigma,u) \in H^1(\Omega)^d \times L^2(\Omega)$ and $(\sigma_h^{RT},u_h^{RT}) \in RT_h^0 \times M_h^0$ be the solutions of \eqref{mix1} and \eqref{mix4}, respectively. Then, if $C_{G}^{RT} H \leq 1$, it holds that
\begin{align}
\displaystyle
\| u - u_h^{RT} \|
\leq \| u - \Pi_h^0 u \| + c_*^{-1} \| \sigma - \sigma_h^{RT} \|_{L^2(\Omega)^d}. \label{mix11}
\end{align}
Here, $C_{G}^{RT}$ and $c_*$ are respectively the constants appearing in Corollary \ref{modify=coro3} and Lemma \ref{mix=lem9}.
\end{thr}
\begin{pf*}
The proof can be found in \cite[Lemma 3.7, Lemma 3.9]{BofBre08}.
\qed
\end{pf*}
Using Theorem \ref{mix=thr6} and the interpolation error estimates of Corollaries \ref{modify=coro1} and \ref{modify=coro3}, we thus have the error estimates of the mixed finite element approximation \eqref{mix4} on anisotropic meshes violating the maximum-angle condition.
\begin{thr} \label{mix=thr7}
Let $(\sigma,u) \in H^1(\Omega)^d \times H^1(\Omega)$ and $(\sigma_h^{RT},u_h^{RT}) \in RT_h^0 \times M_h^0$ be the solutions of \eqref{mix1} and \eqref{mix4}, respectively. Then, there exists a constant $c_1 > 0$, independent of $\sigma$, $H$, and the geometric properties of $\mathbb{T}_h$, such that
\begin{align}
\displaystyle
\| \sigma - \sigma_h^{RT}\|_{L^2(\Omega)^d}
&\leq c_1 H | \sigma |_{H^1(\Omega)^d}. \label{mix10}
\end{align}
Furthermore, if $C_{G}^{RT} H \leq 1$, there exists a constant $c_2 > 0$, depending on the discrete inf--sup condition but independent of $\sigma$, $u$, $h$, $H$, and the geometric properties of $\mathbb{T}_h$, such that
\begin{align}
\displaystyle
\| u - u_h^{RT} \|
&\leq c_2 \left(h |u|_{H^1(\Omega)} + H |\sigma|_{H^1(\Omega)^d} \right). \label{mix12}
\end{align}
Here, $C_{G}^{RT}$ is the constant appearing in Corollary \ref{modify=coro3}.
\end{thr}
\begin{comment}
\begin{pf*}
From \eqref{mix8a} and the definition of the $L^2$-projection, we have
\begin{align*}
\displaystyle
a(\sigma - \sigma_h^{RT} , v_h) + b(v_h , \Pi_h^0 u - u_h^{RT}) &= 0 \quad \forall v_h \in RT^0_{h}.
\end{align*}
With the help of the discrete inf--sup stability \eqref{mix7} and H\"{o}lder's inequality,
\begin{align*}
\displaystyle
\| u - u_h^{RT} \|
&\leq \| u - \Pi_h^0 u \| + \| \Pi_h^0 u - u_h^{RT} \| \\
&\leq \| u - \Pi_h^0 u \| + c_*^{-1} \sup_{v_h \in RT^0_{h}} \frac{b(v_h , \Pi_h^0 u - u_h^{RT})}{\| v_h \|_{H(\mathop{\mathrm{div}};\Omega)} } \\
&= \| u - \Pi_h^0 u \| + c_*^{-1} \sup_{v_h \in RT^0_{h}} \frac{a(\sigma_h^{RT} - \sigma, v_h)}{\| v_h \|_{H(\mathop{\mathrm{div}};\Omega)} } \\
&\leq \| u - \Pi_h^0 u \| + c_*^{-1} \| \sigma_h^{RT} - \sigma \|_{L^2(\Omega)^d},
\end{align*}
which concludes \eqref{mix11}.
Furthermore, using the inequality \eqref{pre14} in Corollary \ref{pre=coro1} and \eqref{mix10}, we have \eqref{mix12}.
\qed
\end{pf*}
\end{comment}
\section{Relationship between the RT and CR Finite Element Approximation}
This section shows the relationship between the RT and CR problems. Consider the following two problems: find $(\bar{\sigma}_h^{RT} , \bar{u}_h^{RT}) \in RT^0_{h} \times M_{h}^0$ such that
\begin{subequations} \label{RTCR1}
\begin{align}
\displaystyle
a(\bar{\sigma}_h^{RT},v_h) + b(v_h,\bar{u}_h^{RT}) &= 0 \quad \forall v_h \in RT^0_{h}, \label{RTCR1a}\\
b(\bar{\sigma}_h^{RT},q_h) &= - (\Pi_h^0 f ,q_h) \quad \forall q_h \in M_{h}^0 \label{RTCR1b}
\end{align}
\end{subequations}
and find $\bar{u}_h^{CR} \in CR_{h0}^1$ such that
\begin{align}
\displaystyle
a_{0h}(\bar{u}_h^{CR} , \varphi_h) = (\Pi_h^0 f , \varphi_h) \quad \forall \varphi_h \in CR_{h0}^1. \label{RTCR2}
\end{align}
Here, \eqref{RTCR2} is the CR approximation of the Poisson equation
\begin{align}
\displaystyle
- \varDelta \bar{u} = \Pi_h^0 f \quad \text{in $\Omega$}, \quad \bar{u} = 0 \quad \text{on $\partial \Omega$}. \label{RTCR3}
\end{align}
In the case of $d=2$, it is well known that there exists a relationship between $(\bar{\sigma}_h^{RT} , \bar{u}_h^{RT})$ and $\bar{u}_h^{CR}$, introduced by Marini; see, for example, \cite{Mar85}, and also \cite{LiuKik07,KikSai16,LiuKik18}. We here show the relation in the three-dimensional case.
Let us consider a tetrahedron $T \subset \mathbb{R}^3$ such as that in Figure \ref{fig1}. Let $x_i$ ($i=1,2,3,4$) be the vertices and $m_{i,j}$ the midpoints of the edges of the tetrahedron; that is, $m_{i,j} := \frac{1}{2} (x_i + x_j)$. Furthermore, for $1 \leq i \leq 4$, let $F_i$ be the face of the tetrahedron opposite $x_i$. Then, by a simple calculation, we find that the equality
\begin{align*}
\displaystyle
L := \sum_{i=1}^4 |x_i - x_T|^2 = |m_{1,4} - m_{2,3}|^2 + |m_{1,3} - m_{2,4}|^2 + |m_{1,2} - m_{3,4}|^2,
\end{align*}
holds, where $x_T$ is the barycentre of $T$ such that $x_T := \frac{1}{4} \sum_{i=1}^4 x_i$.
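This identity can be verified directly. Setting $y_i := x_i - x_T$, so that $\sum_{i=1}^4 y_i = 0$, one finds $m_{1,4} - m_{2,3} = \frac{1}{2}(y_1 + y_4) - \frac{1}{2}(y_2 + y_3) = y_1 + y_4$, and analogously $m_{1,3} - m_{2,4} = y_1 + y_3$ and $m_{1,2} - m_{3,4} = y_1 + y_2$. Therefore,
\begin{align*}
\displaystyle
\sum_{j=2}^{4} | y_1 + y_j |^2
= 3 | y_1 |^2 + \sum_{j=2}^{4} | y_j |^2 + 2 y_1 \cdot ( y_2 + y_3 + y_4 )
= \sum_{i=1}^{4} | y_i |^2 = L.
\end{align*}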
\begin{figure}
\includegraphics[width=7cm]{tetrahedron2.png}
\caption{Tetrahedron}
\label{fig1}
\end{figure}
We present a quadrature scheme over a simplex $T \subset \mathbb{R}^3$ (e.g., \cite[p.307]{Str71}) that is easily confirmed.
\begin{lem} \label{RTCR=lem10}
For any $f \in \mathcal{C}^0(T)$, the quadrature scheme
\begin{align*}
\displaystyle
\int_T f(x) dx &\sim - \frac{|T|}{20} \sum_{i=1}^4 f(x_i) + \frac{|T|}{5} \sum_{1 \leq i < j \leq 4} f(m_{i,j})
\end{align*}
is exact for polynomials of degree less than or equal to $2$;
\begin{align}
\displaystyle
\int_T f(x) dx + \frac{|T|}{20} \sum_{i=1}^4 f(x_i) - \frac{|T|}{5} \sum_{1 \leq i < j \leq 4}f(m_{i,j}) = 0 \quad \forall f \in \mathcal{P}^2(T). \label{RTCR4}
\end{align}
\end{lem}
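As a quick consistency check, the weights reproduce constants exactly: for $f \equiv 1$,
\begin{align*}
\displaystyle
- \frac{|T|}{20} \cdot 4 + \frac{|T|}{5} \cdot 6 = - \frac{|T|}{5} + \frac{6|T|}{5} = |T| = \int_T 1 \, dx.
\end{align*}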
Define the function $\varphi_T$ by
\begin{align}
\displaystyle
\varphi_T(x) &:=
\begin{cases} \label{RTCR5}
L - 12 |x - x_T|^2, \quad \text{on $T$},\\
0, \quad \text{otherwise}.
\end{cases}
\end{align}
We then have the following lemma.
\begin{lem} \label{RTCR=lem11}
It holds that
\begin{align}
\displaystyle
\frac{1}{|F_i|} \int_{F_i} \varphi_T (x) ds &= 0, \quad i=1,2,3,4, \label{RTCR6} \\
\frac{1}{|T|} \int_T \varphi_T (x) dx &= \frac{2}{5} L, \label{RTCR7} \\
\frac{1}{|T|} \int_T | \nabla \varphi_T (x)|^2 dx &= \frac{144}{5} L.\label{RTCR8}
\end{align}
\end{lem}
\begin{pf*}
From second-order three-point numerical integration over $F_1$,
\begin{align*}
\displaystyle
\int_{F_1} f(x) ds &= \frac{|F_1|}{3} \left( f (m_{2,3}) + f (m_{3,4}) + f (m_{2,4}) \right) \quad \forall f \in \mathcal{P}^2(T),
\end{align*}
we have
\begin{align*}
\displaystyle
&\frac{1}{|F_1|} \int_{F_1} \varphi_T (x) ds \\
&= \frac{1}{3} \left( \varphi_T (m_{2,3}) + \varphi_T (m_{3,4}) + \varphi_T (m_{2,4}) \right) \\
&= \frac{1}{3} \left( 3 L - 12 \left( |m_{2,3} - x_T|^2 + |m_{3,4} - x_T|^2 + |m_{2,4} - x_T|^2 \right) \right) \\
&= \frac{1}{3} \left( 3 L - \frac{12}{4} \left( |m_{2,3} - m_{1,4}|^2 + |m_{3,4} - m_{1,2}|^2 + |m_{2,4} - m_{1,3}|^2 \right) \right) = 0,
\end{align*}
which leads to \eqref{RTCR6}.
Next, using \eqref{RTCR4}, we have
\begin{align*}
\displaystyle
&\frac{1}{|T|} \int_T \varphi_T (x) dx \\
&= - \frac{1}{20} \sum_{i=1}^4 \varphi_T(x_i) + \frac{1}{5} \sum_{1 \leq i < j \leq 4}\varphi_T(m_{i,j}) \\
&= - \frac{1}{20} \left( 4 L - 12 \sum_{i=1}^4 |x_i - x_T|^2 \right) + \frac{1}{5} \left( 6L - 12 \sum_{1 \leq i < j \leq 4} |m_{i,j} - x_T|^2 \right) \\
&= \frac{2}{5} L,
\end{align*}
which leads to \eqref{RTCR7}. We here used
\begin{align*}
\displaystyle
&\sum_{1 \leq i < j \leq 4} |m_{i,j} - x_T|^2 \\
&= |m_{1,2} - x_T|^2 + |m_{1,3} - x_T|^2 + |m_{1,4} - x_T|^2 \\
&\quad + |m_{2,3} - x_T|^2 + |m_{2,4} - x_T|^2 + |m_{3,4} - x_T|^2 \\
&= \frac{1}{4} \left( 2 |m_{1,2} - m_{3,4}|^2 + 2 |m_{1,3} - m_{2,4}|^2 +2 |m_{1,4} - m_{2,3}|^2 \right) = \frac{L}{2}.
\end{align*}
We similarly obtain
\begin{align*}
\displaystyle
&\frac{1}{|T|} \int_T | \nabla \varphi_T (x)|^2 dx \\
&= \frac{24^2}{|T|} \int_T |x - x_T|^2 dx \\
&= - \frac{24^2}{20} \sum_{i=1}^4 |x_i - x_T|^2 + \frac{24^2}{5} \sum_{1 \leq i < j \leq 4} |m_{i,j} - x_T|^2 = \frac{144}{5} L,
\end{align*}
which leads to \eqref{RTCR8}.
\qed
\end{pf*}
We define the bubble space $B_h$ by
\begin{align}
\displaystyle
B_h := \{ b_h \in L^2(\Omega); \ b_h |_T \in \mathop{\mathrm{span}} \{ \varphi_T\}, \ \forall T \in \mathbb{T}_h \}. \label{RTCR9}
\end{align}
Then, for any $\psi_h \in CR_{h0}^1$ and $b_h \in B_h$, because one can write $b_h|_T = c_b \varphi_T$ with some $c_b \in \mathbb{R}$ (depending on $T$), it holds that
\begin{align*}
\displaystyle
(\nabla_h \psi_h, \nabla_h b_h) &= \sum_{T \in \mathbb{T}_h} c_b \int_T \nabla \psi_h \cdot \nabla \varphi_T dx \\
&= \sum_{T \in \mathbb{T}_h} c_b \left\{ \sum_{F \subset \partial T} (n_F \cdot \nabla \psi_h ) \int_F \varphi_T ds - \int_T \varDelta \psi_h \varphi_T dx \right\} = 0.
\end{align*}
Here, we used \eqref{RTCR6} together with the facts that $n_F \cdot \nabla \psi_h$ is constant on $F$ and that $\varDelta \psi_h = 0$ on $T$. That is to say, the two finite element spaces $CR_{h0}^1$ and $B_h$ are orthogonal to each other with respect to $a_{0h}$.
Furthermore, we define the finite element space $X_h^{bCR}$ by
\begin{align}
\displaystyle
X_h^{bCR} := CR_{h0}^1 + B_h = \{ \psi_h + b_h ; \ \psi_h \in CR_{h0}^1, \ b_h \in B_h \}. \label{RTCR10}
\end{align}
We consider the following finite element problem. Find $u_h^{bCR} \in X_h^{bCR}$ such that
\begin{align}
\displaystyle
a_{0h}(u_h^{bCR} , \varphi_h) = (\nabla_h u_h^{bCR} , \nabla_h \varphi_h) = (\Pi_h^0 f , \varphi_h) \quad \forall \varphi_h \in X_{h}^{bCR}.\label{RTCR11}
\end{align}
The solution $u_h^{bCR} \in X_h^{bCR}$ is then decomposed as $u_h^{bCR} = \bar{u}_h^{CR} + b_h$ with $\bar{u}_h^{CR} \in CR_{h0}^1$ and $b_h\in B_h$. Note that $\bar{u}_h^{CR}$ and $b_h$ respectively satisfy \eqref{RTCR2} and the equation
\begin{align}
\displaystyle
a_{0h}(b_h , c_h) = (\nabla_h b_h , \nabla_h c_h) = (\Pi_h^0 f , c_h) \quad \forall c_h \in B_h. \label{RTCR12}
\end{align}
On each element $T \in \mathbb{T}_h$, \eqref{RTCR12} has the form
\begin{align*}
\displaystyle
\gamma_T \int_T \nabla \varphi_T \cdot \nabla \varphi_T dx = \int_T \Pi_T^0 f \varphi_T dx, \quad \gamma_T \in \mathbb{R}.
\end{align*}
From \eqref{RTCR7} and \eqref{RTCR8}, we have
\begin{align}
\displaystyle
\gamma_T = \frac{1}{72} \Pi_T^0 f \quad \forall T \in \mathbb{T}_h. \label{RTCR13}
\end{align}
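Explicitly, \eqref{RTCR7} and \eqref{RTCR8} give $\int_T \varphi_T dx = \frac{2}{5} L |T|$ and $\int_T |\nabla \varphi_T|^2 dx = \frac{144}{5} L |T|$, so the elementwise equation reduces to
\begin{align*}
\displaystyle
\gamma_T \cdot \frac{144}{5} L |T| = \Pi_T^0 f \cdot \frac{2}{5} L |T|,
\end{align*}
which yields \eqref{RTCR13}.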
\begin{thr} \label{RTCR=thr8}
Let ${u}_h^{bCR} \in X_{h}^{bCR}$ be the solution of \eqref{RTCR11} and $(\bar{\sigma}_h^{RT},\bar{u}_h^{RT}) \in RT^0_{h} \times M_{h}^0$ the solution of \eqref{RTCR1}. We then have $\nabla_h u_h^{bCR} \in RT^0_{h}$ and
\begin{align}
\displaystyle
\bar{\sigma}_h^{RT}|_T &= \nabla u_h^{bCR}|_T \quad\forall T \in \mathbb{T}_h, \label{RTCR14}\\
\bar{u}_h^{RT}|_T &= \Pi_T^0 u_h^{bCR} \quad\forall T \in \mathbb{T}_h.\label{RTCR15}
\end{align}
\end{thr}
\begin{pf*}
The proof can be found in \cite{HuMa15}.
\qed
\end{pf*}
From Theorem \ref{RTCR=thr8}, for $d=3$, the following lemma holds.
\begin{lem} \label{RTCR=lem12}
Let $\overline{u}_h^{CR} \in CR_{h0}^1$ be the solution of \eqref{RTCR2} and $(\bar{\sigma}_h^{RT},\bar{u}_h^{RT}) \in RT^0_{h} \times M_{h}^0$ be the solution of \eqref{RTCR1}. We then have the relationships
\begin{align}
\displaystyle
\bar{\sigma}_h^{RT}|_T &= \nabla \bar{u}_h^{CR} - \frac{1}{3} \Pi_T^0 f (x - x_T) \quad \forall T \in \mathbb{T}_h, \label{RTCR16} \\
\bar{u}_h^{RT}|_T &= \Pi_T^0 \bar{u}_h^{CR} + \frac{1}{180} \Pi_T^0 f \sum_{i=1}^4 |x_i - x_T|^2 \quad \forall T \in \mathbb{T}_h. \label{RTCR17}
\end{align}
\end{lem}
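A brief sketch of the derivation: by Theorem \ref{RTCR=thr8}, on each $T$ we have $u_h^{bCR}|_T = \bar{u}_h^{CR}|_T + \gamma_T \varphi_T$ with $\gamma_T = \frac{1}{72} \Pi_T^0 f$ from \eqref{RTCR13}. Since $\nabla \varphi_T = -24 (x - x_T)$ on $T$ and $\Pi_T^0 \varphi_T = \frac{2}{5} L$ by \eqref{RTCR7},
\begin{align*}
\displaystyle
\bar{\sigma}_h^{RT}|_T &= \nabla \bar{u}_h^{CR} - \frac{24}{72} \Pi_T^0 f \, (x - x_T) = \nabla \bar{u}_h^{CR} - \frac{1}{3} \Pi_T^0 f \, (x - x_T), \\
\bar{u}_h^{RT}|_T &= \Pi_T^0 \bar{u}_h^{CR} + \frac{1}{72} \cdot \frac{2}{5} L \, \Pi_T^0 f = \Pi_T^0 \bar{u}_h^{CR} + \frac{1}{180} \Pi_T^0 f \sum_{i=1}^4 |x_i - x_T|^2.
\end{align*}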
Using the relationship between the RT and CR finite element methods, we have the error estimate of the CR finite element approximation with the bubble function.
\begin{lem} \label{RTCR=lem13}
We assume that $\Omega$ is convex. Let $\{ \mathbb{T}_h \}$ be a family of conformal meshes satisfying Assumption \ref{pre=ass1}. Let $\bar{u} \in H_0^1(\Omega) \cap H^2(\Omega)$ be the solution of \eqref{RTCR3} and $u_h^{bCR} \in X_{h}^{bCR}$ be the solution of the CR problem \eqref{RTCR11}. There then exists a constant $c > 0$, independent of $\bar{u}$, $h$, $H$, and the geometric properties of $\mathbb{T}_h$, such that
\begin{align}
\displaystyle
| \bar{u} - u_h^{bCR} |_{H^1(\mathbb{T}_h)} \leq c H \| \Pi_h^0 f \|. \label{RTCR18}
\end{align}
\end{lem}
\begin{pf*}
Let $(\bar{\sigma}_h^{RT},\bar{u}_h^{RT}) \in RT^0_{h} \times M_{h}^0$ be the solution of \eqref{RTCR1}. From Theorem \ref{RTCR=thr8}, it holds that $\nabla_h u_h^{bCR} \in RT^0_{h}$ and $\bar{\sigma}_h^{RT} = \nabla_h u_h^{bCR}$. Setting $\bar{\sigma} := \nabla \bar{u} \in H^1(\Omega)^d$, we then have, using inequality \eqref{mix10}, that
\begin{align*}
\displaystyle
| \bar{u} - u_h^{bCR} |_{H^1(\mathbb{T}_h)}
&= \left( \sum_{T \in \mathbb{T}_h} \| \bar{\sigma} - \bar{\sigma}_h^{RT} \|^2_{L^2(T)^d} \right)^{1/2} \\
&\leq c H |\bar{\sigma}|_{H^1(\Omega)^d} = c H |\bar{u}|_{H^2(\Omega)} \leq c H \| \Pi_h^0 f \|.
\end{align*}
\qed
\end{pf*}
\begin{comment}
From Lemma \ref{RTCR=lem13}, the equivalence of the RT and CR problems, we have the following theorem.
\begin{thr} \label{RTCR=thr10}
Let $u \in H_0^1(\Omega) \cap H^2(\Omega)$ be the solution of \eqref{intro1} and $u_h^{CR} \in CR_{h0}^1$ be the solution of the Crouzeix--Raviart problem \eqref{CR1}. It then holds that
\begin{align}
\displaystyle
| u - u_h^{CR} |_{H^1(\mathbb{T}_h)}
&\leq c_1 \left( \| \nabla u - \bar{\sigma}_h^{RT} \|_{L^2(\Omega)^d} + h \| f \| \right) \leq c_2 H \| f \|, \label{RTCR19} \\
\| \nabla u - \bar{\sigma}_h^{RT} \|_{L^2(\Omega)^d} &\leq | u - u_h^{CR} |_{H^1(\mathbb{T}_h)} + c_3 h \| f \|, \label{RTCR20}
\end{align}
where $c_1$, $c_2$ and $c_3$ are independent of $h$, $H$ and the geometric properties of $\mathbb{T}_h$.
\end{thr}
\begin{pf*}
Let $(\bar{\sigma}_h^{RT},\bar{u}_h^{RT}) \in RT^0_{h} \times M_{h}^0$ be the solution of \eqref{RTCR1}, $\bar{u}_h^{CR} \in CR_{h0}^1$ be the solution of the Crouzeix--Raviart problem \eqref{RTCR2}, and $u_h^{bCR} \in X_{h}^{bCR}$ be the solution of the Crouzeix--Raviart problem \eqref{RTCR11}. Then, from Theorem \ref{RTCR=thr8}, it holds that $\nabla_h u_h^{bCR} \in RT^0_{h}$ and $\bar{\sigma}_h^{RT} = \nabla_h u_h^{bCR}$.
From the triangle inequality, we have
\begin{align*}
\displaystyle
&| u - u_h^{CR} |_{H^1(\mathbb{T}_h)} \\
&= \| \nabla u - \nabla_h u_h^{CR} \|_{L^2(\Omega)^d} \\
&\leq \| \nabla u - \bar{\sigma}_h^{RT} \|_{L^2(\Omega)^d} + \| \bar{\sigma}_h^{RT} - \nabla_h \bar{u}_h^{CR} \|_{L^2(\Omega)^d} + \| \nabla_h \bar{u}_h^{CR} - \nabla_h u_h^{CR} \|_{L^2(\Omega)^d}.
\end{align*}
From \eqref{intro3} in the two-dimensional case and \eqref{RTCR16} in the three-dimensional case, we get
\begin{align*}
\displaystyle
\| \bar{\sigma}_h^{RT} - \nabla_h \bar{u}_h^{CR} \|_{L^2(\Omega)^d}^2
&= \sum_{T \in \mathbb{T}_h} \int_T |\bar{\sigma}_h^{RT} - \nabla \bar{u}_h^{CR}|^2 dx \\
&= \frac{1}{d^2}\sum_{T \in \mathbb{T}_h} (\Pi_T^0 f)^2 \int_T |x - x_T|^2 dx \\
&\leq \frac{1}{d^2} \sum_{T \in \mathbb{T}_h} |T|^{-1} \| f \|^2_{0,2,T} h_T^2 |T|
= \frac{1}{d^2} \sum_{T \in \mathbb{T}_h} h_T^2 \| f \|^2_{0,2,T}.
\end{align*}
We here used H\"{o}lder's inequality to obtain
\begin{align*}
\displaystyle
(\Pi_T^0 f)^2 = \frac{1}{|T|^2} \left( \int_T f dx \right)^2 \leq \frac{1}{|T|^2} |T| \| f \|^2_{0,2,T} = |T|^{-1} \| f \|^2_{0,2,T}.
\end{align*}
For any $w_h \in CR_{h0}^1$, from \eqref{CR1} and \eqref{RTCR2}, we have
\begin{align*}
\displaystyle
a_{0h}(\bar{u}_h^{CR} - u_h^{CR} , w_h) &= (f - \Pi_h^0 f , w_h) = (f - \Pi_h^0 f , w_h - \Pi_h^0 w_h).
\end{align*}
Substituting $\bar{u}_h^{CR} - u_h^{CR} \in CR_{h0}^1$ into $w_h$ in the above equation, we have, using the error estimate of the $L^2$-projection \eqref{pre12},
\begin{align*}
\displaystyle
\| \nabla_h \bar{u}_h^{CR} - \nabla_h u_h^{CR} \|_{L^2(\Omega)^d}^2
&= (f - \Pi_h^0 f , \bar{u}_h^{CR} - u_h^{CR} - \Pi_h^0 (\bar{u}_h^{CR} - u_h^{CR} )) \\
&\leq C_P^{L^2} h \| f - \Pi_h^0 f \| \| \nabla_h \bar{u}_h^{CR} - \nabla_h u_h^{CR} \|_{L^2(\Omega)^d} \\
&\leq C_P^{L^2} h \| f \| \| \nabla_h \bar{u}_h^{CR} - \nabla_h u_h^{CR} \|_{L^2(\Omega)^d}.
\end{align*}
In the last inequality, we used $\| f - \Pi_h^0 f \| \leq \| f \|$, which follows from the definition of the $L^2$-projection.
From Lemma \ref{RTCR=lem13}, we have
\begin{align*}
\displaystyle
\| \nabla u - \bar{\sigma}_h^{RT} \|_{L^2(\Omega)^d}
&\leq \| \nabla u - \nabla \bar{u} \|_{L^2(\Omega)^d} + \| \nabla \bar{u} - \nabla_h u_h^{bCR} \|_{L^2(\Omega)^d} \\
&\leq c_{-1}\| f - \Pi_h^0 f \|_{H^{-1}(\Omega)} + \| \nabla \bar{u} - \nabla_h u_h^{bCR}\|_{L^2(\Omega)^d} \\
&\leq c_{-1} \| f - \Pi_h^0 f \|_{H^{-1}(\Omega)} + c H \| \Pi_h^0 f \|,
\end{align*}
where $c_{-1} := \| - \varDelta^{-1} \|_{\mathcal{L}(H^{-1}(\Omega) ; H^1_0(\Omega))}$. For any $\varphi \in H_0^1(\Omega)$, considering that $f - \Pi_h^0 f$ is $L^2$-orthogonal to $M_{h}^0$ and the error estimate of the $L^2$-projection \eqref{pre12}, we have
\begin{align*}
\displaystyle
(f - \Pi_h^0 f , \varphi)
&= (f - \Pi_h^0 f , \varphi - \Pi_h^0 \varphi) \\
&\leq \| f - \Pi_h^0 f \| \| \varphi - \Pi_h^0 \varphi \| \\
&\leq C_P^{L^2} h \| f \| | \varphi |_{H^1(\Omega)}
\end{align*}
and therefore
\begin{align*}
\displaystyle
\|f - \Pi_h^0 f \|_{H^{-1}(\Omega)}
&\leq C_P^{L^2} h \| f \|.
\end{align*}
Gathering the above estimates, we have
\begin{align*}
\displaystyle
| u - u_h^{CR} |_{H^1(\mathbb{T}_h)}
&\leq c_{-1} \| f - \Pi_h^0 f \|_{H^{-1}(\Omega)} + c H \| \Pi_h^0 f \| + \frac{1}{d} h \| f \| + C_P^{L^2} h \| f \| \\
&\leq C_P^{L^2} h \| f \| + c H \| f \| + \frac{1}{d} h \| f \| + C_P^{L^2} h \| f \| \\
&\leq c H \| f \|.
\end{align*}
We again use \eqref{intro3} in the two-dimensional case and \eqref{RTCR16} in the three-dimensional case for the proof of \eqref{RTCR20}. That is, we have
\begin{align*}
\displaystyle
& \| \nabla u - \bar{\sigma}_h^{RT} \|_{L^2(\Omega)^d} \\
&\leq \| \nabla u - \nabla_h {u}_h^{CR} \|_{L^2(\Omega)^d} + \| \nabla_h {u}_h^{CR} - \nabla_h \bar{u}_h^{CR} \|_{L^2(\Omega)^d} + \| \nabla_h \bar{u}_h^{CR} - \bar{\sigma}_h^{RT} \|_{L^2(\Omega)^d} \\
&\leq \| \nabla u - \nabla_h {u}_h^{CR} \|_{L^2(\Omega)^d} + c h \| f \|.
\end{align*}
\qed
\end{pf*}
\end{comment}
\section{Numerical Results}
This section presents the results of numerical examples. Let $\Omega := (0,1)^3$. Let $u_h^L$ and $u_h^{CR}$ be the $\mathcal{P}^1$-Lagrange and $\mathcal{P}^1$-CR finite element solutions, respectively, for the model problem
\begin{align*}
\displaystyle
- \varDelta u &= 2y(1-y)z(1-z) + 2x(1-x)z(1-z) + 2x(1-x)y(1-y) \quad \text{in $\Omega$}, \\
u &= 0 \quad \text{on $\partial \Omega $},
\end{align*}
which has the exact solution $u = x(1-x)y(1-y)z(1-z)$.
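Indeed, since $\partial_{xx} \left[ x(1-x) \right] = -2$, and analogously for the $y$ and $z$ factors, a direct computation confirms
\begin{align*}
\displaystyle
- \varDelta u = 2y(1-y)z(1-z) + 2x(1-x)z(1-z) + 2x(1-x)y(1-y),
\end{align*}
and $u$ vanishes on $\partial \Omega$.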
\begin{figure}[htbp]
\includegraphics[width=7cm]{M8L22.png}
\caption{Mesh: $M=8$, $N=22$}
\label{fig2}
\vspace{0.5cm}
\includegraphics[width=8cm]{3dmesh2.png}
\caption{Elements}
\label{fig3}
\end{figure}
Let $M$ be the division number of each side of the bottom face and $N$ the division number of the height of $\Omega$ with $N \sim M^{\gamma}$ (see Fig. \ref{fig2}). The mesh consists of the two types of elements shown in Fig. \ref{fig3}.
If an exact solution $u$ is known, the errors $e_h := u - u_h$ and $e_{h/2} := u - u_{h/2}$ are computed numerically for the two mesh sizes $h$ and $h/2$. The convergence indicator $r$ is defined by
\begin{align*}
\displaystyle
r = \frac{1}{\log(2)} \log \left( \frac{\| e_h \|_X}{\| e_{h/2} \|_X} \right).
\end{align*}
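For instance, applying this to the $H^1$ errors in the first two rows of Table \ref{table2} below ($M = 4$ and $M = 8$) gives
\begin{align*}
\displaystyle
r = \frac{1}{\log(2)} \log \left( \frac{8.2569 \times 10^{-2}}{4.0629 \times 10^{-2}} \right) \approx 1.02,
\end{align*}
in agreement with the value reported in the table.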
We set $h := \frac{1}{M}$. The parameter $H$ then satisfies $H = \mathcal{O}( h^{2 - \gamma} )$. We compute the convergence orders with respect to the $H^1$ seminorm and the $L^2$ norm, with the normalized errors defined by
\begin{align*}
\displaystyle
&Err_h^L(H^1) := \frac{| u - u_h^L |_{H^1(\Omega)}}{\| \varDelta u \|}, \quad Err_h^{L}(L^2) := \frac{\| u - u_h^L \|}{\| \varDelta u \|}, \\
&Err_h^{CR}(H^1) := \frac{| u - u_h^{CR} |_{H^1(\mathbb{T}_h)}}{\| \varDelta u \|}, \quad Err_h^{CR}(L^2) := \frac{\| u - u_h^{CR} \|}{\| \varDelta u \|},
\end{align*}
for three cases: $\gamma = 1.5$, $\gamma = 1.9$ and $\gamma = 2.0$. In order to compute the above norms, we use the fifth-order fifteen-point numerical integration introduced in \cite{Kea86}. The results are given in Tables \ref{table1} and \ref{table2} for $\gamma = 1.5$, Tables \ref{table3} and \ref{table4} for $\gamma = 1.9$, and Tables \ref{table5} and \ref{table6} for $\gamma = 2.0$. Further, $N_p^L$ and $N_p^{CR}$ denote respectively the degrees of freedom for the $\mathcal{P}^1$-Lagrange finite element and the $\mathcal{P}^1$-CR finite element.
\begin{table}[htb]
\caption{Error of the $\mathcal{P}^1$-Lagrange finite element solution ($\gamma = 1.5$)}
\centering
\begin{tabular}{l|l|l|l|l|l|l|l|l} \hline
$M$& $N$ & $h$ & $H$ & $N_p^L$ & $Err_h^L(H^1)$ & $r$& $Err_h^L(L^2)$ & $r$ \\ \hline \hline
4& 8 & 2.50e-01 & 5.00e-01 &225 & 1.2043e-01& & 9.5321e-03& \\
8& 22 & 1.25e-01 & 3.54e-01 & 1,863 & 7.0318e-02 & 0.78 & 3.1646e-03 & 1.59 \\
16& 64 & 6.25e-02 & 2.50e-01 & 18,785 & 4.4662e-02 & 0.65 & 1.2570e-03 & 1.33 \\
32& 182 & 3.13e-02 & 1.77e-01 & 199,287 & 2.9479e-02 & 0.60 & 5.4477e-04 & 1.21 \\
\hline
\end{tabular}
\label{table1}
\end{table}
\begin{table}[htb]
\caption{Error of the $\mathcal{P}^1$-CR finite element solution ($\gamma = 1.5$)}
\centering
\begin{tabular}{l|l|l|l|l|l|l|l|l} \hline
$M$& $N$ & $h $ & $H$ & $N_p^{CR}$ & $Err_h^{CR}(H^1)$ & $r$ & $Err_h^{CR}(L^2)$ & $r$ \\ \hline \hline
4& 8 &2.50e-01 &5.00e-01 & 1,440 &8.2569e-02 & & 3.8242e-03 & \\
8& 22 & 1.25e-01 & 3.54e-01 & 14,912 & 4.0629e-02 & 1.02 & 8.8356e-04 & 2.11 \\
16& 64 & 6.25e-02 & 2.50e-01 & 168,448 & 2.0042e-02 & 1.02 & 2.0485e-04 & 2.11 \\
32& 182 & 3.13e-02 &1.77e-01 & 1,889,024 & 9.9579e-03 & 1.01 & 4.8960e-05 & 2.07 \\
\hline
\end{tabular}
\label{table2}
\end{table}
\begin{table}[htb]
\caption{Error of the $\mathcal{P}^1$-Lagrange finite element solution ($\gamma = 1.9$)}
\centering
\begin{tabular}{l|l|l|l|l|l|l|l|l} \hline
$M$& $N$ & $h $ & $H$ & $N_p^L$ & $Err_h^L(H^1)$ & $r$ & $Err_h^L(L^2)$ & $r$ \\ \hline \hline
4& 14 & 2.50e-01 & 8.71e-01 & 345 & 1.4873e-01 & & 1.4032e-02 & \\
8& 52 & 1.25e-01 & 8.12e-01 & 4,293 & 1.2167e-01 & 0.29 & 9.3061e-03 & 0.59 \\
16& 194 & 6.25e-02 & 7.58e-01 & 56,355 & 1.0919e-01 & 0.16 &7.4989e-03 & 0.31\\
32& 724 & 3.13e-02 & 7.07e-01 & 789,525 & 1.0128e-01 & 0.11 & 6.4558e-03 & 0.22 \\
\hline
\end{tabular}
\label{table3}
\end{table}
\begin{table}[htb]
\caption{Error of the $\mathcal{P}^1$-CR finite element solution ($\gamma = 1.9$)}
\centering
\begin{tabular}{l|l|l|l|l|l|l|l|l} \hline
$M$& $N$ & $h $ & $H$ & $N_p^{CR}$ & $Err_h^{CR}(H^1)$ & $r$ & $Err_h^{CR}(L^2)$ & $r$ \\ \hline \hline
4& 14 & 2.50e-01 & 8.71e-01 & 2,496 & 7.9756e-02 & & 3.2993e-03 & \\
8& 52 & 1.25e-01 & 8.12e-01 & 35,072 & 3.9708e-02 & 1.01 & 7.7177e-04 & 2.10 \\
16& 194 & 6.25e-02 & 7.58e-01 & 509,568 & 1.9814e-02 & 1.00 & 1.8781e-04 &2.04 \\
32& 724 & 3.13e-02 & 7.07e-01 & 7,508,480 & 9.9003e-03 & 1.00 & 4.6546e-05 & 2.01 \\
\hline
\end{tabular}
\label{table4}
\end{table}
\begin{table}[htb]
\caption{Error of the $\mathcal{P}^1$-Lagrange finite element solution ($\gamma = 2.0$)}
\centering
\begin{tabular}{l|l|l|l|l|l|l|l|l} \hline
$M$& $N$ & $h$ & $H$ & $N_p^L$ & $Err_h^L(H^1)$ & $r$ & $Err_h^L(L^2)$ & $r$ \\ \hline \hline
4& 16 & 2.50e-01 & 1.00 & 425 & 1.5862e-01 & & 1.5909e-02 & \\
8& 64 & 1.25e-01 & 1.00 & 5,265 & 1.4079e-01 & 0.17 & 1.2472e-02 & 0.35 \\
16& 256 & 6.25e-02 & 1.00 & 74,273 & 1.3597e-01 & 0.05 & 1.1646e-02 & 0.10 \\
32& 1,024 & 3.13e-02 & 1.00 & 1,116,225 & 1.3474e-01 & 0.01 & 1.1442e-02 & 0.03 \\
\hline
\end{tabular}
\label{table5}
\end{table}
\begin{table}[htb]
\caption{Error of the $\mathcal{P}^1$-CR finite element solution ($\gamma = 2.0$)}
\centering
\begin{tabular}{l|l|l|l|l|l|l|l|l} \hline
$M$& $N$ & $h $ & $H$& $N_p^{CR}$ & $Err_h^{CR}(H^1)$ & $r$ & $Err_h^{CR}(L^2)$ & $r$ \\ \hline \hline
4& 16 & 2.50e-01 & 1.00 & 2,848 & 7.9473e-02 & & 3.2264e-03 & \\
8& 64 & 1.25e-01 & 1.00 & 43,136 & 3.9647e-02 & 1.00 & 7.6153e-04 & 2.08 \\
16& 256 & 6.25e-02 & 1.00 & 672,256 & 1.9803e-02 & 1.00 & 1.8680e-04 & 2.03 \\
32& 1,024 & 3.13e-02 & 1.00 & 10,618,880 & 9.8984e-03 & 1.00 & 4.6458e-05 & 2.01 \\
\hline
\end{tabular}
\label{table6}
\end{table}
Observing the numerical results, the convergence indicators $r$ in each norm indicate that
\begin{align*}
\displaystyle
&| u - u_h^L |_{H^1(\Omega)} = \mathcal{O}(H), \quad \| u - u_h^L \| = \mathcal{O}(H^2), \\
&| u - u_h^{CR} |_{H^1(\mathbb{T}_h)} = \mathcal{O}(h), \quad \| u - u_h^{CR} \| = \mathcal{O}(h^2),
\end{align*}
where $H = \mathcal{O}( h^{2 - \gamma} )$. Meanwhile, the theoretical results are as follows:
\begin{align*}
\displaystyle
&| u - u_h^L |_{H^1(\Omega)} = \mathcal{O}(H), \quad \| u - u_h^L \| = \mathcal{O}(H^2), \\
&| u - u_h^{CR} |_{H^1(\mathbb{T}_h)} = \mathcal{O}(H), \quad \| u - u_h^{CR} \| = \mathcal{O}(H^2),
\end{align*}
if $\Omega$ is convex and $u \in H^2(\Omega) \cap H_0^1(\Omega)$. In these numerical examples, the CR finite element approximation is superior to the Lagrange finite element approximation on these anisotropic meshes. A theoretical explanation of this observation remains open.
\begin{acknowledgements}
This work was supported by JSPS KAKENHI Grant Number \\ JP16H03950. We would like to thank the anonymous referee for the valuable comments.
\end{acknowledgements}
\label{Introduction}
Frequency multiplication is an important phenomenon of nonlinear oscillators by which one obtains high-frequency outputs from low-frequency inputs. It is broadly used in communication circuits~\cite{quievy1990,sanderford1998,kool1999,salvi1999,chien2000,chattopadhyay2004}, optical experiments~\cite{Miller1964,Guyot-Sionnest1986,Shen1989,Campagnola2011,Chen2012}, as well as spintronic devices~\cite{Fiebig2002,Bibes2007,Bykov2012,Sebastian2013,Nemec2018,Leroux2020}. In spintronic experiments, different methods going beyond pure spintronics techniques are often employed to controllably create and manipulate magnons at different frequencies.
We predict that the nonlinear dynamics of topological magnetic structures provides an in-materio scalable pure spintronics-based alternative for frequency multipliers, see Fig.~\ref{fig:FrequencyMultiplier}.
The excitation modes of magnetic vortices~\cite{Shinjo2000,VanWaeyenberge2006,Dussaux2010}, skyrmions~\cite{Nagaosa2013,Buttner2015,Fert2017,Lonsky2020}, and droplets~\cite{Mohseni2013,Macia2014,Iacocca2014,Sulymenko2018} have relevant applications in magnetic tunnel junctions~\cite{Dussaux2010,Nishimura2002}, racetrack memories~\cite{Tomasello2014,Carpentieri2015}, microwave generators~\cite{Dussaux2010,Ruotolo2009}, and non-conventional computing~\cite{Torrejon2017,Bourianoff2018,Finocchio2019,Pinna2018}.
Excitation modes of ferromagnetic topological textures are bound magnon modes~\cite{Vogt2011,Macia2014,Garst2017,Rodrigues2017a,Prokopenko2019,Mohseni2020,Lonsky2020}. They are typically classified by an integer number $n$ associated with a quantized angular momentum: i) breathing modes ($n=0$)~\cite{Mohseni2013,Kim2014d,McKeever2018,Penthorn2019}, ii) gyration modes ($|n|=1$)~\cite{VanWaeyenberge2006,Schwarze2015,Schutte2014}, and iii) higher order modes ($|n|>1$)~\cite{Vogt2011,Schutte2014,Rodrigues2017a,Kravchuk2017a}, see Fig.~\ref{fig:ExcitationModes}.
The sign of $n$ determines the rotation direction, clockwise or counterclockwise. So far, most applications of skyrmions rely on linearly approximating the dynamics. These excitation modes, however, are fundamentally non-linear. This implies, for example, an amplitude dependence of the excitation mode frequencies, as well as the existence of harmonic generation, i.e. the excitation of integer multiples of applied frequencies.
In general, the nonlinearity of magnetization dynamics is associated with many-magnon scattering~\cite{Fleury1968,Hurben1998,Zakeri2007,Schultheiss2009,Schultheiss2012,Chumak2014,Okano2019}. An example of the use of nonlinear phenomena creating multiple magnons is parallel pumping~\cite{Melkov2001,Sandweg2011,Ando2011c,Edwards2012,Bauer2015,Bracher2017a,Verba2018}, which has been demonstrated both theoretically and experimentally. Parallel pumping has been used to generate magnon Bose-Einstein condensates, for example~\cite{Giamarchi2008,Serga2014,Bracher2017a,Okano2019}. In this nonlinear process, associated with parametric excitation, one can excite a certain magnon mode by applying an AC field with twice the corresponding eigenfrequency~\cite{Zakharov1975,Bryant1988,Bertotti2001}. The high frequency of the required driving field as well as the resonant magnon excitation at the driving field's frequency are caveats of the experimental implementation of parallel pumping. Moreover, the resonance at the frequency of the driving field is very sensitive to defects~\cite{Melkov2001,Hals2010}. Interactions of magnons with topological magnetic textures provide new possibilities for controlled and robust many-magnon scattering manipulation~\cite{Sheka2001,Schutte2014,Iwasaki2014,Finocchio2015,Garst2017,Lonsky2020,Tatara2020,Seki2020,Wang2020}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Figure1.pdf}
\caption{Sketch of an example of frequency multiplication using a magnetic skyrmion. Perturbations with a fraction of the eigenfrequency $\omega_{n}$ can excite the corresponding eigenmode of the skyrmion. Moreover, when the input perturbations are created by an AC magnetic field, the amplitude of the excited eigenmode can be larger than the amplitude of the input magnon.}
\label{fig:FrequencyMultiplier}
\end{center}
\end{figure}
In this work, we propose the excitation of topological magnetic textures via frequency multiplication. Applying a perturbation with a fraction of an eigenfrequency of a bound mode, leads to a resonance at the corresponding eigenfrequency, see Fig.~\ref{fig:FrequencyMultiplier}. In particular, we obtain the excitation of the breathing $n=0$ mode of isolated skyrmions by applying AC magnetic fields, both in-plane and out-of-plane, with half and a third of the corresponding eigenfrequency. We also analyze the amplitude dependence of the excited eigenmode in terms of the amplitude of the applied AC field and the damping parameter.
The resonance of the bound eigenmodes with perturbations at fractions of the eigenfrequencies presents several advantages for applications:
i) above a certain threshold amplitude for the applied field, the excitation with a fraction of the eigenfrequency is more efficient than with the same frequency, i.e.\ the amplitude of the eigenmode is larger when applying a fraction of the eigenfrequency than when applying the eigenfrequency itself; ii) the perturbations with fractional frequency are not eigenstates of the system and thus decay quickly away from the topological texture, not contributing significantly to instabilities; iii) the eigenfrequencies can be tuned by changing the material parameters, for example by temperature changes, or by applying constant magnetic fields. This means that for a given frequency, it is possible to design a topological texture that allows for a resonant frequency multiplication.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Figure5.pdf}
\caption{Illustrative example of the general principles of excitation by frequency multiplication. We show the excitation of a skyrmion by magnetic fields with amplitude $B=5\,\mathrm{mT}$ at two different driving frequencies $\omega_{\textrm{app}}$. Due to the localized nonlinear potential of the skyrmion configuration, both driven excitations generate harmonics corresponding to integer multiples of the frequency. In the case in which a harmonic coincides with an eigenstate, there is a resonance and the corresponding multiple has a higher amplitude.}
\label{fig:IntuitivePic}
\end{center}
\end{figure}
{This manuscript is organized as follows. In Sec.~\ref{sec:Model} we discuss the general model, which we use to predict the possibility of exciting eigenmodes of magnetic topological objects by fractions of their eigenfrequencies. We describe the general case of frequency multiplication and the underlying physical behavior and assumptions. The theory is valid for topological objects in general and is expected to apply independently of the particular local topological magnetic configuration. In Sec.~\ref{sec:Skyrmion} we present frequency multiplication for the excitation of skyrmions as an example. We show that it is possible to excite the skyrmion modes, including higher order modes that have not yet been experimentally studied. We show that excitation by a fraction of the eigenfrequency can be more efficient than the usual linear excitation. Moreover, we study via micromagnetic simulations the dependence of the frequency multiplication efficiency on the amplitude of the driving force and on the damping. We conclude in Sec.~\ref{sec:Discussion} by emphasizing the advantages of frequency multiplication and discussing its difference compared to parametric excitation. Finally, we propose applications for the method described in this manuscript.}
\section{General Model}
\label{sec:Model}
{Frequency multiplication is a well known phenomenon in non-linear optics where it is also named higher harmonic generation~\cite{Franken1961,Lupke1999,Downer2001}.} It relies on the non-linear properties of the systems and can be explained in a general picture. Considering a field $\phi$ of a nonlinear system, the dynamics of small perturbations $\tilde{\phi}(t) \ll 1$ of the static background $\phi_{0}$ can be expressed as the expansion
\begin{equation}\label{eq:nonlinear}
\frac{d\tilde{\phi}}{dt} \approx \mathcal{L}_{1}(\phi_{0})\tilde{\phi}+ \mathcal{L}_{2\,}(\phi_{0})\tilde{\phi}^2 + \dots + \mathcal{L}_{p}(\phi_{0})\tilde{\phi}^{p} + \cdots\,.
\end{equation}
The first term on the right, $\mathcal{L}_{1}\tilde{\phi}$, corresponds to a linear approximation for the nonlinear system and provides the eigenstates of the system with frequencies $\omega_{n}$. Terms $\mathcal{L}_k$ with $k>1$ correspond to interactions between the perturbations. They renormalize the value of the eigenfrequencies $\omega_{n}$ and lead to amplitude-dependent frequencies. If we consider a perturbation with a fraction of an eigenfrequency, such as $ \tilde{\phi}(t) \approx \tilde{\phi}_{\omega_{n}/2}\cos(\omega_{n}t/2)$, the quadratic term generates a contribution $\tilde{\phi}(t)^2 \propto \tilde{\phi}_{\omega_{n}/2}^{2}\cos(\omega_{n}t)$ which corresponds to a solution of the linear term. This principle extends to perturbations with frequency $\omega_{n}/m$ for integer $m>2$ and reveals why fractions of the eigenfrequency may lead to the excitation of the corresponding eigenmode. {In general, the amplitude of the eigenmode $\omega_{n}$ grows with the $m$th power of the driving amplitude with frequency $\omega_{n}/m$. Moreover, the expansion \eqref{eq:nonlinear} is independent of the amplitude of the perturbation $\tilde{\phi}$.}
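As an illustration of this general mechanism, consider a minimal toy model that is not specific to magnetization dynamics: a damped oscillator with a quadratic nonlinearity, driven at half its natural frequency, develops a spectral peak at the natural frequency itself. The following numerical sketch (with purely illustrative parameter values, not related to any material discussed below) makes this explicit:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: x'' + 2*g*x' + w0^2*x + eps*x^2 = F*cos(w0/2*t).
# The quadratic term converts two quanta at w0/2 into one at w0,
# resonantly exciting the eigenmode, cf. the expansion above.
w0, g, eps, F = 1.0, 1e-2, 0.5, 0.05

def rhs(t, y):
    x, v = y
    return [v, -2*g*v - w0**2*x - eps*x**2 + F*np.cos(0.5*w0*t)]

t = np.linspace(0.0, 4000.0, 2**17)
sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8)

x = sol.y[0][t.size//2:]                  # discard the transient
w = 2*np.pi*np.fft.rfftfreq(x.size, t[1]-t[0])
p = np.abs(np.fft.rfft(x))**2
for wk in (0.5*w0, w0):                   # drive and its harmonic
    print(wk, p[np.argmin(np.abs(w - wk))])
\end{verbatim}
Although the system is driven only at $\omega_{0}/2$, its spectrum develops a pronounced peak at $\omega_{0}$, in direct analogy to the excitation of the skyrmion eigenmodes discussed below.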
In particular, the dynamics of the unitary magnetization $\vect{m} = \vect{M}/M_{s}$, where $M_s$ is the saturation magnetization, is well described by the Landau-Lifshitz-Gilbert (LLG) equation~\cite{Gilbert2004}
\begin{equation}
\frac{d\vect{m}}{dt} = - \frac{\gamma}{M_{s}}\, \vect{m} \times \vect{B}_{\mathrm{eff}} + \alpha\, \vect{m} \times \dot{\vect{m}}.
\label{eq:model:LLG}
\end{equation}
Here $\gamma$ is the gyromagnetic ratio, and $\vect{B}_{\mathrm{eff}}=-\delta E[\vect{m}] / \delta\vect{m}$ is the effective magnetic field in a system with total magnetic energy density $E[\vect{m}]$. {The non-linearity of Eq.~\eqref{eq:model:LLG} allows for the existence of solitons corresponding to localized stable non-collinear magnetic configurations characterized by topological properties. Moreover, low frequency perturbations to these topological configurations are bound to the vicinity of the non-collinear structure and behave as localized traveling waves~\cite{Vogt2011,Macia2014,Garst2017,Rodrigues2017a,Mohseni2020,Lonsky2020}. The non-linear potential associated with these topological objects gives rise to frequency multiplication of driven perturbations. If we consider perturbations $\delta\vect{m}$ of the topological configuration, given by $\vect{m}_{0}$, we can expand the LLG Eq.~\eqref{eq:model:LLG} as Eq.~\eqref{eq:nonlinear} including higher orders of $\delta\vect{m}$.}
{The exact expansion depends on the configuration $\vect{m}_{0}$ and its symmetries. From an intuitive picture, however, one can derive some general principles taking into account the traveling perturbations in the non-collinear region of the topological object, see Fig.~\ref{fig:IntuitivePic}. First, we consider that any low frequency perturbation $\delta \vect{m}$ around the non-collinear region of the configuration $\vect{m}_{0}$ behaves as an underdamped traveling wave given by}
\begin{align}
\delta \vect{m} = \sum_{n}\Big(\phi_{1\,n}(r, \psi, t)\left(\vect{m}_{0}\times\hat{\vect{n}}\right)\notag\\
+ \phi_{2\,n}(r, \psi, t)\left((\vect{m}_{0}\times\hat{\vect{n}})\times\vect{m}_{0}\right)\Big),
\end{align}
{where $\phi_{1\,n},\phi_{2\,n} \ll 1$ are dynamically conjugated and can be described in terms of collective coordinates~\cite{Malozemoff1979,Clarke2008,Rodrigues2017}. Here, $r,\psi$ are polar coordinates, and $\hat{\vect{n}}$ corresponds to the direction of the field-polarized background. If the polar and time dependence of $\phi_{1\,n},\phi_{2\,n}$ are given by the combination $\omega_{n}(t - t_{0}) - n\psi$, with $n$ an integer, the perturbation corresponds to the eigenmode of the topological excitation with eigenfrequency $\omega_{n}$. Perturbations propagate in the localized nonlinear potential of the topological structure as underdamped traveling waves. The frequency of a perturbation depends on its amplitude and on the spatial landscape of the potential. Since the damping has a viscous nature, faster perturbations experience a higher damping.}
{Traveling waves propagating in nonlinear potentials naturally generate harmonics corresponding to excitations with integer multiples of the original frequency~\cite{Franken1961,Lupke1999,Downer2001}. When a harmonic of the driven perturbation coincides with a natural eigenfrequency of the system, there is a resonance and the corresponding eigenmode is excited, see Fig.~\ref{fig:IntuitivePic}. Since excitations with lower frequency are less damped, one expects that, in a certain regime of driving-field amplitudes and damping values, the excitation by frequency multiplication is more efficient than the linear resonance of the eigenmode.}
{The basic assumption for the frequency multiplication is the existence of a localized non-linear potential associated with the non-collinear topological structure. The frequency multiplication happens for any driven perturbation. For this reason, there is no threshold for the driving field. Increasing the amplitude of the driving perturbation, however, leads to a higher efficiency of the non-linear process. In addition, selection rules may be derived from the spatial distribution of the driven perturbation. We consider a Fourier expansion of the perturbation in terms of the eigenmodes of the topological object, such as}
\begin{equation}
\phi_{i}(r, \psi, t) = \sum_{n}\phi_{i\,n}(r)\phi_{n}(r, n\psi - \omega_{n}t),
\end{equation}
{where $\phi_{n}$ are the eigenmodes. We claim that a perturbation can excite via frequency multiplication any mode with non-vanishing $\phi_{i\,n}(r)$, see Fig.~\ref{fig:ExcitationModes} for an example. Note that increasing the driving force may lead to an increase of amplitudes $\phi_{i\,n}(r)$ which are almost vanishing for smaller driving forces.}
{A quantitative analysis depends on the specific magnetization dynamics and the underlying topological object. In the next section we consider the skyrmion as an example to demonstrate frequency multiplication in magnetic topological objects.}
\section{Example: magnetic skyrmions}
\label{sec:Skyrmion}
While the results above are quite general, in this manuscript we focus on the excitation modes of isolated skyrmions stabilized by perpendicular anisotropy~\cite{Rohart2013,Kravchuk2017a} without loss of generality.
The excitation modes of skyrmions have been studied by analytical and numerical methods which considered both linearized equations of motion as well as full micromagnetic simulations~\cite{Lin2014,Iwasaki2014,Schutte2014,Garst2017,Kravchuk2017a,Rodrigues2017a}. The experimental excitation and detection of these internal modes, in particular modes with $|n| > 1$, are still a challenge~\cite{Garst2017,Pollath2019,Penthorn2019,Finco2020,Lonsky2020}. This is because some modes only couple at the linear level with fields that obey certain spatial distributions and symmetries~\cite{Garst2017,Beg2017}. We demonstrate that bound modes can be excited by homogeneous alternating fields with fractions of the corresponding eigenfrequencies. Since the fractional excitation is more efficient for certain field amplitudes, this provides a path for the experimental excitation and detection of all skyrmion modes.
To obtain explicit results for an anisotropy stabilized isolated skyrmion we considered the following model
\begin{equation}
\begin{split}
E[\vect{m}] = \int d V \,\,
\big[ A \,(\nabla \vect{m})^2 - D \,\vect{m} \cdot \left(\left(\hat{\vect{z}}\!\times\!\nabla\right)\!\times\!\vect{m}\right) \\
- K \,(m_z^2-1) \big],
\end{split}
\label{eq:model:energy}
\end{equation}
where $A$ is the magnetic stiffness, $D$ characterizes the strength of the interfacial Dzyaloshinskii-Moriya (DM) interaction, and $K$ is the strength of the effective uniaxial anisotropy which incorporates a correction due to a local approximation of the magnetostatic interactions~\cite{Bogdanov1994}. We consider $D < D_{c} = 4\sqrt{AK}/\pi$ such that the ferromagnetic state is the ground state of the system~\cite{Rohart2013,Kravchuk2017a,McKeever2018}. The energy contribution due to an external applied magnetic field $B_{\textrm{ext}}$ {is given by the Zeeman term}
\begin{equation}\label{eq:Zeeman}
E_{B_{\textrm{ext}}}[\vect{m}] = \mu_{0} B_{\textrm{ext}}(t) \int d V \,\, \hat{\vect{n}}\cdot\vect{m}(\vect{x},t),
\end{equation}
where $\mu_{0}$ is the vacuum permeability.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{Figure2.pdf}
\caption{Excitation modes of skyrmions. a) Sketch of the decomposition of a smooth deformation into the internal skyrmion modes. The magnetization inside and outside the boundary have opposite directions, going from black to white.
$R_{0}$ is the ground state radius of the skyrmion, and, $r_{n}$ characterizes the bound modes.
The mode $n = 0$ represents the breathing mode. Modes with $|n|>0$ rotate (counter-)~clockwise with an amplitude-dependent frequency.
b) Frequency spectrum of a Néel skyrmion as a function of rescaled DMI strength $D/D_c$ (top panel) and detailed analysis for a fixed DMI strength $D/D_c = 0.86$ (lower panel). The power spectrum in the lower panel was obtained by performing a Fourier transform of the magnetization dynamics~\cite{Kumar2017}.
}
\label{fig:ExcitationModes}
\end{figure}
For a skyrmion modeled by Eq.~\eqref{eq:model:energy}, the spin wave eigenstates are given by the bound modes at the skyrmion with frequencies $\omega_{n}$ as well as a continuous distribution of magnon modes above the anisotropy gap with frequencies $\omega = (2\gamma/M_{s})(K + A k^2)$ where $k$ is the wave number~\cite{Kravchuk2017a}.
In Fig.~\ref{fig:ExcitationModes}b) we show the magnon spectrum obtained from micromagnetic simulations. Beyond the eigenmodes obtained from linearization calculations~\cite{Kravchuk2017a} we are able to identify the second harmonic of the breathing mode and a gyrotropic mode. It is important to notice that, for anisotropy stabilized skyrmions (at zero magnetic field), most modes exist below the magnon gap only for $D$ close to $D_{c}$~\cite{Kravchuk2017a,Rodrigues2017a}.
The micromagnetic simulations of this manuscript were performed with MuMax$3$~\cite{Vansteenkiste2014} using the following parameters:
$M_s= 1.1 \times 10^6\,\mathrm{A/m}$, $A= 1.6 \times 10^{-11}\,\mathrm{J/m}$, $K=5.1 \times 10^5\,\mathrm{J/m^3}$.
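Two derived quantities used in the discussion follow directly from these parameters. The short sketch below evaluates them; the value of the gyromagnetic ratio is our assumption (free-electron value), since only $M_s$, $A$, and $K$ are specified above:
\begin{verbatim}
import numpy as np

Ms = 1.1e6            # saturation magnetization, A/m
A  = 1.6e-11          # exchange stiffness, J/m
K  = 5.1e5            # effective uniaxial anisotropy, J/m^3
gamma = 1.760859e11   # assumed gyromagnetic ratio, rad/(s T)

# Critical DMI strength D_c = 4*sqrt(A*K)/pi (see text above).
Dc = 4.0*np.sqrt(A*K)/np.pi
print("D_c   = %.2f mJ/m^2" % (Dc*1e3))    # ~3.6 mJ/m^2

# Magnon gap from omega = (2*gamma/Ms)*(K + A*k^2) at k = 0.
f_gap = 2.0*gamma*K/Ms/(2.0*np.pi)
print("f_gap = %.1f GHz" % (f_gap/1e9))    # ~26 GHz
\end{verbatim}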
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{Figure3.pdf}
\caption{Amplitude of a) the breathing ($\omega_0$) and b) the elliptical ($\omega_2$) mode as a function of the frequency of the in-plane applied field with $B_{\mathrm{ext}}=0.05\, \mathrm{T}$ and $\alpha =10^{-3}$. Fractions of the corresponding frequency can still strongly excite the desired mode. }
\label{fig:Fractions}
\end{center}
\end{figure}
To reveal the resonance with a fraction of the eigenfrequency, we computed the amplitude of the eigenmodes with frequency $\omega_{n}$ as a function of the applied frequency $\omega$ for an in-plane magnetic field excitation.
{Perturbations of an isolated circular skyrmion driven by out-of-plane magnetic fields are radially symmetric and only couple to the breathing mode. An out-of-plane field applied to a skyrmion without radial symmetry, or an in-plane driving field, generates perturbations that can excite any eigenmode.}
In Fig.~\ref{fig:Fractions} we show the results of exciting a) the breathing $n=0$ and b) the elliptical $n=2$ modes at different applied frequencies but same amplitude of the in-plane applied field.
We noticed that the eigenmodes are excited by applying fractions of the corresponding frequency, i.e.\ $\omega = \omega_{n}/m$ where $m \in \mathbb{Z}$.
The same behavior was obtained for the triangular $n=3$ mode. The observation of resonance peaks at integer fractions of the corresponding eigenmodes is an important result of this manuscript.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Figure4.pdf}
\caption{Plots of the amplitude of the excited modes in terms of a) the amplitude of the applied field with $\alpha = 10^{-3}$ and b) the damping parameter for an out-of-plane AC magnetic field $B_{\mathrm{ext}}=0.003\,\mathrm{T}$. In a) the straight lines with slope $2$ ($3$) indicate the growth with a power $2$ ($3$) for the second (third) harmonic generation. We notice that, above a certain amplitude of the applied field, the breathing mode is more excited than the mode with the same frequency as the perturbation. {The other two peaks in the inset of the third harmonic generation correspond to the second and fourth harmonic generation. They do not resonate with any eigenmodes of the skyrmion.}}
\label{fig:AmplitudeDependence}
\end{center}
\end{figure}
We focused on the excitation of the breathing mode by an out-of-plane field in order to analyze the dependence of the eigenmode amplitude $r_{n}$ on the applied field amplitude $B_{\textrm{ext}}$ and on the material damping $\alpha$~\footnote{Notice that an out-of-plane AC field can only excite the breathing mode of an isolated radially symmetric skyrmion. Exciting the other modes requires a non-radially symmetric perturbation. This can either be done by other means or by out-of-plane fields accompanied by an additional radial symmetry breaking mechanism such as introducing temperature fluctuations.}.
We applied out-of-plane AC magnetic fields with half and one third of the breathing mode frequency $\omega_0$.
In Fig.~\ref{fig:AmplitudeDependence}a), as a function of the applied AC magnetic field strengths $B_{\mathrm{ext}}$ we show the amplitudes, extracted from the frequency spectrum (see right panel), of the breathing mode $r_0$ and the forced perturbation $\tilde{m}$ at the AC field frequency as a log-log plot.
The top (bottom) panel corresponds to $\omega_0/2$ ($\omega_0/3$) for a fixed damping constant $\alpha= 10^{-3}$.
While $\tilde{m}$ grows linearly with $B_{\mathrm{ext}}$, the amplitude of the breathing mode grows with a power $2$ for $\omega_{0}/2$ and with a power $3$ for $\omega_{0}/3$, in a certain range of the applied field, as expected for second and third harmonic generation.
As the amplitude of the breathing mode grows beyond the linear approximation, the power law coefficient reduces due to further scattering of the magnons.
Furthermore, as another important result of this manuscript, we find that above a certain amplitude of the applied field, the breathing mode is more excited than $\tilde{m}$.
In Fig.~\ref{fig:AmplitudeDependence}b), we show $r_0$ and $\tilde{m}$ as a function of the damping parameter $\alpha$ for $B_\mathrm{ext}= 0.003 \, \mathrm{T}$. Analogous to a), the left (right) panel corresponds to an applied AC magnetic field with frequency $\omega_0/2$ ($\omega_0/3$).
We see that the forced perturbation amplitude $\tilde{m}$ is independent of the damping, as expected.
The breathing mode $r_0$, however, decreases as a function of $\alpha$. The Gilbert dissipation damps the energy flow from the forced perturbation to the resonant mode and reduces the amplitude of the resonant mode. Thus frequency multiplication is more efficient for systems with small damping.
\section{Discussion}
\label{sec:Discussion}
From the results above, we notice that the phenomenon of frequency multiplication might resemble the well-known parametric excitation~\cite{Zakharov1975,Bryant1988,Bertotti2001}. The main difference, however, is that the resultant frequency is a multiple and not a fraction of the applied frequency. This crucial difference presents considerable advantages of frequency multiplication for two main reasons: i) lower frequencies are easier to produce and guide experimentally, and ii) there are no magnons that are resonantly excited at the pumping frequency since the applied frequency is well below the magnon gap. The latter presents an experimental obstacle for parametric pumping since directly excited magnons at the pumping frequency are hard to avoid in a real experimental situation. These magnons can interact with the parametrically excited ones and, thus, hinder the ability to fully control the excited modes. Moreover, the excitation and scattering of magnons with higher frequency during parametric excitation are very sensitive to sample inhomogeneities.
To summarize, we have proposed the excitation of the eigenmodes of topological magnetic textures by frequency multiplication based on the general non-linear properties of magnetization dynamics. While we have explicitly demonstrated this effect by means of micromagnetic simulations on the excitation modes of isolated magnetic skyrmions, our theoretical analysis reveals its universal behavior, i.e.\ being independent of microscopic details of the magnetic structures as well as the source of the perturbation.
Not only does this provide a new method to excite the eigenmodes by applying lower driving frequencies, this frequency multiplication mechanism propounds novel applications in spintronics devices such as an in-material frequency multiplier for magnonic applications~\cite{Dussaux2010,Finocchio2015,Wagner2016,Hamalainen2018,Hollander2018}. In particular, bound modes not excited via linear resonance and which are weakly coupled (e.g. by dipolar stray fields or spin-wave excitations) serve as a building block for so-called parametrons, a computing scheme which has seen a strong revival in the recent discussions of oscillator-based computing~\cite{Csaba2020}. Furthermore, the tunability of the magnetic textures by altering the material properties, temperature and applied fields makes them very versatile.
\section{Acknowledgement}
\label{sec:Acknowledgement}
We thank V.K. Bharadwaj, Ar. Abanov, O. Gomonay and T. Br\"acher for discussions.
We acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): TRR 173 – projects 268565370 (B01) and 268565370 (B12),
SPP 2137 – projects 403233384 and 403512431 as well as the Emmy Noether Project of K.E.S., project number 320163632.
D.R. and K.E.S acknowledge funding from the Emergent AI Center funded by the Carl-Zeiss-Stiftung and from JGU TopDyn.
J.N. acknowledges support from the Max Planck Graduate Center.
This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
\begin{document}
\label{firstpage}
\maketitle
\begin{abstract}
We have developed an analytic model to describe coupling of plasma and
neutral fluids in the partially ionized heliosphere plasma
medium. The sources employed in our analytic model are based on a
$\kappa$-distribution as opposed to the Maxwellian distribution
function. Our model uses the $\kappa$-distribution to analytically
model the energetic neutral atoms that result in the heliosphere
partially ionized plasma from charge exchange with the protons and
subsequently produce a long tail which is otherwise not describable by
the Maxwellian distribution. We present our analytic formulation and
describe major differences in the sources emerging from these two
distinct distributions.
\end{abstract}
\section{Introduction}
With the Voyager spacecraft now in the heliosheath (see Fig 1), the in
situ character of the solar wind plasma can be explored. Surprisingly,
the supersonic solar wind plasma, probed by the ACE/WIND/Cluster
spacecraft near 1 AU (Astronomical Unit), depicts an entirely
different character when contrasted with the Voyager 1 and 2
observations in the heliosheath region (typically beyond 84 AU)
(Goldstein et al 1995, Burlaga et al 2005, 2006, 2008, 2009; Stone et
al 2005; Decker et al 2005; Richardson et al 2008, Zank 1999). Figure
1 shows an idealized cartoon reflecting our current understanding
based on theory, simulations and modeling together with
observations. Little is known about the physical processes that govern
the intricate multiscale (associated with waves, structures,
turbulence) interactions outlined in Fig 1. The supersonic solar wind
(SW) plasma interacts with local interstellar medium (LISM) neutral
hydrogen (H) gas through charge exchange leading to the creation of
energetic pick up ions (PUI). The SW is decelerated, compressed and
heated at a shock, the termination shock (TS), across which it
develops small scale turbulence (Shukla 1978, Shaikh \& Zank 2008,
2010, Shaikh 2010, Shaikh et al 2006, Mendonca \& Shukla 2007). In
the heliosheath region, nonlinear structures such as magnetic
holes and humps are found (Burlaga et al 2008, 2009). The SW protons
continue to interact with neutrals via charge exchange to produce
a significant number of pick up ions. Both in the supersonic and
subsonic SW (at least in the outer heliosphere) the pressure
associated with the PUIs exceeds that of the solar wind protons
(Burlaga et al 2009). Both Voyager 1 and 2 are reporting a number of
puzzling observations that were not anticipated by existing analytic
or simulation models. An intriguing example is that of magnetic field
distribution. The latter is lognormal in supersonic solar wind,
whereas it exhibits a Gaussian distribution in subsonic
heliosheath. Surprisingly, Voyager 2 indicates that the magnetic field
distribution is lognormal in the subsonic heliosheath plasma. The
source of this apparent discrepancy in the magnetic field
distributions reported by Voyagers 1 \& 2 in the heliosheath is not
known. Another example is that of plasma in heliosheath which is
compressed, turbulent and it is an admixture of waves, fluctuations
and magnetic structures (magnetic hole/hump, see section 2 for
details) (Burlaga 2006, 2009). The effect of PUIs on the formation and
evolution of nonlinear magnetic structures, waves and fluctuations in
outer heliosphere and the heliosheath plasma are an open
question. These issues continue to pose severe challenges to our
understanding of the heliosheath plasma.
\begin{figure}[t]
\includegraphics[width=13cm]{fig1.ps}
\caption{Schematic overview of different regions in the global heliosphere. The solar
wind emanating from the Sun propagates outward and interacts with
partially ionized interstellar gas predominantly via charge exchange,
and creates pick up ions (PUIs). At the termination shock (TS), the
supersonic SW is decelerated, heated, and compressed, becoming subsonic in
the heliosheath, where it again interacts with interstellar neutrals via
charge exchange before it reaches the heliopause (HP). The subsonic SW
flows down into the heliotail. Magnetic structures such as magnetic
holes/humps are observed in heliosheath plasma. During its journey
from the Sun to the HP, the solar wind plasma develops a multitude of
length and time scales that interact with the partially ionized
interstellar gas, TS, and nonlinear structures develop in a complex
manner. }
\label{fig1}
\end{figure}
Although there exists a wealth of in situ measurements by the Voyager
spacecraft, they do not provide much information about the global
structure of the heliosphere interactions. For instance, the coupling
of plasma protons with the interstellar neutral atoms has
traditionally been done through Maxwellian sources. However, careful
studies have revealed that the distribution of neutral hydrogen (which,
after charge exchange, becomes energetic neutral atoms, ENAs) does
not exactly follow a Maxwellian function. Recently, Prested et
al. (2008) used a $\kappa$-distribution for the ENA parent population
to obtain ENA maps. The advantage of using this distribution, as
opposed to a Maxwellian, is that it has a power-law tail, and is
therefore capable of producing ENAs at suprathermal energies. There
is now increasing consensus that the plasma and neutral
fluids follow a nearly kappa distribution (Heerikhuisen et al 2008).
A realistic modeling of the heliosheath plasma, one that includes a
self-consistent treatment of the PUIs, is therefore critically
important and essential to our understanding of the highly variable
heliosphere plasma. The central them of this paper is therefore to
model complex coupling between plasma and neutral fluids via
$\kappa$-distribution as oppose to the Maxwellian distribution. Note
here that the $\kappa$ distribution modifies the charge exchange
interactions in fluid equations. The kappa-distribution emphasizes
charge exchange by high temperature protons. We will investigate the
effects of the $\kappa$-distribution in heliosphere plasma turbulence
for single fluid plasma-neutral coupled turbulence models.
In section 2, we describe the $\kappa$-distribution for the neutral and
plasma fluids and derive sources for the complex coupling interactions
between the two distinct fluids. Section 3 describes the complete source
terms for the coupling interactions. Finally, a summary is presented
in section 4.
\section{Plasma neutral coupling via $\kappa$-distribution source}
The charge exchange terms can be obtained from the Boltzmann transport
equation that describes the evolution of a distribution
function $f({\bf r},{\bf v},t)$ in a six-dimensional phase space
defined respectively by the position and velocity vectors ($x, y, z,
v_x, v_y, v_z$) at each time $t$. Here we follow Pauls et al. (1995) in
computing the charge exchange terms, based on $\kappa$-distribution
functions, from various moments of the Boltzmann equation. The
Boltzmann equation for the proton distribution contains a source term
proportional to the neutral distribution function $f_n$ and a loss
term proportional to the proton distribution function $f_p$:
\begin{eqnarray}\label{prodp}
&&\frac{\partial }{\partial t}f_p({\bf r},{\bf v},t)+{\bf
v}\cdot\nabla f_p({\bf r},{\bf v},t)+ \frac{{\bf
F}}{m}\cdot\nabla_{v}f_p({\bf r},{\bf v},t)=
\nonumber\\
&&f_n({\bf r},{\bf v},t)\int f_p({\bf r},{\bf v_p},t)\left|{\bf
v_p}- {\bf v}\right|\sigma_{ex}(v_{rel})d^3{\bf v_p}-\nonumber\\
&&f_p({\bf r},{\bf v},t)\int f_n({\bf r},{\bf v_n},t)\left|{\bf
v_n}- {\bf v}\right|\sigma_{ex}(v_{rel})d^3{\bf v_n};
\end{eqnarray}
where, $\sigma_{ex}(v_{rel})$ is the charge exchange cross section.
The charge exchange parameter has a logarithmically weak dependence on
the relative speed ($v_{rel}=|{u}_p- {v}_n|$) of the neutrals and the
protons through $\sigma_{\rm ex} = [(2.1-0.092 \ln (v_{rel})) \times 10^{-7}\,{\rm cm}]^2$
(Fite et al 1962). This cross-section is valid as long as the energy does not
exceed $1\,{\rm eV}$, which usually is the case in the inner/outer
heliosphere. Beyond $1\,{\rm eV}$, this cross-section yields a higher
neutral density. This issue is not applicable to our model and hence
we will not consider it here. The density, momentum, and energy of
the thermally equilibrated Maxwellian proton and neutral fluids can be
computed from \eq{prodp} by using the zeroth, first and second
moments $\int f_{\xi} d^3\xi, \int m{\bf \xi} f_{\xi} d^3\xi$ and
$\int m\xi^2/2 f_{\xi} d^3\xi$ respectively, where $\xi={u}_p$ or
${v}_n$. Since charge exchange conserves the density of the proton and
neutral fluids, there are no sources in the corresponding continuity
equations. We, therefore, need not compute the zeroth moment of the
distribution function. Computing directly the first moment
from \eq{prodp}, we obtain the neutral fluid momentum equation.
A similar evolution equation can be written for the
neutral distribution function $f_n({\bf r},{\bf v_n},t)$. We
consider the case where both $f_p({\bf r},{\bf v_p},t)$ and
$f_n({\bf r},{\bf v_n},t)$ are given by a $\kappa$ distribution of
the following type:
\begin{equation}\label{kappap}
f_p({\bf r},{\bf v_p}) = \frac{n_p}{\pi^{\frac{3}{2}}v_{T_p}^3}
\frac{\Gamma(\kappa+1)} {\kappa^{\frac{3}{2}}\Gamma\left(\kappa-
\frac{1}{2}\right)} \left[1+\frac{({\bf v_p}-{\bf U_p})^2}{\kappa
v_{T_p}^2}\right]^{-(\kappa +1)};
\end{equation}
\begin{equation}\label{kappan}
f_n({\bf r},{\bf v_n}) = \frac{n_n}{\pi^{\frac{3}{2}}v_{T_n}^3}
\frac{\Gamma(\kappa+1)} {\kappa^{\frac{3}{2}}\Gamma\left(\kappa-
\frac{1}{2}\right)} \left[1+\frac{({\bf v_n}-{\bf U_n})^2}{\kappa
v_{T_n}^2}\right]^{-(\kappa +1)}.
\end{equation}
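It is instructive to quantify, before evaluating the charge exchange integrals, how strongly these distributions deviate from a Maxwellian of equal density and thermal speed. The following sketch (with arbitrary illustrative values of $n_p$, $v_{T_p}$, and $\kappa$) evaluates the ratio of the two distribution functions of Eq (\ref{kappap}), anticipating the behavior shown in Fig 2:
\begin{verbatim}
import numpy as np
from scipy.special import gamma as G

def f_kappa(v, n, vT, kap):
    # kappa distribution above with U_p = 0, evaluated at speed v
    A = G(kap + 1.0)/(kap**1.5*G(kap - 0.5))
    return n/(np.pi**1.5*vT**3)*A*(1.0 + v**2/(kap*vT**2))**(-(kap + 1.0))

def f_maxwell(v, n, vT):
    return n/(np.pi**1.5*vT**3)*np.exp(-v**2/vT**2)

n, vT, kap = 1.0, 1.0, 2.0   # illustrative; kappa must exceed 3/2
for v in (0.0, 1.0, 3.0, 5.0):
    print(v, f_kappa(v, n, vT, kap)/f_maxwell(v, n, vT))
\end{verbatim}
Already at $v = 5\,v_{T_p}$ the ratio exceeds $10^{6}$, which is the origin of the long suprathermal tail discussed in the Summary.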
First we evaluate the following integral:
\begin{equation}\label{betap1}
\beta_p({\bf r},{\bf v},t)= \sigma_{ex}(v_{rel})\int f_p({\bf
r},{\bf v_p},t)\left|{\bf v_p}- {\bf v}\right|d^3{\bf v_p};
\end{equation}
where $\sigma_{ex}(v_{rel})$ is taken out of the integral, as it
varies slowly with respect to $v_{rel}$. The integral
(\ref{betap1}) is fully written as
\begin{equation}\label{betap2}
\beta_p({\bf r},{\bf v},t)= \sigma_{ex}(v_{rel})
\frac{n_p}{\pi^{\frac{3}{2}}v_{T_p}^3} A_\kappa \int
\left[1+\frac{({\bf v_p}-{\bf U_p})^2}{\kappa
v_{T_p}^2}\right]^{-(\kappa +1)}\left|{\bf v_p}- {\bf
v}\right|d^3{\bf v_p};
\end{equation} with $$ A_\kappa=\frac{\Gamma(\kappa+1)}
{\kappa^{\frac{3}{2}}\Gamma\left(\kappa- \frac{1}{2}\right)}.$$ We
write,
$${\bf v_p}-{\bf U_p} = ({\bf v_p}-{\bf v}) -({\bf U_p}-{\bf v})$$
and define new variables as
$${\bf V}=({\bf v_p}-{\bf v})/\sqrt{\kappa v_{T_p}^2};\quad
{\bf x}=({\bf U_p}-{\bf v})/\sqrt{\kappa v_{T_p}^2};$$
with the new variables, the integral in Eq (\ref{betap2}) becomes,
\begin{equation}\label{betap3}
\beta_p({\bf r},{\bf v},t)= \sigma_{ex}(v_{rel})
\frac{n_p}{\pi^{\frac{3}{2}}v_{T_p}^3} (\kappa v_{T_p}^2)^{3/2}
\sqrt{\kappa v_{T_p}^2}A_\kappa \int_{-\infty}^\infty
\left[1+({\bf V}-{\bf x})^2\right]^{-(\kappa +1)}V d^3{\bf V};
\end{equation}
where we have used,
$$|{\bf v_p}-{\bf v}| =\sqrt{\kappa v_{T_p}^2} V; \quad d^3{\bf
v_p}=(\kappa v_{T_p}^2)^{3/2}d^3{\bf V}$$ and the constant before
the integral in Eq (\ref{betap3}) is,
$$\sigma_{ex}(v_{rel})\frac{n_p}{\pi^{\frac{3}{2}} v_{T_p}^3}
(\kappa v_{T_p}^2)^{3/2}\sqrt{\kappa
v_{T_p}^2}~\frac{\Gamma(\kappa+1)} {\kappa^{\frac{3}{2}}\Gamma
\left(\kappa- \frac{1}{2}\right)}=\sigma_{ex}
\frac{n_pv_{T_p}}{\pi^{\frac{3}{2}}}\frac{\sqrt{\kappa}~\Gamma(\kappa+1)}
{\Gamma \left(\kappa- \frac{1}{2}\right)}.$$
We now proceed to evaluate the integral Eq (\ref{betap3}). In spherical
coordinate,
\[d^3{\bf V}= V^2 dV \sin\theta ~d\theta ~d\phi, \]
where
$\theta$ is the angle between {\bf V} and {\bf x}, after performing
the $\phi$ integration, with $\mu =\cos\theta$
\begin{equation}\label{integ1}
I=2\pi\int^\infty_0 V^3 dV\int^1_{-1}d\mu(1+V^2-2Vx\mu+x^2)^{-
(\kappa+1)}
\end{equation}
Performing the $\mu$ integration, this becomes
$$I =\frac{2\pi}{x\kappa}\left[\int^x_0z^2(1+z^2)^{-\kappa}dz+
x^2\int^x_0(1+z^2)^{-\kappa}dz +2x\int^\infty_x
z(1+z^2)^{-\kappa}dz\right],$$ with ${\bf x}=({\bf U_p}-{\bf
v})/\sqrt{\kappa v_{T_p}^2}$. We now proceed to determine the
explicit values of the above definite integrals. The first two
integrals are given in terms of the hypergeometric functions,
${_2F_1}$, which are
$$\int^x_0z^2(1+z^2)^{-\kappa}dz = \frac{x^3}{3}~{_2F_1}\left(
\frac{3}{2},\kappa;\frac{5}{2}; -x^2\right),$$
$$x^2\int^x_0(1+z^2)^{-\kappa}dz =x^3~{_2F_1}\left(
\frac{1}{2},\kappa;\frac{3}{2}; -x^2\right);$$ where the
Hypergeometric function ${_2F_1}(a, b; c; z)$ (with $a, b, c $ are
constant numbers and $z$ is the variable) is expressed as a power
series in $z$:
{{
\begin{eqnarray}\label{hypergeom1}
{_2F_1}(a, b; c; z)&=&1 + \frac{ab}{c}\frac{z}{1!}+
\frac{a(a+1)b(b+1)}{c(c+1)}\frac{z^2}{2!}\nonumber\\
&&\quad +\frac{a(a+1)(a+2)b(b+1)(b+2)}{c(c+1)(c+2)}\frac{z^3}{3!}+
...
\end{eqnarray}}}
Using Kummer identity for hypergeometric functions,
$${_2F_1}(a,b;c;z) = {_2F_1}(b,a;c;z)=
(1-z)^{-b}{_2F_1}[b,c-a;c;z/(z-1)] \eqno(\ref{hypergeom1}a)$$
$${_2F_1}\left(\frac{3}{2},\kappa;\frac{5}{2}; -x^2\right)=
(1+x^2)^{-\kappa}{_2F_1}\left(\frac{5-3}{2},\kappa;\frac{5}{2};
\frac{x^2}{1+x^2}\right)=(1+x^2)^{-\kappa}{_2F_1}\left(1,
\kappa;\frac{5}{2};\frac{x^2}{1+x^2}\right);
\eqno(\ref{hypergeom1}b)$$
Similarly,
$${_2F_1}\left(\frac{1}{2},\kappa;\frac{3}{2}; -x^2\right)=
(1+x^2)^{-\kappa}{_2F_1}\left( 1,\kappa;\frac{3}{2};
\frac{x^2}{1+x^2}\right)\eqno(\ref{hypergeom1}c)$$
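These transformations can be checked numerically. The short sketch below verifies identities (\ref{hypergeom1}b) and (\ref{hypergeom1}c) for a few illustrative values of $\kappa$ and $x$, using the Gauss hypergeometric function provided by SciPy:
\begin{verbatim}
import numpy as np
from scipy.special import hyp2f1

for kap in (1.5, 2.0, 3.5):
    for x in (0.3, 1.0, 2.5):
        z = x**2/(1.0 + x**2)
        ok_b = np.isclose(hyp2f1(1.5, kap, 2.5, -x**2),
                          (1.0 + x**2)**(-kap)*hyp2f1(1.0, kap, 2.5, z))
        ok_c = np.isclose(hyp2f1(0.5, kap, 1.5, -x**2),
                          (1.0 + x**2)**(-kap)*hyp2f1(1.0, kap, 1.5, z))
        assert ok_b and ok_c
print("transformations verified")
\end{verbatim}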
The last integral can be evaluated easily
$$\int^\infty_x z(1+z^2)^{-\kappa}dz=\left.\frac{(1+z^2)^{-\kappa+1}}
{2(-\kappa+1)}\right|^\infty_x =\frac{(1+x^2)^{-\kappa+1}}
{2(\kappa-1)}\quad (\kappa ~{\rm must~be}>1)$$\\
Collecting the above terms, the integral $\beta_p({\bf r},{\bf
v})$ is,
\begin{eqnarray}\label{betap4}
\beta_p({\bf r},{\bf v},t)&=& \frac{2n_p\sigma_{ex}v_{T_p}}
{\sqrt{\pi\kappa}}\frac{\Gamma(\kappa+1)} {\Gamma \left(\kappa-
\frac{1}{2}\right)}(1+x^2)^{-\kappa}\left[x^2{_2F_1}\left(
1,\kappa;\frac{3}{2}; \frac{x^2}{1+x^2}\right)\right.\nonumber\\
&&+\left.\frac{x^2}{3}{_2F_1}\left(1,\kappa;\frac{5}{2};\frac{x^2}{1+x^2}\right)
+\frac{1+x^2}{\kappa-1}\right];\quad x= |{\bf U_p}-{\bf
v}|/\sqrt{\kappa v_{T_p}^2}\nonumber\\
\end{eqnarray}
An approximate value of the above expression in the two limits
$\sqrt{\kappa}~x \ll 1$ and $x\gg 1 $ can be obtained as follows:
\begin{equation}\label{betaps}
\sqrt{\kappa}~x \ll 1:\qquad\beta=\frac{2n_p\sigma_{ex}v_{T_p}}
{\sqrt{\pi\kappa}}\frac{\Gamma(\kappa+1)} {\Gamma \left(\kappa-
\frac{1}{2}\right)}\left[\frac{1}{\kappa-1}+\frac{({\bf U_p}-{\bf
v})^2}{3\kappa v_{T_p}^2}\right]
\end{equation}
\begin{equation}\label{betapl}
x \gg 1:\qquad\beta=n_p\sigma_{ex}\sqrt{\frac{4v_{T_p}^2
\Gamma^2(\kappa+1)} {\pi\kappa(\kappa-1)\Gamma^2 \left(\kappa-
\frac{1}{2}\right)}+({\bf U_p}-{\bf v})^2}
\end{equation}
Note that, in the asymptotic limit, the Gamma function for large argument
satisfies
$$\lim_{\kappa\rightarrow\infty}\kappa^{b-a}\frac{\Gamma(\kappa+a)}
{\Gamma(\kappa+b)}\rightarrow 1,\quad \Rightarrow\quad
\lim_{\kappa\rightarrow\infty}\Gamma(\kappa+a)\simeq \kappa^a
\Gamma(\kappa)$$
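As a consistency check, Eq (\ref{betap4}) and its limiting forms can be evaluated numerically. The sketch below computes the full expression in units of $n_p\sigma_{ex}v_{T_p}$ and compares it with Eq (\ref{betaps}) at small $x$ and with the leading large-$x$ behavior $\beta\rightarrow n_p\sigma_{ex}|{\bf U_p}-{\bf v}| = n_p\sigma_{ex}v_{T_p}\sqrt{\kappa}\,x$; the chosen value of $\kappa$ is illustrative:
\begin{verbatim}
import numpy as np
from scipy.special import gamma as G, hyp2f1

def beta_full(x, kap):
    # full expression, in units of n_p*sigma_ex*v_Tp
    pref = 2.0*G(kap + 1.0)/(np.sqrt(np.pi*kap)*G(kap - 0.5))
    z = x**2/(1.0 + x**2)
    br = (x**2*hyp2f1(1.0, kap, 1.5, z)
          + x**2/3.0*hyp2f1(1.0, kap, 2.5, z)
          + (1.0 + x**2)/(kap - 1.0))
    return pref*(1.0 + x**2)**(-kap)*br

def beta_small(x, kap):
    # small-x limit, valid for sqrt(kappa)*x << 1
    pref = 2.0*G(kap + 1.0)/(np.sqrt(np.pi*kap)*G(kap - 0.5))
    return pref*(1.0/(kap - 1.0) + x**2/3.0)

kap = 2.5
for x in (0.05, 0.1):
    print(x, beta_full(x, kap), beta_small(x, kap))
for x in (10.0, 30.0):
    print(x, beta_full(x, kap), np.sqrt(kap)*x)
\end{verbatim}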
\section{Complete expressions for the source terms}
To find the source terms, we take the moments of Eq (\ref{prodp}) by
multiplying both sides with various powers of the velocity {\bf v}. The
zeroth order moment would contribute to the source term of the mass
continuity equation, the first order moment would contribute to the
source term of the momentum equation, the second order moment would
contribute to the source term of the energy equation. The moments for
the left hand terms of Eq (\ref{prodp}) are well known, so we shall
show the moments of the right hand terms, using kappa distribution for
$f_p$ and $f_n$. We derive full expression for the integrals on the
r.h.s of Eq (\ref{prodp}) by using the complete expression for the
integral for $ \beta_p({\bf r},{\bf v},t)$ given by Eq (\ref{betap4})
without any approximation.
Since the charge exchange process conserves the proton and neutral
density, there will be no source term in the mass continuity
equation, so we need not calculate the zeroth moment. Hence we start
with the first moment of the right hand side of Eq (\ref{prodp}).
With
$$f_n({\bf r},{\bf v}) = \frac{n_n}{\pi^{\frac{3}{2}}v_{T_n}^3}
\frac{\Gamma(\kappa+1)} {\kappa^{\frac{3}{2}}\Gamma\left(\kappa-
\frac{1}{2}\right)} \left[1+\frac{({\bf v}-{\bf U_n})^2}{\kappa
v_{T_n}^2}\right]^{-(\kappa +1)}=f_n({\bf r},{\bf v}-{\bf U_n}),$$
$${\rm and}\hspace*{15ex} \beta_p({\bf r},{\bf v},t)\equiv \beta_p({\bf r},x,t)=
\beta_p({\bf r},{\bf U_p}-{\bf v})$$
The production term for \textbf{momentum transport} is
\begin{eqnarray}\label{sourcemom}
Q_{MP}&=&\int_0^\infty d^3 v{\bf v}f_n({\bf r},{\bf v}-{\bf U_n})
\beta_p({\bf r},{\bf U_p}-{\bf v})\nonumber\\
&=&\int_0^\infty d^3 v{\bf U_n}f_n({\bf r},{\bf v}-{\bf U_n})
\beta_p[{\bf r},({\bf U_p}-{\bf U_n})-({\bf v}-{\bf U_n})]\nonumber\\
&&+\int_0^\infty d^3 v({\bf v}-{\bf U_n})f_n({\bf r},{\bf v}-{\bf
U_n}) \beta_p[{\bf r},({\bf U_p}-{\bf U_n})-({\bf v}-{\bf U_n})],\nonumber\\
&=&{\bf U_n}\int_0^\infty d^3 uf_n({\bf r},{\bf u}) \beta_p[{\bf
r},(\Delta{\bf U}-{\bf u})] +\int_0^\infty d^3 u{\bf u}f_n({\bf r},{\bf u})
\beta_p[{\bf r},(\Delta{\bf U}-{\bf u})];\nonumber\\
\end{eqnarray}where we used, ${\bf u} ={\bf v}-{\bf U_n};\quad \Delta{\bf
U}={\bf U_p}-{\bf U_n}$.
The production term for \textbf{energy transport} is
\begin{eqnarray}\label{sourceenerg}
Q_{EP}&=&\int_0^\infty d^3 v|{\bf v}|^2f_n({\bf r},{\bf v}-{\bf
U_n})\beta_p({\bf r},{\bf U_p}-{\bf v})\nonumber\\
&=&\int_0^\infty d^3 v|{\bf v}-{\bf U_n}|^2f_n({\bf r},{\bf
v}-{\bf U_n})\beta_p({\bf r},{\bf U_p}-{\bf v})\nonumber\\
&&+2{\bf U_n}\cdot \int_0^\infty d^3 v{\bf v}f_n({\bf r},{\bf
v}-{\bf U_n})\beta_p({\bf r},{\bf U_p}-{\bf v})\nonumber\\
&&-U_n^2\int_0^\infty d^3 vf_n({\bf r},{\bf v}-{\bf
U_n})\beta_p({\bf r},{\bf U_p}-{\bf v})\nonumber\\
&=&\int_0^\infty d^3 v|{\bf v}-{\bf U_n}|^2f_n({\bf r},{\bf
v}-{\bf U_n})\beta_p({\bf r},{\bf U_p}-{\bf v})\nonumber\\
&&+2{\bf U_n}\cdot \int_0^\infty d^3 v({\bf v}-{\bf U_n})f_n({\bf r},
{\bf v}-{\bf U_n})\beta_p({\bf r},{\bf U_p}-{\bf v})\nonumber\\
&&+2 U_n^2\int_0^\infty d^3 vf_n({\bf r},{\bf v}-{\bf
U_n})\beta_p({\bf r},{\bf U_p}-{\bf v})\nonumber\\
&&-U_n^2\int_0^\infty d^3 vf_n({\bf r},{\bf v}-{\bf
U_n})\beta_p({\bf r},{\bf U_p}-{\bf v})
\end{eqnarray}
As before, we introduce the variables ${\bf u} ={\bf v}-{\bf
U_n};\quad \Delta{\bf U}={\bf U_p}-{\bf U_n}$ and write
$\beta_p({\bf r},{\bf U_p}-{\bf v})=\beta_p({\bf r},({\bf U_p}
-{\bf U_n})-({\bf v}-{\bf U_n}))=\beta_p({\bf r},\Delta{\bf
U}-{\bf u})$. Expression (\ref{sourceenerg}) can be written as
\begin{eqnarray}\label{sourceenerg1}
Q_{EP}&=&\int_0^\infty d^3 uu^2f_n({\bf r},{\bf u})\beta_p({\bf
r},\Delta{\bf U}-{\bf u})+2{\bf U_n}\cdot \int_0^\infty d^3 u{\bf
u}f_n({\bf r},{\bf u})\beta_p({\bf r},\Delta{\bf U}-{\bf u})
\nonumber\\
&& +U_n^2\int_0^\infty d^3 uf_n({\bf r},{\bf u}) \beta_p({\bf
r},\Delta{\bf U}-{\bf u})\end{eqnarray}
\begin{figure}[t]
\includegraphics[width=13cm]{fig2.ps}
\caption{Comparison between the kappa and Maxwellian distribution
functions for the sources. Clearly, the tail of the
$\kappa$-distribution is long and wide, as opposed to the Maxwellian
distribution. It is because of this feature that we have computed sources
based on a $\kappa$-distribution function. }
\label{fig2}
\end{figure}
\section{Summary and conclusion}
In summary, our major results are Eq (\ref{sourcemom}) \& Eq
(\ref{sourceenerg}). A tentative comparison of the sources based on
Maxwellian and kappa distribution functions is shown in Fig (2). It
is evident from this figure that the sources based on a Maxwellian
distribution function fall off sharply, without any tail region.
This therefore excludes the energetic component of the neutral atoms
and hence is inappropriate for typical ENAs. By contrast, the sources
based on a kappa distribution function exhibit a well-behaved tail
that represents the ENA distribution.
Our previous work in Shaikh \& Zank (2008) has shown that charge
exchange modes modify the heliospheric turbulence cascades
dramatically by enhancing nonlinear interaction time-scales on large
scales. Thus the coupled plasma system evolves differently than the
uncoupled system: large-scale turbulent fluctuations are strongly
correlated with charge-exchange modes and efficiently behave as
driven (by charge exchange) energy-containing modes of heliospheric
turbulence. By contrast, small scale turbulent fluctuations are
unaffected by charge exchange modes and evolve like the uncoupled
system, as charge exchange becomes less important near the large-$k$ part
of the heliospheric turbulent spectrum. The neutral fluid, under the
action of charge exchange, tends to enhance the cascade rates by
isotropizing the heliospheric plasma turbulence on a relatively long
time scale. This tends to modify the characteristics of heliospheric
plasma turbulence, which can be significantly different from the
Kolmogorov phenomenology of fully developed turbulence. It remains to
be seen how these modified sources influence the nonlinear turbulent
properties of the small scale heliospheric plasma fluctuations.
\section{Acknowledgment}
The partial support of NASA grants NNX09AB40G, NNX07AH18G, NNG05EC85C,
NNX09AG63G, NNX08AJ21G, NNX09AB24G, NNX09AG29G, and NNX09AG62G is
acknowledged.
\begin{thereferences}{19}
\bibitem{burlaga2009a}
Burlaga, L. F., Ness, N. F., Acuna, M. H., Richardson, J. D., Stone, E.,
and McDonald, F. B., Astrophys. J., 692, 1125--1130, 2009.
\bibitem{burlaga2009b}
Burlaga, L. F., and Ness, N. F., Astrophys. J., 703, 311, 2009.
\bibitem{burlaga2008}
Burlaga, L. F., Ness, N. F., Acuña, M. H., Lepping, R. P.,
Connerney, J. E. P., and Richardson, J. D., Nature, 454, 75, 2008.
\bibitem{burlaga2005}
Burlaga, L. F., et al., Science, 309, 2027, 2005.
\bibitem{burlaga2006}
Burlaga, L. F., Ness, N. F., and Acuña, M. H., Astrophys. J., 642, 584, 2006.
\bibitem{decker2005}
Decker, R. B., Krimigis, S. M., Roelof, E. C., Hill, M. E.,
Armstrong, T. P., Gloeckler, G., Hamilton, D. C., and Lanzerotti, L. J.,
Science, 309, 2020--2024, 2005.
\bibitem{fite}
Fite, W. L., Smith, A. C. H., and Stebbings, R. F.,
Proc. R. Soc. London A, 268, 527, 1962.
\bibitem{Goldstein95}
Goldstein, M. L., Roberts, D. A., and Matthaeus, W. H.,
Ann. Rev. Astron. \& Astrophys., 33, 283, 1995.
\bibitem{heerikhuisen2008}
Heerikhuisen, J., Pogorelov, N. V., Florinski, V., Zank, G. P.,
and le Roux, J. A., Astrophys. J., 682, 679, 2008.
\bibitem{heerikhuisen2007}
Heerikhuisen, J., Shaikh, D., and Zank, G.,
AIP Conference Proceedings, 932, 123, 2007.
\bibitem{men1}
Mendonca, J. T., and Shukla, P. K., Phys. Plasmas, 14, 122304, 2007.
\bibitem{pauls1995}
Pauls, H. L., Zank, G. P., and Williams, L. L.,
J. Geophys. Res., 100, 21595, 1995.
\bibitem{prested2008}
Prested, C., Schwadron, N., Passuite, J., Randol, B., Stuart, B.,
Crew, G., Heerikhuisen, J., Pogorelov, N., Zank, G., Opher, M.,
Allegrini, F., McComas, D. J., Reno, M., Roelof, E., Fuselier, S.,
Funsten, H., Moebius, E., and Saul, L.,
J. Geophys. Res., 113, A06102, 2008.
\bibitem{richardson2008}
Richardson, J. D., AIP Conference Proceedings, 1039, 418, 2008;
Li, H., Wang, C., and Richardson, J. D.,
Geophys. Res. Lett., 35, L19107, 2008.
\bibitem{shaikh2010a}
Shaikh, D., and Zank, G. P., AIP Conference Proceedings, 1216,
164--167, 2010.
\bibitem{shaikh2010b}
Shaikh, D., J. Plasma Phys., in press (arXiv:0912.1568), 2010.
\bibitem{shaikh2006}
Shaikh, D., Zank, G. P., and Pogorelov, N.,
AIP Conference Proceedings, 858, 308--313, 2006.
\bibitem{shaikh2008}
Shaikh, D., and Zank, G. P., Astrophys. J., 688, 683--694, 2008.
\bibitem{Shukla78}
Shukla, P. K., Nature, 274, 874, 1978.
\bibitem{zank1999}
Zank, G. P., Space Sci. Rev., 89, 413--688, 1999.
\end{thereferences}
\end{document}
\section{Introduction}
The recent experimental realization of an attractive quasi-two-dimensional ultracold Fermi gas with widely tunable interactions \cite{Martiyanov:2010kt,Frohlich:2011bh,Dyke:2011ja,Sommer:2012bs}, the experimental observation of the critical properties \cite{Murthy:2015ix}, along with many Monte Carlo investigations \cite{Bertaina:2011kk,Anderson:2015er}, renewed the interest in the two-dimensional BCS-BEC crossover and motivate further theoretical pursuits in this field.
In three dimensions a mean-field theory of the BCS-BEC crossover at $T=0$ can provide good qualitative agreement with experiments; however, in lower-dimensional systems the role of fluctuations is more important \cite{Hadzibabic:2012} and one expects that a mean-field theory may not provide a reasonable agreement with experimental data. It has indeed been reported that in two dimensions at $T=0$ an ultracold Fermi gas is correctly described by the mean-field theory only in the weak-coupling limit, the composite-boson equation of state of the strong-coupling limit being recovered only when the order parameter fluctuations are taken into account \cite{Salasnich:2015kl}.
In the present work, using a path integral approach \cite{Tempere:2012cm} we derive the one-loop equation of state \cite{He:2015dc}, which we use to calculate the first sound speed and the superfluid density at finite temperature. The first sound speed is compared with experimental data, showing that the Gaussian fluctuations of the order parameter are fundamental even in the zero-temperature limit. The superfluid density as a function of the crossover allows one to determine the Berezinskii-Kosterlitz-Thouless critical temperature \cite{Berezinsky:1970fr,Kosterlitz:1973xp} through the Kosterlitz-Nelson condition \cite{Nelson:1977zz}.
\section{Mean-field results}
Let us consider a two-dimensional system of fermions, interacting through an attractive $s$-wave contact potential, contained in an area $V=L^2$, described within the grand canonical ensemble at fixed chemical potential $\mu$ and temperature $T$. Within the path integral formalism the partition function for the system can be written as \cite{Tempere:2012cm}
\begin{equation}
\mathcal{Z} = \int \mathcal{D} \psi_\sigma \mathcal{D} \bar{\psi}_\sigma e^{- \frac{1}{\hbar} \int_0^{\hbar \beta} \mathrm{d} \tau \int_{L^2} \mathrm{d}^2 r \mathscr{L}}
\label{eq:z}
\end{equation}
where $\psi$, $\bar{\psi}$ are complex Grassmann fields, $\beta=1/(k_B T)$, $k_B$ is the Boltzmann constant, and the (Euclidean) Lagrangian density reads:
\begin{equation}
\mathscr{L} = \bar{\psi}_{\sigma} \left[ \hbar \partial_{\tau}
- \frac{\hbar^2}{2m}\nabla^2 - \mu \right] \psi_{\sigma}
+ g \, \bar{\psi}_{\uparrow} \, \bar{\psi}_{\downarrow}
\, \psi_{\downarrow} \, \psi_{\uparrow} \; .
\label{eq:l}
\end{equation}
The quartic interaction cannot be treated exactly; the Hubbard-Stratonovich \cite{Stratonovich:1958vq,Hubbard:1959ub} transformation introduces an auxiliary pairing field, corresponding to a Cooper pair, which decouples the quartic interaction. The transformation essentially boils down to the following identity
\begin{equation}
e^{- \frac{1}{\hbar} \int_0^{\hbar \beta} \mathrm{d} \tau \int_{L^2} \mathrm{d}^2 r g \, \bar{\psi}_{\uparrow} \, \bar{\psi}_{\downarrow}
\, \psi_{\downarrow} \, \psi_{\uparrow}} \propto \int \mathcal{D} \Delta \mathcal{D} \bar{\Delta}\, e^{- \frac{1}{\hbar} S_{hs} [\psi, \bar{\psi}, \Delta, \bar{\Delta}]}
\end{equation}
where
\begin{eqnarray}
S_{hs} [\psi, \bar{\psi}, \Delta, \bar{\Delta}] = - \int_0^{\hbar \beta} \mathrm{d} \tau \int_{L^2} \mathrm{d}^2 r \left( \frac{|\Delta|^2}{g} + \bar{\Delta} \psi_\downarrow \psi_\uparrow + \Delta \bar{\psi}_\uparrow \bar{\psi}_\downarrow \right) \; ,
\end{eqnarray}
which can be straightforwardly verified by completing the square and performing the Gaussian integration. The partition function now reads:
\begin{equation}
\mathcal{Z} = \int \mathcal{D} \psi_\sigma \mathcal{D} \bar{\psi}_\sigma \mathcal{D} \Delta \mathcal{D} \bar{\Delta} e^{- \frac{1}{\hbar} \int_0^{\hbar \beta} \mathrm{d} \tau \int_{L^2} \mathrm{d}^2 r \mathscr{L}}
\label{eq:z2}
\end{equation}
with the following Lagrangian density
\begin{equation}
\mathscr{L} =
\bar{\psi}_{\sigma} \left[ \hbar \partial_{\tau}
- {\hbar^2\over 2m}\nabla^2 - \mu \right] \psi_{\sigma}
+ \bar{\Delta} \, \psi_{\downarrow} \, \psi_{\uparrow}
+ \Delta \bar{\psi}_{\uparrow} \, \bar{\psi}_{\downarrow}
- {|\Delta|^2\over g} \; .
\label{ltilde}
\end{equation}
The integration over $\psi$, $\bar{\psi}$ can be carried out exactly \cite{Tempere:2012cm}, obtaining:
\begin{equation}
\mathcal{Z} = \int \mathcal{D} \Delta \mathcal{D} \bar{\Delta}\, e^{- \frac{1}{\hbar} \int_0^{\hbar \beta} \mathrm{d} \tau \int_{L^2} \mathrm{d}^2 r \left( - \ln \left( - \mathbb{G}^{-1} \right) - |\Delta|^2 / g \right)}
\label{eq:z3}
\end{equation}
and the physics of the system is encoded in the inverse Green's function, which in coordinate-space representation reads:
\begin{equation}
- \mathbb{G}^{-1} = \begin{pmatrix} \hbar \partial_\tau - \frac{\hbar^2}{2m} \nabla^2 - \mu & \Delta(\mathbf{x},\tau) \\
\bar{\Delta}(\mathbf{x},\tau) & \hbar \partial_\tau + \frac{\hbar^2}{2m} \nabla^2 +\mu \end{pmatrix} \; .
\end{equation}
The full inverse Green's function can be decomposed into a mean-field component $- \mathbb{G}^{-1}_{0}$, where the pairing field $\Delta$ is replaced by its uniform and constant saddle point value, plus a fluctuation part $\mathbb{F}$:
\begin{equation}
- \mathbb{G}^{-1} = - \mathbb{G}^{-1}_{0} + \mathbb{F} = \begin{pmatrix} \hbar \partial_\tau - \frac{\hbar^2}{2m} \nabla^2 - \mu & \Delta_0 \\
\Delta_0 & \hbar \partial_\tau + \frac{\hbar^2}{2m} \nabla^2 +\mu \end{pmatrix} + \begin{pmatrix} 0 & \eta(\mathbf{x},\tau) \\
\bar{\eta}(\mathbf{x},\tau) & 0 \end{pmatrix}
\label{eq:expansion}
\end{equation}
and clearly $\Delta(\mathbf{x},\tau) = \Delta_0 + \eta(\mathbf{x},\tau)$. We now analyze the mean-field approximation, which simply consists in neglecting the fluctuation field $\eta$ and, consequently, $\mathbb{F}$. The single-particle excitation spectrum is found by solving for the poles of the Nambu-Gor'kov Green's function $\mathbb{G}_0$ in momentum space \cite{Stoof:2009tf}:
\begin{equation}
E_\mathbf{k} = \sqrt{(\hbar^2 k^2/2m - \mu)^2 + \Delta_0^2} \;.
\label{eq:ek}
\end{equation}
Using the bound state equation to relate the interaction strength $g$ to the bound state\footnote{A bound state is present in two dimensions for every value of the attractive interaction.} energy $\epsilon_B$
\begin{equation}
- \frac{1}{g} = \frac{1}{2L^2} \sum_{\bf k} \frac{1}{\epsilon_k +
\frac{1}{2} \epsilon_B}
\label{g-eb}
\end{equation}
where $\epsilon_k = \hbar^2 k^2/2m$, at $T=0$ the integrations defining $\mathcal{Z}$ can be carried out analytically in two dimensions; finally, the thermodynamic grand potential reads:
\begin{eqnarray}
\Omega_{mf} (\mu, \Delta_0) = - \frac{1}{\beta} \ln \mathcal{Z} = - {m L^2\over 4\pi \hbar^2} \Big[
\mu^2 + \mu \sqrt{\mu^2+\Delta_0^2} + {1\over 2} \Delta_0^2 - \Delta_0^2 \ln{\Big({-\mu + \sqrt{\mu^2 + \Delta_0^2}
\over \epsilon_B}\Big)} \Big] \; .
\label{eq:mfeosa}
\end{eqnarray}
From the grand potential we impose the saddle-point condition for $\Delta_0$,
i.e. $(\partial \Omega_{mf} / \partial \Delta_0)_{\mu,V} = 0$, obtaining the gap equation:
\begin{equation}
\Delta_0 = \sqrt{2\epsilon_B ( \mu + {1\over 2}\epsilon_B ) } \; ,
\label{gapeq}
\end{equation}
inserting the result from Eq. (\ref{gapeq}) into Eq. (\ref{eq:mfeosa}) one gets the mean-field equation of state \cite{Marini:1998ej}:
\begin{equation}
\Omega_{mf} (\mu) = - {m L^2\over 2\pi \hbar^2}
(\mu + {1\over 2} \epsilon_B )^2 \; .
\label{eq:mfeos}
\end{equation}
The number of particles is obtained from the thermodynamic relation $N = - \partial \Omega / \partial \mu$ and reads:
\begin{equation}
N = {m L^2\over \pi \hbar^2}
(\mu + {1\over 2} \epsilon_B ) \; .
\end{equation}
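Inverting this relation gives the mean-field results in closed form across the crossover: using the standard two-dimensional free-gas relation $\epsilon_F = \pi \hbar^2 n / m$, with $n = N/L^2$ the total density, one finds
\begin{equation*}
\mu = \epsilon_F - \frac{1}{2}\,\epsilon_B \; , \qquad \Delta_0 = \sqrt{2\,\epsilon_B\,\epsilon_F} \; ,
\end{equation*}
which reproduces the dashed mean-field curves of Fig. \ref{fig:mudelta}: the chemical potential decreases linearly with the binding energy, changing sign at $\epsilon_B = 2 \epsilon_F$, while the pairing gap grows as $\sqrt{\epsilon_B}$.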
\section{Pairing fluctuations and equation of state}
In the previous section we simply discarded the fluctuation part of the inverse Green's function; here we rewrite the logarithm appearing in Eq. (\ref{eq:z3}) as
\begin{equation}
\ln \left( - \mathbb{G}^{-1} \right) = \ln \left( - \mathbb{G}^{-1}_{0} \left( \mathbb{1} - \mathbb{G}_{0} \mathbb{F} \right) \right) = \ln \left( - \mathbb{G}^{-1}_{0} \right) + \ln \left( \mathbb{1} - \mathbb{G}_{0} \mathbb{F} \right)
\label{eq:f1}
\end{equation}
and the Gaussian (one-loop) approximation consists in the following expansion for the second term in the r.h.s. of Eq. (\ref{eq:f1})
\begin{equation}
\ln \left( \mathbb{1} - \mathbb{G}_{0} \mathbb{F} \right) = - \sum_{m=1}^\infty \frac{\left( \mathbb{G}_{0} \mathbb{F} \right)^m}{m} \approx - \mathbb{G}_{0} \mathbb{F} - \frac{1}{2} \mathbb{G}_{0} \mathbb{F} \mathbb{G}_{0} \mathbb{F} \; .
\end{equation}
The pairing field has been written as $\Delta(\mathbf{x},\tau) = \Delta_0 + \eta (\mathbf{x},\tau)$, where $\Delta_0$ is the saddle-point value, so that the fluctuation fields $\eta$, $\bar{\eta}$ are small, justifying the present expansion in powers of $\mathbb{F}$. Re-arranging the terms and rewriting the action in momentum space \cite{Tempere:2012cm} one gets
\begin{equation}
\mathcal{Z} = \int \mathcal{D} \eta \mathcal{D} \bar{\eta} e^{- \frac{1}{\hbar} S_g [\eta,\bar{\eta}]}
\label{eq:z4}
\end{equation}
with the following action
\begin{equation}
S_{g} [\eta,\bar{\eta}] = {1\over 2} \sum_{\mathbf{q}, m}
({\bar\eta}(\mathbf{q}, \mathrm{i} \Omega_m),\eta(-\mathbf{q}, - \mathrm{i} \Omega_m)) \ \mathbb{M} (\mathbf{q}, \mathrm{i} \Omega_m) \left(
\begin{array}{c}
\eta(\mathbf{q}, \mathrm{i} \Omega_m) \\
{\bar\eta}(-\mathbf{q}, - \mathrm{i} \Omega_m)
\end{array}
\right) \; ,
\end{equation}
having introduced the Fourier transform of the fluctuation field $\eta$, along with the bosonic Matsubara frequencies $\Omega_m = 2 \pi m / \beta$; the matrix elements of the inverse pair fluctuation propagator $\mathbb{M}$, along with a complete derivation, can be found in Ref. \cite{Tempere:2012cm}. The poles of the propagator determine the collective mode spectrum, which in the strong-coupling regime can be parametrized as
\begin{equation}
\hbar \omega_q = \sqrt{\epsilon_q \left( \lambda \epsilon_q + 2 m c_s^2 \right)} \; ,
\end{equation}
where $\lambda$ and $c_s$ are functions of the position along the crossover. In order to write the one-loop contribution to the equation of state one needs to introduce a regularization procedure \cite{Diener:2008kh,He:2015dc}, as follows:
\begin{equation}
\Omega_g (\mu) = {1\over 2\beta} \sum_{\mathbf{q}, m} \ln \left[ \frac{\mathbb{M}_{11} (\mathbf{q}, \mathrm{i} \Omega_m)}{\mathbb{M}_{22} (\mathbf{q}, \mathrm{i} \Omega_m)} \det(\mathbb{M} (\mathbf{q}, \mathrm{i} \Omega_m)) \right] e^{\mathrm{i} \Omega_m 0^+} \;.
\label{eq:fulleos}
\end{equation}
In Fig. \ref{fig:mudelta} we plot the chemical potential and the pairing gap across the crossover, as calculated both from the mean-field equation of state in Eq. (\ref{eq:mfeos}) and from the full one-loop equation of state given by the sum of Eq. (\ref{eq:mfeos}) and Eq. (\ref{eq:fulleos}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\linewidth]{eos.eps}\hspace{2pc}%
\caption{The chemical potential $\mu$ (red lines) and the pairing gap $\Delta_0$ (black lines), as calculated from the mean-field equation of state (dashed lines) and from the one-loop equation of state (solid lines), as a function of the scaled binding energy $\epsilon_B/\epsilon_F$.}
\label{fig:mudelta}
\end{figure}
\section{First sound}
Within this framework the first sound speed at $T=0$ can be calculated using the elegant thermodynamic relation \cite{Lipparini:2008} due to Landau
\begin{equation}
c_s^2 = \frac{n}{m} \frac{\partial \mu}{\partial n} \; .
\end{equation}
Using the mean-field equation of state in Eq. (\ref{eq:mfeos}) one finds the result
\begin{equation}
c_s^2 = \frac{v_F^2}{2} \; ,
\end{equation}
where $v_F$ is the Fermi velocity; this result is plotted as the dashed blue line in Fig. \ref{fig:cs}. On the other hand, introducing the one-loop equation of state in Eq. (\ref{eq:fulleos}) one finds that the mean-field result is accurate only in the deep-BCS regime: when the coupling is increased the sound speed greatly deviates from its mean-field value, tending to the composite boson limit \cite{Salasnich:2015kl}
\begin{equation}
c_s^2 = \frac{4 \pi \hbar^2}{m^2_B} \frac{n_B}{\ln \left( \frac{1}{n_B a_B^2} \right)}
\end{equation}
shown as the dashed red line in Fig. \ref{fig:cs}, $m_B=2 m$ being the mass of each composite boson, $n_B = n / 2$ being the bosonic density and $a_B = 0.551 a_s$ being the boson-boson scattering length \cite{Salasnich:2015kl}. Our theoretical findings are also compared with preliminary experimental data \cite{Luick:2014}, showing good agreement, see Fig. \ref{fig:cs}. The results can be readily extended to finite temperature \cite{Bighin:2016uk} using the thermodynamic approach in \cite{Salasnich:2010jw}; however, the temperature dependence is very weak across the whole crossover between $T=0$ and the Berezinskii-Kosterlitz-Thouless critical temperature $T_{BKT}$ \cite{Bighin:2016uk}.
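As a rough numerical illustration of the two limits quoted above (added here for clarity, with hypothetical parameter values), one can compare the mean-field and composite-boson predictions for $c_s^2$ directly; the full one-loop interpolation between them requires the complete equation of state and is not reproduced in this sketch. Units with $\hbar = m = 1$ are assumed, so that $\epsilon_F = \pi n$ and $v_F^2 = 2 \epsilon_F$:
\begin{verbatim}
# Analytic limits of the T=0 first sound speed, units hbar = m = 1.
import numpy as np

def cs2_bcs(n):
    """Mean-field (deep-BCS) limit: c_s^2 = v_F^2 / 2 = eps_F."""
    return np.pi * n

def cs2_bec(n, a_s):
    """Composite-boson limit: m_B = 2m, n_B = n/2, a_B = 0.551 a_s."""
    m_B, n_B, a_B = 2.0, n / 2.0, 0.551 * a_s
    return 4.0 * np.pi * n_B / (m_B**2 * np.log(1.0 / (n_B * a_B**2)))

n, a_s = 1.0, 1e-3   # illustrative dilute-gas values, n_B a_B^2 << 1
print("BCS limit: c_s^2 =", cs2_bcs(n))
print("BEC limit: c_s^2 =", cs2_bec(n, a_s))
\end{verbatim}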
We stress that the present result in Fig. \ref{fig:cs} clearly shows that the inclusion of Gaussian fluctuations in the equation of state is of fundamental importance in the two-dimensional BCS-BEC crossover, the mean-field equation of state being accurate only in the deep-BCS regime.
\begin{figure}[h]
\begin{minipage}{0.48\linewidth}
\includegraphics[width=0.95\linewidth]{cs.eps}
\caption{\label{fig:cs}The first sound speed at $T=0$ from the one-loop equation of state (solid black line), compared with the theoretical prediction using the mean-field equation of state (blue dashed line) and the composite boson limit valid for $\epsilon_B \gg \epsilon_F$ (red dashed line). The data points are preliminary experimental data reported in \cite{Luick:2014}, blue dots for data measured from the speed of density waves and red squares for data measured from the equation of state.}
\end{minipage}\hspace{0.04\linewidth}%
\begin{minipage}{0.48\linewidth}
\includegraphics[width=0.95\linewidth]{rhos.eps}
\caption{\label{fig:rhos}Superfluid density for three different values of the scaled binding energy, from the weakly-coupled regime to the strong interacting one. The black dashed line marks the Nelson-Kosterlitz condition, setting the Berezinskii-Kosterlitz-Thouless critical temperature $T_{BKT}$; a more detailed analysis of $T_{BKT}$ across the whole crossover is reported in \cite{Bighin:2016uk}.}
\end{minipage}
\end{figure}
\section{Superfluid density}
The fermionic and bosonic contributions to the normal density are calculated using Landau's quasiparticle excitation formulas \cite{Fetter:1971}
\begin{equation}
n_{n,f} = \beta \int \frac{\mathrm{d}^2 k}{(2 \pi)^2} k^2 \frac{e^{\beta E_k}}{(e^{\beta E_k} + 1)^2}
\label{eq:nnf}
\end{equation}
and
\begin{equation}
n_{n,b} = \frac{\beta}{2} \int \frac{\mathrm{d}^2 q}{(2 \pi)^2} q^2 \frac{e^{\beta \omega_q}}{(e^{\beta \omega_q} - 1)^2} \;.
\label{eq:nnb}
\end{equation}
The superfluid density is then $n_s=n-n_{n,f}-n_{n,b}$, having neglected the hybridization between the single-particle and collective excitations, which becomes relevant at higher temperature scales \cite{Taylor:2006ky,Bighin:2016uk}. As mentioned in the Introduction, the role of the fluctuations is enhanced in lower dimensionalities \cite{Hadzibabic:2012}; in particular, in one- and two-dimensional systems there cannot be spontaneous breaking of a continuous symmetry at finite temperature \cite{Mermin:1966da,Coleman:1973ci}, and as a consequence off-diagonal long-range order is achieved strictly only at $T=0$. Nonetheless, in a two-dimensional system a finite-temperature transition can still be observed, between the normal state and a state characterized by \textit{algebraic} off-diagonal long-range order \cite{Berezinsky:1970fr,Kosterlitz:1973xp}. Figure \ref{fig:rhos} shows three superfluid density profiles for $\log (\epsilon_B/\epsilon_F)=10$, $\log (\epsilon_B/\epsilon_F)=0$ and $\log (\epsilon_B/\epsilon_F)=-10$, respectively in the deep-BEC, intermediate and deep-BCS regimes, along with the line corresponding to the Nelson-Kosterlitz condition \cite{Nelson:1977zz}
\begin{equation}
k_B T_{BKT} = \frac{\hbar^2 \pi}{8 m} n_s (T_{BKT})
\label{eq:kncondition}
\end{equation}
determining the critical temperature $T_{BKT}$.
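The Nelson-Kosterlitz condition lends itself to a simple numerical treatment. The sketch below (an illustration added here; the parameters $\mu$, $\Delta_0$, $\lambda$ and $c_s$ are placeholder values, whereas in the actual calculation they must be obtained from the one-loop equation of state at each coupling) evaluates the normal densities of Eqs. (\ref{eq:nnf}) and (\ref{eq:nnb}) and solves Eq. (\ref{eq:kncondition}) by bisection, in units $\hbar = m = k_B = 1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Placeholder inputs (illustrative only); mu, Delta0, lam, cs should come
# from the one-loop equation of state at the chosen eps_B/eps_F.
mu, Delta0, lam, cs = 0.5, 1.0, 0.25, 1.0
n = 1.0 / np.pi                      # density such that eps_F = pi n = 1

def n_normal_fermi(T):
    """Fermionic normal density, Eq. (nnf), with d^2k = 2 pi k dk."""
    b = 1.0 / T
    def f(k):
        E = np.sqrt((0.5 * k**2 - mu)**2 + Delta0**2)
        return k**3 * b * np.exp(-b * E) / (1.0 + np.exp(-b * E))**2 / (2 * np.pi)
    return quad(f, 0.0, 50.0, limit=200)[0]

def n_normal_bose(T):
    """Bosonic normal density, Eq. (nnb), with the collective spectrum w_q."""
    b = 1.0 / T
    def f(q):
        eps = 0.5 * q**2
        w = np.sqrt(eps * (lam * eps + 2.0 * cs**2))
        return 0.5 * q**3 * b * np.exp(-b * w) / (1.0 - np.exp(-b * w))**2 / (2 * np.pi)
    return quad(f, 1e-6, 50.0, limit=200)[0]

def nk_mismatch(T):
    """T - (pi/8) n_s(T); vanishes at the BKT transition."""
    n_s = n - n_normal_fermi(T) - n_normal_bose(T)
    return T - (np.pi / 8.0) * n_s

lo, hi = 1e-3, 2.0                   # bracket the root, then bisect
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if nk_mismatch(mid) > 0 else (mid, hi)
print("T_BKT (illustrative parameters):", 0.5 * (lo + hi))
\end{verbatim}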
With respect to the mean-field result \cite{Salasnich:2013kq}, the full Gaussian-level equation of state used here predicts a lower critical temperature in the deep-BEC regime. This treatment can be extended to the whole crossover; a full comparison with the recent experimental observation of $T_{BKT}$ \cite{Murthy:2015ix} in a quasi-two-dimensional ultracold Fermi gas is reported in Ref. \cite{Bighin:2016uk}.
\section{Conclusions}
The BCS-BEC crossover in a two-dimensional Fermi gas is the subject of great interest, both from an experimental and theoretical point of view. In this work, using a one-loop equation of state, we have calculated the first sound speed and the superfluid density, showing that the Gaussian fluctuations are needed in order to recover the correct composite boson limit even in the zero-temperature case. We also discussed the superfluid density at finite temperature, using this result to calculate the Berezinskii-Kosterlitz-Thouless critical temperature \cite{Bighin:2016uk}.
The treatment in this work can easily be extended to account for the description of many properties of two-dimensional ultracold Fermi gases, e.g. the second sound, the Berezinskii-Kosterlitz-Thouless critical temperature across the whole crossover, and the condensate fraction \cite{Salasnich:2007im,Bighin:2016uk}. Moreover, the energy density, the pressure and the contact have been calculated \cite{He:2015dc} within an approach equivalent to the present one, showing good agreement with Monte Carlo data.
A description of a trapped non-uniform system, as well as a full calculation of the superfluid density beyond the approximation in Eqs. (\ref{eq:nnf}) and (\ref{eq:nnb}), is the subject of ongoing work.
\section*{Acknowledgements}
The authors gratefully acknowledge Ministero Istruzione Universit\`a Ricerca (PRIN project 2010LLKJBX) for partial support.
\section*{References}
\section{Introduction}
In recent years, condensed matter physics has witnessed several remarkable
discoveries that, starting from graphene\cite{novo}, have shown the existence
of electron systems where the quasiparticles have linear dispersion about
degenerate points at the intersection between valence and conduction bands.
This has led to the introduction of the class of so-called Dirac semimetals, in which
the low-energy excitations can be described in terms of a number of Dirac
spinors. Thus, there is already clear evidence that materials like Na$_3$Bi
or Cd$_3$As$_2$ have Dirac points in their electronic
dispersion\cite{liu,neu,bor}, providing in that respect a 3D electronic
analogue of graphene. More recently, there have been claims that materials
like TaAs or the pyrochlore iridates should be examples of 3D Weyl
semimetals\cite{taas1,taas2}, characterized by Weyl points, each of them hosting fermions with a given chirality.
Since the appearance of graphene, the possibility that the quasiparticles of a
condensed matter system may behave as Dirac fermions has been very appealing,
as the particular geometrical features of these mathematical objects have
shown to be indeed at the origin of several unconventional effects. The Klein
paradox\cite{kats} and the difficulty that such quasiparticles have to undergo
backscattering\cite{ando} are a direct consequence of the additional pseudospin
degree of freedom inherent to Dirac fermions. The topological insulators and
their peculiar surface states can be taken also as an illustration of the
fruitful interplay arising between the symmetry properties of Dirac fermions
and the topology of the space\cite{kane,qi}.
Another important feature of known Dirac semimetals is that they need to be
modeled as interacting electron systems which are naturally placed in the
strong-coupling regime. This is so as the long-range Coulomb repulsion becomes
the dominant interaction, while the Fermi velocity of the electrons
takes values which are at least two orders of magnitude below the speed of
light. Thus, the equivalent of the fine structure constant in the electron
system can be more than one hundred times larger than its counterpart in the
conventional fully relativistic Quantum Electrodynamics (QED). This leads to
expect relevant many-body effects as, for instance, the imperfect screening of
impurities with sufficiently large charge on top of
graphene\cite{nil,fog,shy,ter}. Another important property is the scale
dependence of physical observables, like that predicted theoretically for
the Fermi velocity in 2D Dirac semimetals\cite{np2,prbr} and measured recently
in suspended graphene samples at very low doping levels\cite{exp2}. We may
think therefore of the Dirac semimetals as an ideal playground to observe
scaling phenomena that would be otherwise confined to the investigation of
field theories in particle physics.
In any event, a proper account of the scaling effects must rely on the
renormalizability of the many-body theory. The formulation of the interacting
theory of Dirac semimetals requires the introduction of a high-energy cutoff
in the electronic spectrum, which can be set to a rather arbitrary level. In
order to ensure the predictability of the theory, it becomes therefore crucial
that all the dependences on the high-energy cutoff can be absorbed into the
redefinition of a finite number of parameters. In the case of Dirac semimetals,
such a renormalizability cannot be taken for granted, as the many-body theory
does not enjoy the full covariance which enforces that property in typical
relativistic field theories. Nevertheless, the investigation of the 2D Dirac
semimetals has already provided evidence that the interacting field theory in
these electron systems is renormalizable\cite{jhep}. This condition has made it
possible, for instance, to compute very high order contributions to the renormalization of
relevant order parameters, leading to an estimate of the critical coupling for
chiral symmetry breaking that agrees with the value obtained with a quite
different nonperturbative method like the resolution of the gap
equation\cite{gama}.
The aim of this paper is to investigate the strong-coupling phases that may
arise in 3D Dirac and Weyl semimetals under the effect of the long-range
Coulomb interaction. In this respect, the many-body theory of these electron
systems can be viewed as a variant of conventional QED. As already pointed out,
a main difference with this theory lies in that the Fermi velocity of the
electrons does not coincide with the speed of light, which makes that
quasiparticle parameter susceptible of being renormalized and therefore
dependent on the energy scale. In three spatial dimensions, the electron charge
needs also to be redefined to account for cutoff dependences of the bare
theory, which renders the question of the renormalizability even more
interesting in the present context.
In this paper we will apply two different nonperturbative approximations to
the QED of 3D Dirac and Weyl semimetals, namely the ladder approach and the
large-$N$ approximation ($N$ being the number of different fermion flavors),
which can be viewed as complementary computational methods. In both cases, we
will see that the field theory appears to be renormalizable, making possible
to absorb all the dependences on the high-energy cutoff into a finite number
of renormalization factors given only in terms of the renormalized coupling.
We will show that the QED of 3D Dirac semimetals has two competing effects at
strong coupling. One of them is the tendency to chiral symmetry breaking and
dynamical mass generation, which are analogous to the same phenomena arising
in the conventional QED at strong
coupling\cite{mas,fom,fuk,mir,gus2,kon,atk,min}. This trend is however
outweighed by the strong suppression of electron quasiparticles that takes
place at large $N$, leading then to a different type of critical point at
sufficiently large interaction strength\cite{rc}, shared also by the 3D Weyl
semimetals. We will see that the nonperturbative approaches applied in the
paper afford a precise characterization of the respective critical behaviors
in terms of the anomalous dimensions of relevant operators. At the end, the
phase diagram of the 3D Dirac semimetals turns out to be richer than that of
their 2D counterparts, displaying a transition to a phase with non-Fermi liquid
behavior (genuine also of the 3D Weyl semimetals) which may be observed in
materials hosting a sufficiently large number of Dirac or Weyl points.
\section{Field theory of 3D Dirac and Weyl semimetals}
Our starting point is the field theory describing the 3D fermions with linear
dependence of the energy $\varepsilon $ on momentum ${\bf p}$,
$\varepsilon ({\bf p}) = v_F |{\bf p}|$, and interacting through the scalar
part $\phi $ of the electromagnetic potential. The fermion modes can be
encoded into a number $N$ of spinor fields $\psi_i$, $i = 1, \ldots N$, where
the index $i$ represents in general the way in which the modes are
distributed into different Dirac or Weyl points in momentum space. For
convenience, we will choose to pair the fermion chiralities into four-component
spinors, extending the notation to think of each $\psi_i$ field as being made
of two different chiralities from a given Dirac point, or from two different
Weyl points in the case of a Weyl semimetal. The hamiltonian can be written
then as
\begin{equation}
H = - i v_F \int d^3 r \; \overline{\psi}_i({\bf r})
\boldsymbol{\gamma} \cdot \boldsymbol{\partial} \psi_i ({\bf r}) +
e_0 \int d^3 r \; \overline{\psi}_i({\bf r}) \gamma_0
\psi_i ({\bf r}) \: \phi ({\bf r})
\label{h}
\end{equation}
where $\overline{\psi}_i = \psi_i^{\dagger} \gamma_0 $ and $\{ \gamma_{\sigma} \}$
is a collection of four-dimensional matrices such that
$\{ \gamma_\mu, \gamma_\nu \} = 2 \: \eta_{\mu \nu}$, with the metric tensor $\eta_{\mu \nu} = {\rm diag } (1,-1,-1,-1)$.
In order to complete the field theory, we have to provide the propagator of the
scalar field $\phi $, for which we take the Lorentz gauge in the dynamics of
the full electromagnetic potential. Thus, we get from the original relativistic
theory
\begin{equation}
\langle T \phi ({\bf r}, t) \phi ({\bf r}', t') \rangle =
-i \int \frac{d^3 q}{(2\pi )^3} \int \frac{d \omega}{2\pi }
e^{i {\bf q} \cdot ({\bf r} - {\bf r}')} e^{-i \omega (t-t')}
\frac{1}{{\bf q}^2 - \omega^2/c^2 - i \eta }
\label{lorentz}
\end{equation}
We are interested in systems where the Fermi velocity $v_F$ is much smaller
than the speed of light $c$, which makes possible to adopt a simpler
description taking the limit $c \rightarrow \infty $. This justifies that we
can neglect magnetic interactions between our nonrelativistic fermions, and
that we can just deal with the zero-frequency limit of (\ref{lorentz}).
Thus, the free propagator for the scalar potential will be taken in what
follows as
\begin{equation}
D_0 ({\bf q}) = \frac{1}{{\bf q}^2}
\label{scalar}
\end{equation}
A remarkable property is that the hamiltonian (\ref{h}) together with the
interaction mediated by (\ref{scalar}) define an interacting field theory that
is scale invariant at the classical level. That is, under a change in the scale
of the space and time variables
\begin{equation}
t \rightarrow s t \;\;\;\; , \;\;\;\; {\bf r} \rightarrow s {\bf r}
\end{equation}
the fields can be taken to transform accordingly as
\begin{equation}
\phi ({\bf r}) \rightarrow \frac{1}{s} \phi ({\bf r}) \;\;\;\; ,
\;\;\;\; \psi ({\bf r}) \rightarrow \frac{1}{s^{3/2}} \psi ({\bf r})
\end{equation}
This gives rise to a homogeneous scaling of the hamiltonian (\ref{h}),
consistent with the transformation of an energy variable.
Such a scale invariance has important consequences, since it makes possible
to give meaning to the divergences that the field theory develops at high
energies. The computation of the quantum corrections requires indeed the
introduction of a high-energy cutoff that spoils the classical scale
invariance. However, provided the field theory is renormalizable, the
singular dependences on the cutoff can be still absorbed into a finite number
of renormalization factors redefining the parameters of the theory. Writing
the action $S$ corresponding to the hamiltonian (\ref{h}), we may start with
a formulation introducing suitable renormalization
factors $Z_{\psi }, Z_v, Z_e$ and $Z_{\phi }$:
\begin{equation}
S = \int dt \: d^3 r \; Z_{\psi } \: \overline{\psi}_i({\bf r})
(i \gamma_0 \partial_t + i Z_v v_R \boldsymbol{\gamma} \cdot \boldsymbol{\partial} )
\psi_i ({\bf r}) -
Z_e e \int dt \: d^3 r \; Z_{\psi } \: \overline{\psi}_i({\bf r}) \gamma_0
\psi_i ({\bf r}) \: Z_{\phi } \phi ({\bf r})
\label{s}
\end{equation}
The gauge invariance of the model implies that $Z_e Z_{\phi } = 1$. The point
is that, if the field theory is well-behaved, one has to be able to render all
the physical observables cutoff-independent, when written in terms of the
renormalized parameters $v_R$ and $e$.
The first example of the need to implement a high-energy cutoff can be taken
from the computation of the electron self-energy. The first perturbative
contribution is given by the first rainbow diagram at the right-hand-side in
Fig. \ref{one}, which corresponds to the expression
\begin{equation}
i \Sigma_1 ({\bf k}) = - e_0^2
\int \frac{d^3 p}{(2\pi)^3} \int \frac{d\omega_p}{2\pi } \gamma_0
\frac{-\gamma_0 \omega_p + v_F \boldsymbol{\gamma} \cdot {\bf p} }
{-\omega_p^ 2 + v_F^2 {\bf p}^2 - i\eta } \gamma_0
\frac{1}{({\bf k}-{\bf p})^2}
\label{fo}
\end{equation}
This contribution to the electron self-energy does not depend on frequency,
but shows a divergent behavior at the upper end of the momentum integration,
pointing at the renormalization of the Fermi velocity $v_F$. In what follows,
instead of dealing with a cutoff in momentum space, we will prefer to use a
regularization method that is able to preserve the gauge invariance of the
model. For this purpose, we will adopt the analytic continuation of the
momentum integrals to dimension $D = 3 - \epsilon $, which will convert all
the high-energy divergences into poles in the $\epsilon $ parameter\cite{ram}.
In the dimensional regularization method, the bare charge $e_0$ must get
dimensions through an auxiliary momentum scale $\mu $, being related to the
physical dimensionless charge $e$ by
\begin{equation}
e_0 = \mu^{\epsilon/2} e
\end{equation}
The computation of the first-order contribution to the self-energy (\ref{fo})
gives then
\begin{eqnarray}
\Sigma_1 ({\bf k}) & = & \frac{e^2}{2} \mu^\epsilon
\int \frac{d^D p}{(2\pi)^D} \boldsymbol{\gamma} \cdot {\bf p}
\frac{1}{|{\bf p}|} \frac{1}{({\bf k}-{\bf p})^2} \nonumber \\
& = & \frac{e^2}{4} \mu^\epsilon
\int \frac{d^D p}{(2\pi)^D} \boldsymbol{\gamma} \cdot {\bf p}
\int_0^1 dx \frac{ (1-x)^{-1/2}}{[({\bf k}-{\bf p})^2 x + {\bf p}^2 (1-x)]^{3/2}}
\nonumber \\
& = & \frac{e^2}{4}
\boldsymbol{\gamma} \cdot {\bf k} \: \mu^\epsilon \int \frac{d^D p}{(2\pi)^D}
\int_0^1 dx \frac{x (1-x)^{-1/2}}{[{\bf p}^2 + {\bf k}^2 x(1-x)]^{3/2}}
\nonumber \\
& = & \frac{e^2}{2 \sqrt{\pi}} \boldsymbol{\gamma} \cdot {\bf k}
\frac{\Gamma \left(\tfrac{3}{2} - \tfrac{D}{2} \right)}{(4\pi)^{D/2}} \mu^\epsilon \int_0^1 dx
\frac{x (1-x)^{-1/2}}{ [{\bf k}^2 x(1-x)]^{3/2 - D/2} } \nonumber \\
& = & \frac{e^2}{(4\pi)^{2-\epsilon/2}} \boldsymbol{\gamma} \cdot {\bf k}
\frac{\mu^\epsilon}{|{\bf k}|^{\epsilon}}
\frac{\Gamma \left(\tfrac{1}{2}\epsilon\right)
\Gamma \left(2 - \tfrac{\epsilon}{2}\right)
\Gamma \left(\tfrac{1}{2}-\tfrac{\epsilon}{2}\right)}
{\Gamma \left(\tfrac{5}{2}-\epsilon \right)}
\label{loop}
\end{eqnarray}
The expression (\ref{loop}) shows the development of a $1/\epsilon$ pole
as $\epsilon \rightarrow 0$. This divergence can be indeed
absorbed into a renormalization of the Fermi velocity $v_F$, as the full
fermion propagator $G ({\bf k}, \omega )$ is related to the self-energy
$\Sigma ({\bf k}, \omega)$ by
\begin{equation}
G ({\bf k}, \omega )^{-1} =
Z_{\psi} (\gamma_0 \omega - Z_v v_R \boldsymbol{\gamma} \cdot {\bf k})
- Z_{\psi} \Sigma ({\bf k}, \omega)
\label{g_1}
\end{equation}
Thus, to first order in $e^2$, the fermion propagator can be made finite in the
limit $\epsilon \rightarrow 0$ by taking
\begin{equation}
Z_v = 1 - \frac{e^2}{6\pi^2 v_R} \frac{1}{\epsilon }
\label{zv}
\end{equation}
This renormalization of the fermion propagator already gives an
idea of the effective dependence of the Fermi velocity on the energy scale. The
bare Fermi velocity $Z_v v_R$ cannot depend on the momentum scale $\mu $, since
the original theory does not know about that auxiliary scale. We have therefore
\begin{equation}
\mu \frac{\partial }{\partial \mu } (Z_v v_R) = 0
\label{sv0}
\end{equation}
For the same reason, the bare coupling $e_0$ cannot depend on $\mu $, which
leads to the equation
\begin{equation}
\mu \frac{\partial }{\partial \mu } e = - \frac{\epsilon }{2} e
\label{se0}
\end{equation}
Combining (\ref{sv0}) and (\ref{se0}), we arrive at the scaling equation
\begin{equation}
\mu \frac{\partial }{\partial \mu } v_R = - \frac{1}{6\pi^2 } e^2
\label{se1}
\end{equation}
This is the expression of the growth of the renormalized Fermi velocity $v_R$
in the low-energy limit, approached here as $\mu \rightarrow 0$. This scaling,
which parallels the behavior of the Fermi velocity in 2D Dirac semimetals, is
the main physical property deriving from the lowest-order renormalization of
the 3D Dirac and Weyl semimetals\cite{ros}.
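Eq. (\ref{se1}) integrates immediately: holding $e$ fixed (as appropriate at this order, where no charge renormalization has yet appeared), $v_R(\mu) = v_R(\mu_0) + (e^2/6\pi^2) \ln(\mu_0/\mu)$. A short numerical sketch of this logarithmic growth (illustrative values only):
\begin{verbatim}
# Lowest-order flow of the Fermi velocity, mu d v_R / d mu = -e^2 / (6 pi^2),
# integrated from a reference scale mu_0 down to mu. Illustrative units with
# v_R(mu_0) = 1; at this order e does not run.
import numpy as np

e2 = 10.0
for mu_ratio in np.logspace(0, -4, 5):          # mu / mu_0
    v_R = 1.0 + (e2 / (6.0 * np.pi**2)) * np.log(1.0 / mu_ratio)
    print(f"mu/mu_0 = {mu_ratio:.0e}   v_R = {v_R:.3f}")
\end{verbatim}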
To close this section, we comment that the result (\ref{loop}) for the electron
self-energy has actually a wider range of validity, beyond the lowest-order
approximation in which it has been obtained. This can be seen by exploiting
a feature that is also shared with the 2D Dirac semimetals in the so-called
ladder approximation, that is when corrections to the interaction vertex and
the interaction propagator are neglected in the electron
self-energy\cite{jhep}. Then we
remain with the sum of diagrams encoded in the self-consistent equation
represented in Fig. \ref{one}. The electron
self-energy in this approximation, $\Sigma_{\rm ladder} ({\bf k})$, must have
the form
\begin{equation}
\Sigma_{\rm ladder} ({\bf k}) = f ({\bf k}) \: \boldsymbol{\gamma} \cdot {\bf k}
\end{equation}
and therefore it is bound to satisfy the equation
\begin{equation}
i\Sigma_{\rm ladder} ({\bf k}) = i\Sigma_1 ({\bf k})
+ e_0^2 \int \frac{d^D p}{(2\pi)^D}
\int \frac{d\omega_p}{2\pi }
\Sigma_{\rm ladder} ({\bf p})
\frac{ \omega_p^2 + v_F^2 {\bf p}^2 }
{(-\omega_p^ 2 + v_F^2 {\bf p}^2 - i\eta)^2}
\frac{1}{({\bf k}-{\bf p})^2}
\label{selfl}
\end{equation}
It is now easy to see that the second term at the right-hand-side of
(\ref{selfl}) identically vanishes. By performing a Wick rotation
$\omega_p = i\overline{\omega }_p$, we get
\begin{equation}
\int \frac{d\omega_p}{2\pi }
\frac{ \omega_p^2 + v_F^2 {\bf p}^2 }
{(-\omega_p^ 2 + v_F^2 {\bf p}^2 - i\eta)^2}
= i \int \frac{d \overline{\omega}_p}{2\pi }
\frac{-\overline{\omega}_p^2 + v_F^2 {\bf p}^2 }
{(\overline{\omega }_p^ 2 + v_F^2 {\bf p}^2 )^2}
= 0
\label{res}
\end{equation}
showing that
\begin{equation}
\Sigma_{\rm ladder} ({\bf k}) = \Sigma_1 ({\bf k})
\label{ident}
\end{equation}
\begin{figure}
\vspace{0.5cm}
\begin{center}
\mbox{\epsfxsize 12cm \epsfbox{selfladder.eps}}
\end{center}
\caption{Diagrammatic representation of the ladder approximation for the
electron self-energy.}
\label{one}
\end{figure}
The result (\ref{ident}) has important consequences for the ladder
approximation, since it implies that the low-energy scaling of the Fermi
velocity is again the main physical effect derived from the electron
self-energy, limiting the possible electronic instabilities that
can arise in that nonperturbative approach.
\section{3D Dirac semimetals in ladder approximation}
The iterative sequence of interactions, illustrated in Fig. \ref{one} for
the electron self-energy, defines a partial sum of perturbation theory which
can be also applied to analyze the renormalization of composite operators.
This provides a suitable approach to investigate possible instabilities of the
electron system, since some of those operators correspond to order parameters
characterizing the breakdown of respective symmetries of the field
theory. Once the renormalization program is accomplished, the singularities
found in the correlators of the composite operators may signal the
condensation of a given order parameter, pointing at the onset of a new
phase of the electron system.
The most relevant composite operators are made of bilinears of the fermion
fields, having associated vertex functions that can be used to measure
different susceptibilities of the electron system. At this point, the
discussion is confined to the 3D Dirac semimetals (excluding the case of Weyl
semimetals), for which we can write different composite operators in terms of
four-component spinors about the same Dirac point in momentum space. We pay
attention in particular to those operators corresponding to the charge, the
current, and the fermion mass, given by
\begin{eqnarray}
\rho_0 ({\bf r}) & = & \overline{\psi}_i ({\bf r}) \gamma_0 \psi_i ({\bf r}) \\
\boldsymbol{\rho}_{c} ({\bf r}) & = &
\overline{\psi}_i ({\bf r}) \: \boldsymbol{\gamma} \: \psi_i ({\bf r}) \\
\rho_m ({\bf r}) & = & \overline{\psi}_i ({\bf r}) \psi_i ({\bf r})
\label{mass}
\end{eqnarray}
The respective one-particle-irreducible (1PI) vertices are
\begin{eqnarray}
\Gamma_0 ({\bf q},\omega_q;{\bf k},\omega_k) & = &
\langle \rho_0 ({\bf q},\omega_q)
\psi_i ({\bf k}+{\bf q},\omega_k + \omega_q)
\overline{\psi}_i ({\bf k},\omega_k) \rangle_{\rm 1PI} \label{f} \\
\boldsymbol{\Gamma}_{c} ({\bf q},\omega_q;{\bf k},\omega_k) & = &
\langle \boldsymbol{\rho}_{c} ({\bf q},\omega_q)
\psi_i ({\bf k}+{\bf q},\omega_k + \omega_q)
\overline{\psi}_i ({\bf k},\omega_k) \rangle_{\rm 1PI} \\
\Gamma_m ({\bf q},\omega_q;{\bf k},\omega_k) & = &
\langle \rho_m ({\bf q},\omega_q)
\psi_i ({\bf k}+{\bf q},\omega_k + \omega_q)
\overline{\psi}_i ({\bf k},\omega_k) \rangle_{\rm 1PI} \label{l}
\end{eqnarray}
The point is that, assuming the renormalizability of the field theory, there
must exist a multiplicative renormalization of the vertices
\begin{eqnarray}
\Gamma_{0,{\rm ren}} & = & Z_0 \Gamma_0 \\
\boldsymbol{\Gamma}_{c,{\rm ren}} & = & Z_c \boldsymbol{\Gamma}_{c} \\
\Gamma_{m,{\rm ren}} & = & Z_m \Gamma_m
\label{zm}
\end{eqnarray}
that renders $\Gamma_{0,{\rm ren}}, \boldsymbol{\Gamma}_{c,{\rm ren}}$ and
$\Gamma_{m,{\rm ren}}$ independent of the high-energy cutoff. We check next
that property of the field theory, dealing with the same iteration of the
interaction applied before to the electron self-energy. In the present context,
this defines the ladder approximation as given by the self-consistent
diagrammatic equation represented for a generic vertex in Fig. \ref{two}.
\begin{figure}
\vspace{0.5cm}
\begin{center}
\mbox{\epsfxsize 10cm \epsfbox{selfcons.eps}}
\end{center}
\caption{Self-consistent diagrammatic equation for a generic vertex $\Gamma_i$
in the ladder approximation.}
\label{two}
\end{figure}
We first observe that the renormalization of $\Gamma_0$ must be dictated by the
gauge invariance of the model, since the composite operator $\rho_0 ({\bf r})$
already appears in the action (\ref{s}). This implies the result
\begin{equation}
Z_0 = Z_{\psi }
\label{w1}
\end{equation}
meaning that, according to the previous analysis of the self-energy, it must
be $Z_0 = 1$ in the ladder approximation. Moreover, the composite operator
$\boldsymbol{\rho}_{c}$ may be introduced in the action multiplying it by
the vector potential, showing that its renormalization can be related
by gauge invariance to that of the kinetic term in (\ref{h}). We conclude
therefore that
\begin{equation}
Z_c = Z_{\psi } Z_v
\label{w2}
\end{equation}
which, in the ladder approximation, leads to $Z_c = Z_v$.
The results (\ref{w1}) and (\ref{w2}) can be also obtained more formally from
the Ward identities that reflect the gauge invariance of the model at the
quantum level. They lead in particular to the equations
\begin{eqnarray}
\frac{\partial }{\partial \omega}
\left( \gamma_0 \omega - \Sigma ({\bf k},\omega) \right) & = &
\Gamma_0 ({\bf 0},0;{\bf k},\omega) \\
\frac{1}{v_F} \frac{\partial }{\partial {\bf k}}
\left( v_F \boldsymbol{\gamma} \cdot {\bf k} +
\Sigma ({\bf k},\omega) \right) & = &
\boldsymbol{\Gamma}_{c}({\bf 0},0;{\bf k},\omega)
\end{eqnarray}
These identities were used in Ref. \cite{jhep} to check the suitability
of the dimensional regularization method to preserve the gauge invariance of
the field theory of 2D Dirac semimetals. It was shown there that, if the
electron self-energy is computed with the set of diagrams encoded in the
equation of Fig. \ref{one}, then the Ward identities are satisfied when one
takes for the vertex the series of ladder diagrams, but dressed with the
lowest-order electron self-energy correction. With this scope of the
nonperturbative approach, it can be also shown by direct renormalization of
$\boldsymbol{\Gamma}_{c}$ that $Z_c$ becomes as simple as the expression given
by (\ref{zv}), which arises then from a remarkable cancellation of
different contributions in the computation of the vertex.
We conclude that the charge operator is not renormalized, while the current
operator requires a multiplicative redefinition by the simple factor (\ref{zv})
in the ladder approximation. The absence of a singularity for any value of the
coupling constant excludes the possibility of having an instability related to
the condensation of the charge or the current in the electron system. We turn
now to the case of the mass vertex (\ref{l}), whose renormalization is not
dictated by the previous analysis of the electron self-energy.
\subsection{Mass vertex and dynamical mass generation}
Our starting point for the analysis of the mass vertex in the ladder
approximation is the self-consistent equation represented in
Fig. \ref{two}. We are going to be mainly interested in the renormalization
of $\Gamma_m $ and, for that purpose, it is enough to
consider the limit of momentum transfer ${\bf q} \rightarrow 0$ and
$\omega_q \rightarrow 0$. The vertex must satisfy then the equation
\begin{eqnarray}
\lefteqn{\Gamma_m ({\bf 0},0;{\bf k},\omega_k) = }
\nonumber \\
& & 1 +
i e^2_0 \int \frac{d^D p}{(2\pi )^D} \frac{d\omega_p}{2\pi }
\gamma_0 \frac{-\gamma_0 \omega_p + v_F \boldsymbol{\gamma} \cdot {\bf p} }
{-\omega_p^2 + v_F^2 {\bf p}^2 - i\eta }
\Gamma_m ({\bf 0},0;{\bf p},\omega_p)
\frac{-\gamma_0 \omega_p + v_F \boldsymbol{\gamma} \cdot {\bf p} }
{-\omega_p^2 + v_F^2 {\bf p}^2 - i\eta } \gamma_0
\frac{1}{({\bf k}-{\bf p})^2} \;\;\;\;\;\;\;\;\;\;
\label{selfm}
\end{eqnarray}
The resolution of (\ref{selfm}) implies that $\Gamma_m$ has to be proportional
to the identity matrix. Moreover, it turns out to be a function independent of
the frequency $\omega_k$ in this approximation. After a little algebra, we
arrive then at the simplified equation
\begin{equation}
\Gamma_m ({\bf 0},0;{\bf k},\omega_k) = 1 + \frac{e_0^2}{2}
\int \frac{d^D p}{(2\pi )^D} \Gamma_m ({\bf 0},0;{\bf p},\omega_k)
\frac{1}{v_F |{\bf p}|} \frac{1}{({\bf k}-{\bf p})^2}
\label{selfcons}
\end{equation}
Eq. (\ref{selfcons}) can be solved by means of an iterative procedure,
expressing the vertex as a power series in the effective coupling
$\lambda_0 = e_0^2/4\pi v_F$,
\begin{equation}
\Gamma_m ({\bf 0},0;{\bf k},\omega_k) =
1 + \sum_{n=1}^{\infty} \lambda_0^n \: \Gamma_m^{(n)} ({\bf k})
\label{pow}
\end{equation}
Assuming that the term $\Gamma_m^{(n)} ({\bf k})$ is proportional to
$1/|{\bf k}|^{n\epsilon }$, the next order can be computed consistently from
(\ref{selfcons}), taking into account that
\begin{eqnarray}
\Gamma_m^{(n+1)} ({\bf k}) & \sim & \frac{1}{2} \int \frac{d^D p}{(2\pi )^D}
\frac{1}{|{\bf p}|^{1+n\epsilon } } \frac{1}{({\bf k}-{\bf p})^2}
\nonumber \\
& = & \frac{(4\pi )^{\epsilon /2}}{16 \pi^{3/2}}
\frac{\Gamma \left(\tfrac{n+1}{2} \epsilon \right)
\Gamma \left(1 - \tfrac{(n+1)\epsilon}{2} \right)
\Gamma \left(\tfrac{1-\epsilon}{2} \right) }
{ \Gamma \left(\tfrac{1+n\epsilon}{2} \right)
\Gamma \left(\tfrac{3 - (n + 2)\epsilon}{2} \right) }
\frac{1}{|{\bf k}|^{(n+1)\epsilon}}
\end{eqnarray}
Thus, the vertex can be written as an expansion
\begin{equation}
\Gamma_m ({\bf 0},0;{\bf k},\omega_k) =
1 + \sum_{n=1}^{\infty} \lambda_0^n
\frac{s_n }{|{\bf k}|^{n\epsilon}}
\label{ser}
\end{equation}
where each order can be obtained from the previous one according to the
recurrence relation
\begin{equation}
s_{n+1} = A_{n+1} (\epsilon) \: s_n
\label{rec}
\end{equation}
with
\begin{equation}
A_{n+1} (\epsilon) =
\frac{(4\pi )^{\epsilon /2}}{4 \sqrt{\pi }}
\frac{\Gamma \left(\tfrac{n+1}{2} \epsilon \right)
\Gamma \left(1 - \tfrac{(n+1)\epsilon}{2} \right)
\Gamma \left(\tfrac{1-\epsilon}{2} \right) }
{ \Gamma \left(\tfrac{1+n\epsilon}{2} \right)
\Gamma \left(\tfrac{3 - (n + 2)\epsilon}{2} \right) }
\end{equation}
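The recurrence (\ref{rec}) is straightforward to iterate numerically. The sketch below (added as an illustration; it works at a small but finite $\epsilon$, whereas extracting the Laurent poles in $\epsilon$ needed to build $Z_m$ requires a series expansion that is not attempted here) uses high-precision arithmetic, mimicking the strategy described later in the text:
\begin{verbatim}
# Iterating s_{n+1} = A_{n+1}(eps) s_n with s_0 = 1, at fixed small eps,
# using mpmath for high-precision Gamma functions.
from mpmath import mp, mpf, gamma, pi, sqrt

mp.dps = 40                       # 40-digit working precision
eps = mpf("0.01")

def A(n, eps):
    """The coefficient A_{n+1}(eps) of the recurrence."""
    pref = (4 * pi) ** (eps / 2) / (4 * sqrt(pi))
    num = (gamma((n + 1) * eps / 2) * gamma(1 - (n + 1) * eps / 2)
           * gamma((1 - eps) / 2))
    den = gamma((1 + n * eps) / 2) * gamma((3 - (n + 2) * eps) / 2)
    return pref * num / den

s = [mpf(1)]                      # s_0 = 1, the tree-level vertex
for n in range(28):
    s.append(A(n, eps) * s[-1])
print("s_1  =", s[1])
print("s_28 =", s[28])
\end{verbatim}
Each factor $A_{n+1}(\epsilon)$ carries a simple pole through $\Gamma \left(\tfrac{n+1}{2}\epsilon\right)$, so that $s_n \sim 1/\epsilon^n$, consistently with the discussion below.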
We observe from (\ref{ser}) and (\ref{rec}) that $\Gamma_m $ develops
higher order poles in the $\epsilon $ parameter as one progresses in the
perturbative expansion. At this point, one has to check the renormalizability
of the theory by ensuring that all the poles can be reabsorbed by means of
a redefinition of the vertex like that in (\ref{zm}). In terms of the
dimensionless coupling
\begin{equation}
\lambda = \frac{e^2}{4\pi v_F}
\end{equation}
the renormalization factor $Z_m$ must have the structure
\begin{equation}
Z_m = 1 + \sum_{i=1}^{\infty} \frac{d_i (\lambda )}{\epsilon^i}
\end{equation}
with residues $d_i$ depending only on $\lambda $.
In the present case, it may be actually seen that the renormalized vertex
$\Gamma_{m,{\rm ren}}$ can be made finite in the limit
$\epsilon \rightarrow 0$, with an appropriate choice of the functions
$d_i (\lambda )$. We obtain for instance for the first perturbative orders
\begin{eqnarray}
d_1 (\lambda ) & = & - \frac{1}{\pi} \lambda - \frac{1}{2\pi^2} \: \lambda^2
- \frac{1}{\pi^3} \left(\frac{2}{3}-\frac{\pi ^2}{36}\right) \: \lambda^3
- \frac{1}{12\pi^4} \left(15-\pi ^2\right) \: \lambda^4
\nonumber \\
& & - \frac{1}{400\pi^5} \left(1120-100 \pi ^2+\pi ^4\right) \: \lambda^5
- \frac{1}{\pi^6} \left(7-\frac{7 \pi^2}{9}+\frac{23 \pi ^4}{1440}\right) \: \lambda^6
+ \ldots \\
d_2 (\lambda ) & = & \frac{1}{2\pi^2} \: \lambda^2 + \frac{1}{2\pi^3} \: \lambda^3
+ \frac{1}{72\pi^4} \left(57-2 \pi ^2\right) \: \lambda^4
+ \frac{1}{72\pi^5} \left(114-7 \pi ^2\right) \: \lambda^5
\nonumber \\
& & + \frac{1}{64800\pi^6} \left(236340-20100 \pi ^2+187 \pi ^4\right) \: \lambda^6 + \ldots
\\
d_3 (\lambda ) & = & - \frac{1}{6\pi^3} \: \lambda^3
- \frac{1}{4\pi^4} \: \lambda^4 - \frac{1}{72\pi^5} \left(33-\pi ^2\right) \: \lambda^5
- \frac{1}{\pi^6} \left(\frac{47}{48}-\frac{\pi ^2}{18}\right) \: \lambda^6 + \ldots
\\
d_4 (\lambda ) & = & \frac{1}{24\pi^4} \: \lambda^4
+ \frac{1}{12\pi^5} \: \lambda^5 + \frac{1}{432\pi^6} \left(75-2 \pi ^2\right) \: \lambda^6
+ \ldots \\
d_5 (\lambda ) & = & - \frac{1}{120\pi^5} \: \lambda^5
- \frac{1}{48\pi^6} \: \lambda^6 + \ldots \\
d_6 (\lambda ) & = & \frac{1}{720\pi^6} \: \lambda^6 + \ldots
\end{eqnarray}
An important point about the residues $d_i (\lambda )$ is that they can be
chosen without having any dependence on the momentum ${\bf k}$ of the vertex.
This is a signature of the renormalizability of the theory, by which we are
able to render it independent of the high-energy cutoff, redefining just a
finite number of local operators.
On the other hand, the renormalization factor $Z_m$ contains important
information about the behavior of the theory in the low-energy limit. This
stems from the anomalous scale dependence that the vertex gets as a consequence
of its renormalization\cite{amit}. The bare unrenormalized theory does not know
about the momentum scale $\mu $, but the factor $Z_m$ endows
$\Gamma_{m,{\rm ren}}$ with an anomalous scaling of the form
\begin{equation}
\Gamma_{m, {\rm ren}} \sim \mu^{\gamma_m}
\end{equation}
The anomalous dimension $\gamma_m $ can be thus obtained as
\begin{equation}
\gamma_m = \frac{\mu }{Z_m} \frac{\partial Z_m}{\partial \mu}
\label{adm}
\end{equation}
The renormalization factor $Z_m$ is given by an infinite series of poles in
the $\epsilon $ parameter, and then it is highly nontrivial that the
computation of $\gamma_m $ from (\ref{adm}) may provide a finite result in
the limit $\epsilon \rightarrow 0$. At this stage, the dependence of $Z_m$
on $\mu $ arises from the scaling (\ref{se0}), since no self-energy corrections
are taken into account yet. That leads to the equation
\begin{equation}
\left( 1 + \sum_{i=1}^{\infty} \frac{d_i (\lambda )}{\epsilon^i} \right) \gamma_m
= - \lambda \sum_{i=0}^{\infty} \frac{d}{d\lambda }d_{i+1}(\lambda ) \: \frac{1}{\epsilon^i}
\label{altm}
\end{equation}
The term with no poles in the $\epsilon $ parameter already gives the result
for the anomalous dimension
\begin{equation}
\gamma_m = -\lambda \frac{d }{d \lambda}d_1(\lambda )
\label{anomm}
\end{equation}
But it remains however to ensure that the rest of the poles identically cancel
in Eq. (\ref{altm}), which requires the fulfillment of the hierarchy of equations
\begin{equation}
\frac{d }{d \lambda}d_{i+1}(\lambda ) = d_i (\lambda ) \: \frac{d }{d \lambda}d_1(\lambda )
\label{rr}
\end{equation}
Quite remarkably, we have checked that the set of equations (\ref{rr}) is
satisfied, up to the order we have been able to compute the residues
$d_i (\lambda )$ in the ladder approximation. In this task, we have managed to
obtain numerically the first residues up to order $\lambda^{28}$, with a
precision of 36 digits. Besides verifying the hierarchy (\ref{rr}) at this
level of approximation, we have also analyzed the trend of the power series for
such large perturbative orders, finding that a singularity must exist at a
certain critical coupling. Concentrating in particular on the function
$d_1 (\lambda )$, we have found that the terms in its perturbative expansion
\begin{equation}
d_1 (\lambda ) = \sum_{n=1}^{\infty} d_1^{(n)} \lambda^n
\end{equation}
approach a geometric sequence at large $n$. The values we have obtained for
the coefficients $d_1^{(n)}$ are represented in Fig. \ref{three} up to order
$n = 28$. From the precise numerical computation of the coefficients, it
is actually possible to verify that their ratio follows very accurately the
asymptotic behavior
\begin{equation}
\frac{d_1^{(n)}}{d_1^{(n-1)}} = d + \frac{d'}{n} + \frac{d''}{n^2} + \frac{d'''}{n^3} + \ldots
\label{scn}
\end{equation}
From the estimate of the limiting value $d$, we have obtained the
finite radius of convergence of the series, which is reached at the coupling constant
\begin{equation}
\lambda_c = \frac{1}{d} \approx 1.27324
\label{crit}
\end{equation}
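The extrapolation leading to (\ref{crit}) can be reproduced with standard tools. As a sketch (an added illustration: the array \texttt{d1} of coefficients is assumed to have been computed beforehand to high order and precision, as described above; here it is replaced by a toy sequence with a known radius of convergence), one fits the ratios of consecutive coefficients to the ansatz (\ref{scn}) and reads off the intercept $d$:
\begin{verbatim}
# Ratio extrapolation: fit d1[n]/d1[n-1] to d + d'/n + d''/n^2 + d'''/n^3
# by linear least squares and estimate lambda_c = 1/d.
import numpy as np

def lambda_c_estimate(d1):
    """d1: coefficients d_1^{(n)}, n = 1..N, as a 1D array."""
    n = np.arange(2, len(d1) + 1, dtype=float)
    r = d1[1:] / d1[:-1]                     # consecutive ratios
    X = np.vstack([np.ones_like(n), 1/n, 1/n**2, 1/n**3]).T
    coef, *_ = np.linalg.lstsq(X, r, rcond=None)
    return 1.0 / coef[0]                     # lambda_c = 1/d

# Toy check on a sequence with known radius of convergence 1.27324:
d = 1.0 / 1.27324
d1_toy = np.array([d**k * (1.0 + 2.0 / k) for k in range(1, 29)])
print("lambda_c estimate:", lambda_c_estimate(d1_toy))
\end{verbatim}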
\begin{figure}
\vspace{0.5cm}
\begin{center}
\mbox{\epsfxsize 7cm \epsfbox{d1n3D.eps}}
\end{center}
\caption{Plot of the absolute value of the coefficients $d_1^{(n)}$ in the
expansion of $d_1 (\lambda )$ as a power series of the renormalized coupling
$\lambda $.}
\label{three}
\end{figure}
The coupling $\lambda_c $ corresponds to a point where the anomalous
dimension $\gamma_m$ diverges, which signals in turn the development of
a singular susceptibility with respect to the mass parameter. That is,
$\lambda_c $ has to be viewed as a critical coupling above which the
electron system enters a new phase with dynamical generation of
mass\cite{nomu}. This characterization is also supported by the fact that, in
the case of the 2D Dirac semimetals, a similar renormalization in the ladder
approximation\cite{jhep} has shown to lead to a value of the critical coupling
that coincides very precisely with the point for dynamical mass generation
determined from the resolution of the mass gap equation\cite{gama}. It is
indeed remarkable that these quite different approaches give an identical
result for the point of chiral symmetry breaking in the electron system. This
reassures the predictability of our renormalization approach, that moreover
allows to extend the analysis beyond the ladder approximation as we discuss
in what follows.
\subsection{Electron self-energy corrections to the ladder approximation}
The most relevant way to improve the ladder approximation corresponds to
including electron self-energy corrections in the diagrams encoded in the
equation of Fig. \ref{two}. With these additional contributions we may account
for an important feature of the electron system, which is the growth of the
Fermi velocity in the low-energy limit. This leads to a reduction of the
effective interaction strength, from which we can expect the need to push the
nominal coupling to larger values in order to reach the phase with dynamical
mass generation.
A fully consistent approach can be devised by considering the self-energy
contributions in the same ladder approximation discussed in Sec. II. In this
case we know that the electron self-energy coincides with the lowest-order
result given by Eq. (\ref{loop}). This can be translated into a redefinition
of $v_F$, leading in the fermion propagator to an effective Fermi velocity
\begin{equation}
\tilde{v}_F ({\bf k})
= v_F + \frac{e_0^2}{4\pi } B(\epsilon ) \frac{1}{|{\bf k}|^{\epsilon}}
\label{til}
\end{equation}
with
\begin{equation}
B(\epsilon ) = \frac{1}{(4\pi)^{1-\epsilon/2}}
\frac{\Gamma \left(\tfrac{1}{2}\epsilon\right)
\Gamma \left(2 - \tfrac{\epsilon}{2}\right)
\Gamma \left(\tfrac{1}{2}-\tfrac{\epsilon}{2}\right)}
{\Gamma \left(\tfrac{5}{2}-\epsilon \right)}
\end{equation}
The electron self-energy corrections are then accounted for automatically after
replacing the parameter $v_F$ by $\tilde{v}_F ({\bf k})$ in the self-consistent
equation (\ref{selfcons}) for the mass vertex. It can be easily checked that
the perturbative expansion of $1/\tilde{v}_F ({\bf k})$ introduced in that
equation corresponds to the iteration of self-energy rainbow diagrams
correcting the fermion propagators in the original ladder approximation.
As in the previous subsection, we can assume that the mass vertex admits now
the expansion
\begin{equation}
\Gamma_m ({\bf 0},0;{\bf k},\omega_k) =
1 + \sum_{n=1}^{\infty} \lambda_0^n
\frac{t_n }{|{\bf k}|^{n\epsilon}}
\label{ser2}
\end{equation}
We can apply a recurrent procedure as before to compute the different orders in
(\ref{ser2}), with a main difference in that now each $t_n$ is going to depend
on all the lower orders in the expansion. This is so as the $n$-th order, when
inserted at the improved right-hand-side of (\ref{selfcons}), can give rise to
contributions to any higher order as the factor $1/\tilde{v}_F ({\bf k})$ is
also expanded. At the end, we arrive at the result that
\begin{equation}
t_{n+1} = A_{n+1}(\epsilon) \sum_{l=0}^{n} \left(-B(\epsilon) \right)^{n-l} t_l
\end{equation}
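The improved recurrence can be iterated in the same fashion as before. A sketch (illustrative, at fixed small $\epsilon$, with $A_{n+1}(\epsilon)$ as in the previous snippet and $B(\epsilon)$ taken from Eq. (\ref{til})):
\begin{verbatim}
# Iterating t_{n+1} = A_{n+1}(eps) * sum_l (-B(eps))^(n-l) t_l with t_0 = 1.
from mpmath import mp, mpf, gamma, pi, sqrt

mp.dps = 40
eps = mpf("0.01")

def A(n, eps):
    """Coefficient of the bare ladder recurrence (as in the previous sketch)."""
    pref = (4 * pi) ** (eps / 2) / (4 * sqrt(pi))
    num = (gamma((n + 1) * eps / 2) * gamma(1 - (n + 1) * eps / 2)
           * gamma((1 - eps) / 2))
    den = gamma((1 + n * eps) / 2) * gamma((3 - (n + 2) * eps) / 2)
    return pref * num / den

def B(eps):
    """B(eps) entering the effective Fermi velocity of Eq. (til)."""
    return (gamma(eps / 2) * gamma(2 - eps / 2) * gamma((1 - eps) / 2)
            / gamma(mpf(5) / 2 - eps)) / (4 * pi) ** (1 - eps / 2)

t = [mpf(1)]                      # t_0 = 1
for n in range(28):
    t.append(A(n, eps) * sum((-B(eps)) ** (n - l) * t[l] for l in range(n + 1)))
print("t_1  =", t[1])
print("t_28 =", t[28])
\end{verbatim}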
We have to check again that the vertex can be made finite in the limit
$\epsilon \rightarrow 0$ by a suitable multiplicative renormalization. In this
case, we have first to express all quantities in terms of the
renormalized Fermi velocity $v_R$ arising after subtraction of the pole at the
right-hand-side of (\ref{til}). $v_R$ is defined by
\begin{equation}
v_F = Z_v v_R
\end{equation}
where $Z_v$ has already appeared in (\ref{zv}). The renormalized
coupling is given now by
\begin{equation}
\lambda = \frac{e^2}{4\pi v_R}
\end{equation}
When expressed in terms of $\lambda $, the new renormalization factor $Z_m$
must have the structure
\begin{equation}
Z_m = 1 + \sum_{i=1}^{\infty} \frac{\tilde{d}_i (\lambda )}{\epsilon^i}
\end{equation}
The first terms in the expansions of the residues $\tilde{d}_i (\lambda )$
can be obtained analytically, with the result that
\begin{eqnarray}
\tilde{d}_1 (\lambda ) & = & - \frac{1}{\pi} \lambda - \frac{1}{9\pi^2} \: \lambda^2
- \frac{76-5 \pi ^2}{324 \pi ^3} \lambda^3
- \frac{1908 -630 \zeta (3)-91 \pi ^2}{3888 \pi ^4} \lambda^4
\nonumber \\
& & - \frac{168 (386-91 \zeta (3))-4004 \pi ^2-33 \pi ^4}{58320 \pi ^5} \: \lambda^5
\nonumber \\
& & - \frac{35 \pi ^2 (615 \zeta (3)-3014)-15 (28028 \zeta (3)+9765 \zeta (5)-94464)+632
\pi ^4}{524880 \pi ^6} \: \lambda^6 + \ldots \;\;\;\;\; \label{df} \\
\tilde{d}_2 (\lambda ) & = & \frac{1}{6 \pi ^2} \: \lambda^2
+ \frac{5}{81 \pi ^3} \: \lambda^3
+ \frac{5 \left(16-\pi ^2\right)}{648 \pi ^4} \lambda^4
+ \frac{14876 -4410 \zeta (3)-737 \pi ^2}{58320 \pi ^5} \: \lambda^5
\nonumber \\
& & + \frac{604904 -141204 \zeta (3)-38562 \pi ^2-139 \pi ^4}{1049760 \pi ^6}
\: \lambda^6 + \ldots \\
\tilde{d}_3 (\lambda ) & = & \frac{1}{54 \pi ^3} \: \lambda^3
+ \frac{1}{162 \pi ^4} \: \lambda^4
+ \frac{218-15 \pi ^2}{14580 \pi ^5} \lambda^5
+ \frac{34492 -11970 \zeta (3)-1629 \pi ^2}{1049760 \pi ^6} \: \lambda^6
+ \ldots \\
\tilde{d}_4 (\lambda ) & = & \frac{1}{216 \pi ^4} \: \lambda^4
+ \frac{1}{810 \pi ^5} \: \lambda^5
+ \frac{1792-135 \pi ^2}{524880 \pi ^6} \: \lambda^6 + \ldots \\
\tilde{d}_5 (\lambda ) & = & \frac{1}{648 \pi ^5} \: \lambda^5
+ \frac{1}{3240 \pi ^6} \: \lambda^6 + \ldots \\
\tilde{d}_6 (\lambda ) & = & \frac{7}{11664 \pi ^6} \: \lambda^6 + \ldots
\label{dl}
\end{eqnarray}
The important point is again that the functions $\tilde{d}_i (\lambda )$ can
be chosen with no nonlocal dependence (in fact, with no dependence) on the
external momentum of the vertex, which means that the renormalization can be
accomplished with the redefinition of a finite number of purely local
operators.
The expansion of the residue $\tilde{d}_1 (\lambda )$ allows us to estimate
the effect of the self-energy corrections on the anomalous dimension of the
vertex. This is given as before by Eq. (\ref{adm}), while now we have to
account for the change in the scaling of the renormalized coupling $\lambda $.
This can be obtained from Eqs. (\ref{se0}) and (\ref{se1}), with the result
that
\begin{equation}
\mu \frac{\partial }{\partial \mu } \lambda = - \epsilon \lambda
+ \frac{2}{3\pi } \lambda^2
\end{equation}
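At $\epsilon = 0$ this flow integrates to $\lambda(\mu) = \lambda_0 / [1 + (2\lambda_0/3\pi) \ln(\mu_0/\mu)]$, making explicit how the growth of the Fermi velocity weakens the effective coupling in the infrared. A short sketch (illustrative initial value):
\begin{verbatim}
# Flow of lambda = e^2 / (4 pi v_R) at eps = 0:
#   d lambda / d ln(mu) = (2 / 3 pi) lambda^2.
import numpy as np

lam0 = 1.8                                   # coupling at the scale mu_0
for mu_ratio in np.logspace(0, -4, 5):       # mu / mu_0
    lam = lam0 / (1.0 + (2.0 * lam0 / (3.0 * np.pi)) * np.log(1.0 / mu_ratio))
    print(f"mu/mu_0 = {mu_ratio:.0e}   lambda = {lam:.3f}")
\end{verbatim}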
The computation of the anomalous dimension proceeds then as
\begin{equation}
\gamma_m = \frac{\mu }{Z_m}
\frac{\partial \lambda}{\partial \mu}
\frac{\partial Z_m}{\partial \lambda}
\label{chr}
\end{equation}
Eq. (\ref{chr}) provides an expression for $\gamma_m $ that contains poles
of all orders in the $\epsilon $ parameter. As in the previous subsection, it
is therefore highly nontrivial that a finite result may be obtained for the
anomalous dimension in the limit $\epsilon \rightarrow 0$. From (\ref{chr}),
the equation that we have to inspect in this case is
\begin{equation}
\left( 1 + \sum_{i=1}^{\infty } \frac{\tilde{d}_i (\lambda)}{\epsilon^i} \right) \gamma_m
= - \lambda \sum_{i=0}^{\infty } \frac{1}{\epsilon^i} \frac{d}{d\lambda } \tilde{d}_{i+1} (\lambda)
+ \frac{2}{3\pi } \lambda^2
\sum_{i=1}^{\infty } \frac{1}{\epsilon^i} \frac{d}{d\lambda } \tilde{d}_i (\lambda)
\label{insp}
\end{equation}
The zeroth order in $\epsilon $ leads to the equation for the anomalous
dimension
\begin{equation}
\gamma_m = -\lambda \frac{d}{d\lambda } \tilde{d}_1 (\lambda)
\label{gamm}
\end{equation}
However, this derivation makes sense only when the cancellation of all the
poles in (\ref{insp}) can be guaranteed, which implies the set of equations
\begin{equation}
\frac{d}{d\lambda } \tilde{d}_{i+1} (\lambda)
- \tilde{d}_i (\lambda) \: \frac{d}{d\lambda } \tilde{d}_1 (\lambda)
- \frac{2}{3\pi } \lambda \: \frac{d}{d\lambda } \tilde{d}_i (\lambda) = 0
\label{rem}
\end{equation}
It is reassuring to see that the analytic expressions given in
(\ref{df})-(\ref{dl}) satisfy the hierarchy of equations (\ref{rem}). A more
comprehensive check can be performed however with the numerical computation of
the expansions of the first residues, that we have carried out up to order
$\lambda^{30}$ and with a precision of 40 digits. In this way, we have been
able to certify that the conditions (\ref{rem}) are also fulfilled at that
level of approximation.
The residue $\tilde{d}_1 (\lambda)$ has in particular an expansion
\begin{equation}
\tilde{d}_1 (\lambda ) = \sum_{n=1}^{\infty} \tilde{d}_1^{(n)} \lambda^n
\end{equation}
with coefficients that have been represented in Fig. \ref{four} up to order
$n = 30$. We observe that the $\tilde{d}_1^{(n)}$ series approaches a
geometric sequence at large $n$. The ratio between consecutive orders can
indeed be fitted with great accuracy by a behavior like (\ref{scn}),
allowing us to compute the asymptotic value
\begin{equation}
\frac{\tilde{d}_1^{(n)}}{\tilde{d}_1^{(n-1)}} \rightarrow \tilde{d}
\end{equation}
in the limit $n \rightarrow \infty$.
We have obtained in this way the finite radius of convergence of the
perturbative series
\begin{equation}
\lambda_c = \frac{1}{\tilde{d}} \approx 1.8660
\label{crit2}
\end{equation}
with an error estimated to be (as in (\ref{crit})) in the last digit.
\begin{figure}
\vspace{0.5cm}
\begin{center}
\mbox{\epsfxsize 7cm \epsfbox{dbar1n3D.eps}}
\end{center}
\caption{Plot of the absolute value of the coefficients $\tilde{d}_1^{(n)}$ in the
expansion of $\tilde{d}_1 (\lambda )$ as a power series of the renormalized
coupling $\lambda $.}
\label{four}
\end{figure}
As remarked before, the critical coupling $\lambda_c$ corresponds to the point
at which the anomalous dimension $\gamma_m$ diverges. This is in turn the
signature of the development of a nonvanishing expectation value of the mass
operator (\ref{mass}). Thus we see that, even after taking into account the
effect of the Fermi velocity renormalization, there still remains a
strong-coupling phase in the electron system characterized by the dynamical
generation of mass for the Dirac fermions. The value of the critical coupling
(\ref{crit2}) is appreciably larger than that obtained in the previous
subsection, which is consistent with the fact that the self-energy corrections
effectively reduce the interaction strength. The situation is in this respect
rather similar to the case of the 2D Dirac semimetals, where the effective
growth of the Fermi velocity at low energies has been invoked as the reason
why no gap is observed in the electronic spectrum of graphene\cite{sabio,qed},
even in the free-standing samples prepared at very low doping levels about the
Dirac point\cite{exp2}.
\section{Large-$N$ approximation for 3D Dirac and Weyl semimetals}
We deal next with an approach complementary to the ladder approximation, paying
attention to the effect of the photon self-energy corrections on the electron
quasiparticles. This will take into account the renormalization of the electron
charge, which is a relevant feature in the field theory of 3D semimetals as well
as in the fully relativistic 3D QED\cite{landau}. In order to include a
consistent set of quantum corrections, we will dress the interaction with the
sum of bubble diagrams obtained by iteration of the electron-hole polarization,
as represented in Fig. \ref{five}. This approximation can be then considered as
the result of taking the leading order in a $1/N$ expansion, providing a
resolution of the theory in a well-defined limit as $N \rightarrow \infty$.
\begin{figure}[h]
\begin{center}
\mbox{\epsfxsize 2.0cm \epsfbox{fermion.eps}} \hspace{0.5cm} {\Large $=$}
\hspace{0.5cm} \mbox{\epsfxsize 2.0cm \epsfbox{fermion0.eps}}
\hspace{0.5cm} {\Large $+$} \hspace{0.5cm}
\mbox{\epsfxsize 4.8cm \epsfbox{fermionself0.eps}} \\ \vspace{1cm}
\raisebox{0.2cm}{\epsfxsize 2.0cm \epsfbox{photon.eps}} \hspace{0.5cm}
\raisebox{0.16cm}{\Large $=$}
\hspace{0.5cm} \raisebox{0.2cm}{\epsfxsize 2.0cm \epsfbox{photon0.eps}}
\hspace{0.5cm}
\raisebox{0.16cm}{\Large $+$} \hspace{0.5cm}
\raisebox{-0.6cm}{\epsfxsize 4.8cm \epsfbox{photonself0.eps}}
\end{center}
\caption{Diagrammatic equations in the large-$N$ approximation. The thick(thin) straight line stands for the dressed(free) Dirac fermion propagator and the thick(thin) wiggly line stands for the dressed(free) interaction propagator.}
\label{five}
\end{figure}
The corrections to the bare interaction are represented by the polarization
$\Pi ({\bf q},\omega_q )$, from which the full interaction propagator
$D ({\bf q},\omega_q )$ is obtained according to
\begin{equation}
D ({\bf q},\omega_q )^{-1} = D_0 ({\bf q})^{-1} - \Pi ({\bf q},\omega_q )
\end{equation}
In the present approximation, the polarization is given by the dominant
contribution in the large-$N$ limit
\begin{equation}
i\Pi ({\bf q},\omega_q ) = N e_0^2
\int \frac{d^D p}{(2\pi )^D} \frac{d\omega_p }{2\pi }
{\rm Tr} \left[ \gamma_0 G_0({\bf p} + {\bf q}, \omega_p + \omega_q)
\gamma_0 G_0({\bf p},\omega_p ) \right]
\label{pi}
\end{equation}
with the free Dirac propagator
\begin{equation}
G_0({\bf p},\omega_p ) =
\frac{-\gamma_0 \omega_p + v_F \boldsymbol{\gamma} \cdot {\bf p} }
{-\omega_p^ 2 + v_F^2 {\bf p}^2 - i\eta }
\end{equation}
Computing the integrals in analytic continuation to $D = 3 - \epsilon $,
we get the result
\begin{equation}
\Pi ({\bf q},\omega_q ) =
- N C(\epsilon ) \frac{e_0^2}{2\pi^2 v_F}
\frac{{\bf q}^2}{({\bf q}^2 - \omega_q^2/v_F^2)^{\epsilon/2}}
\label{plrz}
\end{equation}
with
\begin{equation}
C(\epsilon ) = (4\pi )^{\epsilon/2}
\Gamma \left( \tfrac{1}{2} \epsilon \right)
\frac{\Gamma \left(2 - \tfrac{\epsilon}{2} \right)^2}{\Gamma (4 - \epsilon) }
\end{equation}
The polarization (\ref{plrz}) diverges in the limit $\epsilon \rightarrow 0$,
which points to the need to renormalize the scalar field $\phi $ mediating the
Coulomb interaction. As already mentioned, the gauge invariance of the action
(\ref{s}) implies that $Z_e Z_{\phi } = 1$, which means that the electron
charge is consequently renormalized. In the present approximation, these
effects can be taken into account at once in the large-$N$ expression of the
electron self-energy
\begin{eqnarray}
i\Sigma ({\bf k},\omega_k ) & = & - \int \frac{d^D p}{(2\pi )^D} \frac{d\omega_p }{2\pi } \:
\gamma_0 \frac{ -\gamma_0 (\omega_k - \omega_p ) +
v_F \mbox{\boldmath $\gamma $} \cdot ({\bf k} - {\bf p})}
{ - (\omega_k - \omega_p)^2 + v_F^2 ({\bf k} - {\bf p})^2 - i\eta} \gamma_0 \:
\nonumber \\
& & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\times \frac{e_0^2}
{{\bf p}^2 \left(1 + N C(\epsilon ) \frac{e_0^2}{2\pi^2 v_F}
\frac{1}{({\bf p}^2 - \omega_p^2/v_F^2)^{\epsilon/2}} \right)}
\label{selfn}
\end{eqnarray}
Thus, we can determine the electron charge renormalization by requiring
a finite limit of the effective $e$-$e$ interaction in (\ref{selfn}) as
$\epsilon \rightarrow 0$. This can be achieved by absorbing the pole in
(\ref{plrz}) into the bare charge $e_0$ according to the redefinition
\begin{equation}
\frac{1}{e^2} = \frac{\mu^\epsilon }{e_0^2} + \frac{N}{6\pi^2 v_F} \frac{1}{\epsilon }
\label{eren}
\end{equation}
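For completeness, the counterterm in (\ref{eren}) can be traced to the
small-$\epsilon$ expansion of the function $C(\epsilon )$. Using
$\Gamma (\epsilon/2) = 2/\epsilon - \gamma_E + O(\epsilon )$, one finds
\begin{equation}
C(\epsilon ) = \frac{1}{3\epsilon } + O(\epsilon^0 )
\end{equation}
so that the divergent part of the polarization prefactor
$N C(\epsilon ) e_0^2/2\pi^2 v_F$ is $N e_0^2/6\pi^2 v_F \epsilon $, which is
precisely the pole absorbed into the bare charge in (\ref{eren}).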
In terms of the renormalized charge $e$, the electron self-energy becomes
\begin{eqnarray}
i\Sigma ({\bf k},\omega_k ) & = & \int \frac{d^D p}{(2\pi )^D} \frac{d\omega_p }{2\pi } \:
\frac{ \gamma_0 (\omega_k - \omega_p ) + v_F \mbox{\boldmath $\gamma $} \cdot ({\bf k} - {\bf p})}
{ - (\omega_k - \omega_p)^2 + v_F^2 ({\bf k} - {\bf p})^2 - i\eta} \:
\nonumber \\
& & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\times \frac{\mu^\epsilon e^2}
{{\bf p}^2 \left(1 - \frac{Ne^2}{6\pi^2 v_F} \frac{1}{\epsilon } +
C(\epsilon ) \frac{Ne^2}{2\pi^2 v_F}
\frac{\mu^\epsilon}{({\bf p}^2 - \omega_p^2/v_F^2 )^{\epsilon/2}} \right)}
\label{selfr}
\end{eqnarray}
The renormalization program has in any case to be completed by accounting for the
divergences that arise when performing the integrals in (\ref{selfr}).
In order to make sense of the theory in the large-$N$ limit, we can assume
formally that $Ne^2 / 2\pi^2 v_F \sim O(N^0 )$. The difference between $v_F$
and its renormalized counterpart $v_R$ is then a quantity of order $\sim 1/N$,
and the electron self-energy gets a natural dependence on the effective
coupling
\begin{equation}
g = \frac{Ne^2}{2\pi^2 v_R}
\label{gdef}
\end{equation}
We may actually resort to a perturbative expansion
\begin{eqnarray}
\Sigma ({\bf k},i\overline{\omega}_k ) & = &
\frac{2\pi^2 v_F}{N} \mu^\epsilon \int \frac{d^D p}{(2\pi )^D} \frac{d\overline{\omega}_p }{2\pi } \:
\frac{i \gamma_0 (\overline{\omega}_k - \overline{\omega}_p )
+ v_F \mbox{\boldmath $\gamma $} \cdot ({\bf k} - {\bf p})}
{ v_F^2 ({\bf k} - {\bf p})^2 + (\overline{\omega}_k - \overline{\omega}_p)^2} \:
\frac{1}{{\bf p}^2} \nonumber \\
& & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\sum_{n=0}^{\infty} (-1)^n g^{n+1} \left(- \frac{1}{3\epsilon } + C(\epsilon )
\frac{\mu^\epsilon}{({\bf p}^2 + \overline{\omega}_p^2/v_F^2)^{\epsilon/2}} \right)^n
\label{pexp}
\end{eqnarray}
It may be seen from (\ref{pexp}) that, to order $g^n$, the self-energy can
develop divergences as large as $\sim 1/\epsilon^n $. If the theory is
renormalizable, it must be possible to absorb these poles into the definition
of suitable renormalization factors in the full fermion propagator (\ref{g_1}).
We look then for the large-$N$ limit of $Z_\psi $ and $Z_v$, which must have
the general structure
\begin{eqnarray}
Z_\psi & = & 1 + \frac{1}{N}\sum_{i=1}^{\infty } \frac{c_i (g)}{\epsilon^i}
+ O \left( \frac{1}{N^2} \right)
\label{zpn} \\
Z_v & = & 1 + \frac{1}{N}\sum_{i=1}^{\infty } \frac{b_i (g)}{\epsilon^i}
+ O \left( \frac{1}{N^2} \right)
\label{zvn}
\end{eqnarray}
with coefficients $b_i$ and $c_i$ depending only on the renormalized coupling
$g$.
It is not difficult to compute the integrals that are needed to get in general
the $n$-th order of $\Sigma ({\bf k},i\overline{\omega}_k )$. We are interested
in terms that are linear in $\overline{\omega}_k$ and ${\bf k}$, and the only
technical point is that these contributions must also be regularized in the
infrared, using for instance the external frequency or momentum themselves, or
some other suitable parameter. It can be checked that the choice of a
particular infrared regulator is not important when extracting the high-energy
divergences as $\epsilon \rightarrow 0$. For this reason, we have chosen to
introduce a fictitious Dirac mass $\nu $, which keeps well-behaved both types
of contributions linear in $\overline{\omega}_k$ and
${\bf k}$. We rely then on the results for the generic integrals
\begin{eqnarray}
I_n & = & v_F \int \frac{d^D p}{(2\pi )^D} \frac{d\overline{\omega}_p }{2\pi } \:
\frac{i \gamma_0 (\overline{\omega}_k - \overline{\omega}_p ) }
{ v_F^2 ({\bf k} - {\bf p})^2 + (\overline{\omega}_k - \overline{\omega}_p)^2 + v_F^2 \nu^2 } \:
\frac{1}{{\bf p}^2} \frac{1}{({\bf p}^2 + \overline{\omega}_p^2/v_F^2)^{n\epsilon/2}}
\nonumber \\
& \approx & i \gamma_0 \overline{\omega}_k \frac{1}{(4\pi )^{2-\epsilon/2} }
\frac{n\epsilon }{1-\epsilon }
\frac{\Gamma \left(\tfrac{n+1}{2} \epsilon \right)
\Gamma \left(1-\tfrac{n+1}{2}\epsilon \right) }
{\Gamma \left(2 - \tfrac{\epsilon}{2} \right)} \frac{1}{|\nu |^{(n+1)\epsilon }}
\label{in}
\end{eqnarray}
and
\begin{eqnarray}
J_n & = & v_F \int \frac{d^D p}{(2\pi )^D} \frac{d\overline{\omega}_p }{2\pi } \:
\frac{ v_F \mbox{\boldmath $\gamma $} \cdot ({\bf k} - {\bf p})}
{ v_F^2 ({\bf k} - {\bf p})^2 + (\overline{\omega}_k - \overline{\omega}_p)^2 + v_F^2 \nu^2 } \:
\frac{1}{{\bf p}^2} \frac{1}{({\bf p}^2 + \overline{\omega}_p^2/v_F^2)^{n\epsilon/2}}
\nonumber \\
& \approx & v_F \mbox{\boldmath $\gamma $} \cdot {\bf k} \frac{2}{(4\pi )^{2-\epsilon/2} }
\frac{1 + (1-\epsilon) \left(1+\tfrac{n}{2}\epsilon \right) }{(1-\epsilon)(3-\epsilon) }
\frac{\Gamma \left(\tfrac{n+1}{2} \epsilon \right)
\Gamma \left(1-\tfrac{n+1}{2}\epsilon \right) }
{\Gamma \left(2 - \tfrac{\epsilon}{2} \right)} \frac{1}{|\nu |^{(n+1)\epsilon }}
\label{jn}
\end{eqnarray}
With the help of (\ref{in}) and (\ref{jn}), one can obtain analytically for
instance the first terms in the expansions of the residues in (\ref{zpn}) and
(\ref{zvn}), by imposing that the renormalized fermion propagator becomes
finite in the limit $\epsilon \rightarrow 0$. We find in this way
\begin{eqnarray}
c_1 (g ) & = & - \frac{1}{24} g^2 - \frac{1}{162} \: g^3
- \frac{5}{5184} \: g^4
- \left( \frac{1}{6480} + \frac{\zeta (3)}{6480} \right) \: g^5
\nonumber \\
& & - \left( \frac{7}{279936}+\frac{\pi ^4}{2799360}+ \frac{\zeta (3)}{34992} \right) \: g^6
+ \ldots \label{cf} \\
c_2 (g ) & = & - \frac{1}{108} \: g^3 - \frac{1}{648} \: g^4
- \frac{1}{3888} \: g^5
- \left( \frac{1}{23328}+\frac{\zeta (3)}{23328} \right) \: g^6
+ \ldots \\
c_3 (g ) & = & - \frac{1}{432} \: g^4
- \frac{1}{2430} \: g^5 - \frac{5}{69984} \: g^6
+ \ldots \\
c_4 (g ) & = & - \frac{1}{1620} \: g^5
- \frac{1}{8748} \: g^6
+ \ldots \\
c_5 (g ) & = & - \frac{1}{5832} \: g^6
+ \ldots
\label{cl}
\end{eqnarray}
and for the Fermi velocity renormalization
\begin{eqnarray}
b_1 (g ) & = & - c_1 (g) - \frac{1}{3} g - \frac{1}{72} \: g^2
- \frac{1}{324} \: g^3
- \left( \frac{1}{1728}+\frac{\zeta (3)}{1296} \right) \: g^4
\nonumber \\
& & - \left( \frac{1}{9720}+\frac{\pi ^4}{583200}+\frac{\zeta (3)}{19440} \right) \: g^5
\nonumber \\
& & - \left( \frac{5}{279936}+\frac{\pi^4}{8398080}
+\frac{\zeta (3)}{69984}+\frac{\zeta (5)}{23328} \right) \: g^6
+ \ldots \label{bf} \\
b_2 (g ) & = & - c_2 (g) - \frac{1}{18} \: g^2 - \frac{1}{324} \: g^3
- \frac{1}{1296} \: g^4
- \left( \frac{1}{6480}+\frac{\zeta (3)}{4860} \right) \: g^5
\nonumber \\
& & - \left( \frac{1}{34992}+\frac{\pi ^4}{2099520}+\frac{\zeta (3)}{69984} \right) \: g^6
+ \ldots \\
b_3 (g ) & = & - c_3 (g) - \frac{1}{81} \: g^3
- \frac{1}{1296} \: g^4 - \frac{1}{4860} \: g^5
- \left( \frac{1}{23328}+\frac{\zeta (3)}{17496} \right) \: g^6
+ \ldots \\
b_4 (g ) & = & - c_4 (g) - \frac{1}{324} \: g^4
- \frac{1}{4860} \: g^5 - \frac{1}{17496} \: g^6
+ \ldots \\
b_5 (g ) & = & - c_5 (g) - \frac{1}{1215} \: g^5 - \frac{1}{17496} \: g^6
+ \ldots \\
b_6 (g ) & = & - \frac{1}{4374} \: g^6 + \ldots
\label{bl}
\end{eqnarray}
The important point is that these expansions of the residues do not show any
dependence on the auxiliary scales $\mu $ and $\nu $ (or on the external
frequency and momentum when these are used alternatively to regularize the
self-energy in the infrared). This is a nice check of the
renormalizability of the theory which guarantees that, at least in the
large-$N$ approximation, the high-energy divergences can be absorbed into a
finite number of renormalization factors depending only on the renormalized
coupling $g$.
The renormalization factors $Z_\psi$ and $Z_v$ provide important information
about the behavior of the electron system in the low-energy limit. In the
renormalized theory, the Dirac fermion field gets an anomalous scaling
dimension $\gamma_\psi (g)$, which can be computed as\cite{amit}
\begin{equation}
\gamma_\psi = \frac{\mu }{Z_\psi } \frac{\partial Z_\psi }{\partial \mu }
\end{equation}
This anomalous dimension governs the scaling of the correlators involving
the Dirac field. When sitting at a fixed-point of the renormalized
parameters, we would get for instance for the Dirac propagator
\begin{equation}
G(s{\bf k}, s\omega ) \approx s^{-1 + \gamma_\psi } G({\bf k}, \omega )
\label{anom}
\end{equation}
On the other hand, the renormalized Fermi velocity $v_R$ gets also a scaling
dependence which can be assessed in terms of a function $\gamma_v (g)$ such
that
\begin{equation}
\frac{\mu }{v_R } \frac{\partial v_R}{\partial \mu } = \gamma_v (g)
\label{vrsc}
\end{equation}
Thus, the knowledge of $\gamma_\psi (g)$ and $\gamma_v (g)$ allows us to
inspect the theory in the limit of long wavelengths and low energies as
$\mu \rightarrow 0$.
The anomalous dimension $\gamma_\psi (g)$ can be computed from the dependence of
the renormalized coupling $g$ on the auxiliary scale $\mu $, taking into account
that
\begin{equation}
\gamma_\psi = \frac{\mu }{Z_\psi } \frac{\partial g }{\partial \mu }
\frac{\partial Z_\psi }{\partial g }
\label{chain}
\end{equation}
The scaling of $g$ can be obtained in turn by differentiating the expression
(\ref{eren}) with respect to $\mu $, bearing in mind the independence of $e_0$
with respect to that auxiliary parameter. This leads to
\begin{equation}
\mu \frac{\partial }{\partial \mu } e^2 = - \epsilon e^2
+ \frac{N}{6\pi^2 v_F } e^4
\end{equation}
As already mentioned, the difference between $v_F$ and $v_R$ can be taken
formally as a quantity of order $\sim 1/N$, so that we end up in the large-$N$
limit with the equation
\begin{equation}
\mu \frac{\partial }{\partial \mu } g = - \epsilon g
+ \frac{1}{3 } g^2
\label{gren}
\end{equation}
This expression can be plugged into (\ref{chain}) to get
\begin{equation}
\gamma_\psi = \frac{1}{Z_\psi } \frac{1}{N} \left(
- g \sum_{i=0}^{\infty } \frac{1}{\epsilon^i} \frac{d}{d g} c_{i+1} (g) + \frac{1}{3} g^2
\sum_{i=1}^{\infty } \frac{1}{\epsilon^i} \frac{d}{d g} c_{i} (g) \right)
\label{poles}
\end{equation}
Working to leading order in the $1/N$ expansion, we can set $Z_\psi = 1$ on
the right-hand side of (\ref{poles}). The term with no poles in that equation
leads to the result
\begin{equation}
\gamma_\psi = - \frac{1}{N} g \frac{d}{d g} c_1 (g)
\end{equation}
This expression for the anomalous dimension only makes sense, however,
provided that one can verify the cancellation of the remaining terms carrying
poles of all orders in the $\epsilon $ parameter, which implies the hierarchy
of equations
\begin{equation}
\frac{d}{d g} c_{i+1} (g) - \frac{1}{3} g \frac{d}{d g} c_i (g) = 0
\label{hi}
\end{equation}
It is remarkable that the power series of the residues given in
(\ref{cf})-(\ref{cl}) indeed satisfy the hierarchy (\ref{hi}). Beyond the
analytic approach, we have computed numerically the expansion of the
first residues $c_i (g)$ up to order $g^{32}$, with a precision of 40 digits.
In this way, we have been able to check that the conditions (\ref{hi}) are
fulfilled at that level of approximation, confirming the
consistency of the large-$N$ approach for the present field theory.
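As an illustration of how such a check can be organized (a minimal sketch
under the stated truncations, not the 40-digit computation itself), the
hierarchy (\ref{hi}) can be verified symbolically on the series quoted in
(\ref{cf})-(\ref{cl}), comparing coefficients only up to the order where both
truncated series are reliable:
\begin{verbatim}
import sympy as sp

g = sp.symbols('g')
# residue series transcribed from Eqs. (cf)-(cl), truncated as quoted
c1 = -g**2/24 - g**3/162 - 5*g**4/5184
c2 = -g**3/108 - g**4/648 - g**5/3888
c3 = -g**4/432 - g**5/2430

for lo, hi in [(c1, c2), (c2, c3)]:
    # hierarchy: d c_{i+1}/dg - (g/3) d c_i/dg = 0, order by order in g
    delta = sp.expand(sp.diff(hi, g) - g/3*sp.diff(lo, g))
    assert all(delta.coeff(g, n) == 0 for n in range(2, 5))
\end{verbatim}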
Another important detail of the numerical calculation of the residues is the
evidence that the coefficients of each perturbative expansion approach a
geometric sequence at large orders in the coupling. In the case of the residue
$c_1 (g)$, we have for instance the power series
\begin{equation}
c_1 (g ) = \sum_{n=1}^{\infty} c_1^{(n)} g^n
\label{geom}
\end{equation}
with coefficients that we have represented in Fig. \ref{six} up to the order
to which we have carried out the numerical computation. The behavior observed
for the series $c_1^{(n)}$ implies that the perturbative expansion must have a
finite radius of convergence. This can be obtained by fitting the ratio
between consecutive coefficients to the dependence
\begin{equation}
\frac{c_1^{(n)}}{c_1^{(n-1)}} =
r + \frac{r'}{n} + \frac{r''}{n^2} + \frac{r'''}{n^3} + \ldots
\end{equation}
which provides indeed an excellent fit at large $n$. We get in this way the
estimate
\begin{equation}
r \approx 0.3333333
\label{critn}
\end{equation}
where the error lies in the last digit. We find then the singular point for
the coupling $g$ (the radius of convergence) at
\begin{equation}
g_c = \frac{1}{r} \approx 3.0 \pm 1.0 \times 10^{-7}
\label{gc3}
\end{equation}
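The fit leading to the estimate (\ref{critn}) can be carried out with a
standard least-squares routine. The sketch below illustrates the procedure on
a synthetic sequence with a known geometric tail (the actual coefficients
$c_1^{(n)}$ are not reproduced here):
\begin{verbatim}
import numpy as np

def ratio_limit(coeffs, kmax=3):
    # fit rho_n = c_n/c_{n-1} to r + r'/n + r''/n^2 + ...
    rho = coeffs[1:] / coeffs[:-1]
    n = np.arange(2, len(coeffs) + 1)
    A = np.vander(1.0/n, kmax + 1, increasing=True)
    fit, *_ = np.linalg.lstsq(A, rho, rcond=None)
    return fit[0]   # constant term = n -> infinity limit

# synthetic test: c_n = 3^{-n} (1 + 1/n) has r = 1/3 exactly
c = np.array([3.0**(-n)*(1 + 1.0/n) for n in range(1, 33)])
print(ratio_limit(c))   # ~ 0.333333
\end{verbatim}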
\begin{figure}
\vspace{0.5cm}
\begin{center}
\mbox{\epsfxsize 7cm \epsfbox{plotc1n.eps}}
\end{center}
\caption{Plot of the absolute value of the coefficients $c_1^{(n)}$ in the
expansion of $c_1 (g)$ as a power series of the renormalized coupling $g$.}
\label{six}
\end{figure}
The critical value $g_c$ corresponds to a point where the anomalous scaling
dimension $\gamma_\psi (g)$ diverges. This can be appreciated in Fig.
\ref{seven}, where we have represented that function from the results of our
numerical calculation. The singularity found at $g_c$ has a precise physical
meaning, since $\gamma_\psi $ governs the scaling of all the correlators
involving the Dirac fermion field. Away from a fixed-point in the renormalized
parameters, the scaling is not as simple as in (\ref{anom}), but from that
expression we already get the idea that the divergence of $\gamma_\psi$ implies
the decay of the Dirac propagator\cite{barnes2}. In the limit
$g \rightarrow g_c$, the singularity in the scaling dimension leads indeed to
the suppression of the quasiparticle weight. This characterizes a particular
form of correlated behavior, which has been identified in several other
instances of interacting electrons and has come to define the class of
so-called non-Fermi liquids\cite{bares,nayak,hou,cast}. The distinctive
feature of this class is the absence of a quasiparticle pole in the electron
propagator, which leads to a very appealing paradigm to explain unconventional
properties like those found in the normal state of copper-oxide
superconductors.
\begin{figure}
\vspace{0.5cm}
\begin{center}
\mbox{\epsfxsize 7cm \epsfbox{gam.eps}}
\end{center}
\caption{Plot of the anomalous scaling dimension $\gamma_\psi (g)$, multiplied by
the number $N$ of four-component Dirac fermions in the electron system.}
\label{seven}
\end{figure}
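For orientation, the weak-coupling behavior of $\gamma_\psi $ in Fig.
\ref{seven} follows directly from the first terms of the series (\ref{cf}):
\begin{equation}
\gamma_\psi (g) = \frac{1}{N} \left( \frac{1}{12} g^2 + \frac{1}{54} g^3
+ \frac{5}{1296} g^4 + \ldots \right)
\end{equation}
which is positive and monotonically increasing at small $g$, in agreement with
the behavior shown in the plot.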
In order to ensure the relevance of the divergence at $g_c$, we still have
to verify that such a singular behavior is not prevented by some other
instability in the low-energy scaling of the electron system. We pay attention
in particular to the renormalization of the Fermi velocity, which has a natural
tendency to grow in the low-energy limit. The function $\gamma_v (g)$ that
dictates the scaling in Eq. (\ref{vrsc}) can be obtained from the independence
of the bare Fermi velocity on the auxiliary scale $\mu $:
\begin{equation}
\frac{\mu }{v_R} \frac{\partial }{\partial \mu } \left( Z_v v_R \right) = 0
\label{sfree}
\end{equation}
The dependence of $Z_v$ on $\mu $ comes only from the coupling $g$ so that,
using Eq. (\ref{gren}), we get
\begin{equation}
Z_v \frac{\mu }{v_R} \frac{\partial }{\partial \mu } v_R
- \frac{1}{N} g \sum_{i=0}^{\infty }
\frac{1}{\epsilon^i} \frac{d}{d g} b_{i+1} (g)
+ \frac{1}{N} \frac{1}{3} g^2 \sum_{i=1}^{\infty }
\frac{1}{\epsilon^i} \frac{d}{d g} b_{i} (g) = 0
\label{poles2}
\end{equation}
Working to leading order in the $1/N$ expansion, the term free of poles in
Eq. (\ref{poles2}) leads to the result
\begin{equation}
\frac{\mu }{v_R} \frac{\partial }{\partial \mu } v_R = \frac{1}{N} g \frac{d}{d g} b_1 (g)
\end{equation}
As in the case of the anomalous scaling dimension, one has to make sure,
however, that the remaining terms carrying poles of all orders in
Eq. (\ref{poles2}) cancel identically, which is enforced by the conditions
\begin{equation}
\frac{d}{d g} b_{i+1} (g) - \frac{1}{3} g \frac{d}{d g} b_i (g) = 0
\label{hip}
\end{equation}
The expressions of the residues in (\ref{bf})-(\ref{bl}) satisfy indeed the set
of equations (\ref{hip}), and a more extensive numerical computation confirms
that this is also the case for the power series of the $b_i (g)$ evaluated up
to order $g^{32}$. This analysis also shows that these expansions do not lead
to any singularity in the range of couplings up to the critical point $g_c$.
The corresponding function $\gamma_v (g)$ obtained from $b_1 (g)$ is
represented in Fig. \ref{eight}. The regular behavior observed in the plot
implies that the scaling of the Fermi velocity is not an obstacle for the
suppression of the fermion quasiparticles as the interaction strength becomes
sufficiently large to hit the critical point at $g_c$.
\begin{figure}
\vspace{0.5cm}
\begin{center}
\mbox{\epsfxsize 7cm \epsfbox{bet.eps}}
\end{center}
\caption{Plot of the rate of variation $\gamma_v (g)$ of the renormalized Fermi
velocity with respect to energy, multiplied by the number $N$ of four-component
Dirac fermions in the electron system.}
\label{eight}
\end{figure}
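For orientation, the weak-coupling behavior of $\gamma_v $ follows directly
from the first terms of the series (\ref{bf}):
\begin{equation}
\gamma_v (g) = \frac{1}{N} \left( -\frac{1}{3} g + \frac{1}{18} g^2
+ \frac{1}{108} g^3 + \ldots \right)
\end{equation}
The negative leading term implies that $v_R$ grows as $\mu \rightarrow 0$, in
accordance with the natural enhancement of the Fermi velocity in the
low-energy limit mentioned above.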
As a last check, we also look for the possible tendency towards chiral symmetry
breaking and dynamical mass generation in the large-$N$ approximation. For this
purpose, we may analyze the scaling of the vertex for the mass operator
represented in Fig. \ref{nine}. Computing in the limit where both frequency and
momentum transfer vanish, we get
\begin{eqnarray}
\Gamma_m ({\bf 0}, 0;{\bf k}, \omega_k) & = &
1 + i \int \frac{d^D p}{(2\pi )^D} \frac{d\omega_p }{2\pi } \: \gamma_0
\left( -\frac{\gamma_0 (\omega_k - \omega_p ) +
v_F \mbox{\boldmath $\gamma $} \cdot ({\bf k} - {\bf p})}
{ - (\omega_k - \omega_p)^2 + v_F^2 ({\bf k} - {\bf p})^2 - i\eta} \right)^2 \gamma_0 \:
\nonumber \\
& & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\times \frac{e_0^2}
{{\bf p}^2 \left(1 + N C(\epsilon ) \frac{e_0^2}{2\pi^2 v_F}
\frac{1}{({\bf p}^2 - \omega_p^2/v_F^2)^{\epsilon/2}} \right)}
\end{eqnarray}
We have to account as before for the renormalization of the charge, which leads
to an expression in terms of the renormalized coupling $g$
\begin{eqnarray}
\Gamma_m ({\bf 0}, 0;{\bf k}, i\overline{\omega}_k) & = &
1 + \frac{2\pi^2 v_F}{N} \mu^\epsilon \int \frac{d^D p}{(2\pi )^D}
\frac{d\overline{\omega}_p }{2\pi } \:
\frac{1}{ v_F^2 ({\bf k} - {\bf p})^2 + (\overline{\omega}_k - \overline{\omega}_p)^2} \:
\frac{1}{{\bf p}^2} \nonumber \\
& & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\sum_{n=0}^{\infty} (-1)^n g^{n+1} \left(- \frac{1}{3\epsilon } + C(\epsilon )
\frac{\mu^\epsilon}{({\bf p}^2 + \overline{\omega}_p^2/v_F^2)^{\epsilon/2}} \right)^n
\label{vertn}
\end{eqnarray}
As already done in the case of the electron self-energy, we can compute the
high-energy divergences of the vertex in the limit of vanishing ${\bf k}$ and
$\overline{\omega}_k$, regularizing the integrals in the infrared with an
auxiliary mass $\nu $ in the Dirac propagators. In this way, the different terms in the expansion
(\ref{vertn}) can be obtained using the general result
\begin{eqnarray}
K_n & = & v_F \int \frac{d^D p}{(2\pi )^D} \frac{d\overline{\omega}_p }{2\pi } \:
\frac{1}
{ v_F^2 {\bf p}^2 + \overline{\omega}_p^2 + v_F^2 \nu^2 } \:
\frac{1}{{\bf p}^2} \frac{1}{({\bf p}^2 + \overline{\omega}_p^2/v_F^2)^{n\epsilon/2}}
\nonumber \\
& = & \frac{1}{(4\pi )^{2-\epsilon/2} }
\frac{2}{1-\epsilon }
\frac{\Gamma \left(\tfrac{n+1}{2} \epsilon \right)
\Gamma \left(1-\tfrac{n+1}{2}\epsilon \right) }
{\Gamma \left(1 - \tfrac{\epsilon}{2} \right)} \frac{1}{|\nu |^{(n+1)\epsilon }}
\label{kn}
\end{eqnarray}
\begin{figure}
\vspace{0.5cm}
\begin{center}
\mbox{\epsfxsize 4cm \epsfbox{massvert.eps}}
\end{center}
\caption{Corrections to the vertex for the mass operator in the large-$N$
approximation. The cross represents the operator $\overline{\psi } \psi $ and
the thick wiggly line stands for the dressed interaction propagator as defined
in Fig. \ref{five}.}
\label{nine}
\end{figure}
We see that the vertex can in general develop divergences of order
$\sim 1/\epsilon^n $ at the level $g^n$ in the perturbative expansion. These
have to be reabsorbed into a multiplicative renormalization of the type shown
in (\ref{zm}). The point to bear in mind is that now $Z_m$ is made of two
different factors, coming independently from the renormalization of the
composite mass operator (\ref{mass}) and the Dirac fermion fields in the
vertex\cite{amit}:
\begin{equation}
Z_m = Z_{\psi } Z_{\psi^2 }
\end{equation}
The renormalization factor $Z_{\psi^2 }$ for the mass operator can have the
general structure
\begin{equation}
Z_{\psi^2 } = 1 + \frac{1}{N}\sum_{i=1}^{\infty } \frac{\bar{d}_i (g)}{\epsilon^i}
+ O \left( \frac{1}{N^2} \right)
\end{equation}
Under the assumption that the theory is renormalizable, it must be possible
to choose appropriately the residues $\bar{d}_i (g)$ to end up with a
renormalized vertex, finite in the limit $\epsilon \rightarrow 0$, given by
\begin{equation}
\Gamma_{m,{\rm ren}} = Z_\psi Z_{\psi^2 } \: \Gamma_m
\label{gmr}
\end{equation}
We have checked that the vertex (\ref{gmr}) can be made free of poles, at least
up to the order $g^{32}$ to which we have been able to compute it numerically, with a set of
residues $\bar{d}_i (g)$ that depend only on the renormalized coupling $g$. We
have also seen that these functions have a regular behavior in the range
up to the critical coupling $g_c$ where the anomalous dimension $\gamma_\psi$
diverges. This makes clear that the dominant instability in the large-$N$
approximation is indeed characterized by the suppression of the electron
quasiparticles.
Of all the residues $\bar{d}_i (g)$, the first conveys the most
relevant piece of information, since it is related to the anomalous dimension
of the composite mass operator, defined by
\begin{equation}
\gamma_{\psi^2 } = \frac{\mu }{Z_{\psi^2 } }
\frac{\partial Z_{\psi^2 } }{\partial \mu }
\label{ad2}
\end{equation}
Paralleling the above derivation of similar scaling dimensions, one arrives at
the result
\begin{equation}
\gamma_{\psi^2 } = - \frac{1}{N} g \frac{d}{d g} \bar{d}_1 (g)
\end{equation}
The plot of this function obtained from our numerical computation of
$\bar{d}_1 (g)$ is represented in Fig. \ref{ten}. The regular behavior
observed in the figure is evidence that no singularity can be expected
in the correlators of the mass operator $\rho_m $, whose magnitude must
remain bound to the scaling dictated by the anomalous dimension
$\gamma_{\psi^2 }$ in the low-energy limit.
\begin{figure}[h]
\vspace{0.5cm}
\begin{center}
\mbox{\epsfxsize 7cm \epsfbox{gam2.eps}}
\end{center}
\caption{Plot of the anomalous scaling dimension $\gamma_{\psi^2 } (g)$,
multiplied by the number $N$ of four-component Dirac fermions in the electron
system.}
\label{ten}
\end{figure}
\section{Conclusions}
We have studied the development of the strong-coupling phases in the QED of
3D Dirac and Weyl semimetals by means of two different nonperturbative
approaches, consisting of the sum of ladder diagrams on the one hand, and of
taking the limit of a large number of fermion flavors on the other.
We have benefited from the renormalizability that the theory shows in both
cases, which makes it possible to render all the renormalized quantities
independent of the high-energy cutoff. Thus we have been able to compute the
anomalous scaling dimensions of a number of operators exclusively in terms of
the renormalized coupling constant, which has allowed us to determine the
precise location of the singularities signaling the onset of the
strong-coupling phases.
We have seen that, in the ladder approximation, the 3D Dirac semimetals have
a critical point marking the boundary of a phase with chiral symmetry breaking
and dynamical generation of mass, in analogy with the same strong-coupling
phenomenon in conventional QED\cite{mas,fom,fuk,mir,gus2,kon,atk,min}.
We have found however that such a breakdown of
symmetry does not persist in the large-$N$ limit of the theory, which is
instead characterized by the growth of the anomalous dimension of the electron
quasiparticles at large interaction strength. The picture that emerges by
combining the results from the two nonperturbative approaches is that chiral
symmetry breaking must govern the strong-coupling physics of the 3D Dirac
semimetals for sufficiently small $N$, while there has to be a transition to a
different phase with strong suppression of electron quasiparticles prevailing
above a certain value of $N$.
With the results obtained for the different critical points we can draw
an approximate phase diagram in terms of the number $N$ of Dirac fermions and
the renormalized coupling $g$ defined in (\ref{gdef}). In principle, the
critical coupling (\ref{gc3}) can provide a reliable estimate for the onset of
non-Fermi liquid behavior at sufficiently large $N$. On the other hand, the
critical value of $g$ deriving from (\ref{crit2}) may lead to a reasonable map of
the phase with chiral symmetry breaking as long as its magnitude does not
become larger than that of (\ref{gc3}). The resulting phase diagram of the 3D
Dirac semimetals, represented in Fig. \ref{eleven}, shows the regions where the
behavior of the phase boundaries may be captured by our alternative
approximations, away from the intermediate regime about $N = 4$ where the
competition between the two strong-coupling phases cannot be reliably described
within our analytic framework.
\begin{figure}[h]
\begin{center}
\includegraphics[width=6.0cm]{csbnfl.eps}
\end{center}
\caption{Phase diagram of the QED of 3D Dirac semimetals showing an approximate
map of the phases corresponding to chiral symmetry breaking (CSB) and non-Fermi
liquid behavior (NFL), obtained from the values of the respective critical
couplings in the ladder approach and the large-$N$ approximation.}
\label{eleven}
\end{figure}
It is interesting to compare at this point the phase diagram in Fig.
\ref{eleven} with that obtained with the resolution of the Schwinger-Dyson
equations, which can be trusted for all values of $N$. We can see from the
results of Ref. \cite{qed} that such a numerical approach sets indeed the
interplay between the phases with chiral symmetry breaking and non-Fermi liquid
behavior at $N = 4$. For $N > 4$, we observe that the critical line for that
latter phase is not straight, although it approaches a
constant asymptotic limit at large $N$. The self-consistent resolution leads
also to a critical line for chiral symmetry breaking in the regime $N \leq 4$
that has an approximately linear behavior as a function of $N$. It is worth
mentioning that the values of the critical couplings found in the numerical
resolution of Ref. \cite{qed} are at first sight much larger than their
counterparts in the diagram of Fig. \ref{eleven}. We have to bear in mind,
however, that the critical values in that paper were given for the bare
couplings, in the theory with a finite high-energy cutoff, while critical
points like (\ref{crit2}) and (\ref{gc3}) refer in the present context
to the renormalized couplings. Overall, we may conclude that there is good
qualitative agreement between the location of the phases in the diagram of
Fig. \ref{eleven} and in the more complete map obtained from the
resolution of the Schwinger-Dyson equations.
We end by remarking that our analysis can be applied to characterize
not only the strong-coupling regime of the 3D Dirac semimetals, but also of
the Weyl semimetals. In these materials, each Weyl point hosts a fermion with
a definite chirality, which means that the electron system cannot undergo
chiral symmetry breaking through condensation of some order parameter at
zero momentum\cite{hut}. This obstruction does not hold, however, for the
strong-coupling phase that we have identified in terms of the suppression of
fermion quasiparticles. It is clear that the large-$N$ approach of Sec. IV
applies equally well to Weyl and Dirac fermions, so that Weyl semimetals
are susceptible to developing the phase with non-Fermi liquid behavior that
we have mapped at large $N$ in the diagram of Fig. \ref{eleven}. In this
respect, there should be good prospects of observing such an unconventional
behavior in present candidates for Weyl semimetals, like TaAs or the pyrochlore
iridates, which have up to 12 pairs of Weyl points\cite{taas1,taas2}.
These considerations show that the strong-coupling phases studied here are not
beyond reach, and that they may be actually realized in 3D Dirac or Weyl
semimetals with suitably small values of the Fermi velocity or with the large
number of fermion flavors already exhibited by known materials.
\section{Acknowledgments}
We acknowledge the financial support from MICINN (Spain) through grant
FIS2011-23713 and from MINECO (Spain) through grant FIS2014-57432-P.
\vspace{2cm}
\section{Derivation of $\lambda_{th}$}
We present here the derivation of the many-body parametric driving threshold amplitude for $N$ resonators that are equally coupled to one another. For the purpose of this calculation, it suffices to consider $N$ coupled \emph{linear} resonators.
The slow-flow equation describing this system (cf. Eq.~(2) of the main text, with coupling $\beta_{ij}=\beta$ for all $i\ne j$) is given by:
\begin{equation}\label{eq:sf}
\dot{\bm{X}} = A \bm{X},
\end{equation}
where the matrix $A$ is given by:
\begin{align}
A &= \begin{pmatrix}
a & b & \cdots &b\\
b&a & \ddots & \vdots \\
\vdots & \ddots &\ddots &b \\
b&\cdots&b&a
\end{pmatrix},
\end{align}
and the individual matrix entries $a$ and $b$ are given by:
\begin{align}
a &= \left(\begin{array}{cc} -\frac{\gamma}{2} & -\frac{1}{4 \omega} \left(\lambda \omega_0^2 +2(\omega_0^2 - \omega^2)\right) \\
-\frac{1}{4 \omega} \left(\lambda \omega_0^2 -2(\omega_0^2 - \omega^2)\right) & -\frac{\gamma}{2} \end{array}\right), \\
b &= \left(\begin{array}{cc}
0 & \frac{\beta}{2 \omega} \\
- \frac{\beta}{2 \omega} & 0
\end{array}\right)\,.
\end{align}
The dynamics of the linear system can be deduced by decomposing the initial state $\bm{X}$ into the eigenvectors of $A$. The time evolution of each eigenvector is then determined by $e^{\Lambda t}$, where $\Lambda$ is the corresponding eigenvalue. ${\rm Im}\Lambda \ne 0 $ imposes an oscillatory behavior whose envelope decreases exponentially for ${\rm Re}\Lambda < 0 $ and increases exponentially for ${\rm Re}\Lambda > 0 $.
To evaluate these eigenvalues and eigenvectors, it is useful to rewrite the matrix $A$ as:
\begin{equation}
A = Id_N \otimes a + \left(\begin{array}{cccc}
0 & 1 & \cdots & 1\\
1 & \ddots &\ddots &\vdots\\
\vdots & \ddots & \ddots &1\\
1 &\cdots & 1& 0
\end{array}\right) \otimes b = Id_N \otimes a+ M_N\otimes b\,.
\end{equation}
Based on the structure of $A$, the eigenvectors obey the ansatz $w_m = r_m \otimes s_m$, where $r_m $ are the $N$-dimensional eigenvectors of $M_N$ with eigenvalue $\rho_m$ and $s_m$ are the 2-dimensional eigenvectors of $a+\rho_m b$ with eigenvalues $\sigma_m$. Since $M_N$ has $N$ eigenvectors $r_m$, each with $2$ corresponding $s_m$, this ansatz describes all $2N$ eigenvectors and eigenvalues of the matrix $A$. The 2-vector $s_m$ describes the amplitude and momentum $(u_m,v_m)$ in a particular mode's phase space, and $r_m$ generically describes the relative amplitudes and the phase configuration, e.g., $r_m = (1,-1)$ means that the two oscillators have opposite phases. We can readily show that this ansatz is indeed an eigenvector of $A$:
\begin{align}
A w_m &= Id_N r_m \otimes a s_m + M_N r_m\otimes b s_m \\
&= r_m \otimes (a + \rho_m b) s_m\nonumber\\
&= \sigma_m w_m\,.\nonumber
\end{align}
Since $M_N$ has a simple structure, we see that the eigenvector $r_0=(1,1,\cdots)$ has eigenvalue $\rho_0 = N-1$, while $r_m = (0, \cdots 0, 1, -1, 0 \cdots)^T$, where the $+1$ is at the $m$-th entry, are eigenvectors of $M_N$ with eigenvalues $\rho_m = -1$ ($m \in \mathbb{N}$, $0<m \leq N - 1$). Note that the eigenvectors $r_m$ effectively determine the normal mode transformations of the problem.
Next, we evaluate the eigenvectors $s_m$ and eigenvalues $\sigma_m$ of
\begin{align}
a + \rho_m b = \left(\begin{array}{cc}
a_1 & a_2 -a_3 \\
a_2 + a_3 & a_1
\end{array}\right)\,,
\end{align}
where $a_1 = -\frac{\gamma }{2}$, $a_2=-\frac{ \lambda \omega_0^2}{4 \omega }$ and $a_3= \frac{ \left(\omega_0^2-\omega ^2\right)}{2 \omega } -\rho_m \frac{\beta }{2 \omega }$.
These are given by,
\begin{align}
\sigma_{m,\pm} &= a_1 \pm \sqrt{a_2^2 - a_3^2}\,,\\
s_{m,\pm} &= \left(\begin{array}{cc}
\pm \sqrt{a_2 -a_3} \\
\sqrt{a_2 + a_3}
\end{array}\right)\,.
\end{align}
To summarize, the $2N$ eigenvectors of the matrix $A$ are given by
\begin{align}\label{eq:wc}
w_{m,\pm} &= r_{m} \otimes s_{m,\pm}\,,
\end{align}
with corresponding eigenvalues $\sigma_{m,\pm}$ and $0\le m \leq N-1$.
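The block structure above is straightforward to verify numerically. The
following sketch (with arbitrary illustrative parameter values) builds
$A = Id_N \otimes a + M_N \otimes b$ for $N=3$ and checks its spectrum against
the analytic eigenvalues $\sigma_{m,\pm}$:
\begin{verbatim}
import numpy as np

gamma, lam, beta, w0, w, N = 0.1, 0.05, 0.3, 1.0, 1.02, 3
a1 = -gamma/2
a2 = -lam*w0**2/(4*w)
delta = (w0**2 - w**2)/(2*w)

a = np.array([[a1, a2 - delta], [a2 + delta, a1]])
b = np.array([[0.0, beta/(2*w)], [-beta/(2*w), 0.0]])
M = np.ones((N, N)) - np.eye(N)          # coupling matrix M_N

A = np.kron(np.eye(N), a) + np.kron(M, b)

sig = []
for rho in [N - 1] + [-1]*(N - 1):       # rho_0 = N-1, rho_{m>0} = -1
    a3 = delta - rho*beta/(2*w)
    root = np.emath.sqrt(a2**2 - a3**2)  # complex if a3^2 > a2^2
    sig += [a1 + root, a1 - root]

num = np.linalg.eigvals(A)
assert all(np.min(np.abs(num - s)) < 1e-8 for s in sig)
\end{verbatim}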
If ${\rm Re}\sigma_{m,\pm} >0 $, the corresponding $w_{m,\pm}$ grows exponentially, indicating a parametric instability. We obtain the parametric driving threshold $\lambda_{th, m}$ for this instability by imposing the condition:
\begin{align}\label{eq:lamth}
\sigma_{m,+} = a_1 + \sqrt{a_2^2 - a_3^2} &= 0\,.
\end{align}
Note that we have $a_1<0$, whereas $\sqrt{a_2^2 - a_3^2}$ can be either real-valued and positive or complex-valued. Solving Eq.~\ref{eq:lamth}, we obtain
\begin{align}
\lambda_{th, m} = \frac{4\omega}{\omega_0 ^2}\sqrt{a_1^2 + a_3^2} &= \frac{4\omega}{\omega_0 ^2}\sqrt{\frac{\gamma^2}{4} + \left( \frac{\omega^2 - \omega_0^2}{2\omega } + \rho_m \frac{\beta}{2 \omega}\right)^2}\,.
\end{align}
For identical oscillators, we see that there are primarily two instability thresholds, corresponding to
(i) the instability of the symmetric normal mode, $w_{0,+}$, and (ii) the instability of all other normal modes $w_{m,+}$, including the antisymmetric mode.
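Continuing the numerical sketch above (reusing its variables), the two
thresholds can be evaluated directly from the expression for $\lambda_{th, m}$:
\begin{verbatim}
def lam_th(rho):
    a3 = (w**2 - w0**2)/(2*w) + rho*beta/(2*w)
    return (4*w/w0**2)*np.sqrt(gamma**2/4 + a3**2)

print(lam_th(N - 1))   # symmetric mode w_{0,+}
print(lam_th(-1))      # all other normal modes
\end{verbatim}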
\section{Calibration measurements}
In Fig.~\ref{fig:figureS1}, we present test measurements that we have performed to ensure that the weakly coupled strings were degenerate in frequency. On timescales of hours, thermal drift sometimes caused detuning between the strings, which we balanced by adjusting the tension of the strings separately. In Fig.~\ref{fig:figureS2}, we show the fits used to extract the nonlinear coefficients of the two weakly coupled strings. Please refer to Ref.~[23] of the main text for details regarding the model of a nonlinear parametric oscillator.
\begin{figure}[ht!]
\includegraphics[width=\columnwidth]{FigS1.pdf}
\caption{\label{fig:figureS1} (a) Amplitude and (b) phase response of the two resonators in the linear regime. We use weak external driving, no parametric drive, and weak coupling to observe a Lorentzian response. Light and dark blue correspond to resonator 1 and 2, respectively. These measurements are taken immediately before the nonlinear parametric measurements shown in Fig. 3 of the main text to ensure that the two modes are degenerate in frequency.}
\end{figure}
\begin{figure}[ht!]
\includegraphics[width=\columnwidth]{FigS2.pdf}
\caption{\label{fig:figureS2} (a) Amplitude response of resonator 1 and (b) resonator 2 with weak coupling and strong parametric driving. Blue and magenta lines correspond to sweeps with increasing and decreasing frequency, respectively. These are the same data as shown in Fig. 3c of the main text. Solid and dashed black lines are stable and unstable theory solutions, respectively. From fitting these solutions to the measured data, we retrieve the values of $\alpha_{1,2}$ and $\eta_{1,2}$ stated in the main text. Note that in the strong coupling case we find that the nonlinear damping decreases, as determined from the frequency at which the stable and unstable solutions merge.}
\end{figure}
\end{document}
\section{Introduction}
\label{sec:intro}
In the past few years, we have witnessed exponential growth of deep neural network (DNN) based image coding approaches. These neural network based image coding (a.k.a., NIC - neural image coding for short) methods~\cite{balle2016end,balle2018variational,li2018learning,minnen2018joint,mentzer2020high,Cheng2020Learned,chen2020,hu2021learning,lu2021transformer} have emerged with outstanding compression efficiency, showing noticeable gains over conventional solutions like JPEG~\cite{wallace1991jpeg}, JPEG 2000~\cite{JPEG2K}, and even the latest Versatile Video Coding (VVC) based Intra Profile (VVC Intra)~\cite{vvc_overview}, promising encouraging prospects for NIC-based applications and services, particularly in those rate-constrained or bandwidth-limited networked scenarios.
The unprecedented coding gains of the aforementioned NIC methods mostly come from the introduction of the variational autoencoder (VAE) framework~\cite{balle2018variational}, joint entropy context modeling conditioned on both hyperpriors and autoregressive neighbors~\cite{minnen2018joint}, nonlocal/local attention optimization~\cite{chen2020,Cheng2020Learned}, etc. It is also worth pointing out that learning-based NIC offers great flexibility for quickly supporting different image sources, e.g., RGB, YUV, Bayer raw image, etc.~\cite{Zhihao_RAW_Coding}, and for quickly enabling the use of a variety of loss functions (quality metrics)~\cite{MS-SSIM,wang2021subjective} in task-oriented applications. On the contrary, it might require substantial effort to enable additional functionalities in traditional image codecs, because the statistical properties of newly-introduced image sources and optimization metrics must be fully understood to develop an effective coding toolkit. For example, it took years to facilitate efficient compression of screen content sources when standardizing the extensions of High-Efficiency Video Coding (HEVC).
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{./figs/recompression.pdf}
\caption{{\bf Image Recompression.} Row-wise visual comparison of reconstructions compressed 50 times using JPEG (\url{http://libjpeg.sourceforge.net/}), the pretrained NIC baseline model (Ball\'e 2018~\cite{balle2018variational} before finetuning, as an example), and the adversarially finetuned NIC model (Ball\'e 2018~\cite{balle2018variational} after finetuning). Similar recompression artifacts using a learning-based image coder are also reported in~\cite{kim2020}.}
\label{fig:recompression}
\end{figure}
Recently, after a series of exploration studies, the ISO/IEC JPEG (Joint Photographic Experts Group) committee has made concrete progress on the performance and complexity evaluation of various NIC solutions, and is now initiating the standardization call for a next-generation image codec using deep learning techniques.
Targeting an international standard that must enable interoperability, besides superior compression efficiency, model robustness/generalization is yet another critical factor for the potential success of such a learnt approach, which, however, has often been overlooked and not well examined in the past.
{\bf Observations.}
Rules-based conventional lossy image coding methods, such as JPEG or VVC Intra, inevitably introduce compression artifacts, yielding visually unpleasant blurring, blocking, and ringing impairments in decoded reconstructions.
These artifacts mainly deteriorate pixel intensity, whilst the local structure of the decoded spatial texture, e.g., edge orientations, is mostly preserved from the input source. However, unexpected pixel impairments are presented if the image is compressed multiple times using learnt approaches, as shown in Fig.~\ref{fig:recompression}. Such iterative or successive compression is a common practice for image-based applications over the Internet, where images may be re-edited, re-compressed and re-distributed very often. Similar observations are also reported by Kim {\it et. al} in~\cite{kim2020}.
Such pixel distortion in the decoded image is closely related to the generalization and stability of the underlying compression model and is extremely critical for the potential success of NIC solutions in practice. {This is because the fidelity of the compressed image reconstruction plays a vital role in vast applications} such as medical imaging, autonomous self-driving, face recognition, security surveillance, etc. For instance, having incorrect (unexpected) spatial textures in the decoded image may lead to very different outcomes for a specific task (e.g., classification)~\cite{choi2020task} (see Fig.~\ref{license}), or may present annoying/disturbing artifacts~\cite{kim2020} with a severely-degraded user experience (see Fig.~\ref{fig:anchors}).
{\bf Approach.}
A recent attempt in~\cite{kim2020} investigated the instability of successive deep image compression.
However, such instability is not exclusive to the image recompression task. A systematic stability and generalization evaluation of popular VAE based compression models on full resolution image datasets still remains untouched and is urgently needed in the pursuit of robust NIC.
Unfortunately, it is almost impossible to manually
analyze and search for all potential cases that may cause model instability. Inspired by popular adversarial attack studies on high-level vision tasks like classification~\cite{AdvExp_Goodfellow,43405,nguyen2015deep}, semantic segmentation~\cite{Xiao_2018_ECCV} and object detection~\cite{ijcai2019-134}, and on low-level tasks like super resolution~\cite{yin2018sr} and optical flow derivation~\cite{ranjan2019attacking}, which examine the robustness of the respective models, we suggest automatically generating adversarial examples to attack the NIC models used for image compression, analyzing the impacts on coding efficiency and reconstruction quality (quantitatively and qualitatively), exploiting the insights behind them, and developing effective solutions to defend against these attacks.
Hence, we optimize the generation of adversarial examples so that only a small, almost negligible perturbation is added to the original source images, yet it leads to severe distortion in the corresponding reconstruction, with which we can examine the instability of existing NIC methods.
Such an attack is referred to as an ``untargeted'' attack, since it has no designated purpose, such as deliberately misleading the semantic understanding of the reconstructed content. In Sec.~\ref{sec:targeted}, we also demonstrate how a targeted attack can affect the reconstruction of NIC methods.
Later, we leverage the generated adversarial examples to iteratively finetune the original models, improving the robustness of NIC to adversarial attacks. We also show that the adversarially finetuned model can effectively eliminate the distortions induced by image recompression, as in Fig.~\ref{fig:recompression}.
{\bf Contribution.}
\begin{enumerate}
\item To the best of our knowledge, this is the first work that performs adversarial attack to systematically study the model instability of learnt image compression on full resolution images (Sec.~\ref{sec:method});
\item Our extensive experiments report the general vulnerability of existing learning-based image compression methods regardless of the underlying model settings and perturbation generation strategy
(Sec.~\ref{sec:exp});
\item The proposed iterative adversarial finetuning simply augments the original training dataset with adversarial examples to finetune the existing compression model, which not only improves the model robustness substantially but also retains most of the original compression efficiency (Sec.~\ref{sec:attack_defense});
\end{enumerate}
Overall, our methodology, which includes the noise injection based adversarial example generation and the adversarial model finetuning using generated examples, is simple, effective and generalizable to most popular learnt image coding approaches, making it attractive and highly desirable for practical applications.
\begin{table}[t]
\centering
\caption{ Frequently Used Abbreviations}
\label{tab:notations}
\begin{tabular}{c|c}
\hline
Abbreviation & Description\\
\hline
NIC & Neural Image Compression \\
JPEG & Joint Photographic Experts Group\\
VVC & Versatile Video Coding\\
\hline
MSE & Mean Squared Error\\
PSNR & Peak Signal-to-Noise Ratio\\
MS-SSIM~\cite{MS-SSIM} & Multscale Structural Similarity\\
LPIPS~\cite{zhang2018unreasonable} & Learned Perceptual Image Patch Similarity\\
bpp & bit rate using bits per pixel\\
GAN & Generative Adversarial Network\\
VAE & Variational AutoEncoder\\
\hline
\end{tabular}
\end{table}
\section{Related Works}
\label{sec:related_work}
This section briefly reviews relevant techniques on lossy image compression and adversarial attacks.
\begin{figure*}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[scale=0.75]{./figs/VAE.pdf}
\caption{}
\label{fig:vae}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\centering
\begin{tabular}{c|c}
\textbf{Encoder} & \textbf{Decoder} \\ \midrule
Conv/Resblock s2$\downarrow$ & Deconv/Resblock s2$\uparrow$ \\
GDN/ReLU & IGDN/ReLU \\
Conv/Resblock s2$\downarrow$ & Deconv/Resblock s2$\uparrow$ \\
GDN/ReLU & IGDN/ReLU \\
Conv/Resblock s2$\downarrow$ & Deconv/Resblock s2$\uparrow$ \\
GDN/ReLU & IGDN/ReLU \\
Conv/Resblock s2$\downarrow$ & Deconv/Resblock s2$\uparrow$ \\
\end{tabular}
\caption{}
\label{table:vae}
\end{subfigure}
\caption{{\bf Neural Image Compression (NIC).} (a) General architecture of VAE-based NIC Solutions. ${\mathbf{Q}}$ denotes the quantization, and the entropy estimator provides the probability estimation for both AE (Arithmetic Encoding) and AD (Arithmetic Decoding); $\bf x$ is an original source image (or input source), and correspondingly $\bf \hat{x}$ is the output reconstruction (after decoding). (b) Typical paired Encoder and Decoder components used in the NIC framework, such as a normal Convolutional layer (Conv) or ResNet block (Resblock)~\cite{he2016deep}, resolution re-sampling (downscaling s2$\downarrow$ and upscaling s2$\uparrow$ by a factor of 2), and nonlinear activations using GDN (Generalized Divisive Normalization)~\cite{balle2015density} or ReLU (rectified linear unit). Deconv and IGDN are the inverted processes of Conv and GDN, respectively.}
\end{figure*}
\subsection{Lossy Image Compression}\label{sec:related_lossy}
Lossy image compression approaches, such as JPEG~\cite{wallace1991jpeg}, JPEG 2000~\cite{JPEG2K}, VVC Intra, etc., usually search for an appropriate compact representation through rate-distortion optimization (RDO). Typical solutions often adopt ``transform coding'' or ``hybrid transform/prediction coding'' to transform input pixels, or pixel residuals (after intra prediction), into frequency domain coefficients for quantization and entropy coding. These transforms are generally comprised of a set of basis functions that are weighted to represent arbitrary pixel-domain blocks/patches using a few sparse, nonzero coefficients.
{\bf Classical Methods.} Classical transforms include the Discrete Cosine Transform (DCT)~\cite{DCT}, the Wavelet Transform~\cite{J2K_Wavelet}, and so on. Taking the DCT as an example, its transform coefficients are usually clustered in a few low-frequency components, which can be leveraged for energy compaction. The DCT is invertible, i.e., the original image $\bf x$ can be reconstructed losslessly by applying the inverse DCT (IDCT), e.g., ${\bf x}=f^{-1}_{\text{dct}}(f_{\text{dct}}({\bf x}))$. Here $f_{\text{dct}}$ and $f^{-1}_{\text{dct}}$ are the DCT and IDCT, respectively.
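This invertibility is easy to demonstrate numerically; a minimal sketch using SciPy's orthonormal DCT:
\begin{verbatim}
import numpy as np
from scipy.fft import dctn, idctn

x = np.random.rand(8, 8)                       # an 8x8 image block
y = dctn(x, norm='ortho')                      # forward 2D DCT
assert np.allclose(idctn(y, norm='ortho'), x)  # x = IDCT(DCT(x))
\end{verbatim}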
{\bf Over-complete Dictionary.} Image data typically exhibits non-stationary behavior, making it difficult to de-correlate sufficiently with the existing DCT or wavelets. Dictionary learning is then introduced to use a number of content-adaptive, over-complete dictionary atoms to represent image blocks using sparser and easier-to-compress coefficients~\cite{zepeda2011image,Yao_SelfAdaptive}.
{\bf Neural Transform.} Built upon their powerful capacity to represent high-dimensional data, deeply stacked convolutional neural networks (CNNs) are utilized to construct neural transforms that represent image blocks in a more compact form (see Fig.~\ref{table:vae}). Fig.~\ref{fig:vae} plots a popular {VAE} architecture that applies neural transforms and successfully demonstrates superior coding efficiency~\cite{balle2016end,balle2018variational,minnen2018joint,lee2018context,chen2020,Cheng2020Learned}.
The forward transform in the encoder analyzes the original image source $\bf x$ and represents it using compact, vectorized latent features $\bf z$ for quantization (e.g., $\hat{\bf z} = {\mathbf{Q}}({\bf z})$) and entropy coding with learnt adaptive contexts. Conversely, the backward transform in the decoder mirrors the forward steps of the encoding process to generate the reconstruction $\hat{\bf x}$. As usual, the RDO module controls the compression trade-off, e.g.,
\begin{equation}
R + \lambda D = \underbrace{\mathbb{E}[-\log_2p({\bf \hat{z}})]}_{rate}+\lambda\underbrace{\mathbb{E}||{\bf x}-\hat{\bf x}||_2^2}_{distortion},
\label{eq:vae_rd}
\end{equation} with $p({\bf\hat{z}})$ standing for the symbol probability of the feature elements at the bottleneck layer~\cite{chen2020}.
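For concreteness, a minimal PyTorch-style sketch of the objective in \eqref{eq:vae_rd} may look as follows, assuming the entropy model returns per-element likelihoods for the quantized latents (all names are illustrative):
\begin{verbatim}
import torch
import torch.nn.functional as F

def rd_loss(x, x_hat, z_hat_likelihoods, lam):
    # rate: estimated bits for the quantized latents, in bits per pixel
    num_pixels = x.size(0) * x.size(2) * x.size(3)
    rate = -torch.log2(z_hat_likelihoods).sum() / num_pixels
    # distortion: mean squared error of the reconstruction
    distortion = F.mse_loss(x_hat, x)
    return rate + lam * distortion
\end{verbatim}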
Unlike the DCT or wavelet transform, the forward neural transform $f_{{E}}()$ in the encoder and its inverse counterpart $f_{{D}}()$ in the decoder are non-linear functions, and are usually not invertible even without quantization error,
\begin{equation}
{\bf x} \neq f_{{D}}(f_{{E}}({\bf x})).
\label{eq:irreversible_neural_tramsform}
\end{equation}
Even with an invertible neural transform, as studied in~\cite{helminger2021lossy,xie2021enhanced}, the quantization error may be amplified during decoding in lossy compression. In either case, $f_D()$ and $f_E()$ are made up of stacks of convolutional layers consisting of linear convolutions, activation functions, and possible spatial re-sampling (or spatial-channel shuffling), which may potentially amplify any perturbation through layer-by-layer computation and lead to unexpected impairments. As explained in previous work~\cite{AdvExp_Goodfellow}, linear behavior in high-dimensional spaces is sufficient to cause the vulnerability of deep neural networks against adversarial examples,
implying that deep neural transform based NIC methods are very possibly vulnerable to input perturbation.
\begin{table*}[t]
\centering
\caption{Tested NIC Approaches and Their Key Components.}
\label{tab:methods}
\begin{tabular}{l|l|l|l}
\hline
\multirow{2}{*}{\textbf{Method}} & \multicolumn{3}{c}{\textbf{Key Components}} \\
\cline{2-4}
& {\bf Nonlinear Transform} & \textbf{Entropy Model} & \textbf{Loss} \\
\hline
Ball\'e 2016~\cite{balle2016end} & Conv+GDN & factorized & MSE / MS-SSIM \\
\hline
Ball\'e 2018~\cite{balle2018variational} & Conv+GDN & hyperprior & MSE / MS-SSIM \\
\hline
Minnen~\cite{minnen2018joint} & Conv+GDN & joint hyperprior \& autoregressive & MSE \\
\hline
Cheng 2020~\cite{Cheng2020Learned} & Resblock+Spatial-Channel Attention & joint hyperprior \& autoregressive & MSE \\
\hline
NLAIC~\cite{chen2020} & Resblock+Nonlocal Attention & joint hyperprior \& autoregressive & MS-SSIM \\
\hline
HiFiC~\cite{mentzer2020high} & Conv+Channel Normalization & hyperprior & MSE + LPIPS~\cite{zhang2018unreasonable} + GAN \\
\hline
Weixin 2021~\cite{weixin2021fix} & Fixed-point NLAIC & joint hyperprior \& autoregressive & MSE \\
\hline
InvCompress~\cite{xie2021enhanced} & INN+Attention+Feature Enhancement & joint hyperprior \& autoregressive & MSE \\
\hline
\end{tabular}
\end{table*}
\subsection{Adversarial Attack}
Adversarial attacks have been widely utilized to measure the robustness of DNN models in various approaches, particularly for those in high-level vision tasks as aforementioned.
As is known, these DNN models essentially aim to construct a non-linear mapping function $g$ that effectively characterizes the relationship between the input data and the output result.
Adversarial attacks can be either \textit{targeted} or \textit{untargeted}. Given a vectorized input $\bf x$ and a mapping function $y = g({\bf x})$,
for an \textit{untargeted} attack, adversarial examples ${\bf x}^{\ast}$ are used to produce an undesignated but incorrect output. They can be derived using:
\begin{align}
\mathop{\arg\!\min}_{{\bf x}^{\ast}} {d}\{{\bf x},{\bf x}^{\ast}\} \ \text{s.t.} \ g({\bf x}^{\ast}) \neq g({\bf x}). \label{eq:untargeted_attack}
\end{align} Here ${d}\{\cdot\}$ measures the signal distance between original $\bf x$ and adversarial example ${\bf x}^{\ast}$, for which conventional MSE or other metrics can be used.
Adversarial attacks can also be {\it targeted}. Taking image classification as an example, a targeted attack, as the name implies, attempts to use an example ${\bf x}^*$ that has only a limited perturbation compared with the original input $\bf x$ to ``fool'' the classifier $f$ into mis-classifying ${\bf x}^*$ to some targeted label $y^t = f({\bf x}^*)$ that is totally different from the original outcome $y$. Such targeted adversarial examples can be generated through:
\begin{align}
\mathop{\arg\!\min}_{{\bf x}^{\ast}} {d}\{{\bf x},{\bf x}^{\ast}\} \ \text{s.t.} \ f({\bf x}^{\ast}) = y^t. \label{eq:targeted_attack}
\end{align}
Many popular methods like stochastic gradient descent (SGD), Adam~\cite{adam} and fast gradient-based techniques like FGSM~\cite{AdvExp_Goodfellow} can be used for solving \eqref{eq:untargeted_attack} and \eqref{eq:targeted_attack} automatically.
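For reference, a single FGSM step takes the schematic form below (a sketch; model, attack_loss, target and alpha are placeholders, and the sign of the step is flipped for the targeted case in \eqref{eq:targeted_attack}, where the loss is minimized):
\begin{verbatim}
x = x.detach().clone().requires_grad_(True)
loss = attack_loss(model(x), target)   # objective to increase (untargeted)
loss.backward()
x_adv = (x + alpha * x.grad.sign()).clamp(0, 1).detach()
\end{verbatim}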
Existing attempts at adversarial attacks on VAE-based methods~\cite{Tabacof2016AdversarialIF, kos2018adversarial} only focus on generative models like VAE-GAN~\cite{larsen2016autoencoding} and mainly use very low-resolution (e.g., $28\times28$), simple digit datasets like MNIST and SVHN~\cite{netzer2011reading}.
To the best of our knowledge, there has been no existing work on attacking full resolution image compression frameworks.
As aforementioned, the stability of the decoded reconstruction in image compression tasks is critical to applications. We thus wish to see whether limited perturbations of the input can produce unexpected distortions in the output image after compression by NIC methods. Technically, this is an untargeted attack.
Therefore, this work first demonstrates how existing image compression models can be affected by the untargeted attack and how to defend them for robust image compression. Later, we also show that our methodology can be easily extended to targeted attacks.
\section{Adversarial Examples}
\label{sec:method}
Images are mostly compressed for vast applications due to rate/bandwidth constraints in practice.
The underlying compression technology is mandated to guarantee interoperability across heterogeneous platforms, client devices, etc., and is often specified as an international standard or industrial recommendation that is reproducible and publicly accessible.
Hence, we generally assume that the attacker has full access to the NIC codec and model parameters to best generate adversarial examples for the attack~\cite{kos2018adversarial}. Technically, the proposed attack is thus a \textit{white-box} attack.
Without making any modification to the pretrained encoder $f_E()$ and decoder $f_D()$ of a specific NIC method, the attacker wishes to inject a small amount of noise $\bf n$ into the original source image $\bf x$ to generate the adversarial example (or input) $\bf x^*= x + \mathbf{n}$. These images are respectively encoded and decoded to obtain the adversarial reconstruction $\hat{\bf x}^* = f_D(f_E({\bf x^*}))$ and the original reconstruction $\hat{\bf x} = f_D(f_E({\bf x}))$.
The generated ${\bf x}^*$ should carry only a limited or negligible perturbation compared with the original input $\bf x$, while the decoded adversarial reconstruction $\hat{\bf x}^*$ of the adversarial input ${\bf x}^*$ is expected to differ largely from the reconstruction $\hat{\bf x}$ of the original $\bf x$, with visible distortion. Therefore, the adversarial examples for such an untargeted/distortion attack can be derived by optimizing the added noise $\bf n$ with:
\begin{align}
\mathop{\arg\min}_{\mathbf{n}}{L_{d}} =\left\{ \begin{array}{lc}
||\mathbf{n}||_2^2, & ||\mathbf{n}||_2^2 \geq \epsilon, \\
1-||\hat{\bf x}-\hat{\bf x}^*||_2^2, & ||\mathbf{n}||_2^2 < \epsilon.
\end{array}
\right.
\label{eq:dloss}
\end{align}
where $\epsilon$ is the $l_2$ threshold of the input noise.
When the injected noise is strong, i.e., $||\mathbf{n}||_2^2 \geq \epsilon$, the objective first decreases the noise intensity until $||\mathbf{n}||_2^2 < \epsilon$;
thereafter, we use the negative $l_2$ distance between the original reconstruction $\hat{\bf x}$ and the adversarial reconstruction $\hat{\bf x}^*$ to maximize their distance. We preset $\epsilon$ to enforce the same level of perturbation added to the input while maximizing the output distortion. As seen, we can easily adapt $\epsilon$ to obtain proper adversarial examples as expected. As will be shown in a later section, other distance metrics, such as the $l_1$ distance and the structural-similarity-based MS-SSIM, can also be used to guide the example generation.
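As a concrete reference, the following PyTorch-style sketch optimizes \eqref{eq:dloss}; the callables \texttt{f\_E}/\texttt{f\_D}, the mean-squared norms, the noise initialization scale, and a differentiable (surrogate-quantized) codec are illustrative assumptions, while $\epsilon$, the learning rate, and the step count follow the settings reported in Sec.~\ref{sec:exp}.
\begin{verbatim}
import torch

def distortion_attack(f_E, f_D, x, epsilon=1e-3,
                      steps=10000, lr=1e-3):
    with torch.no_grad():
        x_hat = f_D(f_E(x))          # noise-free reconstruction
    n = (1e-4 * torch.randn_like(x)).requires_grad_(True)
    opt = torch.optim.Adam([n], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_adv_hat = f_D(f_E(x + n))  # adversarial reconstruction
        if n.pow(2).mean() >= epsilon:
            loss = n.pow(2).mean()   # first shrink the noise budget
        else:                        # then maximize output distortion
            loss = 1 - (x_hat - x_adv_hat).pow(2).mean()
        loss.backward()
        opt.step()
    return (x + n).detach()
\end{verbatim}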
The proposed perturbation generation strategy in \eqref{eq:dloss} is a general approach that is effective for any NIC model used in practice.
As will be shown in a later section, these generated adversarial examples can be used for finetuning existing NIC
models. The finetuned model can effectively defend against the artifacts induced by adversarial attacks as well as by practical applications like successive image recompression.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.2]{./figs/anchor_test_big.pdf}
\caption{{\bf Model Instability Incurred Reconstruction Distortions.} Both adversarial inputs and decoded reconstructions are visualized. Input images are compressed using the existing NICs listed in Tab.~\ref{tab:methods}. The bitrate is measured in bpp (bits per pixel). All PSNR and MS-SSIM values are measured against the uncompressed original image $\bf x$ for quantitative evaluation; the same measurement applies to all other simulations in this study.}
\label{fig:anchors}
\end{figure*}
\section{Attack Evaluation}
\label{sec:exp}
This section follows the optimization strategy proposed in \eqref{eq:dloss} to measure the robustness of popular NIC methods. The Adam~\cite{adam} optimizer is used to solve \eqref{eq:dloss}. Extensive simulations reveal that the distortion attack works effectively and leads to visible impairments in adversarial reconstructions regardless of the underlying NIC method, noise injection scheme, etc.
\subsection{NIC: Approaches and Settings }
\label{sec:method_distorion}
\subsubsection{General Observation} We choose a variety of VAE-based NICs shown in Table~\ref{tab:methods} to demonstrate the general existence of instability among recently emerged NIC solutions. These NICs exemplify major milestones of key components during the development of learning-based image coding.
\begin{table}[htbp]
\centering
\caption{Original reconstruction and adversarial reconstruction quality comparison for different NIC methods, averaged over the Kodak dataset. The ``mse'' and ``ms-ssim'' labels indicate models optimized using the MSE and MS-SSIM losses, respectively.}
\label{tab:nics}
\begin{tabular}{l|c|c|c|c}
\hline
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c}{\textbf{Original}} & \multicolumn{2}{|c}{\textbf{Adversarial}} \\
\cline{2-5}
& {\bf PSNR} & \textbf{MS-SSIM} & \textbf{PSNR} & \textbf{MS-SSIM} \\
\hline
Ball\'e2016 (mse) & 29.6194 & 0.9554 & 17.1008 & 0.8521\\
Ball\'e2016 (ms-ssim) & 27.2918 & 0.9650 & \textcolor{red}{11.5161} & \textcolor{red}{0.6087}\\
\hline
Ball\'e2018 (mse) & 30.9674 & 0.9609 & 17.0802 & 0.8511 \\
Ball\'e2018 (ms-ssim) & 27.1062 & 0.9706 & \textcolor{red}{11.6085} & \textcolor{red}{0.6752} \\
\hline
Minnen 2018 (mse) & 31.3861 & 0.9627 & 16.1951 & 0.8384 \\
\hline
Cheng 2020 & 31.3035 & 0.9624 & 20.1133 & 0.8768 \\
\hline
HiFiC & 27.0348 & 0.9217 & 19.5471 & 0.7438\\
\hline
NLAIC & 27.2177 & 0.9556 & 14.3657 & 0.7317\\
\hline
Weixin 2021 & 30.3835 & 0.9748 & 15.7722 & 0.8729\\
\hline
InvCompress & 31.3094 & 0.9541 & \textcolor{green}{21.3836} & \textcolor{green}{0.9266} \\
\hline
\end{tabular}
\end{table}
Theoretically, image coding efficiency is highly related to the efficiency of the nonlinear transform and the entropy context modeling. An early attempt, Ball\'e 2016~\cite{balle2016end}, first introduced the Generalized Divisive Normalization (GDN) with convolutional layers and applied a simple factorized entropy model, which was further improved in Ball\'e 2018~\cite{balle2018variational} and Minnen 2018~\cite{minnen2018joint} through a hyper prior, and through joint hyper prior and autoregressive neighbors, for better entropy modeling. Because of the superior efficiency of the joint hyper prior and autoregressive neighbors, succeeding works mostly reused the same mechanism in context modeling.
Given that convolution mostly captures local correlations, NLAIC~\cite{chen2020} and Cheng 2020~\cite{Cheng2020Learned} suggested attention mechanisms to aggregate local or nonlocal correlations for enhanced performance. The aforementioned models mainly applied MSE or MS-SSIM as the loss function in training, while HiFiC~\cite{mentzer2020high} combined the MSE, LPIPS (Learned Perceptual Image Patch Similarity) and GAN (Generative Adversarial Network) losses to noticeably improve the perceptual quality of reconstructed images.
In addition to coding efficiency, other aspects were considered as well. For example, the aforementioned models rely on floating-point computation, which is platform dependent and complexity intensive; Weixin 2021~\cite{weixin2021fix} developed a fixed-point strategy to optimize NLAIC~\cite{chen2020}, yielding negligible performance loss but significant complexity reduction. Note that the nonlinear transforms in~\cite{balle2016end,balle2018variational,minnen2018joint,chen2020,Cheng2020Learned} are not invertible and cannot avoid information loss; alternatively, InvCompress~\cite{xie2021enhanced} used an invertible neural network (INN) as its core transform to resolve this issue.
We generally utilize the open-sourced code and pretrained models of these methods (e.g., compressAI\footnote{https://github.com/InterDigitalInc/CompressAI}~\cite{begaint2020compressai}, NLAIC\footnote{https://njuvision.github.io/NIC/}, Weixin 2021\footnote{https://njuvision.github.io/fixed-point/}, HiFiC\footnote{https://github.com/tensorflow/compression/tree/master/models/hific} and InvCompress\footnote{https://github.com/xyq7/InvCompress}) to avoid any ambiguity.
For models without released weights, we strictly follow the training procedures to reproduce them from scratch.
\begin{figure}[t]
\centering
\includegraphics[scale=0.14]{./figs/metrics-cmp.pdf}
\caption{{\bf Impact of Loss Functions Used for Training NIC.} Illustration of adversarial examples, adversarial reconstructions (via decoding compressed adversarial inputs), and original noise-free reconstructions for three different NIC models optimized using MSE, MS-SSIM and LPIPS respectively. Both PSNR and MS-SSIM are evaluated against the original noise-free input. Compressed bit rates in bpp increase when encoding the adversarial examples that have noise perturbation.}
\label{fig:lambda_comparison}
\end{figure}
For fair comparison, the noise threshold $\epsilon$ in \eqref{eq:dloss} is set to 1e-3, which makes the PSNR between the original image and the corresponding adversarial example equal to 30~dB. We run 10,000 steps with a learning rate of 1e-3 to generate adversarial inputs for the NIC methods listed in Table~\ref{tab:methods}, using images from the Kodak dataset~\cite{kodak}.
In Fig.~\ref{fig:anchors}\footnote{We use different images for the diverse NIC solutions to exemplify the severe quality degradation of decoded images incurred by the model instability. Such visible distortions are present for other images as well.}, the adversarial inputs and corresponding reconstructions are visualized for each method. Visible impairments are clearly present in these reconstructions for the most popular NICs (even for methods using an invertible neural network~\cite{xie2021enhanced} or fixed-point computation~\cite{weixin2021fix}), implying that the instability issue applies to all learning-based image coding frameworks regardless of the diverse components and optimization methods utilized.
Extremely low PSNR (e.g., $< 20$~dB) and MS-SSIM measurements of the reconstructions also confirm the aforementioned subjective quality loss. More numerical comparison results can be found in Table~\ref{tab:nics}, which lists the reconstruction quality of the original noise-free inputs and of the adversarial inputs averaged over 24 Kodak images. Note that all PSNR and MS-SSIM values are measured against the uncompressed, original image $\bf x$ for quantitative evaluation in this study.
\begin{figure}[t]
\centering
\includegraphics[scale=0.14]{./figs/bitrates-new.pdf}
\caption{{\bf Impact of Quality Scales Used in NIC.}
Illustration of adversarial examples, adversarial reconstructions and original noise-free reconstructions for three different quality scales used in NIC model.
The quality scale parameter in CompressAI is set to [1,4,8] to represent low-quality (e.g., low-bitrate), intermediate-quality and high-quality (high-bitrate) compression.}
\label{fig:bitrate}
\end{figure}
Note that the first three methods Ball\'e 2016~\cite{balle2016end}, Ball\'e 2018~\cite{balle2018variational} and Minnen 2018~\cite{minnen2018joint} use the same encoder and decoder transform. Their main difference is the entropy estimator used for coding the latent feature (see more details in Table~\ref{tab:methods}). As seen in Table~\ref{tab:nics}, these methods show clear vulnerability to adversarial attacks regardless of different entropy approximations.
InvCompress~\cite{xie2021enhanced}, which applies an invertible neural network to avoid information loss, shows the best adversarial reconstruction quality among all tested methods in Table~\ref{tab:nics}. However, the model instability is still present, with obvious distortion in Fig.~\ref{fig:anchors}, which might be caused by the feature enhancement modules used before and after the invertible neural network transforms.
For the sake of simplicity, we then take the classic and most representative Ball\'e 2018~\cite{balle2018variational} method as an example in the rest of the article, to systematically evaluate how the adversarial attack and defense would affect the robustness of learnt image compression.
\subsubsection{Loss Functions}
One advantage of learning-based NIC solutions over existing rules-based JPEG, VVC Intra, etc., is that we can easily adapt a specific loss function toward a designated quality optimization. In addition to the traditional MSE (mean squared error) and MS-SSIM (multi-scale structural similarity) that are extensively used in model training, recent works have suggested that LPIPS, feature losses, as well as the GAN loss can be applied separately or jointly for better perceptual quality of decoded reconstructions, because these metrics show better correlation with subjective sensation~\cite{zhang2018unreasonable,huang2019extreme}.
We separately optimize Ball\'e 2018~\cite{balle2018variational} using three popular loss functions, i.e., MSE, MS-SSIM, and LPIPS.
As illustrated in Fig.~\ref{fig:lambda_comparison}, model-instability-incurred distortions are again present, in addition to the impairments observed in Fig.~\ref{fig:anchors} for the image coded by the GAN-loss-optimized HiFiC,
implying successful attacks on the underlying NIC model regardless of the loss function used in model optimization. Sharp drops in PSNR and MS-SSIM are noted when quantitatively comparing the reconstructions of compressed adversarial examples in the 2nd row to those of compressed inputs without adversarial attack in the 3rd row. We also notice that the bit rate measured in bpp increases when compressing the adversarial examples relative to the corresponding attack-free inputs, which reveals that the injected noise is more difficult to model for efficient coding.
\subsubsection{Quality Scales} In practice, images are encoded at various quality scales (or bit rates) to satisfy diverse application requests.
Rate-distortion optimization suggests that a larger bit rate basically comes with better reconstruction quality.
However, as shown in Fig.~\ref{fig:bitrate}, the NIC models are consistently vulnerable to adversarial attacks at all quality scales.
\begin{equation}
{\lim_{r \to +\infty}}||x - f_D(f_E(x))|| = \cancelto{\varphi}{0}
\label{eq:aelimit}
\end{equation}
Actually, due to the irreversibility of VAE-based methods as mentioned in~\eqref{eq:irreversible_neural_tramsform}, there exists a distortion upper bound, or \textbf{AE limit}, $\varphi$ in~\eqref{eq:aelimit} as mentioned in~\cite{helminger2021lossy} when the bitrate $r$ increases. This explains why it is possible to generate adversarial examples for models trained at arbitrary bitrates.
As the compressed bitrate increases, the quality of the adversarial reconstruction gets even worse. This is because, at low bitrates, even though the adversarial examples carry the perturbation, the higher quantization level removes more details as well as noise, which partially stops the noise accumulation and presents less distortion. When the bitrate gets higher, however, the model tries to retain more information of the input image, which also accumulates the perturbation layer by layer to produce severer distortion.
\begin{figure}[t]
\centering
\includegraphics[scale=0.16]{./figs/epsilons.pdf}
\caption{{\bf Impact of $\epsilon$.} Illustration of adversarial examples in the first row, associated noise in the second row, and corresponding decoded reconstructions in the third row for different perturbation levels. $\epsilon$ is adjusted to adapt the intensity of the noise used for input perturbation, i.e., a larger $\epsilon$ gives stronger perturbation.}
\label{fig:epsilon}
\end{figure}
\subsection{Adversarial Example Generation Settings}
The previous sections report that existing NIC models are vulnerable to adversarial attacks regardless of the implementation methods, loss functions, and compressed bitrates utilized. This section further shows that the adversarial attack works in general when applying different noise thresholds and distance measurements in~\eqref{eq:dloss} to inject the perturbation.
\subsubsection{Noise Threshold $\epsilon$}
Figure~\ref{fig:epsilon} visualizes adversarial examples with different levels of input perturbation obtained through $\epsilon$ adjustment. The results are consistent with intuition, i.e., a larger $\epsilon$ leads to stronger noise for input image perturbation and thus yields more severely degraded reconstructions. Surprisingly, even with negligible visual artifacts at $\epsilon=1e-5$, impaired pixels are still present in the decoded reconstruction, underscoring the vulnerability of NIC models.
\begin{figure}[t]
\centering
\includegraphics[scale=0.14]{./figs/metrics.pdf}
\caption{{\bf Impact of Distance Measurements Used for Generating Adversarial Examples.} Illustration of adversarial examples and corresponding adversarial reconstructions generated using different distance measurements in~\eqref{eq:dloss}.}
\label{fig:metrics_attack}
\end{figure}
\subsubsection{Distance Measurement}
We originally apply the common $l_2$ distance in \eqref{eq:dloss} to optimize the noise injection for the generation of adversarial examples. Here, we show that the $l_1$ distance and the MS-SSIM measurement can also be used to produce adversarial examples that effectively attack NIC models, yielding the impaired reconstructions in Fig.~\ref{fig:metrics_attack}.
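For illustration, the second branch of \eqref{eq:dloss} can be swapped as sketched below; the third-party \texttt{pytorch\_msssim} package and inputs normalized to $[0,1]$ (and large enough for the default five MS-SSIM scales) are assumptions of this sketch, with \texttt{x\_hat}/\texttt{x\_adv\_hat} as in the loop of Sec.~\ref{sec:method}.
\begin{verbatim}
from pytorch_msssim import ms_ssim

def distortion_term(x_hat, x_adv_hat, metric="l2"):
    if metric == "l2":
        return 1 - (x_hat - x_adv_hat).pow(2).mean()
    if metric == "l1":
        return 1 - (x_hat - x_adv_hat).abs().mean()
    if metric == "ms-ssim":
        # MS-SSIM is a similarity score; minimizing it pushes
        # the two reconstructions apart.
        return ms_ssim(x_hat, x_adv_hat, data_range=1.0)
    raise ValueError(metric)
\end{verbatim}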
\section{Attack Defense}
\label{sec:attack_defense}
Extensive experiments in the last section report that existing NIC models are very vulnerable to adversarial attacks, suggesting that designing different network structures and applying various loss functions cannot successfully defend against the attack. The distortion in adversarial reconstructions possibly arises because NIC models trained on noise-free datasets are incapable of characterizing noisy content whose distribution was unseen during training. In the meantime, previous works on adversarial training with adversarial examples~\cite{Szegedy2014, 43405, 2015Learning, alex2018} have already shown the ability to improve the robustness of the underlying learnt model for various vision tasks. Thus, we suggest applying a simple yet effective adversarial finetuning strategy to obtain robust NIC models.
\subsection{Iterative Adversarial Finetuning} \label{sec:data_augmentation}
Existing NIC approaches often use noise-free (or at least less noisy), high-quality, uncompressed image samples to train their respective models. They work well on clean, uncompressed content, but are mostly vulnerable to noisy inputs, as aforementioned.
Experiments in the previous sections have shown that such noisy inputs can be simulated by adversarial example generation; we therefore propose to augment the original training dataset with these generated adversarial examples, which enriches the distribution of the training data and thereby improves model generalization.
As described in Algorithm~\ref{method:iter}, we iteratively implement the \textit{attack-and-finetune} strategy to update the pretrained model.
Here $\theta$ and $\phi$ are the trainable parameters of the encoder $f_E()$ and decoder $f_D()$, respectively.
$N$ is the total number of adversarial finetuning iterations; in each iteration, a batch of 8 images is randomly sampled from the training dataset ${\mathbb{D}}$, of which four are kept as original images $\Vec{\bf x}_{\bf ori}$ and the other four are used to generate adversarial examples $\Vec{\bf x}^*$ with $M$ steps of optimization following~\eqref{eq:dloss}. $\Vec{\bf x}_{\bf ori}$ and $\Vec{\bf x}^*$ are then concatenated to update the pretrained model with the Adam optimizer. In this paper, we set $N$ = 1000 and $M$ = 1000.
Randomly-cropped 256$\times$256$\times$3 patches from the DIV2K~\cite{Agustsson_2017_CVPR_Workshops} dataset are used as the baseline training dataset ${\mathbb{D}}$.
Note that $\epsilon = 1e-3$ and the $l_2$ distance are used. The generation of each batch of adversarial examples costs about 22~s on a platform with an Intel Xeon Silver 4210 CPU@2.20GHz and a single NVIDIA RTX 2080Ti GPU.
\begin{algorithm}[t]
\caption{Iterative Adversarial Finetuning}
\hspace*{0.02in} {\bf Input:}
\hspace*{0.02in}
Encoder $f_E()$ parameters $\theta$, Decoder $f_D()$ parameters $\phi$, Dataset $\mathbb{D}$, finetuning iterations $N$, adversarial example generation iterations $M$, \\
\hspace*{0.02in} {\bf Output:}
$\theta^*$, $\phi^*$
\begin{algorithmic}[1]
\STATE $\theta^*, \phi^* =\theta, \phi$ //initialize using pretrained baseline model
\STATE $i$ = 0
\FOR{i $<$ $N$}
\STATE $\Vec{\bf x}_{\bf ori}$ = RANDOM\_SAMPLE\_FROM($\mathbb{D}$, batchsize=4)
\STATE $\Vec{\bf x}_{\bf adv}$ = RANDOM\_SAMPLE\_FROM($\mathbb{D}$, batchsize=4)
\STATE {$j$ = $0$, $\bf n$ = RANDOM\_NOISE(), $\Vec{\bf x}^*$ = $\Vec{\bf x}_{\bf adv}$ + $\bf n$}
\FOR{$j \leq$ $M$}
\STATE optimize $\Vec{\bf x}^*$ by Eq.~\eqref{eq:dloss}
\STATE $j$ = $j$ + 1
\ENDFOR
\STATE $\Vec{\bf x}_{\bf new}$ = [$\Vec{\bf x}_{\bf ori}$, $\Vec{\bf x}^*$]
\STATE $dloss$ = DISTORTION\_FUNCTION($f_{D}(f_{E}(\Vec{\bf x}_{\bf new}))$, $\Vec{\bf x}_{\bf new}$)
\STATE $rloss$ = RATE\_FUNCTION($f_{E}$($\Vec{\bf x}_{\bf new}$))
\STATE $loss$ = $\lambda dloss + rloss$
\STATE $\theta^*$ = ADAM\_OPTIMIZER($\theta^*, loss$)
\STATE $\phi^*$ = ADAM\_OPTIMIZER($\phi^*, loss$)
\STATE $i$ = $i$ + 1
\ENDFOR
\STATE \RETURN $\theta^*$, $\phi^*$
\end{algorithmic}
\label{method:iter}
\end{algorithm}
\begin{figure*}[htbp]
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[scale=0.16]{./figs/visual_more_a.pdf}
\caption{}
\label{fig:augment_a}
\end{subfigure}%
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[scale=0.16]{./figs/visual_more_b.pdf}
\caption{}
\label{fig:augment_b}
\end{subfigure}%
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[scale=0.16]{./figs/visual_more_c.pdf}
\caption{}
\label{fig:augment_c}
\end{subfigure}%
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[scale=0.22]{./figs/distrib-vertical.pdf}
\caption{}
\label{fig:distribution}
\end{subfigure}
\caption{{\bf Attack Defense Using the Adversarially-Finetuned Model.} (a) Original (noise-free) input, decoded reconstruction using the pretrained NIC model, and the 41st channel response of the compressed latent features; (b) adversarial input, adversarial reconstruction using the pretrained NIC model, and the 41st channel response of the compressed latent features; (c) adversarial input, adversarial reconstruction using the adversarially finetuned NIC model, and the 41st channel response of the compressed latent features. The reconstruction without adversarial finetuning in (b) exhibits severe distortion, while the distortion in the decoded reconstruction using the finetuned NIC model in (c) is greatly alleviated. Note that the latent features are presented at 1/16$\times$1/16 of the original image size; they are upscaled for better visualization. (d) Distribution comparison in the latent space between the original input and the adversarial input coded using the pretrained model. For some channels (e.g., the 17th, 37th, and 149th), the distributions are very similar; however, the latent features in the 41st, 124th, and 136th channels present quite distant distributions. For better visualization, feature elements with value 0 are removed.}
\end{figure*}
\subsection{Defense Efficiency}
We show the efficiency of the attack defense in two respects: one is the distortion removal in adversarial reconstructions; the other is the coding efficiency of the adversarially finetuned model. Here, the most vulnerable MS-SSIM-optimized Ball\'e 2018 model is used as an example to demonstrate the defense efficiency.
\subsubsection{Distortion Alleviation Using Finetuned Model}
The subplots in the second row of Figs.~\ref{fig:augment_b} and \ref{fig:augment_c} visualize adversarial reconstructions of the same adversarial input obtained by applying the pretrained and the finetuned Ball\'e 2018 model, respectively. For comparison, we also plot the original input (without attack) and its decoded reconstruction using the pretrained NIC model in the first and second rows of Fig.~\ref{fig:augment_a}. As seen, the adversarially-finetuned NIC model can effectively defend against the noise perturbation added to the input image; we believe this is because adversarial finetuning improves the pretrained NIC model to better characterize the distribution of noisy inputs.
In Figs.~\ref{fig:augment_a} and~\ref{fig:augment_b}, we also picture the compressed latent features ${\bf z}=f_{E}({\bf x})$ of the original image and of the adversarial input under the baseline model.
Here we choose the channel that is spatially most correlated with the output distortion. The noticeable variations in the features confirm the insufficient capacity of the native, pretrained baseline model to represent and reconstruct these perturbations.
Specifically, Fig.~\ref{fig:distribution} gives the overall distribution comparison in latent feature space between the original input and the adversarial input under the baseline model. As we can see, the feature distributions are almost the same for some channels (e.g., the 17th, 37th and 149th), while they are very different for others (e.g., the 41st, 124th and 136th), demonstrating the capability of the adversarial examples to enrich the distribution of the original dataset.
Therefore, by finetuning the baseline model with these augmented adversarial examples,
we extend the model's representation capacity to effectively reconstruct the perturbation. As shown in Fig.~\ref{fig:augment_c}, with similar responses in the latent features as in Fig.~\ref{fig:augment_b}, the reconstruction distortion is greatly alleviated.
\begin{figure}[t]
\centering
\includegraphics[scale=0.32]{./figs/rd.pdf}
\caption{{\bf Coding Efficiency.} Rate-distortion efficiency averaged over all 24 {Kodak} images for the pretrained baseline model (i.e., Ball\'e 2018~\cite{balle2018variational}) and the model finetuned using adversarial examples generated from the DIV2K dataset $\mathbb{D}$.}
\label{fig:rd}
\end{figure}
More importantly, we then attempt to generate new adversarial examples to attack the finetuned NIC model. Experimental results in Table~\ref{tab:cmp} report the effective defense against these attacks without noticeable distortion in the adversarial reconstructions, further revealing the improved robustness of the underlying model with the proposed methodology.
\begin{table}[htbp]
\centering
\caption{Adversarial reconstruction quality comparison between the baseline model and the adversarially finetuned model, averaged over the Kodak dataset. The reconstruction quality is significantly improved for the model finetuned on adversarial examples.}
\label{tab:cmp}
\begin{tabular}{l|c|c}
\hline
\textbf{Methods} & PSNR (dB) & MS-SSIM\\
\hline
Balle 2018 MS-SSIM opt. (baseline model) & 11.6085 & 0.6752\\
\hline
finetuned model & \textcolor{green}{23.7074} & \textcolor{green}{0.9064}\\
\hline
\end{tabular}
\end{table}
\subsubsection{Coding Efficiency of Finetuned Model}
Previous discussions show that the finetuned compression model can effectively defend against noise attacks. Another important question then arises: is the coding efficiency of the finetuned model retained?
To better understand this question, we perform a comparative study using two models:
\begin{itemize}
\item {Baseline Model}: native, pretrained Ball\'e 2018 model, provided by compressAI~\cite{begaint2020compressai};
\item {Adversarial Finetuned Model}: finetuning the pretrained baseline using generated adversarial examples.
\end{itemize}
We then use these models to independently encode the 24 Kodak images and measure the average rate-distortion efficiency. As shown in Fig.~\ref{fig:rd},
the coding efficiency of the finetuned model
drops only slightly compared with the baseline model, revealing that adversarial finetuning can not only effectively improve the model robustness but also largely retain the coding efficiency of the original model. This shows that our methodology is a quick plug-and-play strategy requiring no changes to the underlying compression framework, and that our solution is simple, effective, and generalizable to most existing neural image compression approaches.
\subsection{Recompression Application}
Image recompression is very common in applications. As reported in~\cite{kim2020}, successive recompression using a learnt image coder leads to severe distortion in the reconstruction. To tackle this, Kim {\it et al.}~\cite{kim2020} purposely included repeatedly compressed images in training to improve the model robustness. Our proposed adversarial finetuning strategy is not designed for any specific scenario; however, given that compression artifacts can also be treated as additive noise, we test the finetuned compression model on this recompression task.
The visual comparison using Kodak images is shown in Fig.~\ref{fig:recompression}. After 50 repetitions of recompression, severe distortions can be found in the reconstructions encoded using the pretrained compression model, while these unexpected distortions are
completely eliminated or greatly alleviated in decoded images compressed using the finetuned model.
This clearly evidences the generalization of our method to real-life applications like image recompression. We also expect that our methodology can be applied to other pixel-domain regression tasks, such as quality enhancement and super resolution, for robustness evaluation and improvement in future works.
\section{Extending the Methodology to Targeted Attack}
\label{sec:targeted}
This companion section showcases that our methodology can also be applied to targeted attacks. Different from the untargeted attack in the previous sections, which merely maximizes the reconstruction distortion, a targeted attack is typically applied for semantic content understanding and attempts to produce a targeted output that is designated to be different from the original output.
\subsection{Targeted Adversarial Examples}
Assuming that we have the source image $\bf x$ and a target image ${\bf x}^t$ that is different from $\bf x$, we hope to generate an adversarial input ${\bf x}^*= {\bf x} + \mathbf{n}$ such that the reconstruction $\hat{\bf x}^* = f_D(f_E({\bf x}^*))$ is close to the reconstruction of the target image $\hat{\bf x}^t = f_D(f_E({\bf x}^t))$.
By fixing the encoder $f_E()$ and decoder $f_D()$ of a specific NIC method, the generation of adversarial examples for the targeted attack can be formulated as minimizing the input noise as well as the distance of the output from the target output:
\begin{align}
\mathop{\arg\min}_{\mathbf{n}}{L_{t}} =\left\{ \begin{array}{lc}
||\mathbf{n}||_2^2, & ||\mathbf{n}||_2^2 \geq \epsilon \\
||\hat{\bf x}^{*}-\hat{\bf x}^{t}||_2^2, & ||\mathbf{n}||_2^2 < \epsilon
\end{array}
\right.
\label{eq:tloss}
\end{align}
Similar to~\eqref{eq:dloss}, here $\epsilon$ is the threshold of the input noise that controls the perturbation level.
The main difference between the targeted attack in~\eqref{eq:tloss} and the untargeted attack in~\eqref{eq:dloss} is that, instead of using the negative $l_2$ distance to maximize the output distortion as in~\eqref{eq:dloss}, the objective here is to minimize the distance between the reconstruction of the target image $\hat{\bf x}^t$ and the reconstruction of the adversarial example $\hat{\bf x}^*$.
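A minimal sketch of the targeted variant of the earlier untargeted loop is given below, reusing the same illustrative interface assumptions (\texttt{f\_E}/\texttt{f\_D} callables, mean-squared norms, hyperparameters); only the second loss branch changes, now pulling the adversarial reconstruction toward that of the target.
\begin{verbatim}
import torch

def targeted_attack(f_E, f_D, x, x_t, epsilon=1e-3,
                    steps=10000, lr=1e-3):
    with torch.no_grad():
        x_t_hat = f_D(f_E(x_t))      # reconstruction of the target
    n = (1e-4 * torch.randn_like(x)).requires_grad_(True)
    opt = torch.optim.Adam([n], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_adv_hat = f_D(f_E(x + n))
        if n.pow(2).mean() >= epsilon:
            loss = n.pow(2).mean()   # keep the noise small
        else:                        # pull output toward the target
            loss = (x_adv_hat - x_t_hat).pow(2).mean()
        loss.backward()
        opt.step()
    return (x + n).detach()
\end{verbatim}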
Note that \eqref{eq:tloss} simply minimizes the distance between the source and target images over all pixels, which works well for low-resolution, thumbnail images, such as those of the handwritten digits dataset MNIST. Nowadays, an image often exhibits a much higher spatial resolution, such as 4K, 8K, etc., and usually presents multiple semantic cues, e.g., diverse foreground objects, within the same scene, making it difficult to directly apply \eqref{eq:tloss}.
For most real-life images, we therefore suggest a more practical strategy that focuses on altering the salient object or area (e.g., text, face, etc.) that is most attentive to human observers.
The target in an image usually occupies a small fraction of the entire scene; thus we can simply mask out the target area, manually or by any target segmentation/saliency detection algorithm, while keeping the background unchanged as in Fig.~\ref{fig:back_roi}(a).
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{./figs/mnist_mask.pdf}
\caption{\textbf{Attack with target area mask.} (a) Example of an adversarial attack with a masked ROI (region of interest) target; zero perturbation is added in the background area. (b) Perturbations are added in both the target area and the background area with $\lambda=0.01$ in~\eqref{eq:masked_tloss}.
With the same level of input perturbation in the target area, the attack efficiency is further improved by allowing some extra perturbation in the background area.}
\label{fig:back_roi}
\end{figure}
Considering that the stacked convolution layers and the resolution resampling used in most NIC solutions enlarge the receptive field of the network, the perturbation added in the surrounding background area would also potentially affect the reconstruction of the target area.
To fit general use cases, we still assume that the noise can be added to the entire image, but set different weights for the target area ${\bf x}_{\mathsf{roi}}$ and the background area ${\bf x}_{\mathsf{bkg}}$ in the optimization, e.g.,
\begin{align}
&\mathop{\arg\min}_{\mathbf{n}}{L_{t}} \nonumber\\
& = \left\{ \begin{array}{lc}
||{\bf x}_{\mathsf{roi}} - {\bf x}_{\mathsf{roi}}^*||_2^2 + \lambda||{\bf x}_{\mathsf{bkg}} - {\bf x}_{\mathsf{bkg}}^*||_2^2, & ||\mathbf{n}_{\mathsf{roi}}||_2^2 \geq \epsilon,\\
||\hat{\bf x}_{\mathsf{roi}}^{*}-\hat{\bf x}_{\mathsf{roi}}^{t}||_2^2 + \lambda||\hat{\bf x}_{\mathsf{bkg}}^{*}-\hat{\bf x}_{\mathsf{bkg}}^{t}||_2^2, & ||\mathbf{n}_{\mathsf{roi}}||_2^2 < \epsilon.
\end{array}
\right.
\label{eq:masked_tloss}
\end{align} Here ${\bf n}_{\mathsf{roi}}$ and $\epsilon$ are the noise added in the target area and its threshold, respectively.
If $\lambda=\infty$ as in Fig.~\ref{fig:back_roi}(a), \eqref{eq:masked_tloss} degenerates to~\eqref{eq:tloss} in the sense that no noise is allowed in the background area. Using a smaller $\lambda$, which allows some perturbation in the background area, can improve the attack efficiency, as shown in Fig.~\ref{fig:back_roi}(b).
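A sketch of the ROI-weighted loss in \eqref{eq:masked_tloss} follows; the binary \texttt{mask} (1 on the ROI, 0 on the background) and the mean-squared norms are illustrative assumptions, with \texttt{n}, \texttt{x\_adv\_hat} and \texttt{x\_t\_hat} as in the targeted loop above, and $\lambda=0.1$ matching the setting used later in this section.
\begin{verbatim}
def masked_targeted_loss(n, x_adv_hat, x_t_hat, mask,
                         lam=0.1, epsilon=1e-3):
    n_roi = (n * mask).pow(2).mean()
    n_bkg = (n * (1 - mask)).pow(2).mean()
    if n_roi >= epsilon:             # budget the ROI noise first
        return n_roi + lam * n_bkg
    d = (x_adv_hat - x_t_hat).pow(2)
    # Weight the ROI mismatch more than the background mismatch.
    return (d * mask).mean() + lam * (d * (1 - mask)).mean()
\end{verbatim}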
\subsection{Attack Evaluation}
Adversarial examples generated with the optimization in~\eqref{eq:tloss} on the MNIST dataset are shown in Fig.~\ref{mnist_comparison}. Note that the original MNIST dataset is composed of single-channel grayscale images. To feed them into image compression models, which normally take 3-channel RGB color images as input, we duplicate them to 3 channels with equal values. Experimental results show that, by injecting noise perturbation into the input image as in \eqref{eq:tloss}, the adversarial reconstruction significantly differs from the original input and is visually even closer to the designated target. Both MSE- and MS-SSIM-optimized compression models are vulnerable to such targeted attacks.
\begin{figure}[t]
\centering
\subcaptionbox{MS-SSIM opt.}{\includegraphics[scale=0.9]{./figs/mnist.pdf}}
\par\medskip
\subcaptionbox{MSE opt.}{\includegraphics[scale=0.9]{./figs/mnistMSE.pdf}}
\caption{Targeted adversarial examples and reconstructed outputs of MNIST~\cite{lecun1998gradient} samples compressed using (a) the MS-SSIM-loss-optimized NIC model and (b) the MSE-loss-optimized NIC model: (1) original source images, (2) decoded reconstructions of the original images, (3) adversarial examples, (4) adversarial reconstructions. The same target image, shown on the right, is used for both the MSE- and MS-SSIM-optimized image compression models.}
\label{mnist_comparison}
\end{figure}
A high-resolution, large-scale dataset, Cityscapes~\cite{Cordts2016Cityscapes}, which contains a diverse set of stereo videos recorded in street scenes from 50 different cities, is also tested to demonstrate the effectiveness of targeted adversarial attacks with area masking on car plates.
We apply \eqref{eq:masked_tloss} to generate targeted adversarial examples, setting $\lambda = 0.1$.
As shown in Fig.~\ref{license}, the numbers on the car plate can be successfully modified so that the adversarial reconstruction is closer to the designated target, which may consequently affect the inference engine for decision-making in autonomous driving and surveillance. The impact of image resolution is also tested, showing the same effectiveness of the targeted attack at different resolutions.
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{./figs/licenseplate-2222.pdf}
\caption{Targeted adversarial attack on the Cityscapes dataset with a plate ROI mask. The license plate number area is marked with a red box as the target area. Masked patches at the native resolution of 240$\times$240 and at a resolution downscaled by a factor of 2 in each dimension (120$\times$120) are both tested. Plots are scaled to the same size for better visualization.}
\label{license}
\end{figure}
\subsection{Application Limitations}
First, similar to the defense strategy in Sec.~\ref{sec:attack_defense}, we can also include targeted adversarial examples in the training stage to finetune the model for improved robustness. However, in real-life applications, the possible ``targets'' of pixel-domain adversarial attacks are abundant, making it impossible to include all of them. A more practical solution for defending against targeted attacks is to limit the methodology developed in this work to some specific task.
Besides, in this paper we mainly focus on pixel-wise attacks that affect the reconstruction visually. Another important domain of adversarial attacks is to fool an artificial system (e.g., the accuracy of a face recognition system) instead of human eyes. We leave the defense against targeted attacks and the attack of NIC solutions in computer vision tasks for future study.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose to generate adversarial examples to examine the model robustness of existing learnt image compression methods. Experiments show that all tested methods are vulnerable to adversarial attacks regardless of their network architectures, loss functions, and quality settings. To tackle this, we then propose to include adversarial examples to finetune the underlying models. Results have shown the effectiveness of our methodology: it not only successfully defends against the adversarial attack with great distortion alleviation in the reconstruction, but also retains the outstanding compression performance. We also demonstrate that our methodology can be easily extended to targeted attacks. Our proposed method is generalizable to most popular learnt image compression frameworks and can be applied to other pixel-domain regression tasks in future work.
\section{Acknowledgement}
We are very grateful to the authors of~\cite{balle2016end,balle2018variational,minnen2018joint,Cheng2020Learned,chen2020,mentzer2020high,xie2021enhanced}, who have made their learning-based image compression methods open source to the public.
\bibliographystyle{IEEETran}
Let ${\mathcal{F}_{g,n}}$ be the moduli space of $n$-pointed $K3$ surfaces of genus $g>2$, i.e.,
primitively polarized of degree $2g-2$.
It is a quasi-projective variety of dimension $19+2n$ with a natural morphism ${\mathcal{F}_{g,n}}\to {\mathcal{F}_{g}}$
to the moduli space ${\mathcal{F}_{g}}$ of $K3$ surfaces of genus $g$, which is generically a $K3^{n}$-fibration.
In this paper we study holomorphic differential forms on a smooth projective model of ${\mathcal{F}_{g,n}}$.
They do not depend on the choice of a smooth projective model,
and thus are fundamental birational invariants of ${\mathcal{F}_{g,n}}$.
We prove a vanishing result for about half numbers of degrees,
and for the remaining degrees give a correspondence with modular forms on the period domain.
Our main result is stated as follows.
\begin{theorem}\label{thm: main}
Let ${\bar{\mathcal{F}}_{g,n}}$ be a smooth projective model of ${\mathcal{F}_{g,n}}$ with $g>2$.
Then we have a natural isomorphism
\begin{equation}\label{eqn: main}
H^{0}({\bar{\mathcal{F}}_{g,n}}, \Omega^{k}) \simeq
\begin{cases}
0 & \: \: 0<k\leq 9 \\
M_{\wedge^{k},k}({\Gamma_{g}}) & \: \: 10 \leq k \leq 18 \\
0 & \: \: k>19, \: k\in 2{\mathbb{Z}} \\
S\!_{19+m}({\Gamma_{g}}, \det)\otimes {\mathbb{C}}\mathcal{S}_{n,m} & \: \: k=19+2m, \: 0\leq m \leq n
\end{cases}
\end{equation}
\end{theorem}
Here ${\Gamma_{g}}$ is the modular group for $K3$ surfaces of genus $g$,
which is defined as the kernel of ${\rm O}^{+}(L_{g})\to {\rm O}(L_{g}^{\vee}/L_{g})$
where $L_{g}=2U\oplus 2E_{8}\oplus \langle 2-2g \rangle$ is the period lattice of $K3$ surfaces of genus $g$.
In the second case, $M_{\wedge^{k},k}({\Gamma_{g}})$ stands for
the space of vector-valued modular forms of weight $(\wedge^{k},k)$ for ${\Gamma_{g}}$ (\cite{Ma3}).
In the last case, $S\!_{19+m}({\Gamma_{g}}, \det)$ stands for the space of scalar-valued cusp forms of weight $19+m$ and
determinant character for ${\Gamma_{g}}$,
and $\mathcal{S}_{n,m}$ stands for the coset $\frak{S}_{n}/(\frak{S}_{m}\times \frak{S}_{n-m})$.
Theorem \ref{thm: main} is actually formulated and proved in the generality of lattice-polarization (Theorem \ref{thm: main lattice-pol}).
In the case of the top degree $k=19+2n$, namely for canonical forms, the isomorphism \eqref{eqn: main} is proved in \cite{Ma1}.
Theorem \ref{thm: main} is the extension of this result to all degrees $k<19+2n$.
The spaces on the right-hand side of \eqref{eqn: main} can also be explained geometrically as follows.
In the case $k\leq 18$, $M_{\wedge^{k},k}({\Gamma_{g}})$ is identified with
the space of holomorphic $k$-forms on a smooth projective model of ${\mathcal{F}_{g}}$, pulled back by ${\mathcal{F}_{g,n}}\to {\mathcal{F}_{g}}$.
In the case $k=19+2m$, $S\!_{19+m}({\Gamma_{g}}, \det)$ is identified with the space of canonical forms on $\bar{\mathcal{F}}_{g,m}$,
and the tensor product $S\!_{19+m}({\Gamma_{g}}, \det)\otimes {\mathbb{C}}\mathcal{S}_{n,m}$
is the direct sum of the pullbacks of such canonical forms by the various projections ${\mathcal{F}_{g,n}}\to \mathcal{F}_{g,m}$.
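As a concrete illustration of the last case of \eqref{eqn: main}: for $n=2$ and $k=21$ we have $m=1$, the coset $\mathcal{S}_{2,1}$ has two elements, and the isomorphism reads
\begin{equation*}
H^{0}(\bar{\mathcal{F}}_{g,2}, \Omega^{21}) \simeq S\!_{20}({\Gamma_{g}}, \det)^{\oplus 2},
\end{equation*}
the two summands being the pullbacks of canonical forms on $\bar{\mathcal{F}}_{g,1}$ under the two projections $\mathcal{F}_{g,2}\to \mathcal{F}_{g,1}$.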
Therefore Theorem \ref{thm: main} can be understood as a kind of classification result which says that,
except for canonical forms, there are essentially no new differential forms on the tower $({\mathcal{F}_{g,n}})_{n}$ of moduli spaces.
In fact, this is how the proof proceeds.
The space $S\!_{l}({\Gamma_{g}}, \det)$ is nonzero for every sufficiently large $l$,
so the space $H^{0}({\bar{\mathcal{F}}_{g,n}}, \Omega^{k})$ for odd $k\geq 19$ is typically nonzero (at least when $k$ is large).
On the other hand, it is not clear at present whether $M_{\wedge^{k},k}({\Gamma_{g}})\ne 0$ or not in the range $10\leq k \leq 18$.
This is a subject of study in the theory of vector-valued orthogonal modular forms.
The isomorphism \eqref{eqn: main} in the case $k=19+2m$ is an $\frak{S}_{n}$-equivariant isomorphism,
where $\frak{S}_{n}$ acts on $H^{0}({\bar{\mathcal{F}}_{g,n}}, \Omega^{k})$ by its permutation action on ${\mathcal{F}_{g,n}}$,
while it acts on $S\!_{19+m}({\Gamma_{g}}, \det)\otimes {\mathbb{C}}\mathcal{S}_{n,m}$
by its natural action on the coset $\mathcal{S}_{n,m}=\frak{S}_{n}/(\frak{S}_{m}\times \frak{S}_{n-m})$.
Therefore, taking the $\frak{S}_{n}$-invariant part, we obtain the following simpler result
for the unordered pointed moduli space ${\mathcal{F}_{g,n}}/\frak{S}_{n}$, which is birationally a $K3^{[n]}$-fibration over ${\mathcal{F}_{g}}$.
\begin{corollary}
Let $\overline{{\mathcal{F}_{g,n}}/\frak{S}_{n}}$ be a smooth projective model of ${\mathcal{F}_{g,n}}/\frak{S}_{n}$.
Then we have a natural isomorphism
\begin{equation*}
H^{0}(\overline{{\mathcal{F}_{g,n}}/\frak{S}_{n}}, \Omega^{k}) \simeq
\begin{cases}
0 & \: \: 0<k\leq 9 \\
M_{\wedge^{k},k}({\Gamma_{g}}) & \: \: 10\leq k \leq 18 \\
0 & \: \: k>19, \: k\in 2{\mathbb{Z}} \\
S\!_{19+m}({\Gamma_{g}}, \det) & \: \: k=19+2m, \: 0 \leq m \leq n
\end{cases}
\end{equation*}
\end{corollary}
The universal $K3$ surface $\mathcal{F}_{g,1}$ is an analogue of elliptic modular surfaces (\cite{Shi}),
and the moduli spaces ${\mathcal{F}_{g,n}}$ for general $n$ are analogues of the so-called Kuga varieties over modular curves (\cite{Sho}).
Starting with the case of elliptic modular surfaces \cite{Shi},
holomorphic differential forms on the Kuga varieties have been described in terms of elliptic modular forms:
\cite{Sho} for canonical forms, and \cite{Gor} for the case of lower degrees (somewhat implicitly).
Theorem \ref{thm: main} can be regarded as a $K3$ version of these results.
As a final remark,
in view of the analogy between universal $K3$ surfaces and elliptic modular surfaces,
invoking the classical fact that elliptic modular surfaces have maximal Picard number (\cite{Shi}) now raises the question of whether
$H^{k,0}({\bar{\mathcal{F}}_{g,n}})\oplus H^{0,k}({\bar{\mathcal{F}}_{g,n}})$ is a sub ${\mathbb{Q}}$-Hodge structure of $H^{k}({\bar{\mathcal{F}}_{g,n}}, {\mathbb{C}})$.
This is independent of the choice of a smooth projective model ${\bar{\mathcal{F}}_{g,n}}$.
The rest of this paper is devoted to the proof of Theorem \ref{thm: main}.
In \S \ref{ssec: hol Leray} we compute a part of the holomorphic Leray spectral sequence
associated to a certain type of $K3^{n}$-fibration.
This is the main step of the proof.
In \S \ref{ssec: extension} we study differential forms on a compactification of such a fibration.
In \S \ref{ssec: univ K3} we deduce (a generalized version of) Theorem \ref{thm: main}
by combining the result of \S \ref{ssec: extension}
with some results from \cite{Pom}, \cite{Ma1}, \cite{Ma2}, \cite{Ma3}.
Sometimes we drop the subscript $X$ from the notation $\Omega_{X}^{k}$ when the variety $X$ is clear from the context.
\section{Proof}
\subsection{Holomorphic Leray spectral sequence}\label{ssec: hol Leray}
Let $\pi\colon X\to B$ be a smooth family of $K3$ surfaces over a smooth connected base $B$.
In this subsection $X$ and $B$ may be analytic.
We put the following assumption:
\begin{condition}\label{condition}
In a neighborhood of every point of $B$, the period map is an embedding.
\end{condition}
\noindent
This is equivalent to the condition that
the differential of the period map
\begin{equation*}
T_{b}B \to {\rm Hom}(H^{2,0}(X_{b}), H^{1,1}(X_{b}))
\end{equation*}
is injective for every $b\in B$, where $X_{b}$ is the fiber of $\pi$ over $b$.
For a natural number $n>0$ we denote by $X_{n}=X\times_{B}\cdots \times_{B}X$
the $n$-fold fiber product of $X$ over $B$,
and let $\pi_{n}\colon X_{n}\to B$ be the projection.
We denote by $\Omega_{\pi_{n}}$ the relative cotangent bundle of $\pi_{n}$, and
$\Omega_{\pi_{n}}^{p}=\wedge^{p}\Omega_{\pi_{n}}$ for $p\geq 0$ as usual.
\begin{proposition}\label{prop: hol Leray}
Let $\pi\colon X\to B$ be a $K3$ fibration satisfying Condition \ref{condition}.
Then we have a natural isomorphism
\begin{equation*}
(\pi_{n})_{\ast}\Omega_{X_{n}}^{k} \simeq
\begin{cases}
\Omega_{B}^{k} & \; \; k\leq \dim B \\
0 & \; \; k>\dim B, \: k\not \equiv \dim B \: (2) \\
K_{B}\otimes (\pi_{n})_{\ast}\Omega_{\pi_{n}}^{2m} & \; \; k=\dim B+2m, \: 0\leq m \leq n
\end{cases}
\end{equation*}
\end{proposition}
This assertion amounts to a partial degeneration of the holomorphic Leray spectral sequence.
Recall (\cite{Voi} \S 5.2) that $\Omega_{X_{n}}^k$ has the holomorphic Leray filtration $L^{\bullet}\Omega_{X_{n}}^k$ defined by
\begin{equation*}
L^{l}\Omega_{X_{n}}^k= \pi_{n}^{\ast}\Omega_{B}^{l}\wedge \Omega_{X_{n}}^{k-l},
\end{equation*}
whose graded quotients are naturally isomorphic to
\begin{equation*}
{\rm Gr}_{L}^{l}\Omega_{X_{n}}^{k} = \pi_{n}^{\ast}\Omega_{B}^{l} \otimes \Omega_{\pi_{n}}^{k-l}.
\end{equation*}
This filtration induces the holomorphic Leray spectral sequence
\begin{equation*}
(E_{r}^{l,q}, d_{r}) \: \: \: \Rightarrow \: \: \: E_{\infty}^{l+q} = R^{l+q}(\pi_{n})_{\ast}\Omega_{X_{n}}^{k}
\end{equation*}
which converges to the filtration
\begin{equation*}
L^{l}R^{l+q}(\pi_{n})_{\ast}\Omega_{X_{n}}^{k} \: = \:
{\rm Im}(R^{l+q}(\pi_{n})_{\ast}L^{l}\Omega_{X_{n}}^{k} \to R^{l+q}(\pi_{n})_{\ast}\Omega_{X_{n}}^{k}).
\end{equation*}
By \cite{Voi} Proposition 5.9, the $E_{1}$ page coincides with
the collection of the Koszul complexes associated to the variation of Hodge structures for $\pi_{n}$:
\begin{equation}\label{eqn: E1}
(E_{1}^{l,q}, d_{1}) = (\mathcal{H}^{k-l,l+q}\otimes \Omega_{B}^{l}, \bar{\nabla}).
\end{equation}
Here $\mathcal{H}^{\ast, \ast}$ are the Hodge bundles associated to the fibration $\pi_{n}\colon X_{n}\to B$, and
\begin{equation*}
\bar{\nabla} : \mathcal{H}^{\ast, \ast}\otimes \Omega_{B}^{\ast} \to \mathcal{H}^{\ast-1, \ast+1}\otimes \Omega_{B}^{\ast+1}
\end{equation*}
are the differentials in the Koszul complexes (see \cite{Voi} \S 5.1.3).
For degree reasons, the range of $(l, q)$ in the $E_{1}$ page satisfies the inequalities
\begin{equation*}
0\leq l \leq \dim B, \quad 0\leq k-l \leq 2n, \quad 0\leq l+q \leq 2n.
\end{equation*}
The first two can be unified:
\begin{equation}\label{eqn: (l,q)}
\max(0, k-2n) \leq l \leq \min(\dim B, k), \quad 0\leq l+q \leq 2n.
\end{equation}
We calculate the $E_{1}$ to $E_{2}$ pages on the edge line $l+q=0$.
\begin{lemma}\label{lem: spectral sequence}
The following holds.
(1) $E_{1}^{l,-l}=0$ when $l\leq \min(\dim B, k)$ with $l\not\equiv k$ mod $2$.
(2) $E_{2}^{l,-l}=0$ when $l< \min(\dim B, k)$.
(3) For $l_{0} = \min(\dim B, k)$ we have
$E_{1}^{l_{0},-l_{0}}=E_{2}^{l_{0},-l_{0}}= \cdots = E_{\infty}^{l_{0},-l_{0}}$.
\end{lemma}
\begin{proof}
By \eqref{eqn: E1}, we have $E_{1}^{l,-l}=\mathcal{H}^{k-l,0}\otimes \Omega_{B}^{l}$.
By the K\"unneth formula, the fiber of $\mathcal{H}^{k-l,0}$ over a point $b\in B$ is identified with
\begin{equation}\label{eqn: Hk-l,0}
H^{k-l,0}(X_{b}^{n}) = \bigoplus_{(p_{1}, \cdots, p_{n})} H^{p_{1},0}(X_{b}) \otimes \cdots \otimes H^{p_{n},0}(X_{b}),
\end{equation}
where $(p_{1}, \cdots, p_{n})$ ranges over all indices with
$\sum_{i}p_{i}=k-l$ and $0\leq p_{i} \leq 2$.
(1) When $k-l$ is odd, every index $(p_{1}, \cdots, p_{n})$ in \eqref{eqn: Hk-l,0}
must contain a component $p_{i}=1$.
Since $H^{1,0}(X_b)=0$, we see that $H^{k-l,0}(X_{b}^{n})=0$.
Therefore $\mathcal{H}^{k-l,0}=0$ when $k-l$ is odd.
(3) Let $l_{0} = \min(\dim B, k)$.
By the range \eqref{eqn: (l,q)} of $(l, q)$,
we see that for every $r\geq 1$
the source of the differential $d_{r}$ mapping into $E_{r}^{l_{0}, -l_{0}}$ is zero,
and the target of the differential $d_{r}$ starting from $E_{r}^{l_{0}, -l_{0}}$ is also zero.
This proves our assertion.
(2) Let $l < \min(\dim B, k)$.
In view of (1), we may assume that $l=k-2m$ for some $m>0$.
By \eqref{eqn: (l,q)}, the source of the differential $d_{1}$ mapping into $E_{1}^{l,-l}$ is zero.
We shall show that
$d_{1}\colon E_{1}^{l,-l}\to E_{1}^{l+1,-l}$
is injective.
By \eqref{eqn: E1}, this morphism is identified with
\begin{equation}\label{eqn: d1 Koszul}
\bar{\nabla} : \mathcal{H}^{2m,0}\otimes \Omega_{B}^{l} \to \mathcal{H}^{2m-1,1}\otimes \Omega_{B}^{l+1}.
\end{equation}
By the K\"unneth formula as in \eqref{eqn: Hk-l,0},
the fibers of the Hodge bundles $\mathcal{H}^{2m,0}$, $\mathcal{H}^{2m-1,1}$ over $b\in B$
are respectively identified with
\begin{equation}\label{eqn: H2m,0}
H^{2m,0}(X_{b}^{n}) = \bigoplus_{|\sigma|=m} H^{2,0}(X_{b})^{\otimes \sigma},
\end{equation}
\begin{eqnarray}\label{eqn: H2m-1,1}
H^{2m-1,1}(X_{b}^{n})
& = & \bigoplus_{|\sigma'|=m-1} \bigoplus_{i\not\in \sigma'} H^{2,0}(X_{b})^{\otimes \sigma'}\otimes H^{1,1}(X_{b}) \\
& = & \bigoplus_{|\sigma|=m} \bigoplus_{i\in \sigma} H^{2,0}(X_{b})^{\otimes \sigma-\{ i \}}\otimes H^{1,1}(X_{b}). \nonumber
\end{eqnarray}
In \eqref{eqn: H2m,0}, $\sigma$ ranges over
all subsets of $\{ 1, \cdots, n\}$ consisting of $m$ elements,
and $H^{2,0}(X_{b})^{\otimes \sigma}$ stands for
the tensor product of $H^{2,0}(X_{b})$ for the $j$-th factors $X_{b}$ of $X_{b}^{n}$ over all $j\in \sigma$.
The notations $\sigma', \sigma$ in \eqref{eqn: H2m-1,1} are similar,
and $H^{1,1}(X_{b})$ in \eqref{eqn: H2m-1,1} is the $H^{1,1}$ of the $i$-th factor $X_{b}$ of $X_{b}^{n}$.
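Since $h^{2,0}(X_{b})=1$ and $h^{1,1}(X_{b})=20$ for a $K3$ surface, these decompositions show in particular that
\begin{equation*}
\dim H^{2m,0}(X_{b}^{n}) = \binom{n}{m}, \quad
\dim H^{2m-1,1}(X_{b}^{n}) = 20\, m \binom{n}{m},
\end{equation*}
the binomial coefficient counting the subsets $\sigma$.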
Let us write $V=H^{2,0}(X_{b})$ and $W=(T_{b}B)^{\vee}$ for simplicity.
The homomorphism \eqref{eqn: d1 Koszul} over $b\in B$ is written as
\begin{equation}\label{eqn: d1 Hodge}
\bigoplus_{|\sigma|=m} \left( V^{\otimes \sigma}\otimes \wedge^{l}W \to
\bigoplus_{i\in \sigma} V^{\otimes \sigma - \{ i\}}\otimes H^{1,1}(X_{b})\otimes \wedge^{l+1}W \right).
\end{equation}
By \cite{Voi} Lemma 5.8, the $(\sigma, i)$-component
\begin{equation}\label{eqn: (sigma, i)}
V^{\otimes \sigma}\otimes \wedge^{l}W \to
V^{\otimes \sigma - \{ i\}}\otimes H^{1,1}(X_{b})\otimes \wedge^{l+1}W
\end{equation}
factorizes as
\begin{eqnarray*}
V^{\otimes \sigma}\otimes \wedge^{l}W & \to &
V^{\otimes \sigma - \{ i\}}\otimes H^{1,1}(X_{b}) \otimes W \otimes \wedge^{l}W \\
& \to & V^{\otimes \sigma - \{ i\}}\otimes H^{1,1}(X_{b})\otimes \wedge^{l+1}W,
\end{eqnarray*}
where the first map is induced by the adjunction
$V\to H^{1,1}(X_b)\otimes W$ of the differential of the period map for the $i$-th factor $X_{b}$,
and the second map is induced by the wedge product
$W \otimes \wedge^{l}W \to \wedge^{l+1}W$.
By linear algebra, this composition can also be decomposed as
\begin{eqnarray}\label{eqn: decompose}
V^{\otimes \sigma}\otimes \wedge^{l}W & \to &
V^{\otimes \sigma - \{ i\}}\otimes V \otimes W^{\vee} \otimes \wedge^{l+1}W \\
& \to & V^{\otimes \sigma - \{ i\}}\otimes H^{1,1}(X_{b})\otimes \wedge^{l+1}W, \nonumber
\end{eqnarray}
where the first map is induced by the adjunction $\wedge^{l}W \to W^{\vee} \otimes \wedge^{l+1}W$
of the wedge product,
and the second map is induced by the adjunction $V\otimes W^{\vee}\to H^{1,1}(X_{b})$ of the differential of the period map.
By Condition \ref{condition}, the second map of \eqref{eqn: decompose} is injective.
Moreover, since $l+1\leq \dim W$ by the assumption $l<\dim B$,
the wedge product $\wedge^{l}W\times W \to \wedge^{l+1}W$ is nondegenerate,
so its adjunction $\wedge^{l}W \to W^{\vee} \otimes \wedge^{l+1}W$ is injective.
Thus the first map of \eqref{eqn: decompose} is also injective.
It follows that \eqref{eqn: (sigma, i)} is injective.
Since the map \eqref{eqn: d1 Hodge} is the direct sum of its $(\sigma, i)$-components, it is injective.
This finishes the proof of Lemma \ref{lem: spectral sequence}.
\end{proof}
We can now complete the proof of Proposition \ref{prop: hol Leray}.
\begin{proof}[Proof of Proposition \ref{prop: hol Leray}]
By Lemma \ref{lem: spectral sequence} (2),
we have $E_{\infty}^{l,-l}=0$ when $l<l_{0}=\min(\dim B, k)$.
Together with Lemma \ref{lem: spectral sequence} (3), we obtain
\begin{equation*}
(\pi_{n})_{\ast}\Omega_{X_{n}}^{k} = E_{\infty}^{0} = E_{\infty}^{l_{0}, -l_{0}} = E_{1}^{l_{0}, -l_{0}}.
\end{equation*}
When $k\leq \dim B$, we have $l_{0}=k$, and
$E_{1}^{l_{0}, -l_{0}}=\Omega_{B}^{k}$ by \eqref{eqn: E1}.
When $k> \dim B$, we have $l_{0}= \dim B$, and
$E_{1}^{l_{0}, -l_{0}}=\mathcal{H}^{k-\dim B, 0}\otimes K_{B}$ by \eqref{eqn: E1}.
When $k-\dim B$ is odd, this vanishes by Lemma \ref{lem: spectral sequence} (1).
\end{proof}
In the case $k=\dim B + 2m$, the vector bundle
$\mathcal{H}^{2m,0}\otimes K_{B}=(\pi_{n})_{\ast}\Omega_{\pi_{n}}^{2m}\otimes K_{B}$
can be written more specifically as follows.
For a subset $\sigma$ of $\{ 1, \cdots, n \}$ with cardinality $| \sigma |=m$,
we denote by $X_{\sigma}\simeq X_{m}$ the fiber product of
the $i$-th factors $X\to B$ of $X_{n}\to B$ over all $i\in \sigma$.
We denote by
\begin{equation*}
X_{n} \stackrel{\pi_{\sigma}}{\to} X_{\sigma} \stackrel{\pi^{\sigma}}{\to} B
\end{equation*}
the natural projections.
The K\"unneth formula \eqref{eqn: H2m,0} says that
\begin{equation*}
(\pi_{n})_{\ast}\Omega_{\pi_{n}}^{2m} \simeq
\bigoplus_{|\sigma|=m} \pi^{\sigma}_{\ast}K_{\pi^{\sigma}}.
\end{equation*}
Combining this with the isomorphism
\begin{equation}\label{eqn: KXsigma}
\pi^{\sigma}_{\ast}K_{X_{\sigma}}\simeq K_{B}\otimes \pi^{\sigma}_{\ast}K_{\pi^{\sigma}}
\end{equation}
for each $X_{\sigma}$, we can rewrite the isomorphism in the last case of Proposition \ref{prop: hol Leray} as
\begin{equation}\label{eqn: push OmegaX k>dimB}
(\pi_{n})_{\ast}\Omega_{X_{n}}^{\dim B+2m} \simeq
\bigoplus_{|\sigma|=m} \pi^{\sigma}_{\ast}K_{X_{\sigma}}.
\end{equation}
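For example, for $m=0$ the only subset is $\sigma=\emptyset$, with $X_{\emptyset}=B$ and $\pi^{\emptyset}={\rm id}_{B}$, so \eqref{eqn: push OmegaX k>dimB} recovers $(\pi_{n})_{\ast}\Omega_{X_{n}}^{\dim B}\simeq K_{B}$; for $n=m=1$ it reads $\pi_{\ast}\Omega_{X}^{\dim B+2}\simeq \pi_{\ast}K_{X}$.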
\subsection{Extension over compactification}\label{ssec: extension}
Let $\pi\colon X\to B$ be a $K3$ fibration as in \S \ref{ssec: hol Leray}.
We now assume that $X, B$ are quasi-projective and $\pi$ is a morphism of algebraic varieties.
We take smooth projective compactifications of $X_{n}, X_{\sigma}, B$ and denote them by
$\bar{X}_{n}, \bar{X}_{\sigma}, \bar{B}$ respectively.
\begin{proposition}\label{prop: extension}
We have
\begin{equation*}
H^{0}(\bar{X}_{n}, \Omega^{k}) \simeq
\begin{cases}
H^{0}(\bar{B}, \Omega^{k}) & \: \: k\leq \dim B \\
0 & \: \: k>\dim B, \: k\not\equiv \dim B \: (2) \\
\oplus_{\sigma}H^{0}(\bar{X}_{\sigma}, K_{\bar{X}_{\sigma}}) & \: \: k=\dim B+2m, \: 0\leq m \leq n
\end{cases}
\end{equation*}
In the last case, $\sigma$ ranges over all subsets of $\{ 1, \cdots, n \}$ with $|\sigma|=m$.
The isomorphism in the first case is given by the pullback by $\pi_{n}\colon X_{n}\to B$,
and the isomorphism in the last case is given by the direct sum of the pullbacks by
$\pi_{\sigma}\colon X_{n}\to X_{\sigma}$ for all $\sigma$.
\end{proposition}
\begin{proof}
The assertion in the case $k>\dim B$ with $k\not\equiv \dim B$ mod $2$ follows directly from
the second case of Proposition \ref{prop: hol Leray}.
Next we consider the case $k\leq \dim B$.
We may assume that $\pi_{n}\colon X_{n}\to B$ extends to a surjective morphism $\bar{X}_{n}\to \bar{B}$.
Let $\omega$ be a holomorphic $k$-form on $\bar{X}_{n}$.
By the first case of Proposition \ref{prop: hol Leray}, we have
$\omega|_{X_{n}}=\pi_{n}^{\ast}\omega_{B}$ for a holomorphic $k$-form $\omega_{B}$ on $B$.
Since $\omega$ is holomorphic over $\bar{X}_{n}$,
$\omega_{B}$ is holomorphic over $\bar{B}$ as well, by a standard property of holomorphic differential forms.
(Otherwise $\omega$ would have a pole along the divisors of $\bar{X}_{n}$ dominating
the divisors of $\bar{B}$ along which $\omega_{B}$ has a pole.)
Therefore the pullback
$H^{0}(\bar{B}, \Omega^{k})\to H^{0}(\bar{X}_{n}, \Omega^{k})$
is surjective.
Finally, we consider the case $k=\dim B+2m$, $0\leq m \leq n$.
Let $\omega$ be a holomorphic $k$-form on $\bar{X}_{n}$.
By \eqref{eqn: push OmegaX k>dimB}, we can uniquely write
$\omega|_{X_{n}}=\sum_{\sigma}\pi_{\sigma}^{\ast}\omega_{\sigma}$
for some canonical forms $\omega_{\sigma}$ on $X_{\sigma}$.
\begin{claim}
For each $\sigma$, $\omega_{\sigma}$ is holomorphic over $\bar{X}_{\sigma}$.
\end{claim}
\begin{proof}
We identify $X_{n}$ with the fiber product $X_{\sigma}\times_{B}X_{\tau}$
where $\tau=\{ 1, \cdots, n\} - \sigma$ is the complement of $\sigma$.
We may assume that this fiber product diagram extends to a commutative diagram of surjective morphisms
\begin{equation*}
\begin{CD}
\bar{X}_{n} @>{\pi_{\tau}}>> \bar{X}_{\tau} \\
@V{\pi_{\sigma}}VV @VV{\pi^{\tau}}V \\
\bar{X}_{\sigma} @>{\pi^{\sigma}}>> \bar{B}
\end{CD}
\end{equation*}
between smooth projective models.
We take an irreducible subvariety $\tilde{B}\subset \bar{X}_{\tau}$ such that
$\tilde{B}\to \bar{B}$ is surjective and generically finite.
Then $\pi_{\tau}^{-1}(\tilde{B})\subset \bar{X}_{n}$ has a unique irreducible component dominating $\tilde{B}$.
We take its desingularization and denote it by $Y$.
By construction $\pi_{\sigma}|_{Y} \colon Y\to \bar{X}_{\sigma}$ is dominant (and so surjective) and generically finite.
On the other hand, for any $\sigma'\ne \sigma$ with $|\sigma'|=m$,
the projection $\pi_{\sigma'}|_{Y} \colon Y\dashrightarrow X_{\sigma'}$ is not dominant.
Indeed, such $\sigma'$ contains at least one component $i\in \tau$,
so if $Y\dashrightarrow X_{\sigma'}$ were dominant,
then the $i$-th projection $Y\dashrightarrow X$ would also be dominant,
which is absurd because it factorizes as $Y\to \tilde{B}\subset \bar{X}_{\tau}\dashrightarrow X$.
We pull back the differential form
$\omega=\pi_{\sigma}^{\ast}\omega_{\sigma}+\sum_{\sigma'\ne \sigma}\pi_{\sigma'}^{\ast}\omega_{\sigma'}$
to $Y$ and denote it by $\omega|_{Y}$.
Since $\omega$ is holomorphic over $\bar{X}_{n}$, $\omega|_{Y}$ is holomorphic over $Y$.
Since $\pi_{\sigma'}^{\ast}\omega_{\sigma'}|_{Y}$ is the pullback of the canonical form $\omega_{\sigma'}$
on $X_{\sigma'}$ by the non-dominant map $Y \dashrightarrow X_{\sigma'}$, it vanishes identically.
Hence $\pi_{\sigma}^{\ast}\omega_{\sigma}|_{Y}=\omega|_{Y}$ is holomorphic over $Y$.
Since $\pi_{\sigma}|_{Y}\colon Y \to \bar{X}_{\sigma}$ is surjective,
this implies that $\omega_{\sigma}$ is holomorphic over $\bar{X}_{\sigma}$ as before.
\end{proof}
The above argument becomes transparent if we work over the generic point $\eta$ of $B$:
we restrict $\omega$ to the fiber of $(X_{\eta})^{n}\to (X_{\eta})^{\tau}$ over
the geometric point $\tilde{B}$ of $(X_{\eta})^{\tau}$ over $\eta$.
By this claim, the pullback
\begin{equation*}
(\pi_{\sigma}^{\ast})_{\sigma} :
\bigoplus_{|\sigma|=m} H^{0}(\bar{X}_{\sigma}, K_{\bar{X}_{\sigma}}) \to H^{0}(\bar{X}_{n}, \Omega^{\dim B + 2m})
\end{equation*}
is surjective.
It is also injective as implied by \eqref{eqn: push OmegaX k>dimB}.
This proves Proposition \ref{prop: extension}.
\end{proof}
\subsection{Universal $K3$ surface}\label{ssec: univ K3}
Now we prove Theorem \ref{thm: main}, in the general setting of lattice-polarized families.
Let $L$ be an even lattice of signature $(2, d)$
which can be embedded as a primitive sublattice of the $K3$ lattice $3U\oplus 2E_{8}$.
We denote by
\begin{equation*}
\mathcal{D} = \{ \: {\mathbb{C}} \omega \in {{\mathbb P}}L_{{\mathbb{C}}} \: | \: (\omega, \omega)=0, (\omega, \bar{\omega})>0 \: \}^{+}
\end{equation*}
the Hermitian symmetric domain associated to $L$, where $+$ means a connected component.
Let $\pi\colon X\to B$ be a smooth projective family of $K3$ surfaces over a smooth quasi-projective connected base $B$.
We say (\cite{Ma2}) that the family $\pi\colon X\to B$ is \textit{lattice-polarized with period lattice} $L$
if there exists a sub local system $\Lambda$ of $R^{2}\pi_{\ast}{\mathbb{Z}}$ such that
each fiber $\Lambda_{b}$ is a hyperbolic sublattice of the N\'eron-Severi lattice $NS(X_{b})$ and
the fibers of the orthogonal complement $\Lambda^{\perp}$ are isometric to $L$.
Then we have a period map
\begin{equation*}
\mathcal{P} : B \to {\Gamma}\backslash \mathcal{D}
\end{equation*}
for some finite-index subgroup ${\Gamma}$ of ${\rm O}^{+}(L)$.
By Borel's extension theorem, $\mathcal{P}$ is a morphism of algebraic varieties.
Let us impose the assumption
\begin{equation}\label{eqn: period map birational}
\mathcal{P} \; \textrm{is birational and} \: -{\rm id}\not\in {\Gamma}.
\end{equation}
For such a family $\pi\colon X\to B$, after shrinking $B$ if necessary,
$\mathcal{P}$ is an open immersion and Condition \ref{condition} is satisfied.
For example, the universal $K3$ surface $\mathcal{F}_{g,1}\to {\mathcal{F}_{g}}$ for $g>2$
restricted over a Zariski open set of ${\mathcal{F}_{g}}$ satisfies this assumption with
$L=L_{g}$ and ${\Gamma}={\Gamma_{g}}$ (see \S \ref{sec: intro} for these notations).
As in \S \ref{sec: intro}, we denote by
$M_{\wedge^{k},k}({\Gamma})$ the space of vector-valued modular forms of weight $(\wedge^{k},k)$ for ${\Gamma}$,
$S\!_{l}({\Gamma}, \det)$ the space of scalar-valued cusp forms of weight $l$ and character $\det$ for ${\Gamma}$,
and $\mathcal{S}_{n,m}=\frak{S}_{n}/(\frak{S}_{m}\times \frak{S}_{n-m})$.
\begin{theorem}\label{thm: main lattice-pol}
Let $\pi\colon X\to B$ be a lattice-polarized $K3$ family with period lattice $L$ of signature $(2, d)$ with $d\geq 3$
and monodromy group ${\Gamma}$ satisfying \eqref{eqn: period map birational}.
Then we have an $\frak{S}_{n}$-equivariant isomorphism
\begin{equation*}
H^{0}(\bar{X}_{n}, \Omega^{k}) \simeq
\begin{cases}
0 & \: \: 0<k< d/2 \\
M_{\wedge^{k},k}({\Gamma}) & \: \: d/2 \leq k < d \\
0 & \: \: k>d, \: k-d\not\in 2{\mathbb{Z}} \\
S\!_{d+m}({\Gamma}, \det)\otimes {\mathbb{C}}\mathcal{S}_{n,m} & \: \: k=d+2m, \: 0\leq m \leq n
\end{cases}
\end{equation*}
\end{theorem}
\begin{proof}
When $k\leq d$, we have
$H^{0}(\bar{X}_{n}, \Omega^{k}) \simeq H^{0}(\bar{B}, \Omega^{k})$ by Proposition \ref{prop: extension}.
Here $\bar{B}$ is a smooth projective model of the modular variety ${\Gamma}\backslash \mathcal{D}$.
By a theorem of Pommerening \cite{Pom},
the space $H^{0}(\bar{B}, \Omega^{k})$ for $k<d$ is isomorphic to the space of ${\Gamma}$-invariant holomorphic $k$-forms on $\mathcal{D}$,
which in turn is identified with the space $M_{\wedge^{k},k}({\Gamma})$
of vector-valued modular forms of weight $(\wedge^{k},k)$ for ${\Gamma}$ (\cite{Ma3}).
The vanishing of this space for $0<k<d/2$ is proved in \cite{Ma3} Theorem 1.2 in the case when $L$ has Witt index $2$,
and in \cite{Ma3} Theorem 1.5 (1) in the case when $L$ has Witt index $\leq 1$.
The vanishing in the case $k>d$ with $k\not\equiv d$ mod $2$ follows from Proposition \ref{prop: extension}.
Finally, we consider the case $k=d+2m$, $0\leq m \leq n$.
By Proposition \ref{prop: extension}, we have a natural $\frak{S}_{n}$-equivariant isomorphism
\begin{equation*}
H^{0}(\bar{X}_{n}, \Omega^{d+2m}) \simeq
\bigoplus_{|\sigma|=m} H^{0}(\bar{X}_{\sigma}, K_{\bar{X}_{\sigma}})
\end{equation*}
where $\frak{S}_{n}$ permutes the subsets $\sigma$ of $\{ 1, \cdots, n \}$.
Here note that the stabilizer of each $\sigma$ acts on $H^{0}(\bar{X}_{\sigma}, K_{\bar{X}_{\sigma}})$ trivially by \eqref{eqn: KXsigma}.
Therefore, as an $\frak{S}_{n}$-representation, the right hand side can be written as
\begin{equation*}
H^{0}(\bar{X}_{m}, K_{\bar{X}_{m}}) \otimes \left( \bigoplus_{|\sigma|=m}{\mathbb{C}}\sigma \right)
\simeq H^{0}(\bar{X}_{m}, K_{\bar{X}_{m}}) \otimes {\mathbb{C}}\mathcal{S}_{n,m}.
\end{equation*}
Finally, we have
$H^{0}(\bar{X}_{m}, K_{\bar{X}_{m}})\simeq S\!_{d+m}({\Gamma}, \det)$
by \cite{Ma2} Theorem 3.1.
\end{proof}
\begin{remark}
The case $k\geq d$ of Theorem \ref{thm: main lattice-pol} holds also when $d=1, 2$.
We impose the assumption $d\geq 3$ because the Koecher principle is required in \cite{Pom}.
In fact, only the case $(d, k)=(2, 1)$ with Witt index $2$ is therefore not covered.
\end{remark}
\section{Introduction}
\label{sec:introduction}
High harmonic generation (HHG) is a highly non-linear process in which a laser field with a given fundamental frequency $\Omega$ generates ``overtones'' in the emitted radiation, at multiples of the fundamental frequency.\cite{Corkum1993PRL, Lewenstein_1994, Krausz_2009, Ghimire_2011, Ghimire_2018} HHG in atomic and molecular gases has been studied for decades\cite{Corkum1993PRL, Lewenstein_1994, Krausz_2009}, but the topic has gained renewed interest in recent years due to applications in condensed matter.\cite{Ghimire_2011, Schubert2014, Luu_2015,Vampa2015Nature,Langer2016Nature,Hohenleutner2015Nature,Ndabashimiye2016,Liu2017,You2016,Yoshikawa2017Science,Kaneshima2018, Ghimire_2018} There are different motivations to study HHG in solids.
On the one hand, a proper understanding of the physics underlying HHG may lead to table-top sources of high frequency radiation.\cite{Brabec_2000} On the other hand, HHG is also useful as a spectroscopic tool. The latter fact is exemplified by the recently demonstrated reconstruction of a material's band structure from its high harmonic spectrum \cite{Luu_2015,Vampa_2015b} and the measurement of the Berry curvature of a material.\cite{Luu_2018}
Both in atomic physics and in the condensed matter context the phenomenon of HHG has mostly been described in terms of single-particle pictures.\cite{Golde_2008, Ghimire_2011, Kemper2013NJP, Higuchi_2014, Vampa2014PRL, Vampa2015Nature, Wu2015,Tamaya2016,Luu2016,Otobe2016,Ikemachi2017,Osika2017,Hansen2017,Tancogne-Dejean2017b,Tancogne-Dejean2017,Ikemachi2018,Ikeda2018PRA}
These studies revealed that the HHG in semiconductors originates from the intraband and interband dynamics of the excited electrons, where the latter is described by extensions of the successful three step model used in atomic systems.\cite{Vampa2014PRL,Ikemachi2017} Although these analyses provide an intuitive understanding of the basic mechanisms of HHG in semiconductors, we currently lack a detailed understanding of the effect of correlations, which may be essential for justifying the ultrafast dephasing conventionally used.\cite{Orlando2018,Orlando2019} Furthermore, HHG spectra from different types of condensed matter systems, such as amorphous systems or liquids, have recently been reported. \cite{You_2017, Luu_2018_2,Luu2018Amorphas, Chinzei_2020} These observations also raise questions about the role of correlations and the possibility of HHG from materials which are not band insulators, and motivate studies of strongly correlated systems.
Several numerical simulations of HHG in Mott insulators as well as in other strongly correlated systems have been conducted\cite{Silva_2018, Murakami_2018a, Murakami_2018b, Dejean_2018, Imai_2019} and it has been shown that in the strong-field regime, the characteristic features of the high harmonic spectrum can be understood by considering quasi-local processes.\cite{Murakami_2018a, Murakami_2018b}
Theoretical investigations in this field pose technical challenges since they require numerical techniques capable of treating strongly correlated systems and at the same time non-perturbative driving fields. Dynamical mean field theory (DMFT)\cite{Georges_1996} has become a standard tool for the study of strongly correlated electron systems in equilibrium, thanks to the development of powerful methods (impurity solvers) for the solution of the DMFT equations.\cite{Rubtsov_2005, Dai_2005, Werner_2006} Recently, several methods for solving the time-dependent impurity problem in nonequilibrium DMFT\cite{Aoki_2014} have been developed.\cite{Werner_2009, Eckstein_2010, Werner_2013, Murakami_2015}
Some of these methods make it possible to simulate strong-field physics in strongly correlated materials and to access long enough times that the high harmonic spectrum produced by a few-cycle electric field pulse can be computed.
The aim of this work is to extend the previous Hubbard-model based analysis of HHG in Mott insulators \cite{Silva_2018, Murakami_2018a} to more complicated but realistic systems and to reveal various ways in which HHG can act as a spectroscopic tool. In particular, we will focus on models which admit bosonic excitations and discuss the resulting signatures in the high harmonic spectrum. Specifically, we will consider a model with a dynamically screened interaction which changes from a large bare value at frequencies much above some plasmon frequency to a reduced screened interaction in the static limit, and we will show that HHG allows one to reveal several characteristic energy scales of these systems. We will also discuss a multi-orbital Hubbard model in which the motion of charge carriers produces string states by creating local Hund excitations. The annihilation of these strings will be shown to enhance the high-energy radiation compared to the single-band Hubbard model.
The paper is organized as follows. In Sec.~\ref{sec:method} we describe the DMFT method used and the models considered in our study. The results for the electron-plasmon model are presented in Sec.~\ref{sec:plasmon} and those for the two-orbital model in Sec.~\ref{sec:2orbital}. The conclusions of our study are summarized in Sec.~\ref{sec:conclusions}.
\section{Method}
\label{sec:method}
\subsection{DMFT for a Bethe lattice with electric field}
We consider lattice models with a Hamiltonian of the general form $H_\text{latt}=\sum_i H_{\text{loc},i}+\sum_{\langle i,j\rangle}H_{\text{hop},i,j}$, where $H_{\text{loc},i}$ describes the interaction and chemical potential terms on site $i$ and $H_{\text{hop},i,j}$ the hopping between sites $i$ and $j$, which is diagonal in the spin and orbital indices $\sigma$ and $\alpha$. This lattice model is solved within the DMFT approximation\cite{Georges_1996} which maps the lattice problem onto a self-consistently determined quantum impurity model of the form $H_\text{imp}=H_\text{loc}+H_\text{bath}+H_\text{hyb}$. Here, $H_\text{loc}$ is the same local Hamiltonian as in the lattice model, $H_\text{bath}$ describes a bath of noninteracting electrons whose parameters are optimized to mimic the lattice environment, and $H_\text{hyb}$ describes the hybridization between the impurity and the bath. In an action formulation the bath is integrated out and replaced by a hybridization function $\Delta_{\alpha,\sigma}(t,t')$ which controls how electrons hop in and out of the impurity orbital.
Assuming that the lattice self-energy is local, this hybridization function is optimized in such a way that the Green function of the impurity, $G_{\text{imp},\alpha,\sigma}(t,t')$, is the same as the local lattice Green function, $G_{\text{latt},i,i,\alpha,\sigma}(t,t')$. Since we use the nonequilibrium formalism, the time indices are defined on the L-shaped contour,\cite{Aoki_2014} which runs from time 0 to some time $t_\text{max}$ along the real-time axis, back to time zero, and then to time $-i\beta$ (with $\beta$ the inverse temperature) along the imaginary-time axis. The basic classes and routines for the handling of nonequilibrium Green's functions, on which our simulations are built, have recently been published in Ref.~\onlinecite{Schuler_2019}.
We will consider a lattice with a semi-circular density of states (the infinitely connected Bethe lattice).
For this lattice, the self-consistency relation can be expressed in a simple form.\cite{Georges_1996} In equilibrium, it reads
\begin{equation}
\Delta_{\alpha,\sigma}(t,t')=v_\alpha G_{\text{imp},\alpha,\sigma}(t,t')v_\alpha,
\end{equation}
with $v_\alpha$ corresponding to one quarter of the noninteracting bandwidth for band $\alpha$.
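To illustrate the fixed-point structure of this equation, the following minimal Python sketch (purely illustrative, and independent of the NCA machinery used in our actual simulations) iterates the noninteracting limit, where $G_{\text{imp}}(\omega)=1/(\omega+i0^{+}-\Delta(\omega))$; the iteration converges to the semicircular density of states of full bandwidth $4v$:
\begin{verbatim}
import numpy as np

# Toy check of the equilibrium Bethe self-consistency Delta = v G v
# for a noninteracting impurity on a real-frequency grid.
v = 1.0
w = np.linspace(-3.0, 3.0, 1201) + 0.01j   # small Im part regularizes G
Delta = np.zeros_like(w)
for _ in range(500):
    G = 1.0 / (w - Delta)                  # noninteracting impurity Green's function
    Delta = 0.5 * Delta + 0.5 * v**2 * G   # damped update Delta = v G v
dos = -G.imag / np.pi                      # -> sqrt(4 v^2 - w^2)/(2 pi v^2)
\end{verbatim}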
Since we are interested in effects induced by strong electric fields, let us briefly explain how the electric field enters into DMFT calculations with Bethe-type self-consistency (details can be found in Ref.~\onlinecite{Werner_2017b}). The idea is to distinguish hopping processes ``parallel to the field'' and ``antiparallel to the field'' and to add the corresponding Peierls phases to the hopping terms in the Bethe-lattice type self-consistency equation:
\begin{align}
\Delta_{\alpha,\sigma}(t,t')=&\frac{1}{2}\Big[v_\alpha e^{i\phi(t)}G_{\text{imp},\alpha,\sigma}(t,t')v_\alpha e^{-i\phi(t')}\nonumber\\
&+v_\alpha e^{-i\phi(t)}G_{\text{imp},\alpha,\sigma}(t,t')v_\alpha e^{i\phi(t')}\Big]\nonumber\\
& \equiv \Delta_{L,\alpha,\sigma}+\Delta_{R,\alpha,\sigma},
\end{align}
where $\phi(t)=-\int_0^t ds E(s)ea/\hbar c$ is the Peierls phase for the electric field with amplitude $E$ and $a$ is the lattice spacing. The factor $1/2$ is a convention used to recover the usual Bethe lattice self-consistency in the model without field. In this set-up the kinetic energy and current can be measured as follows:
\begin{align}
E_\text{kin}(t)&= \text{Re}[\Gamma_{L,\alpha,\sigma}(t) + \Gamma_{R,\alpha,\sigma}(t)],\\
j(t)&= \text{Im}[\Gamma_{L,\alpha,\sigma}(t) - \Gamma_{R,\alpha,\sigma}(t)],
\end{align}
with $\Gamma_{L/R}(t)=-i[G_\text{imp}*\Delta_{L/R}]^<(t,t)$. While a Bethe lattice with field may at first sight look suspicious, the above procedure is consistent with the spirit of DMFT, and as we will show below, it reproduces all the electric field induced features which have been previously discussed for a hypercubic lattice implementation.\cite{Murakami_2018a} As in the latter work, the HHG intensity is calculated as the square of the Fourier transform of the dipole acceleration $(d/dt)j(t)$, i.e., as $|\omega j(\omega)|^2$.
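Concretely, this last step is a discrete Fourier transform of the simulated current. A minimal Python sketch of such post-processing (the window function and the uniform grid are illustrative choices, not necessarily those used for the figures below) reads:
\begin{verbatim}
import numpy as np

def hhg_spectrum(t, j):
    """Return (w, |w j(w)|^2) for a current j sampled on a uniform grid t."""
    dt = t[1] - t[0]
    window = np.hanning(len(j))        # damp spectral leakage of the finite pulse
    jw = np.fft.rfft(j * window) * dt  # Fourier transform of the current
    w = 2.0 * np.pi * np.fft.rfftfreq(len(j), dt)
    return w, np.abs(w * jw)**2        # |w j(w)|^2 = |FT of the dipole acceleration|^2
\end{verbatim}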
The impurity problem will be solved by the non-crossing approximation (NCA), which is the lowest order self-consistent expansion in the hybridization function $\Delta$.\cite{Keiter_1971,Eckstein_2010,Werner_2013} This method is numerically cheap and is expected to give qualitatively correct results in the Mott insulating regime where the local interaction term dominates the hybridization term.
In the following subsections, we will describe in some more detail the two models which will be studied in this paper, namely a Holstein-Hubbard model representing electrons coupling to plasmons, and a two-orbital Hubbard model with Hund coupling. From now on we will set $v_\alpha=1$ (bare bandwidth $4$), i.e. we measure energies in units of $v_\alpha$ and time in units of $\frac{\hbar}{v_\alpha}$. The lattice spacing and $\hbar$ are set to unity.
\subsection{Holstein-Hubbard model}
In order to investigate the effects of an electron-plasmon coupling, we consider a Hubbard-Holstein model with a single orbital per site and a local Hamiltonian of the form
\begin{align}
H_\text{loc}=&U_\text{bare} n_\uparrow n_\downarrow -\mu (n_\uparrow +n_\downarrow)\nonumber\\
&+g(n_\uparrow +n_\downarrow-1)(b+b^\dagger) + \omega_0 b^\dagger b,
\end{align}
with $n_\sigma$ the density for spin $\sigma$, $\mu$ the chemical potential, and $U_\text{bare}$ the bare on-site repulsion. The electrons are coupled via local density fluctuations to bosons with frequency $\omega_0$. The electron-boson coupling is $g$ and the boson creation operator is denoted by $b^\dagger$.
In an action formulation, the bosons can be integrated out, which results in a retarded, or frequency dependent, effective interaction between the electrons.\cite{Altland_2010, Werner_2016, Murakami_2015} On the Matsubara axis, this frequency dependent interaction has the form
\begin{equation} \label{I13}
U(i\omega_n) = U_{\textrm{bare}} + \frac{2g^2 \omega_0}{(i\omega_n)^2-\omega_0^2}.
\end{equation}
Upon analytical continuation ($i\omega_n \rightarrow \omega + i0^{+}$), the real and imaginary parts become
\begin{equation} \label{I14}
\begin{aligned}
&\textrm{Re}\,U(\omega) = U_{\textrm{bare}} + \frac{2g^2\omega_0}{\omega^2-\omega_0^2}, \\
&\textrm{Im}\,U(\omega) = -g^2\pi\left(\delta(\omega-\omega_0)-\delta(\omega+\omega_0)\right).
\end{aligned}
\end{equation}
Hence, electrons oscillating with a frequency $\omega$ much higher than $\omega_0$ experience an effective interaction $U\approx U_{\textrm{bare}}$, whereas for $\omega \ll \omega_0$, the electrons experience $U\approx U_{\textrm{bare}}- \frac{2g^2}{\omega_0}\equiv U_{\textrm{scr}}$. In the following, we denote the difference between the bare and screened interaction by $\lambda=\frac{2g^2}{\omega_0}$.
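For reference, the limiting behavior of $\textrm{Re}\,U(\omega)$ can be checked with a few lines of Python; the parameter values below are those of Fig.~\ref{fig:bosons2}:
\begin{verbatim}
import numpy as np

w0, U_scr, lam = 17.0, 5.0, 20.0    # parameters of Fig. 1
U_bare = U_scr + lam                # = 25
g = np.sqrt(lam * w0 / 2.0)         # ~ 13, since lambda = 2 g^2 / w0

def re_U(w):
    """Real part of the dynamically screened interaction."""
    return U_bare + 2.0 * g**2 * w0 / (w**2 - w0**2)

assert np.isclose(re_U(0.0), U_scr)  # static limit: screened interaction
# re_U(w) -> U_bare for |w| >> w0    # high-frequency limit: bare interaction
\end{verbatim}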
The real and imaginary parts of $U(\omega)$ for a set of parameters corresponding to a large plasmon energy $\omega_0$ are shown by the black lines in Fig.~\ref{fig:bosons2}.\footnote{Our choice of parameters ($\text{bandwidth}=4$, $\omega_0=17$, $U_\text{scr}\approx 5$, $U_\text{bare}\approx 25$) may be considered representative of transition metal oxides.}
One notices a transition between the bare and screened regime with a pole-like structure around $\pm\omega_0$. In the spectral function of the Mott insulating Holstein-Hubbard model, these structures lead to plasmon satellite peaks which are split off from the Hubbard bands at $\pm U_\text{scr}/2$ by energies of $\pm n\omega_0$, see black spectrum in Fig.~\ref{fig:spectral_function}. In the equilibrium system at low temperature, only the high-energy sidebands are visible because they correspond (in the case of the upper band) to the insertion of an electron with simultaneous emission of a boson. (The analogous process with absorption of a boson is suppressed, because the bosonic system is in the ground state.)
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{U_figure.pdf}
\caption{Real and imaginary parts of the frequency dependent
interaction, $U(\omega)$, for $\omega_0=17$, $U_\text{scr}=5$ and $\lambda=20$. The horizontal dashed lines are the asymptotic values for Re$U$ in the limit of $\omega \rightarrow \pm \infty$ and $\omega\rightarrow 0$. Besides the single boson case ($\sigma=0$) we also show the results for a distribution of bosons with a width defined by $\sigma$ (see Eq.~(\ref{multiboson})).
}
\label{fig:bosons2}
\end{figure}
Sharp singularities as in the effective $U(\omega)$ for the Holstein-Hubbard model are not present in the downfolded effective interactions of realistic materials, obtained for example by the constrained random phase approximation.\cite{Ayrasetiawan_2004} In realistic systems, the plasmon couples to single-particle excitations and the delta-function like structure of the plasmon in $\text{Im}U(\omega)$ becomes broadened (the pole-like structure in $\text{Re}U(\omega)$ becomes a smooth crossover from $U_\text{bare}$ to $U_\text{scr}$).\cite{Miyake_2008, Casula_2012, Werner_2015b}
In order to model such a more realistic situation we can extend the single boson model to a model with a distribution of bosons, where the coupling constant for the boson with frequency $\omega_i$ obeys
\begin{equation}
\big( \frac{g_i}{\omega_i} \big )^2 = \big( \frac{g}{\omega_0} \big )^2 \frac{\exp{ (- \frac{(\omega_i-\omega_0)^2}{2\sigma^2} )} }{ \sum_i \exp{( - \frac{(\omega_i-\omega_0)^2}{2\sigma^2} )} }.
\label{multiboson}
\end{equation}
This choice of coupling constants ensures that the renormalized hopping (width of the main Hubbard bands near $\pm U_\text{scr}/2$) is the same in the single-boson and multi-boson case, since
$t_{\textrm{eff}}=t \exp{(-\frac{g^2}{\omega_0^2})}$, where $t$ is the hopping parameter in the absence of electron boson coupling. \cite{Werner_2016}
Furthermore, $\lambda$ stays the same, as can be verified straightforwardly by computing $\sum_i \frac{2g_i^2}{\omega_i}$ with Eq.~\eqref{multiboson}.
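A short Python sketch of how the mode grid and couplings can be constructed, together with a numerical check of these two invariances (the grid parameters reproduce the $31$ modes used for $\sigma=1$ in Fig.~\ref{fig:spectral_function}):
\begin{verbatim}
import numpy as np

def boson_couplings(g, w0, sigma, spacing=0.2, r=3.0):
    """Modes w_i and couplings g_i with the Gaussian weight distribution above."""
    n_modes = int(round(2.0 * r * sigma / spacing)) + 1
    wi = np.linspace(w0 - r * sigma, w0 + r * sigma, n_modes)
    weights = np.exp(-(wi - w0)**2 / (2.0 * sigma**2))
    gi = wi * (g / w0) * np.sqrt(weights / weights.sum())
    return wi, gi

g, w0 = np.sqrt(20.0 * 17.0 / 2.0), 17.0    # lambda = 2 g^2 / w0 = 20
wi, gi = boson_couplings(g, w0, sigma=1.0)  # 31 modes
assert np.isclose(np.sum((gi / wi)**2), (g / w0)**2)          # t_eff unchanged
assert np.isclose(np.sum(2.0 * gi**2 / wi), 2.0 * g**2 / w0)  # lambda unchanged
\end{verbatim}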
The spectral functions for $\sigma=0,1,2$, $\omega_0=17$, $U_{\textrm{scr}}=5$ and $\lambda=20$ are shown in Fig.~\ref{fig:spectral_function} and the corresponding $U(\omega)$ are plotted in Fig.~\ref{fig:bosons2}. Here, we adjusted the number of bosonic modes $\omega_i$ such that the energy separation between neighboring modes is constant with spacing $0.2$, while the modes cover the interval $[\omega_0-r\sigma, \omega_0+r\sigma]$ of the Gaussian distribution, for some fixed real number $r$. Note that in the multi-boson case, the sidebands of the main Hubbard band get broadened, but the main Hubbard bands are left unaltered.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth, bb=0 10 385 260, clip=false]{high_w0_superimposed_figure.pdf}
\caption{Spectral functions for different boson distributions with $\sigma=0,1,2$. The parameters are $\omega_0=17$, $U_{\textrm{scr}}=5$, $\lambda=20$, and the inverse temperature is $\beta=5$. For $\sigma=1$ ($2$) we use $31$ ($61$) bosonic modes. The vertical lines show $U_\text{scr}/2$ (solid black) and $U_\text{bare}/2$ (dashed black) and the line $U_\text{scr}/2 + \omega_0$ (gray). For the main Hubbard band, all the data overlap. }
\label{fig:spectral_function}
\end{figure}
For the treatment of bosonic couplings in NCA, we use the procedure detailed in Ref.~\onlinecite{Werner_2013}, which has previously been used in nonequilibrium studies of electron-phonon problems.\cite{Werner_2013,Werner_2015a,Sayyad_2019} This method involves an approximation in the treatment of the boson couplings which is well justified in the limit of high boson frequency. It is thus particularly suited for the study of plasmon excitations, which have energies comparable to or larger than the bandwidth.
We note in Fig.~\ref{fig:spectral_function} that in contrast to the energy scale $U_\text{scr}$, which fixes the position of the first (main) band, and $\omega_0$, which determines the energy splitting between sidebands, the energy scale $U_\text{bare}$ is not directly evident in the single-particle spectrum. Furthermore, the weight of the boson side-peaks monotonically decreases with increasing energy.
Numerical evidence suggests that these are generic properties of systems with $\omega_0 > g$.
A qualitatively different situation is encountered for $\omega_0 < g$, as illustrated in Fig.~\ref{fig:spectral_function_small_w0}, which shows the spectral function for $\omega_0=2$, $U_{\textrm{scr}}=5$ and $\lambda=10$. This parameter regime may be relevant for the description of sub-plasmons, which are collective excitations within a subset of orbitals. Here, the lowest peak in the upper Hubbard band is still at an energy close to $U_\text{scr}/2$, but an envelope drawn over the relatively tightly spaced subbands exhibits a peak around $U_{\textrm{bare}}/2$, so that the energy scale $U_\text{bare}$ manifests itself clearly in the single-particle spectrum.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth, bb=5 10 385 260, clip=false]{low_w0_superimposed_figure.pdf}
\caption{Spectral functions for $\omega_0=2$, $U_{\textrm{scr}}=5$, $\lambda=10$, $\beta=5$, and different boson distributions parametrized by $\sigma$. The vertical lines indicate $U_{\text{scr}}/2$ (solid black), $U_{\text{bare}}/2$ (dashed black) and $U_\text{scr}/2+\omega_0$ (gray).}
\label{fig:spectral_function_small_w0}
\end{figure}
Based on the qualitative features of the spectral functions illustrated in Figs.~\ref{fig:spectral_function} and \ref{fig:spectral_function_small_w0} we can speculate how the
HHG spectrum will look when plotted in the space of field strength $E_0$ and frequency $\omega$. An earlier analysis of the HHG response of the single-band Hubbard model in the Mott regime has found that the harmonic intensity is strong in a triangular region defined by $U-E_0 \lesssim \omega \lesssim U+E_0$, where $U$ is the on-site interaction of the Hubbard model and $E_0$ the amplitude of the AC electric field.\cite{Murakami_2018a} This suggests that the dominant contribution to the high harmonic emission can be attributed to the recombination of doublons and holons from nearest neighbor sites. At high field strength, higher order processes with associated cutoff energies $U+nE_0$ ($n>1$) could also be identified. The linear scaling of the cutoff as a function of field strength has become a hallmark of high harmonic generation in solids. \cite{Ghimire_2018, Wegener_2005}
In the large-$\omega_0$ case ($\omega_0>g$) we expect to find a similar behavior with cutoff laws determined by $U_\text{scr}$. In addition, we may expect to see features in the HHG spectrum associated with transitions from plasmon side bands, i.e. the absorption of plasmons, once the field strength exceeds $\omega_0$ and a large number of plasmons is excited. In models with $\omega_0<g$, where the screened and bare interaction determine the edge and the peak of the Hubbard band, respectively, we instead expect that the cutoff energies of the HHG plateaus will depend on the value of $E_0$ relative to the interaction strength. For $E_0\lesssim U_\text{scr}$, excitations between the lower and upper gap edge should govern the dynamics, and hence the first energy cutoff is expected to scale as $U_{\textrm{scr}}+E_0$. For $E_0\gtrsim U_\text{bare}$ transitions between the dominant peaks in the single-particle spectrum, located near $\pm U_{\textrm{bare}}/2$ will play a dominant role, so that the dominant cutoff may exhibit a scaling which is closer to $U_{\textrm{bare}}+E_0$ in the strong field regime.
\begin{figure}[t]
\begin{center}
\includegraphics[angle=-0, width=0.49\columnwidth, bb=50 25 585 650, clip=true]{plot_a_square.pdf}
\includegraphics[angle=-0, width=0.49\columnwidth, bb=50 25 585 650, clip=true]{plot_a_quarter_square.pdf}
\caption{Equilibrium spectral functions of the two-orbital model for $U=10$, inverse temperature $\beta=5$ and indicated values of $J$. The left panel is for the half-filled system and the right panel for the quarter-filled system.}
\label{fig:a}
\end{center}
\end{figure}
\subsection{Two-orbital model}
\label{sec:twoorbitaltheory}
For the two-orbital Hubbard model we consider a local Hamiltonian with density-density interactions of the form
\begin{align}
H_\text{loc}=&U \sum_{\alpha=1,2} n_{\alpha\uparrow} n_{\alpha\downarrow} + (U-2J) \sum_{\sigma} n_{1\sigma} n_{2\bar\sigma} \nonumber\\
&+ (U-3J) \sum_{\sigma} n_{1\sigma} n_{2\sigma},
\end{align}
with $U$ the intra-orbital repulsion and $J$ the Hund coupling.
Low-temperature states of the half-filled system with $J>0$ are dominated by doubly occupied sites ($N=2$) in a high-spin configuration.
If we denote the lowest energy state of $H_\text{loc}$ with occupation $N$ by $E_N$, then the Mott gap can be estimated as $\Delta_\text{Mott}(N)=E_{N+1}+E_{N-1}-2E_N$.\cite{Werner_2009} Thus, the Hubbard bands in the half-filled Mott state are expected near $\omega\approx \pm \frac{U+J}{2}$.
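For the density-density interaction above, this estimate is easily made explicit. Dropping the chemical potential term (which cancels in $\Delta_\text{Mott}$), the lowest local energies are $E_{1}=0$, $E_{2}=U-3J$ for the high-spin doublon, and $E_{3}=U+(U-2J)+(U-3J)=3U-5J$ for the triplon, so that
\begin{equation*}
\Delta_\text{Mott}(2)=E_{3}+E_{1}-2E_{2}=(3U-5J)-2(U-3J)=U+J.
\end{equation*}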
The NCA spectral functions of half-filled systems are shown for $U=10$ and different $J$ in the left panel of Fig.~\ref{fig:a}. While the above argument explains the position of the main Hubbard bands, we also recognize a strong narrowing of these bands and the appearance of side-bands with increasing $J$.
The satellites are associated with local spin excitations (Hund excitations) and the reduction of the kinetic energy is a consequence of strings of Hund excitations formed by charge carriers (``singlons'' or ``triplons'') moving in the half-filled Mott background.
Looking at the equilibrium spectral functions shown in Fig.~\ref{fig:a} we anticipate that these Hund excitation processes can be clearly identified at large $J$, where the spectral function shows a well-defined satellite with an energy separation $J$.
More specifically, the process contributing to this satellite is triplon creation plus hopping: $(\uparrow,\uparrow)_j (\downarrow,\downarrow)_{j+1} \rightarrow (\uparrow\downarrow,\uparrow)_j (\downarrow,\downarrow)_{j+1} \rightarrow (\downarrow,\uparrow)_j (\uparrow\downarrow,\downarrow)_{j+1}$.
The hopping in the last step costs an extra energy $(U-2J)-(U-3J)=J$, and in a background of high-spin states leads to a short string-like distortion. It is natural to assume that in the HHG spectrum, such processes affect in particular the cutoff energies of the second and higher plateaus, since these are associated with singlon-triplon recombination plus additional hoppings.
The quarter-filled two-orbital model represents a different case from the half-filled model, as is evident from comparing the spectral functions in the right panel of Fig.~\ref{fig:a} with those in the left panel. Adding an electron to a singly occupied site creates doublon states with energies $U-3J$, $U-2J$ or $U$, so that the upper Hubbard band for large $J$ splits into three subbands. While triplons moving in a half-filled background can leave behind a string of excited doublon states, there is no such mechanism in the quarter-filled case, where all the singlons have the same energy. We thus expect different $J$-related effects in the high-harmonic spectrum of the half-filled and quarter-filled model.
\section{Results}
\label{sec:results}
\subsection{Electron-plasmon model}
\label{sec:plasmon}
This section presents the results for the single-band electron-plasmon model. In the present study, we excite the system with a few-cycle electric field pulse. The form of the 10-cycle pulse with a central frequency of $\Omega=1$ and a $\sin^2$ envelope is shown in the inset of Fig.~\ref{fig:E_vs_w_panel}. This set-up is different from Ref.~\onlinecite{Murakami_2018a}, which employed a Floquet DMFT formalism for time-periodic steady states. While the pulse protocol may lead to somewhat blurred high-harmonic features, it is more realistic from an experimental point of view.
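A minimal sketch of such a pulse and of the corresponding Peierls phase is given below; we assume a $\sin(\Omega t)$ carrier under the $\sin^2$ envelope, and the sampling and the simple left-endpoint quadrature for $\phi(t)=-\int_0^t ds\, E(s)$ are illustrative choices:
\begin{verbatim}
import numpy as np

def pulse(t, E0, Omega=1.0, n_cyc=10):
    """n_cyc-cycle field with sin^2 envelope (assumed carrier sin(Omega*t))."""
    T = 2.0 * np.pi * n_cyc / Omega                  # total pulse duration
    E = E0 * np.sin(Omega * t) * np.sin(np.pi * t / T)**2
    return np.where((t >= 0.0) & (t <= T), E, 0.0)

t = np.linspace(0.0, 20.0 * np.pi, 4001)             # span of the 10-cycle pulse
E = pulse(t, E0=1.0)
phi = -np.cumsum(E) * (t[1] - t[0])                  # Peierls phase phi(t)
\end{verbatim}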
\subsubsection{Cut-off behavior for $\omega_0>g$ }
The main panel of Fig.~\ref{fig:E_vs_w_panel} shows the HHG spectra obtained for fixed $U_\text{scr}=5$, $\lambda=20$ and plasmon energy $\omega_0=17$ (corresponding to $g\approx 13$). The solid black line shows the cutoff $U_{\textrm{scr}}+E_0$, the dashed black line shows $U_{\textrm{bare}}+ E_0$, while the gray line indicates $U_{\textrm{scr}}+\omega_0 + E_0$.
For strong fields, the high-intensity region of the HHG spectrum exhibits a $U_\text{scr}+\omega_0+E_0$ scaling. There is an almost abrupt change in the cutoff scaling from $U_\text{scr}+E_0$ to $U_\text{scr}+\omega_0+E_0$ near $E_0\approx \omega_0=17$ and one observes a high radiation intensity near $E_0\approx U_\text{scr}+\omega_0=22$. This indicates that the observed crossover to the higher-energy cutoff is triggered by doublon-holon recombinations from nearest neighbor sites, with simultaneous absorption of one plasmon. Expressed in terms of the spectral function (Fig.~\ref{fig:spectral_function_small_w0}), this corresponds to transitions from the first plasmon sideband of the upper Hubbard band to the lower Hubbard band. At field strengths $E_0\gtrsim \omega_0$, and especially around $E_0\approx U_\text{scr}+\omega_0$ the laser field excites plasmons and thus enables these types of recombinations.
For the present parameters, one cannot easily distinguish a $U_\text{scr}+\omega_0+E_0$ cutoff from a $U_\text{bare}+E_0$ cutoff (see black dashed line). However, a systematic check of different parameter sets confirms that the shift in the cutoff to higher energies is primarily controlled by $\omega_0$, and thus related to plasmon absorption.
\begin{figure}[t]
\centering
\includegraphics[angle=0, width=\columnwidth, width=\columnwidth, bb=90 70 750 530, clip=true]{HHG_with_pulse_inset.png}
\caption{HHG spectrum of the electron-plasmon model with $\omega_0=17$, $U_\text{scr}=5$, $\lambda=20$ and $\sigma=0$. The black solid line indicates $U_\text{scr}+E_0$, the black dashed line $U_\text{bare}+E_0$, while the gray line indicates $U_\text{scr}+\omega_0+E_0$. The gray vertical line is at $U_\text{scr}+\omega_0$. Inset: Shape of the 10-cycle excitation pulse with central frequency $\Omega=1$ and a $\sin^2$ type envelope.
}
\label{fig:E_vs_w_panel}
\end{figure}
\subsubsection{Cut-off behavior for $\omega_0<g$}
In this section, we consider the set of parameters corresponding to the single particle spectral function in Fig.~\ref{fig:spectral_function_small_w0}, which exhibits a relatively dense family of subbands with a dominant peak near $\pm U_\text{bare}/2$ ($\omega_0=2$, $U_{\textrm{scr}}=5$, $\lambda=10$, $g=3.16$). The corresponding HHG spectrum is shown in the top panel of Fig.~\ref{small_w0_sim}. The solid black lines indicate $U_{\textrm{scr}}+E_0$, as well as the higher-order cutoff energies $U_{\textrm{scr}}+nE_0$ ($n>1$). The dashed black lines are the cutoffs associated with $U_{\textrm{bare}}$ ($U_{\textrm{bare}} +E_0$, $U_{\textrm{bare}} +2E_0$ and $U_{\textrm{bare}} +3E_0$).
The qualitative features of the HHG spectrum are consistent with the naive expectations based on the properties of the single-particle spectral function shown in Fig.~\ref{fig:spectral_function_small_w0}. At low field strength, the cut-off is seen to follow essentially $U_{\textrm{scr}} + E_0$. However, at a sufficiently high field strength, the edge of the dominant HHG plateau crosses over to the $U_{\textrm{bare}}+E_0$ cutoff. Note that the $E_0$ range in this figure is extended compared to the previous one to establish that this crossover is also seen in the third-order cutoff, which switches from $U_{\textrm{scr}} + 3E_0$ to $U_{\textrm{bare}} + 3E_0$.
Since the identification of the cutoff energies in an intensity plot like Fig.~\ref{small_w0_sim} can be difficult, we also analyzed cuts at fixed $E_0$ to determine the harmonic orders corresponding to the edges of a plateau, or, in the weak-field regime, to prominent peaks in the HHG spectrum. Examples of such cuts are shown in the lower panel of the figure. Even though the harmonics are not very well defined in the strong field regime, one can identify two plateau-like structures in the HHG signal. The actual cutoffs associated with the first plateau are indicated in the top panel by black dots, and the actual cutoffs associated with the second plateau are indicated by grey dots. The black dots confirm that the dominant plateau indeed exhibits a $U_{\textrm{scr}} + E_0$ scaling for $E_0\lesssim 8$. As one further increases $E_0$, the cutoff energy increases faster than $U_{\textrm{scr}} + E_0$, and eventually even exceeds $U_{\textrm{bare}} + E_0$, which we interpret as a constructive interference between second-neighbor recombination processes in a screened environment and nearest-neighbor processes in an unscreened environment. For $E_0\gtrsim 30$ the cutoff energy of the dominant HHG plateau clearly follows the $U_\text{bare}+E_0$ line.
Interestingly, in the strong-field regime, the second plateau is associated with next-next-nearest-neighbor recombination processes, as the corresponding cutoff energy crosses over from $U_{\textrm{scr}} + 3E_0$ to $U_{\textrm{bare}} + 3E_0$. No clear plateau-like structure can be associated with the next-nearest-neighbor processes (for $E_0\gtrsim 30$).
Another noteworthy observation is that in the strong-field regime, the plateaus themselves do not exhibit well-defined harmonics, while the regions in between the plateaus, where the intensity decays exponentially as a function of harmonic order, exhibit well-defined peaks at odd multiples of $\Omega$. The rather messy HHG signal in the plateau regions may be due to the large number of interfering inter-(side)-band transitions in this system with a relatively small splitting between the multiple boson side bands.
\begin{figure}[t]
\centering
\includegraphics[angle=0, width=\columnwidth, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_trendlines.png}\\
\includegraphics[width=\columnwidth, bb=0 0 385 260, clip=true]{HHGcut_Hubb_Holst.pdf}
\caption{Top panel: Harmonic spectra for the Hubbard-Holstein model with $\omega_0=2$, $U_{\textrm{scr}}=5$ and $\lambda=10$. Solid black lines indicate $U_{\textrm{scr}} + nE_0$ and dashed black lines $U_{\textrm{bare}} \pm n E_0$ ($n=1,2,3$). Black dots mark the cutoff energies of the first plateau and gray dots those of the second plateau.
Bottom panel: HHG spectra at $E_0=6$ and $40$. Here, the solid (dashed) vertical lines indicate $U_{\textrm{scr}} + E_0$ ($U_{\textrm{bare}} + E_0$) as well as $U_{\textrm{scr}} + 3E_0$ ($U_{\textrm{bare}} + 3E_0$). The arrows mark the edges of the plateaus in the $E_0=40$ case.
}
\label{small_w0_sim}
\end{figure}
\begin{figure*}[ht]
\centering
\subfigure{\includegraphics[width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_difference_large_w.png}}\hfill
\subfigure{\includegraphics[width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_difference_small_w.png}}
\caption{Left panel: Ratio between the HHG spectra of the Holstein-Hubbard model with $\omega_0=17$, $U_\text{scr}=5$, $\lambda=20$ and the Hubbard model with $U=U_\text{scr}$ and renormalized bandwidth.
Right panel: Analogous comparison between the Holstein-Hubbard model with $\omega_0=2$, $U_\text{scr}=5$,
$\lambda=10$ with the Hubbard model with $U=U_{\textrm{scr}}$ and renormalized bandwidth. Black solid lines indicate $\omega=U_\text{scr}+nE_0$ and black dashed lines $\omega=U_\text{bare}+nE_0$. The vertical gray lines indicate $\omega_0+U_{\textrm{scr}}$ in both figures. }
\label{fig:E_vs_w_difference_panel}
\end{figure*}
\subsubsection{Comparison to the Hubbard Model}
As is evident from the pronounced plasmon sidebands in the spectral function (see Fig.~\ref{fig:spectral_function}),
the electron-boson coupling in the Holstein-Hubbard model introduces additional excitations. It is thus interesting to study the difference in the high-harmonic spectra of the Hubbard and Hubbard-Holstein models.
For a meaningful comparison between the two models, we choose $U_\text{Hubbard}=U_\text{scr}$ and renormalize the hopping parameter of the Hubbard model to $t_{\textrm{eff}}=t \exp{(-\frac{g^2}{\omega_0^2})}$. This ensures that any difference observed between the two models is not an effect of a modified gap or bandwidth, but the result of the electron-boson interaction.
Figure~\ref{fig:E_vs_w_difference_panel} shows the difference in the logarithms of the high-harmonic intensities, i.e. a log-scale plot of the ratio of the radiation powers $\log[|\omega j_\text{Holstein-Hubbard}(\omega)|^2/|\omega j_\text{Hubbard}(\omega)|^2]$. In the model with $g<\omega_0$ it is found that the electron-plasmon model produces additional harmonics only above $\omega\approx U_\text{scr}+\omega_0$ (gray line) and above the $U_\text{scr}+E_0$ cutoff. This confirms that the additional intensity in the high-energy radiation is associated with transitions between the first plasmon sideband of the upper Hubbard band and the lower Hubbard band, i.e., the absorption of plasmons from the highly excited nonequilibrium system. Note that the complicated structure in the region of high harmonics is at least partially explained by the fact that we divide by the spectrum of the $U_\text{scr}$ Hubbard model, which exhibits plateaus in the HHG spectrum with cutoff energies $U_\text{scr}+nE_0$.
The right-hand panel shows the results of an analogous comparison for the model with $\omega_0<g$. Here, we notice that additional intensity (compared to the Hubbard model with $U=U_\text{scr}$ and renormalized bandwidth) is observed to the right of the $U_{\textrm{scr}} + E_0$ line, with a large increase for $\omega\gtrsim U_\text{bare}+E_0$. This indicates an important role played by the boson sidebands near $\pm U_\text{bare}/2$ in the single-particle spectrum, which are also the peaks with the dominant weight. The activation of these sidebands in the strong-field regime leads to a shift of the leading HHG cutoff from $U_\text{scr}+E_0$ to $U_\text{bare}+E_0$.
\begin{figure*}[ht]
\centering
\subfigure{\includegraphics[width=\columnwidth, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_w_phonon_line0_250.png}}\quad
\subfigure{\includegraphics[width=\columnwidth, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_w_phonon_line1_0.png}}\quad
\subfigure{\includegraphics[width=\columnwidth, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_small_w_sigma0_06.png}}\quad
\subfigure{\includegraphics[width=\columnwidth, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_small_w_sigma0_24.png}}
\caption{Upper panels: Electron-plasmon model with $\omega_0=17$, $U_{\textrm{scr}}=5$ and $\lambda=20$. Upper left: $\Omega=1$ with $\sigma=0.25$. Upper right: $\Omega=1$ with $\sigma=1$. Lower panels: $\omega_0=2$, $U_{\textrm{scr}}=5$ and $\lambda=10$. Lower left: $\Omega=1$ with $\sigma=0.06$. Lower right: $\Omega=1$ with $\sigma=0.24$. The arrows point out some features not seen in the left panel.
}
\label{fig:many_bosons_panel}
\end{figure*}
\subsubsection{Effects of multiple bosonic modes}
One may note from Fig.~\ref{fig:spectral_function} that in our electron-plasmon models, the effective hopping parameter of the doublons is about $0.5$ (the width of the Hubbard band is about half the bare bandwidth). Accounting also for the field induced renormalization of the bandwidth, we may easily realize a situation in which the driving frequency $\Omega$ exceeds the width of the Hubbard bands.
In the same figure it is also apparent that in the case of a boson distribution with width $\sigma>0$ the plasmon sidebands get broadened by a factor proportional to $\sigma$, and this broadening persists even in the presence of strong fields. A similar broadening effect and even merging of the sidebands is observed in Fig.~\ref{fig:spectral_function_small_w0}. Hence, depending on the value of $\sigma$ and $\Omega$, intraband excitations within these plasmon sidebands will or will not be possible, and this should manifest itself in the HHG spectra. In particular, we expect that the low-energy harmonics (associated with intraband currents) will be enhanced for $\sigma>0$, while the high-frequency part of the spectrum may exhibit interference effects due to the broad energy distribution of the plasmon-dressed doublons or holes.
In Fig.~\ref{fig:many_bosons_panel} we illustrate the drastic effect of these intraband transitions on the leading energy cutoff. The top panels show HHG spectra for $\Omega=1$ and large boson frequency ($\omega_0>g$). The left panel is for a boson distribution width $\sigma=0.25$, which is small compared to $\Omega$ and hence we find a crossover from the $U_\text{scr}+E_0$ to $U_\text{scr}+\omega_0+E_0$ cutoff which looks similar to the result found for the single-boson model (Fig.~\ref{fig:E_vs_w_panel}). In the right panel, we show the result for $\sigma=1=\Omega$. Here, the crossover disappears and we only observe the $U_\text{scr}+E_0$ cutoff up to the highest field amplitudes considered. It appears that the intra-band excitations enabled by the broader boson distribution
wipe out the additional radiation intensity, which in the single-boson case was associated with the activation of inter-band transitions (plasmon absorption). This may be a manifestation of destructive interference between recombination processes with slightly different energies.
The numerical results for $\omega_0>g$ suggest an approximate condition for the crossover to take place, namely
\begin{equation}
\sigma \lesssim \Omega.
\end{equation}
Viewing HHG as a spectroscopic method, and assuming a simple form of the dynamically screened interaction as well as a large plasmon energy, this inequality tells us that by varying the driving frequency $\Omega$ and monitoring the crossover behavior, one can deduce the width of the ``plasmon peak'' in $\text{Im}U(\omega)$.
In the case of small boson frequency ($\omega_0<g$), where one observes a crossover from $U_\text{scr}+E_0$ to $U_\text{bare}+E_0$ in the single-boson case, a similar effect of the broadening is observed in the strong-field regime. Here, the crossover is not associated with the activation of transitions between neighboring subbands (single plasmon absorption), but with a field-dependent change in the relative importance of different subbands for the HHG process. As indicated by the left arrow, in the case $\omega_0<g$, a prominent effect of a broadened boson spectrum is an increase in the intensity of the low-energy harmonics, which is consistent with the merging of the side-bands evident in Fig.~\ref{fig:spectral_function_small_w0} and a correspondingly enhanced intra-band contribution to the HHG signal.
While the crossover to the $U_\text{bare}+E_0$ cutoff at strong fields $E_0$ disappears for larger $\sigma$ (see right arrow), the strong intensity up to approximately $U_\text{scr}+2E_0$ persists at intermediate values of $E_0$. This indicates that these harmonics indeed originate mainly from second nearest neighbor recombination processes in a screened environment.
\subsection{Two-orbital model}
\label{sec:2orbital}
\subsubsection{Half-filled model}
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=-0, width=0.485\columnwidth, bb=50 25 585 650, clip=true]{plot_field_square.pdf}
\includegraphics[angle=-0, width=0.485\columnwidth, bb=50 25 585 650, clip=true]{plot_j_square.pdf}
\caption{Left panel: shape of the few-cycle excitation pulse with frequency $\Omega=0.5$ and amplitude $E_0=1$ used in the two-orbital simulations. Right panel: current for indicated values of $E_0$ ($U=10, J=1$).
}
\label{fig:pulse}
\end{center}
\end{figure}
\begin{figure*}[ht]
\begin{center}
\includegraphics[angle=-0, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_j0_line1_final.png}\hfill
\includegraphics[angle=-0, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_j3_line1_final.png}
\caption{High-harmonic spectra $|\omega j(\omega)|^2$ as a function of $E_0$ for $J=0$ (left) and $J=3$ (right). The gray dashed lines represent the cutoffs $U+J\pm n E_0$ ($n=1,2,3,4$).
}
\label{fig:hhg_line1}
\end{center}
\end{figure*}
In this section we study HHG in the Mott insulating two-orbital model, with the focus on signatures of the Hund coupling. We start with the half-filled system and consider a Mott insulator with a large gap ($U=10$, $\beta=5$), which is driven by a few-cycle electric field pulse with a frequency $\Omega=0.5 \ll \text{gap}$. The form of the pulse with a peak field amplitude $E_0=1$ is plotted in the left panel of Fig.~\ref{fig:pulse}. The current measured for different values of $E_0$ in a system with $J=1$ is shown in the right hand panel. By Fourier transforming these curves, we obtain the HHG spectra $|\omega j(\omega)|^2$ shown for $J=0$ and $J=3$ and a range of field amplitudes in Fig.~\ref{fig:hhg_line1}.
We are now going to analyze the structures apparent in these HHG spectra, focusing on the strong-field regime. As we mentioned before, in the Mott phase of the single-band Hubbard model, the plateau structures and energy cutoffs can be explained by quasi-local processes: recombination of doublons and holons on $n$th-nearest-neighbor sites.\cite{Murakami_2018a} In a two-orbital Hubbard model with $J>0$, the half-filled Mott insulator will have predominantly two electrons per site, in a high-spin configuration. An excitation across the Mott gap creates a singlon-triplon pair at an energy cost of $U+J$. If this pair is created on $n$th nearest neighbor sites, the energy released in the recombination process will, in the presence of the oscillating field with strength $E_0$, be in the range $U+J\pm nE_0$ (assuming that no spin-flips occur). The corresponding cutoff values are indicated in
Fig.~\ref{fig:hhg_line1} by gray dashed lines. They explain some, but not all of the structures. For example, in Fig.~\ref{fig:hhg_cut}, where we plot a cut for $J=3$ at $E_0=8$, it is apparent that these gray dashed lines do not coincide with the edges in the HHG plateaus.
\begin{figure}[h]
\begin{center}
\includegraphics[angle=-0, width=1.0\columnwidth]{plot_spectrum_j3E8_final.pdf}
\caption{High-harmonic spectra for $E_0=8$ and $J=3$. The vertical lines show $U+J+nE_0$ (gray dashed), $U+J+nE_0+(n-1)J$ (solid black), and $2(U+J)+(n-1)E_0$ (dark red dashed), with $n\ge 1$.
}
\label{fig:hhg_cut}
\end{center}
\end{figure}
In the half-filled two-orbital case, the singlon-triplon creation/annihilation may involve local spin (de)excitations. In particular, the singlon-triplon creation on $n$th nearest neighbor sites with $n>1$ typically involves string states associated with Hund excitations. An example for $n=2$ is
$(\downarrow,\downarrow)_{j-1}(\uparrow,\uparrow)_j (\downarrow,\downarrow)_{j+1} \rightarrow (0,\downarrow)_{j-1}(\uparrow\downarrow,\uparrow)_j (\downarrow,\downarrow)_{j+1} \rightarrow (0,\downarrow)_{j-1} (\downarrow,\uparrow)_j (\uparrow\downarrow,\downarrow)_{j+1}$,
see also the discussion in Sec.~\ref{sec:twoorbitaltheory}.
The corresponding recombination processes yield cutoff values shifted by multiples of $J$, with the maximum cutoff energy shifted by $(n-1)J$ in the case where $n-1$ sites are flipped back to the high-spin configuration.
In Fig.~\ref{fig:hhg_cut} we indicate the corresponding maximum cutoffs $U+J+nE_0+(n-1)J$ by solid black lines.
They show a much better agreement with the measured spectrum, which strongly suggests that the annihilation of string states plays a role in the HHG process in multi-orbital Hubbard systems with Hund coupling.
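For bookkeeping, the candidate cutoff energies discussed above can be tabulated with a short helper like the following Python sketch (our own illustration; here $m$ counts the number of sites flipped back to the high-spin configuration, $0 \le m \le n-1$):
\begin{verbatim}
def string_cutoffs(U, J, E0, n_max=4):
    # Recombination cutoffs U + J + n*E0 + m*J for singlon-triplon
    # pairs n sites apart, releasing m <= n-1 Hund excitations.
    return {n: [U + J + n * E0 + m * J for m in range(n)]
            for n in range(1, n_max + 1)}

# Maximum cutoffs U + J + n*E0 + (n-1)*J for U=10, J=3, E0=8:
print({n: max(c) for n, c in string_cutoffs(10, 3, 8).items()})
\end{verbatim}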
\begin{figure*}[t]
\begin{center}
\includegraphics[angle=-0, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_j3_j0_jlines_norenorm_final.png}\hfill
\includegraphics[angle=-0, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_j3_u12_6j0_jlines_final.png}
\caption{The left panel shows a log plot of the ratio of the HHG spectra for the half-filled two-orbital models with $U=13,J=0$ and $U=10,J=3$ (original bandwidth), while the right panel shows the ratio between the HHG spectra for $U=13,J=0$ and $U=12.6,J=0$ with a bandwidth multiplied by 0.53. The gray dashed lines represent the cutoffs $U+J\pm n E_0$ and the white lines the maximum cutoffs including spin-flip processes, $U+J+n E_0+(n-1)J$. The black dashed lines are the cutoffs associated with second order in $U$ processes.}
\label{fig:hhg_difference}
\end{center}
\end{figure*}
To further analyze this issue, we plot in Fig.~\ref{fig:hhg_difference} the ratio of the HHG spectra for $J=3$ and $J=0$ on a log scale. In the left panel we keep the bandwidths of both models the same ($=4$) and compare $U=10,J=3$ to $U=13,J=0$, so that $U+J$ is identical in both cases. In the right panel, we compare $U=10,J=3,\text{bandwidth}=4$ to $U=12.6,J=0,\text{bandwidth}=0.53\cdot 4$. Here the parameters of the $J=0$ system have been chosen such that the Hubbard bands match the positions and widths of the main Hubbard bands at $\omega\approx\frac{U+J}{2}$ in the $U=10,J=3$ spectrum (see pink line in Fig.~\ref{fig:a}). Both figures confirm a shift of the edges of the HHG plateaus to higher energies in the model with Hund coupling. In particular, the cutoff line associated with next-nearest neighbor recombination processes is shifted by $J$, while for next-next-nearest neighbor recombinations, we find evidence for shifts by both $J$ and $2J$ (see the white dashed lines, which indicate the shifts by $J$ and $2J$, respectively). We also notice that even the $n=1$ line appears to be shifted, at least for large $E_0$, see right panel of Fig.~\ref{fig:hhg_line1} and Fig.~\ref{fig:hhg_difference}. This indicates that nearest-neighbor (in the direction of the field) recombination processes with simultaneous Hund de-excitation via hopping perpendicular to the field play a role in the HHG.
The complex structures evident in Fig.~\ref{fig:hhg_line1} suggest interference effects from other types of processes. At energies $\gtrsim 2U+2J$ one can expect to see higher order processes like $(\uparrow,\uparrow)_i (\downarrow,\downarrow)_j (\uparrow,\uparrow)_k (\downarrow,\downarrow)_l \rightarrow (0,\uparrow)_i (\uparrow\downarrow,\downarrow)_j(\uparrow,0)_k (\downarrow,\uparrow\downarrow)_l$. If the production of the two singlon-triplon pairs occurs in the direction of the field, the maximum cutoff value associated with the recombination is $2(U+J)+2E_0$. However, the second singlon-triplon pair could also be excited in a direction perpendicular to the field in which case the cutoff becomes $2(U+J)+E_0$. If it occurs against the field we expect a cutoff $2(U+J)$. The corresponding cutoff energies are indicated in Fig.~\ref{fig:hhg_difference} by the dashed black lines and explain some of the intensity modulations seen at high $\omega$.
In Fig.~\ref{fig:hhg_cut}, which shows a cut at $E_0=8$, the corresponding cutoff energies are indicated by the dark red dashed lines. These confirm a (possibly negative) interference effect between first order and second order processes.
It thus appears that in strongly interacting Mott systems, in contrast to semi-conductors, higher-order interaction processes also play a role in the high-energy region of the HHG spectrum and the associated currents interfere with those of the leading order processes.
\subsubsection{Quarter-filled model}
As illustrated in Fig.~\ref{fig:a}, adding an electron to a singly occupied site creates doublon states with energies $U-3J$, $U-2J$ or $U$, so that the upper Hubbard band for large $J$ splits into three subbands. In the quarter-filled model, a charge excitation across the gap corresponds to the creation of an empty site and a doubly occupied site. Doublons moving in the background of singly occupied states do not leave behind a string of Hund excitations, because the flipping of the spin does not cost any energy. We thus do not expect to see signatures of string annihilation in the cutoff energies of the high harmonic spectra. On the other hand, the presence of three Hubbard subbands will lead to a rich cutoff structure, which reveals the Hund energy $J$.
\begin{figure*}[t]
\begin{center}
\includegraphics[angle=-0, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_quarter_j2_final.png}\hfill
\includegraphics[angle=-0, width=\columnwidth, bb=90 70 750 530, clip=true]{plot_hhg_quarter_j2_j0_final.png}
\caption{HHG spectra of the quarter-filled two-orbital model. The left panel shows the result for $U=14$, $J=2$ and the right panel the difference of this spectrum to the result for $U=9$, $J=0$. Light gray dashed lines show the cutoffs $U-3J+nE_0$, white dashed lines $U-2J+nE_0$ and dark gray dashed lines $U+nE_0$ (for $U=14$, $J=2$).
}
\label{fig:hhg_quarter}
\end{center}
\end{figure*}
In the left panel of Fig.~\ref{fig:hhg_quarter}, we plot the HHG spectrum for $U=14$, $J=2$. The light gray dashed lines indicate the cutoff energies $U-3J+nE_0$, the white dashed lines the cutoffs $U-2J+nE_0$ and the dark gray dashed lines the cutoffs $U+nE_0$. While the cutoffs associated with $U-3J$ and $U-2J$ are hard to distinguish, since the corresponding Hubbard subbands are not clearly separated for $J=2$ (see Fig.~\ref{fig:a}), the cutoffs associated with $U$ can be clearly identified.
In the right panel, we show a log plot of the ratio between the $U=14,J=2$ HHG spectrum and the result for a model with $U=9,J=0$. The $U$ value in the latter case has been chosen in between $14-3\cdot 2$ and $14-2\cdot 2$, so that the upper Hubbard band of the $J=0$ model covers approximately the same energy range as the two lower subbands in the $J=2$ model. Near the $U-3J+nE_0$ and $U-2J+nE_0$ cutoff lines, we expect a reduced intensity in the $J=2$ case, because of the reduced spectral weight compared to the $J=0$ case (see black and blue curves in the right panel of Fig.~\ref{fig:a}), while in the energy region between the $U-2J+nE_0$ and $U+nE_0$ cutoff lines we expect an enhancement. This is indeed what is found in the right hand panel of Fig.~\ref{fig:hhg_quarter}, where an increase in intensity is mainly observed between the white dashed and dark-gray dashed lines.
\section{Discussion and conclusions}
\label{sec:conclusions}
We have analyzed the high-harmonic spectra of two types of Mott insulating Hubbard-type systems whose dynamics is influenced by bosonic excitations. The results for the electron-plasmon system, described by a Hubbard-Holstein model with large boson frequency, revealed a crossover between two different cut-off laws.
In the weak-field regime the cutoff scales as $U_\text{scr}+E_0$, while in the strong-field regime the high harmonics plateau extends up to $U_\text{scr}+\omega_0+E_0$ (for $\omega_0>g$) or $U_\text{bare}+E_0$ (for $\omega_0<g$). Similar crossovers can also be observed in the cutoff energies of the higher-order plateaus associated with $n$th ($n>1$) nearest neighbor recombination processes.
The two different crossover behaviors for $\omega_0 < g$ and $\omega_0> g$ were further supported by
comparing the Holstein-Hubbard model results to those for a Hubbard model with effectively renormalized bandwidth and $U=U_\text{scr}$. In the case of $\omega_0>g$, the additional intensity in the high-energy radiation contribution can be associated with the absorption of plasmons which are excited at field strengths $E_0\gtrsim \omega_0$.
In the $\omega_0<g$ case, a different picture emerged.
Here, the additional radiation power was observed for energies $\gtrsim U_\text{bare}+E_0$. In this coupling regime it is not the transitions between the relatively tightly spaced sidebands which drive the crossover, but the gradual shift in the relative importance of different sidebands for the HHG process. At low field strength the peaks in the single particle spectrum near $\pm U_\text{scr}/2$, which define the Mott gap, play a prominent role, while for larger $E_0$, the peaks near $\pm U_\text{bare}/2$, which have the largest weight, become more relevant.
If the width of the plasmon peak in $\text{Im}U(\omega)$ is larger than the driving frequency, intraband excitations are activated within the plasmon sidebands. This affects the population within these sidebands and the energies which can be released by interband transitions, resulting in a destructive interference between the plasmon assisted processes. In systems with $\omega_0>g$, where interband transitions underpin the crossover behavior, one thus observes the disappearance of the crossover and a leading cutoff which scales as $U_\text{scr}+E_0$ up to large field strengths.
In this regime, the sensitivity of the HHG spectrum to the width of the plasmon peak in principle allows one to extract this quantity by tracking the crossover behavior as a function of the driving frequency $\Omega$. Hence, by studying HHG spectra for a broad range of $E_0$ it is in principle possible to determine whether $g<\omega_0$ or $g>\omega_0$, and in the former case the value of the screened interaction and of $\omega_0$ can be extracted.
The second model we considered was a two-orbital model with Hund coupling. In the half-filled system, local spin excitations have a strong effect on the dynamics of charge carriers.\cite{Strand_2017} Singlons and triplons moving in the background of predominantly high-spin doublon sites can leave behind a string of low-spin states (Hund excitations), thereby transferring kinetic energy into potential energy. In the presence of a strong periodic driving field, the energy stored in these strings (a large energy of order $nJ$, where $n$ is the length of the string) can be released upon recombination of the singlon and triplon. This results in high harmonic plateaus which extend to higher energies than what would be expected from the splitting of the Hubbard bands. We have demonstrated this effect by comparing spectra for models with and without Hund coupling and appropriately adjusted gap size and bandwidth. In the quarter-filled model, the strings of low-spin states are absent, but the Hubbard bands split into subbands, which for large enough $J$ results in three separate cutoffs, associated with the three different types of doublon states that can be produced by photo-excitation. As in the case of the electron-plasmon model, we found that all the relevant energy scales of the atomic problem (here $U$ and $J$) are reflected in the field dependence of the spectrum, so that a careful analysis of high harmonic spectra of correlated multi-band materials may give access to these parameters, which are of crucial importance for the theoretical modeling and hard to obtain from ab-initio calculations.
\acknowledgements
We thank M. Eckstein for helpful discussions. The calculations have been performed on the Beo04 and Beo05 clusters at the University of Fribourg, using a software library developed by M. Eckstein and H. Strand. This work has been supported by the European Research Council through ERC Consolidator Grant No. 724103, and by the Swiss National Science Foundation through Grant No. 200021\_165539. PW acknowledges the hospitality of the Aspen Center for Physics during the Summer 2019 program.
\bibliographystyle{apsrev4-1}
\subsection{Leveraging Edited Video}
\label{sec:benchmark}
As a reminder, a shot is a continuous take from the same camera, and a cut occurs between a pair of shots (Section \ref{sec:introduction}). We introduce a data source devised for the task of learning suitable video cuts from professionally edited movies. The primary purpose of this data collection is to leverage already edited content for learning fine-grained audio-visual patterns that trigger cuts in the video editing process. It is composed of $10,707$ movie scenes along with $257,064$ cuts. This dataset includes several streams of data for every shot, including visual, audio, and transcripts. Around $32\%$ of shots in the dataset include speech; the remaining shots without speech come mainly from action-driven scenes. In the supplementary material we provide additional statistics.
\input{figures/dataset_sample}
\noindent\textbf{Gathering edited video.} Movies are a great source of video data containing creatively edited video. Following Bain et al. \cite{bain2020condensed}, we downloaded $10,707$ videos from \textit{MovieClips} \footnote{\href{https://www.youtube.com/c/MOVIECLIPS/videos}{MovieClips: Source of the videos}}. Each of these videos corresponds to a single movie scene (a section of the movie occurring in a single location with a continuous and condensed plot). Then, we automatically detected shot boundaries using \cite{gygli2017ridiculously}. To assess the quality of the detections, we verified $5,000$ of them and found an error rate of $4.34\%$ (217 errors). Typical errors include shots with heavy visual effects and partial dissolves.
We find that each scene contains $114$ shots on average. Thus, an editor has to make more than a hundred cuts for every scene in a movie. Such a level of involvement further shows the complexity and time-consuming nature of the editing process. An appealing property of the chosen video source, MovieClips, is that it has weekly uploads of famous movie scenes, along with their metadata. Since our shot annotations are automatically generated, we can easily augment the dataset in the future.
\noindent\textbf{Data source samples.} Figure \ref{fig:dataset-examples} shows examples from the collected data. As specified before, the cut can be driven by visual, audio, and audio-visual cues. In Figure \ref{fig:cut_on_action}, the visual action of entering the door triggers the cut, while the audio stream does it in Figure \ref{fig:sound_match_cut}. In the latter, the editor matched two similar sounds from two different space-time locations; this example shows a visually discontinuous cut, where the audio matches perfectly between the two shots. The last two examples are cuts driven by audio-visual cues.
On one hand, Figure \ref{fig:speaker_change_cut} is triggered by the nature of a fluid conversation -- we can appreciate that the cut happens after the active speaker changes. On the other hand, the cut in Figure \ref{fig:reaction_cut} is driven by an action observed in both streams -- the scene shows a person's reaction to an audio-visual action. Note that these are just some of the triggers for cuts, and many others exist, making it hard to list and model each of them independently. Therefore, our next goal is to leverage this data to learn cut triggers in a data-driven fashion.
\noindent\textbf{Dataset.} We use edited movies to learn the proxy task of cutting edited videos. To do so, we split our dataset into train and validation sets. Since our task requires both positive and negative examples per pair of shots, we ignore shots shorter than one second for both sets. We use the frame-cut info to remove all the snippets that contain shot transitions within them to avoid the degenerate shortcut of learning how to detect shots. After these two filters, the training set consists of $7,494$ scenes with $177,987$ shots, and the validation set consists of $3,213$ scenes with $79,077$ shots.
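A minimal Python sketch of these two filtering steps reads as follows (the dictionary-based shot and snippet representation with \texttt{start}/\texttt{end} fields is hypothetical; only the one-second duration threshold and the transition-removal rule come from the procedure described above):
\begin{verbatim}
def filter_shots(shots, min_duration=1.0):
    # Keep only shots of at least one second.
    return [s for s in shots if s["end"] - s["start"] >= min_duration]

def filter_snippets(snippets, cut_times):
    # Drop snippets that contain a shot transition, so the model
    # cannot degenerate into a shot-boundary detector.
    return [s for s in snippets
            if not any(s["start"] < t < s["end"] for t in cut_times)]
\end{verbatim}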
\section{Conclusion}
\label{sec:conclusion}
We introduced the task of cut plausibility ranking for computational video editing. We proposed a proxy task that aligns with the actual video editing process by leveraging knowledge from already edited scenes. Additionally, we collected more than 260K edited video clips. Using this edited footage, we created the first method capable of ranking cuts automatically, which learns in a data-driven fashion. We benchmarked our method with a set of proposed metrics that reflect the model's level of precision at retrieval and expertise at providing tighter cuts. Finally, we used our method in a real-case scenario, where our model ranked cuts from non-edited videos. We conducted a user study in which editors picked our model's cuts more often than those made by the baselines. Yet, there is still a long way to go to match editors' expertise in selecting the smoothest cuts. This work aims at opening the door for data-driven computational video editing to the research community. Future directions include the use of fine-grained features to learn more subtle patterns that better approximate the fine-grained process of cutting video. Additionally, other modalities such as speech and language could bring benefits for ranking video cuts.
\noindent\textbf{Acknowledgments} This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research through the Visual Computing Center (VCC) funding.
\section{Experiments}
\label{sec:experiments}
This section describes the experimental pipeline that we follow to validate our approach's effectiveness in learning to rank cuts. We detail our performance metrics and implementation details, and introduce baseline approaches. Then, we study our method's performance on our dataset (Section \ref{sec:benchmark}) by comparing to the baselines and ablating each of its components. Finally, we use our model to rank cuts in an unedited set of video footage and assess the quality of the top-ranked results via human studies.
\subsection{Experimental Setup}
\noindent\textbf{Metrics.} We aim to define metrics that measure the quality of automatically ranked video cuts. One desirable property of an automatic cut generation method is the ability to rank the good cuts with higher confidence across a test subset. To capture this property, we measure Recall at $(\eta K)$, ($R@\eta K$), where $\eta \in \{1,5,10\}$ is a value that controls the number of retrieved results and $K$ is fixed to the number of ground truth cuts in the entire inference set. In our experiments, $K \approx 80000$, which corresponds to the number of shot pairs or cuts in the validation subset. To account for the ambiguity of cuts, we measure $R@(\eta K)$ using different temporal precision ranges when counting the true positive cuts. Our intuition is that cuts near the ground-truth might be equally good and still useful for many practical scenarios. Therefore, we introduce a distance to cut $d$, which augments the number of positives with all clip pairs that are within $d$ seconds from the ground-truth cut. In practice, $d$ is the number of clips away from the ground truth cut. We report $R@(\eta K)$ for three distance values $d\in\{1,2,3\}$. These metrics allow us to study the performance of methods under different use cases. For instance, if someone plans to evaluate performance for automatic retrieval, $R@1K$ would be the best fit; however, if there will be a human-in-the-loop, $R@10K$ seems a more appealing metric to optimize, as a human could select the best cut from a small list of candidates. Conversely, measuring performance at $d=1$ and $d=3$ resembles cases where an application requires high-precision (\eg editing for professional film) and low-precision cuts (\eg editing for social media), respectively.
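One possible implementation of this metric is sketched below (our own reading; we assume each candidate pair carries a ranking score and that a boolean matrix \texttt{is\_near\_gt} marks whether a pair lies within $d$ clips of each ground-truth cut):
\begin{verbatim}
import numpy as np

def recall_at_eta_k(scores, is_near_gt, eta):
    # scores: (P,) ranking score per candidate pair.
    # is_near_gt: (P, K) True if pair p is within d clips of cut k.
    K = is_near_gt.shape[1]
    top = np.argsort(-scores)[: int(eta * K)]
    recovered = is_near_gt[top].any(axis=0)  # cut k hit by a retrieved pair
    return recovered.sum() / K
\end{verbatim}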
\input{tables/baselines_comparison.tex}
\input{tables/ablation_complete.tex}
\noindent\textbf{Implementation Details.} We first extract frames for each scene video at $24$ fps. In terms of backbones, we use ResNexT-101-3D \cite{hara3dcnns} pre-trained on Kinetics \cite{kay2017kinetics} for the visual stream and ResNet-18 pre-trained on VGGSound \cite{Chen20} for the audio stream. We freeze each of them and extract features from each of the last convolutional layers after Global Average Pooling. We extract snippet features using a sliding window strategy with a window size of $T=16$ frames and a stride of $\Delta=8$. The feature dimensions are 2048 for the visual backbone and 512 for the audio backbone. We concatenate these features into a $2560$-dimensional feature vector. The final output is the feature map $F_n \in R^{t_{n} \times 2560}$. We jointly train the contrastive learning module (CLM) and the auxiliary task module (ATM) with an initial learning rate of $0.003$ using the Adam optimizer. We use $l_1=2$ layers with a reduction factor of $c=2$ for the CLM, and $l_2=1$ layers for the ATM. We reduce the learning rate by a factor of $0.9$ if the validation loss does not decrease after an epoch. We choose $\lambda_1=1$ and $\lambda_2=0.2$ as trade-off coefficients in Equation (\ref{equation:loss}), such that both losses are of the same scale. While we train both tasks together, we perform inference through a two-stage prediction by first choosing the top $M=30\%$ scoring snippets from the ATM and then having the CLM rank the pairs formed by these retrieved snippets. \\
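The two-stage prediction can be summarized by the following sketch (a simplification under our assumptions: the ATM scores are laid out as the left-shot snippets followed by the right-shot snippets, and the CLM similarity is a plain dot product of the projected features):
\begin{verbatim}
import numpy as np

def two_stage_rank(atm_scores, feats_left, feats_right, m=0.30):
    # Stage 1: keep the top m fraction of snippets by ATM cut score.
    n_keep = max(1, int(m * len(atm_scores)))
    keep = set(np.argsort(-atm_scores)[:n_keep])
    t_L = len(feats_left)
    # Stage 2: rank the surviving snippet pairs by CLM similarity.
    pairs = [(i, j) for i in range(t_L) for j in range(len(feats_right))
             if i in keep and t_L + j in keep]
    sims = np.array([feats_left[i] @ feats_right[j] for i, j in pairs])
    return [pairs[k] for k in np.argsort(-sims)]
\end{verbatim}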
\noindent\textbf{Baselines.} Our main task is to rank each of the pairs according to their similarity score. Ideally, the pairs that first appear in the ranked list are the ones that are a better fit to form the cut, \ie the right places to cut. Below, we define all the methods that will be compared on this task. \\
\noindent\underline{Random Baseline}: We assign a uniform random score for each pair of clips in the validation subset.\\
\noindent\underline{Audio-visual baseline}: Since the ranking is given by a similarity score, we can use the raw backbone features directly (\ie without passing them through CLM and ATM) to measure their correlations as the ranking score. In our comparisons, we study three alternatives: (i) using only visual features (visual), (ii) using only audio features (audio), and (iii) concatenating audio-visual features (Audio-visual).\\
\noindent\underline{Learning to Cut (Single-stage)}: We use the CLM scores for all the pairs to perform the ranking, \ie ATM is not used here. We refer to this baseline as Ours (\textit{w/o multi-stage}).\\
\noindent\underline{Learning to Cut (Full):} As explained earlier, we first use the individual clip scores from ATM to choose the top $M$ scoring clips, which are then ranked using CLM similarity scores. \\
\noindent\textbf{Inference time.} At run time, features are pre-computed and cached, and the feature similarity (cut ranking) runs on the fly. Extracting features takes about $6$ seconds per minute of video ($240$ fps). Computing similarity can be done efficiently via dot product, whose computation time is negligible compared to the forward pass.
\subsection{Ranking Results}
\noindent\textbf{Comparison with baselines.}
We compare our Learning to Cut method against the different baselines in Table \ref{table:baselines_comparison}. We report Recall $R@\eta K$ with $\eta\in\{1,5,10\}$ and under distances $d\in\{1,2,3\}$ clips (or $d\in\{0.33, 0.67, 1\}$ seconds) from the ground truth cut. We observe how challenging the task is by looking at the Random baseline performance. In the strictest setting ($R@1K$ and $d=1$), this method can only retrieve $0.6\%$ of the positive pairs. Even in the loosest setting ($R@10K$ and $d=3$), it only retrieves $\sim34\%$ of the positive pairs. Since sharp visual changes often occur between shots in continuity editing, the similarity score of cross-shot clips is very low \cite{liu2020multi}, even at the ground truth cuts. Thus, the raw visual features perform poorly as well.
When combining both raw audio-visual features, we observe a different trend. Even though the results are low, they are better than random chance. We attribute this performance discrepancy (visual \textit{vs.} audio-visual raw features) to the fact that most audio tracks are continuous across the given shot pairs, which allows the audio features to capture temporal discontinuities. However, that is not enough. Overall, we observe that raw features are not discriminative enough to offer a good margin in ranking performance with respect to random chance.
In contrast, both variants of our models outperform random chance (and the baselines) by large margins across all metrics. Indeed, the most important finding is the effectiveness of the two-stage inference, which improves the performance of the single-stage model by two or even three times in each of the metrics. This finding shows the importance of combining CLM and ATM, which mimics the process of first finding individual cut moments and then finding pairs that make a smooth transition across shots. We find that there is no significant difference when evaluating only on scenes from movies that were not seen at training.
\input{figures/qualitative}
\input{figures/human_study}
\noindent\textbf{Ablation Study.}
To validate the design choices in our Learning to Cut model, we remove each of its main components, evaluate the resulting variant model, and report all ablation results in Table \ref{table:ablation-complete}.
We observe that modeling both audio and visual information jointly provides an edge in performance. This trend is particularly evident for the stricter ranking metrics (\eg $R@1K$). We attribute these results to the fact that a good cut relies on both visual composition and sound, and a single modality is not enough to produce a good ranking. Finally, we observe that one key component of our model is the auxiliary task. When it is not used to facilitate the learning process (\ie when $\lambda_2=0$), the model's performance drops several points in each metric, validating that the auxiliary task is an essential component of our model. This task aligns with the editing process, whereby the editor first picks a place to cut, and only then picks the best match to glue together \cite{filmgrammar, berthouzoz2012tools}. More detailed ablations can be found in the \textbf{supplementary material.}
\subsection{Human Studies}
We use our method for ranking cut plausibility on raw video footage licensed from Editstock.com. We licensed five different projects including more than eight hours of raw/unedited footage. Unlike previous experiments, we now address the cut ranking problem in the same setting an editor would encounter it. In this case, the method takes as input two untrimmed unedited videos. The task is to produce potential cut locations in each of these videos. While we have shown that our method offers good ranking results in our proxy task, does it work for practical scenarios, where videos are unedited and untrimmed?
To answer this question, we conduct a human study with ten professional video editors. Although we could have invited a general audience to our study, we decided not to do so; it has been shown \cite{smith2008edit} that the general population is unable to perceive the fine-grained details of good cuts. We design a study that asks participants to select the best cut among two options, which can be taken from the professional editors' cuts\footnote{We invited professional editors to create video cuts from five Editstock projects. From about eight hours of footage, the editors curated $120$ cuts.}, generated by the random method, generated by the audio-visual baseline, or generated by our proposed approach. We conduct the user study in a one vs. one manner: \ie we mixed all possible combinations of choices for the participants, and all the options were faced against each other. Thus, every alternative appeared the same number of times. We report the results in Figure \ref{fig:human-study}. Interestingly, our method's cuts are chosen by the participants more often than those from the baselines. However, the editors can clearly identify the professionally curated cuts. Despite the progress made by this work, there is still a long way to go to generate cuts with professional quality. We do believe, though, that our approach and baselines are first steps in this direction.
\noindent\textbf{Qualitative Results.} We show the pair-wise feature correlation between a pair of adjacent shots in Figure \ref{fig:qualitative}. Each entry of the matrix is the similarity between a snippet of the left shot and a snippet of the right shot of the cut. The axes are centered around the cut, such that the center of the matrix contains the similarities of the positive pairs. We observe how our method transforms the features and makes them spike around the cut's region.
\section{Introduction}
\label{sec:introduction}
\input{figures/teaser.tex}
The lack of video editing expertise is a common blocker for aspiring video creators. It takes many training hours and extensive manual work to edit videos that convey engaging stories. Arguably, the most time-consuming and critical task in video editing is to compose the right \emph{cuts}, \ie, decide how (and when) to join two untrimmed videos to create a single clip that respects continuity editing\cite{smith2006attentional}. To the untrained eye, cutting might seem easy; however, experienced editors spend hours selecting the best frames for cutting and joining clips. In light of this complexity, it is pertinent to ask: could artificial systems rank video cuts by how plausible they are?
Before delving into addressing the question above, it is worth defining the task of video cut ranking in detail. As Figure \ref{fig:teaser} illustrates, given two untrimmed input videos, the goal is to find the best moments (in each video) to trigger cuts, which join the pair into a single continuous sequence. A key challenge is to generate videos that make the audience believe actions unfold continuously. This type of cutting is often called continuity editing and aims to evoke an illusion of reality \cite{smith2006attentional, filmgrammar}, even though the source videos could be recorded at different times. Figure \ref{fig:teaser} shows a typical trigger for cuts -- the moment when the speaker changes. In practice, the director could give the shot order via a storyboard or script, and it is the editor's job to realize which patterns make smooth transitions between shots. Our hypothesis is that many of those cut-trigger patterns can be found by carefully analyzing audio-visual cues.
Despite its importance, potential impact, and research challenges, the computer vision community has overlooked the video cut ranking problem. While there has been significant progress in shot boundary detection \cite{pal2015video}, video scene segmentation \cite{rao2020local}, video summarization \cite{money2008video}, and video-story understanding \cite{bain2020condensed, huang2020movienet}, few to no works have focused on pushing the envelope of computational video editing.
The most relevant works at addressing video cut ranking and generation are found in the graphics and HCI communities \cite{berthouzoz2012tools, truong2016quickcut, leake2017computational, wang2019write, fried2019text}. These attempts take on a different perspective and focus on human-computer experiences for faster video editing. Yet, they still demand extensive work from an editor in the loop. We hypothesize that the cut ranking task has been neglected due to the lack of data, \ie, raw footage, and its corresponding cuts done by an editor.
In this paper, we introduce the first learning-based method to rank the plausibility of video cuts. It is important to note that we do not have access to the raw footage for each shot since it is difficult, \ie, requires expertise and extensive manual work, to gather a dataset of raw videos with the corresponding edits and cuts. Therefore, our key idea is to leverage \textit{edited video content} to learn the audio-visual patterns that commonly trigger cuts.
While this data is not the same as the real-world input for generating and ranking cuts, we can still model the audio-visual data before and after the cut, thus modeling what good cuts look like. Additionally, this type of data can be found abundantly, which enables the development of data-driven models. Our results show that a model learned to solve this \textit{proxy task} can be leveraged to practical use cases.
Our approach begins with a pair of consecutive shots that form a cut (similar to the bottom row in Figure \ref{fig:teaser}). We look for a representation that discriminates between the good cuts (actual cuts found on edited video) against all alternative options (random alignments). To achieve this goal, we first collect a large-scale set of professionally edited movies, from which we extract shot boundaries to create more than 260K cuts and shot pairs. Using this new dataset, we train an audio-visual model, \emph{Learning-to-Cut}, which learns to rank cuts via contrastive learning. Our experimental results show that, while challenging, it is possible to build data-driven models to rank the plausibility of video cuts, improving upon random chance and other standard audio-visual baselines.
\noindent\textbf{Contributions.} To the best of our knowledge, we are the first to address video cut ranking from a learning-based perspective. To this end, our work brings two contributions.
\noindent\textbf{(1)} We propose Learning-to-Cut, an audio-visual approach based on contrastive learning. Our method learns cross-shot patterns that trigger cuts in edited videos (Section \ref{sec:method}).\\
\noindent\textbf{(2)} We introduce a benchmark and performance metrics for video cut ranking, where we show the effectiveness of Learning to Cut. Moreover, we showcase that expert editors are more likely to prefer the cuts generated by our method over randomly ranked cuts and those from other baselines (Section \ref{sec:experiments}).
\section{Learning to Cut}\label{sec:method}
\input{sections/benchmark}
\subsection{Proxy task}
We propose a proxy task closely related to the actual editing process of cutting (and merging) two shots. We define a \textit{snippet} as a short time range within a shot. A snippet is fully contained within a shot and hence is much shorter.
Our proxy task consists of learning which pair of snippets (from the set of all snippets from two consecutive shots) is the best to stitch together.
Since we know that the editing process is mainly done in sequential order (stitching the left shot to the right shot), we can try to solve a local (two-neighboring-shots) proxy task by recurrently asking the following question: which snippets in the two shots are the best to stitch together? The best places to cut in the two shots are given by these two retrieved snippets.
Our proxy task resembles the actual editing process, where the editor needs to pick a place to cut based on what they want to show in the next shot. However, it is not exactly the same, since we do not have the parts of the shots that are cut out during the editing process. Obtaining such data is challenging as it requires video editing expertise to select the appropriate cuts, and at the same time, it is hard to find large-scale footage without edits. Although not all possible clips are available during training, we argue (and show with experiments) that our proxy task, paired with a vast number of edited videos, provides enough supervisory signal to learn the audio-visual cues and patterns that trigger the cuts.
We can address the proxy task in several ways; however, there is one that best fits the task editors solve in continuity editing \cite{smith2006attentional}: \emph{maximize the smoothness of the transition or minimize the discontinuity in the semantics of the scene across a cut}. Furthermore, given the task's artistic nature, there is not necessarily a single correct answer. There might be more than one place in the videos where a cut can be placed and several pairs that would make semantic sense. Considering the previous observations, we decide to model this task as a ranking problem. We also build a small temporal window near the actual shot boundaries, and consider all clips within this window as appropriate cutting points. As a result, we do not aim at retrieving only the clips at the very end/beginning of adjacent shots, but also consider as valid some highly similar clips with significant temporal overlap. We approach this task by using Contrastive Learning. In fact, we aim to find a space where the representations of clip pairs that belong together are close to each other and far from all others.
\noindent\textbf{Technical description.} Given a ground-truth cut formed by a pair of shots $(S_L, S_R)$, we aim to find the best pair of snippets to stitch together. We define $S_L$ as the set $\{a_i \rvert i \in \mathbb{N}, 0 \leq i \leq t_L \}$, and $S_R$ as $\{b_j \rvert j \in \mathbb{N}, 0 \leq j \leq t_R \}$, with $t_L$ and $t_R$ being the number of snippets contained in the shots, respectively. Our proxy task consists of ranking the set of all pairs of snippets $\{(a_i, b_j) \rvert i,j \in \mathbb{N}, 0 \leq i \leq t_L, 0 \leq j \leq t_R\}$. We know that the pair that should appear on top of the ranking is the one formed by the temporally adjacent snippets $(a_{t_L}, b_0)$. The rest of the pairs
are negative samples that we want to avoid retrieving.
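The resulting label assignment can be made explicit with a minimal Python sketch (using the index ranges defined above; the only positive pair is the temporally adjacent one):
\begin{verbatim}
def label_pairs(t_L, t_R):
    # All snippet pairs (i, j) between the two shots; the only
    # positive is the temporally adjacent pair (t_L, 0).
    return {(i, j): (i == t_L and j == 0)
            for i in range(t_L + 1) for j in range(t_R + 1)}
\end{verbatim}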
\noindent\textbf{Potential shortcuts.} One may think that this artificial task can be easily solved by any machine learning model by learning shortcuts and apparent biases, for instance, always ranking first the pair composed of the last clip of the left-hand-side shot and the first of the right-hand-side shot. Additionally, if we include the clip in which the actual shot transition occurs in our training data, we would end up learning a shot detector. We avoid these two degenerate cases by not including any information about the temporal position of a clip within the shot and also ignoring all the clips that contain the transition from one shot to another.
\subsection{Learning to Cut Model}
\input{figures/model}
Our cut ranking model consists of three main modules: a Feature Extraction step, a Contrastive Learning module, and an Auxiliary Task module. We first extract audio-visual features from candidate snippets. Then, the Contrastive Learning Module refines these features such that pairs of clips forming good cuts yield higher similarity score than others. The last module consists of an auxiliary task that guides the model by predicting whether individual snippets are good places to cut or not. Our pipeline is illustrated in Figure \ref{fig:pipeline}. Below, we describe each module in detail.
\subsubsection{Feature Extraction} This task is inherently multimodal, since editors use fine-grained visual and audio signals from the shots to determine the cuts. Hence, we use both audio and visual backbones. It has been shown that effective visual features for video analysis come from short-term spatio-temporal architectures \cite{ji20123d, taylor2010convolutional, tran2015learning, carreira2017quo, wang2018temporal, feichtenhofer2019slowfast}. Similarly, effective audio architectures represent each time instance with a temporal window of fixed size and its corresponding spectrogram \cite{takahashi2016deep, hershey2017cnn, Chen20}. We extract features for each snippet of $T$ frames with a temporal stride of $\Delta$. We input each one of the shots $S_{\{LR\}}$ with $N_{\{LR\}}$ frames to extract $t_{{\{LR\}}}=\left\lfloor\frac{N_{\{LR\}}-T}{\Delta}\right\rfloor+1$ snippets, and extract shot-level features $F_{\{L,R\}}$ by concatenating each of its snippet-level feature maps of dimension $H$. Formally, $F_L=[\alpha_1;...;\alpha_{t_L}]$ and $F_R=[\beta_1;...;\beta_{t_R}]$ where $\alpha,\beta \in \mathbb{R}^{1 \times H}$.
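As an illustration, the sliding-window extraction can be sketched as follows (\texttt{backbone} is a placeholder for either pre-trained network followed by global average pooling, not an actual API; the defaults match the implementation details, $T=16$ and $\Delta=8$):
\begin{verbatim}
import numpy as np

def snippet_features(frames, backbone, T=16, delta=8):
    # t = floor((N - T) / delta) + 1 windows of T frames (assumes N >= T).
    N = len(frames)
    t = (N - T) // delta + 1
    return np.stack([backbone(frames[k * delta : k * delta + T])
                     for k in range(t)])
\end{verbatim}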
\subsubsection{Contrastive Learning Module}
This module consists of a Multi-Layer Perceptron (MLP) with $l_1$ layers interleaved with a ReLU activation. It receives the shot-level features $F_{\{LR\}}$. Each layer reduces the size of the previous layer by a factor of $c$, similar to the projection head used in \cite{chen2020simple}. It produces a feature $F^{\prime}_{\{LR\}}$ where each of its elements belongs to $\mathbb{R}^{1 \times \frac{H}{c^{l_1}}}$. We compute each of the possible snippet pairs between $F^{\prime}_L$ and $F^{\prime}_R$, and we produce the annotations as follows: the positive sample is the pair $(\alpha_{t_L}, \beta_0)$, while the negative samples are the set $N(\alpha,\beta) = \{(\alpha_{i^\prime}, \beta_{j^\prime}) \rvert i^\prime,j^\prime \in \mathbb{N}, 0 \leq i^\prime \leq t_L-1, 1 \leq j^\prime \leq t_R \}$. We aim at bringing the features into a different space, in which snippet pairs that are good to stitch together are close to each other and far from the rest. To enable this, we use a Noise-Contrastive-Estimation (NCE) loss \cite{gutmann2010noise} defined as:
\begin{equation}
NCE(S)= -\log \left( \frac{e^{ \alpha_{t_L}^{\top} \beta_{0} }}
{e^{ \alpha_{t_L}^{\top} \beta_{0} } + \sum\limits_{N(\alpha,\beta)} e^{ \alpha_{i^\prime}^{\top} \beta_{j^\prime} }} \right)
\end{equation}
This loss encourages that the positive pair (numerator) attains the highest correlation score among all pairs (denominator). We expect it to align well with the ranking task we are interested in. By maximizing the similarity between the two adjacent snippets from each pair of shots, we expect the model to learn smooth transitions between shots, thus mimicking what editors do in continuity editing \cite{smith2006attentional}.
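A direct PyTorch transcription of this objective reads as follows (a minimal sketch; \texttt{f\_left} and \texttt{f\_right} stand for the projected snippet features $F^{\prime}_L$ and $F^{\prime}_R$):
\begin{verbatim}
import torch
import torch.nn.functional as F

def nce_loss(f_left, f_right):
    # f_left: (n_L, D) projected left-shot snippets, f_right: (n_R, D).
    sim = f_left @ f_right.t()        # all pairwise dot products
    pos = sim[-1, 0]                  # positive pair (alpha_{t_L}, beta_0)
    neg = sim[:-1, 1:].reshape(-1)    # the negative set N(alpha, beta)
    logits = torch.cat([pos.reshape(1), neg]).unsqueeze(0)
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
\end{verbatim}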
\subsubsection{Auxiliary Task Module}
It consists of an MLP with $l_2$ FC-layers to produce a single value per snippet, which is then passed through a sigmoid function.
This module receives the feature map $F^{\prime}_{\{LR\}}$ and produces a vector $v_{\{LR\}} \in \mathbb{R}^{(t_L+t_R) \times 1}$, one score per snippet. The higher the score, the higher the probability $p_{y_i}$ for a snippet to be a good place to cut. Remember that our auxiliary task aims to answer whether or not each snippet is a good place to cut. We believe that this is a reasonable task to guide the learning process, since there are some pre-established rules in video editing that drive the cutting process \cite{filmgrammar, berthouzoz2012tools}. And so, we hypothesize that individual snippets can contain some information about these cues and can guide the model to cut more precisely. We propose to learn this auxiliary task by classifying each of the snippets as a good place to cut or not. In this case, the only positive snippet is $\alpha_{t_L}$. We iterate over each snippet in $S_L$ to calculate the Binary Cross-Entropy (BCE) Loss per shot:
\begin{equation}
\small
BCE(S)=- \frac{1}{t} \sum_{i=1}^{t} \left[ y_i \log(p_{y_i}) + (1-y_i) \log(1-p_{y_i}) \right]
\end{equation}
\noindent Therefore, our model optimizes both tasks jointly, with the losses combined as follows:
\begin{align}
Loss(S) = \lambda_1 \cdot NCE(S) + \lambda_2 \cdot BCE(S) \label{equation:loss}
\end{align}
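Continuing the previous sketch, the joint objective of Equation (\ref{equation:loss}) can be assembled as below (\texttt{atm\_logits\_left} denotes the pre-sigmoid ATM scores of the left-shot snippets; using the numerically stable \texttt{with\_logits} variant is our choice for the illustration):
\begin{verbatim}
def total_loss(f_left, f_right, atm_logits_left,
               lambda1=1.0, lambda2=0.2):
    # Only the last left-shot snippet alpha_{t_L} is a good cut point.
    targets = torch.zeros_like(atm_logits_left)
    targets[-1] = 1.0
    bce = F.binary_cross_entropy_with_logits(atm_logits_left, targets)
    return lambda1 * nce_loss(f_left, f_right) + lambda2 * bce
\end{verbatim}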
\section{Related Work}
\label{sec:related}
\noindent\textbf{Computational Video Editing.} Earlier research in computational video editing focuses on designing new experiences that speed up the video creation process \cite{berthouzoz2012tools, truong2016quickcut, leake2017computational, wang2019write, fried2019text}. For instance, Leake \etal propose a system for the automatic editing of dialogue-driven scenes \cite{leake2017computational}. This system takes as input raw footage, including multiple takes of the same scene, and transcribed dialogue, to create a sequence that satisfies a user-specified film idiom \cite{filmgrammar}. Although this method offers a modern video editing experience, it still relies on a rule-based mechanism to generate the cuts. Another line of work focuses on designing transcript-based video editing systems. To cite an example, QuickCut \cite{truong2016quickcut} and Write-A-Video \cite{wang2019write} develop user interfaces that allow aspiring editors to create video montages using text narrations as input. Although significant progress has been made to create better video editing experiences, it is still an open question of whether learning-based approaches can advance computational video editing. Our work provides a step towards that direction by introducing a benchmark for ranking the plausibility of video cuts and an audio-visual method that learns how to approximate them, without fixed rules and trained from an unconstrained set of professionally edited videos.
\noindent\textbf{Long-term video analysis.} Many efforts have been made to develop deep learning models that analyze and understand long-term information \cite{yu2020relationship, gupta2018linear} and model relationships in long video formats \cite{bain2020condensed,huang2020movienet, gu2018ava}. Recently, Bain~\etal \cite{bain2020condensed} collected a dataset that contains movie scenes along with their captions, characters, subtitles, and face-tracks. Similarly, Huang~\etal \cite{huang2020movienet} created a dataset that comprises complete movies along with their trailers, pictures, synopses, transcripts, subtitles, and general metadata. Based on these datasets, the research community has developed solutions for new movie-related tasks, such as shot-type classification \cite{rao2020unified}, movie-scene segmentation \cite{rao2020local}, character re-identification and recognition \cite{huang2018person, xia2020online, huang2020caption, brown2020playing}, trailer and synopsis analysis \cite{Xiong_2019_ICCV, huang2018trailers}, and visual-question answering in movies \cite{jasani2019we, garcia2020knowledge}. Unlike previous works in long-term video analysis, our work centers around the video creation process. Another line of work somewhat closer to ours is video summarization \cite{tvsum,vs_gong2014diverse,vs_otani2019rethinking}. These approaches are typically given \textit{one long video stream}, and their task is to select and shorten shots while keeping the semantic meaning of the composed video. Although video summarization techniques compose shot sequences, they tend to disregard critical aspects of video editing such as maintaining spatial-temporal continuity across shots. To the best of our knowledge, our work is the first benchmark studying the plausibility of cuts. As there are limited previous works aligning with our task, we define a set of initial baselines and a novel approach for evaluating the ranking quality of video cuts.
\noindent\textbf{Cross-Modal Representations.} The joint exploration of multiple modalities is a hot topic in the research community. Several works have explored self-supervised learning to learn semantic embeddings using cross-modality supervision by using text, audio, and visual streams \cite{alwassel2020selfsupervised, alayrac2020self, patrick2020multi, arandjelovic2017look, aytar2017see, korbar2018cooperative, owens2016ambient, owens2018audio, tian2019contrastive, miech2020end, kalantidis2020hard}. Moreover, recent works have used already pre-trained embeddings from different modalities on video retrieval tasks \cite{gabeur2020multi, liu2019use, yu2020video, escorcia2019temporal, lei2020tvr, li2020hero, amrani2020noise}, active speaker detection \cite{chung2016out, chung2018voxceleb2, alcazar2020active}, and sign spotting \cite{momeni2020watch}. Our method adopts multimodal streams and backbones to model audio-visual signals and learn the patterns that commonly trigger a cut.
\noindent\textbf{Contrastive Losses for Video.} Rapid progress has been attained in video understanding research by leveraging contrastive losses \cite{gutmann2010noise, miech2020end}. The InfoNCE loss \cite{gutmann2010noise}, particularly, has gained tremendous popularity with the advent of self-supervised learning \cite{tschannen2020self, gordon2020watching, qian2020spatiotemporal, wang2020selfsupervised, alayrac2020self, morgadoNIPS20, han2019video, han2020self}. The idea behind this loss is simple; given a query representation, its goal is to maximize the similarity with a corresponding positive sample, while minimizing the similarity with respect to a bag of negatives. In the video domain, the InfoNCE loss and its variants (\eg, MIL-NCE \cite{miech2020end}) have been used for learning general video representations \cite{tschannen2020self, qian2020spatiotemporal, gordon2020watching}, joint text-video embeddings \cite{miech2020end, alayrac2020self}, or audio-visual models \cite{morgadoNIPS20, alayrac2020self, afouras2020selfsupervised}. Following the widespread adoption in the video community, our work leverages the InfoNCE loss \cite{gutmann2010noise} to learn a representation that encodes how good video cuts look (and sound) compared to all the other cut alternatives for a pair of shots.
\section*{Supplementary Material}
\section{Learning to Cut by Users -- Toy Example}
Please visit: \textit{\textbf{\url{https://alejandropardo.net/publication/learning-to-cut/}}} for code and complete supplementary material.
To illustrate a toy example of Learning to Cut, we include a folder called \href{./spot-the-real-cut.zip}{Spot-the-Real-Cut}. We encourage the reader to open the \textit{html} file contained in the supplementary material folder and try to choose the most suitable cuts. You will have to wait around 15 seconds for the link to load all the videos. There are 30 examples, each of them showing a pair of cuts. One of them breaks continuity, while the other is an actual cut made by a professional editor. The task is simple: \textbf{choose the cut that is real}. To play a video, click on top of it. To indicate which cut you consider real, click on the button ``This cut is real'' below the clip. At the end of the study, you will see what percentage of the cuts you chose were actually real. The purpose of this toy example is to illustrate that there is a signal a model could leverage to learn how to cut; this is precisely the signal Learning to Cut aims to exploit.
\section{Dataset Statistics}
Additional statistics are shown below. Figure \ref{fig:shots-genre} shows the distribution of the number of shots per genre across the dataset. Figure \ref{fig:shots-duration} shows the shot-duration distribution. Most of the shots in the movies are shorter than 2 seconds. This challenging property comes from the fast-paced edits of action scenes, where the shot duration is typically short. \\
\vspace{-10pt}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{supplementary/images/shot_genre_bar.pdf}
\caption{\textbf{Distribution of shots per genre.}}
\label{fig:shots-genre}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{supplementary/images/shots_durations_hist.pdf}
\caption{\textbf{Shots' duration distribution.}}
\label{fig:shots-duration}
\end{figure}
\section{Generalization to unedited set.}
\noindent\textbf{Quantitative results.} In Table \ref{table:generalization}, we report the quantitative results of our method on the unedited set. We see that the real task is very challenging, as the numbers drop significantly compared to the proxy task. However, we observe the same trend in the results: our method outperforms the baselines. These results show how challenging the task is in a real-world scenario. Thus, future methods should first put effort into solving the proxy task, as progress there will be reflected in the real task.
\input{supplementary/tables/generalization}
\section{Additional Ablation Study}
\noindent\textbf{Ablation Study.} In Table \ref{table:ablation-complete}, we report the complete numbers for the ablation study. Compared with Table 2 of the main manuscript, we show here $R@10K$ for every $d$. \\
\input{supplementary/tables/ablation_complete}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.9\linewidth]{supplementary/images/top_m_ablation.pdf}
\caption{\textbf{Influence of top M\% per metric.}}
\label{fig:topm}
\end{figure*}
\noindent\textbf{Impact of M.} Figure \ref{fig:topm} shows the impact of the top $M\%$ parameter in the two-stage prediction. We can see different peaks according to the metric that we are looking at; however, there is a clear pattern: the top $20\%$ favors $R@1K$, the top $30\%$ favors $R@5K$, and the top $40\%$ favors $R@10K$, no matter the distance. For the unedited set we chose the top $30\%$, since we were using the top-5 predictions. \\
\vspace{-10pt}
\section{Qualitative Results}
In Figure \ref{fig:qualitative-pos} and Figure \ref{fig:qualitative-neg} we show the temporal feature similarity of two candidate videos to be stitched together. The columns of the matrix represent snippets of video one, and the rows represent snippets of video two. We show the similarity between these two sets of snippets before (Raw) and after (Model) Learning to Cut. In this case all the cuts are at $(15, 15)$; we show in red the region of the ground-truth with distance $d=1$, in cyan the edge of the region for ground-truth with $d=2$, and in white the edge of the region for ground-truth with $d=3$.
On the one hand, we observe in Figure \ref{fig:qualitative-pos} that our model tends to localize the similarities around the actual cutting place and the ground-truth regions. In Figure \ref{fig:pos-ex1}, the most salient region overlaps the ground-truth region after our model is applied; before, the similarity spikes were located in a completely different place. Thus, our model was able to transform the video features such that the similarities spike around the cutting points. We observe similar behavior in Figure \ref{fig:pos-ex2} and Figure \ref{fig:pos-ex3}; however, the center of the spike region is a couple of snippets off the ground-truth region.
On the other hand, we can observe in Figure \ref{fig:qualitative-neg} some examples in which the spike of the similarities does not match the ground-truth region. Interestingly, Figure \ref{fig:neg-ex1} and Figure \ref{fig:neg-ex2} show that the cutting point for one of the videos was predicted correctly (the spike happens along the 15th row); yet, the model was not able to find a cutting point for the second video that would match the ground-truth. This does not necessarily mean that the cutting point found by the model is incorrect; it means that it did not match the cut made by the professional. Figure \ref{fig:neg-ex3} shows a spike in a region that does not correspond to the ground truth. In this case, the model was not able to move the features away from the initial state, since the features were already spiking in a similar region before the model was applied (Raw column).
Regardless of the ground-truth region, Figure \ref{fig:qualitative-pos} and Figure \ref{fig:qualitative-neg} show that our model helps to sharpen feature similarities in specific regions across a pair of videos. The similarity spike is no longer as blurred as it was in the original features (Raw).
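The similarity matrices themselves are simple to compute; the sketch below shows one way to obtain them (the backbone features and the trained model are assumed to be given; all names are illustrative):
\begin{verbatim}
import numpy as np

def similarity_matrix(feats_1, feats_2):
    # feats_1: (T1, D) per-snippet features of video 1 (columns).
    # feats_2: (T2, D) per-snippet features of video 2 (rows).
    f1 = feats_1 / np.linalg.norm(feats_1, axis=1, keepdims=True)
    f2 = feats_2 / np.linalg.norm(feats_2, axis=1, keepdims=True)
    return f2 @ f1.T  # (T2, T1) matrix of cosine similarities

# "Raw" uses the backbone features directly; "Model" applies
# Learning to Cut first, e.g. similarity_matrix(model(x1), model(x2)).
\end{verbatim}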
\textbf{Additional qualitative results with the actual clip rankings can be found in the attached files and slides. In the example files (\href{./Qualitative.zip}{Qualitative.zip}) the videos are named after their ranking; the real cut is also included.}
\begin{figure}[h!]
\centering
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{supplementary/images/matrix_visualization_positive_46.pdf}
\caption{Correct prediction - Example 1}\label{fig:pos-ex1}
\end{subfigure}
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{supplementary/images/matrix_visualization_positive_26.pdf}
\caption{Correct prediction - Example 2}\label{fig:pos-ex2}
\end{subfigure}
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{supplementary/images/matrix_visualization_positive_92.pdf}
\caption{Correct prediction - Example 3}\label{fig:pos-ex3}
\end{subfigure}
\caption{\textbf{Feature similarities before and after Learning to Cut.} Examples where the feature similarities were moved correctly to the ground-truth region. Please zoom in for a better view.}
\label{fig:qualitative-pos}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{supplementary/images/matrix_visualization_negative_65.pdf}
\caption{Incorrect prediction - Example 1}\label{fig:neg-ex1}
\end{subfigure}
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{supplementary/images/matrix_visualization_negative_57.pdf}
\caption{Incorrect prediction - Example 2}\label{fig:neg-ex2}
\end{subfigure}
\begin{subfigure}{\linewidth}
\includegraphics[width=\linewidth]{supplementary/images/matrix_visualization_negative_30.pdf}
\caption{Incorrect prediction - Example 3}\label{fig:neg-ex3}
\end{subfigure}
\caption{\textbf{Feature similarities before and after Learning to Cut.} Examples where the feature similarities were moved out of the ground-truth region. Please zoom in for a better view.}
\label{fig:qualitative-neg}
\end{figure}
\end{document}
\labell{main}
In this paper we study Hamiltonian dynamics in a neighborhood of a
nowhere coisotropic submanifold. As a starting point, we establish the
following displacement principle: a closed nowhere coisotropic
submanifold of a symplectic manifold is (infinitesimally) displaceable
provided that there are no topological obstructions to
displaceability. (In general, a compact subset $M$ of a symplectic
manifold is said to be displaceable if it can be disjoined from itself
by a Hamiltonian diffeomorphism $\varphi_H$, i.e., $\varphi_H(M) \cap
M = \emptyset$. Thus, by definition, a displaceable set is
topologically displaceable.) Then we develop a new version of the
theory of action selectors and use it, combined with the displacement
principle, to prove two results in Hamiltonian dynamics. Namely, we
prove the Conley conjecture (for non-negative Hamiltonians) and the
almost existence theorem in a neighborhood of a closed nowhere
coisotropic submanifold, under certain natural assumptions on the
ambient manifold. The Conley conjecture, \cite{conley, SZ}, concerns
time-dependent Hamiltonian systems and asserts the existence of
infinitely many periodic points for a Hamiltonian diffeomorphism. The
almost existence theorem due to Hofer and Zehnder and to Struwe,
\cite{hz:ae, ho-ze:capacity, St}, asserts that almost all regular
level sets of a proper autonomous Hamiltonian on ${\mathbb R}^{2n}$ carry
periodic orbits. A similar result has also been proved for ${\mathbb C}{\mathbb P}^n$,
symplectic vector bundles, subcritical Stein manifolds, and certain
other symplectic manifolds; see, e.g., \cite{FS, gg3, hv, ke3, lu2,
sc} and also the survey \cite{gi:weinstein} and references therein.
Here, similarly to
\cite{cgk,FS,gi:survey,gg3,gk1,gk2,ke1,ke3,lu,mac1,pol2,felix}, we
focus on these theorems for Hamiltonians supported in a neighborhood
of a closed submanifold.
We also introduce the notion of a \emph{wide} symplectic manifold,
which means that the manifold is open and admits an arbitrarily large
compactly supported Hamiltonian without contractible fast periodic
orbits. Immediate examples of such manifolds include ${\mathbb R}^{2n}$,
cotangent bundles, Stein manifolds and twisted cotangent bundles. The
essence of this property lies in the fact that on a wide manifold the
top degree Floer homology is non-zero for any non-negative compactly
supported Hamiltonian which is not identically zero. This allows us to
construct an action selector for geometrically bounded wide manifolds
and, thus, prove a version of the Conley conjecture.
From now on $W$ will always stand for a wide manifold while $P$ will
denote a general symplectic manifold.
Let us now state the main results of the paper.
\subsection{Displacement Principle}
Let $M$ be a closed connected submanifold of a symplectic manifold
$(P^{2n},\omega)$. We say that $M$ is \emph{nowhere coisotropic} if
$T_xM$ is not a coisotropic subspace of $T_xP$ for any $x \in M$. For
example, a symplectic submanifold is nowhere coisotropic; a
submanifold of middle dimension is nowhere coisotropic if and only if
$\omega|_M$ does not vanish at any point, i.e., $T_xM$ is not a
Lagrangian subspace for any $x \in M $.
Our first result is the following principle which extends or
complements the works of Laudenbach and Sikorav, \cite{LauSi}, and of
Polterovich, \cite{pol1}, and plays a crucial role in the proofs of
the Conley conjecture and the almost existence theorem near nowhere
coisotropic submanifolds.
\begin{Theorem}[Displacement Principle]
\labell{thm:displacement}
Let $M$ be a closed, connected submanifold of a symplectic manifold
$(P,\omega)$. Assume that $M$ is nowhere coisotropic and the normal
bundle to $M$ admits a non-vanishing section. Then $M$ is
infinitesimally displaceable, i.e., there exists a non-vanishing
Hamiltonian vector field which is nowhere tangent to $M$.
\end{Theorem}
When $M$ is of middle dimension, Theorem \ref{thm:displacement} was
proved by Laudenbach and Sikorav, \cite{LauSi}, under a less
restrictive assumption that $M$ is non-Lagrangian and (under the extra
assumption that $TM$ has a Lagrangian complement) by Polterovich,
\cite{pol1}. It has also been known for a long time that $M$ is always
displaceable when $\dim M < n$. (Note that such a submanifold is
automatically nowhere coisotropic, for a coisotropic subspace of
$T_xP$ contains its symplectic orthogonal complement and hence has
dimension at least $n$.) Thus, Theorem
\ref{thm:displacement} can be thought of as an extension of the
displacement principle to submanifolds of dimension greater than $n$.
In contrast with the middle-dimensional case, the condition that $M$
is nowhere coisotropic cannot be replaced by the requirement that $M$
is (somewhere) non-coisotropic. For a non-coisotropic submanifold can
contain a Lagrangian submanifold and, in this case, $M$ is not
displaceable due to the Lagrangian intersection property. However,
this assumption can possibly be relaxed. We will examine
generalizations of the displacement principle elsewhere.
Theorem \ref{thm:displacement} is proved in Section
\ref{pf:displacement}. Let us now proceed with the applications.
\subsection{The Conley Conjecture}
In its original form, the Conley conjecture asserts that every
Hamiltonian diffeomorphism on ${\mathbb T} ^{2n}$ has infinitely many simple
periodic points, \cite{conley, SZ}. Here ``simple'' means that the
orbits sought are not iterated. A similar conjecture makes sense and
is interesting for other symplectic manifolds too. (Observe that the
example of an irrational rotation on $S^2$ demonstrates that the
conjecture, as stated, fails for manifolds admitting spheres. However,
the statement can be suitably modified to be meaningful and
non-trivial for such manifolds as well; cf. \cite{FrHa}.)
Recently, Ginzburg, \cite{gi:conley}, proved the Conley conjecture for
all closed symplectically aspherical manifolds. Prior to Ginzburg's
work, some particular cases of this conjecture were established. When
the manifold is closed, the conjecture was proved by Salamon and
Zehnder, \cite{SZ}, under the additional assumption that the fixed
points are weakly non-degenerate, and Hingston, \cite{hingston},
established the conjecture for ${\mathbb T} ^{2n}$ without the non-degeneracy
assumption.
Other partial results, not necessarily for closed manifolds, were
obtained under assumptions on the size of the Hamiltonian. Namely, the
conjecture is also known to hold when the support of the
(time-dependent) Hamiltonian is displaceable. For instance, in ${\mathbb R}
^{2n}$ every compactly supported Hamiltonian has displaceable support,
in which case the conjecture has been proved by Viterbo, \cite{vi2},
and by Hofer and Zehnder, \cite{ho-ze:capacity,hz:book}. Admittedly
this is a very restrictive assumption, especially for closed
manifolds. Yet, this is essentially the only situation in which the
conjecture is known to hold for open manifolds. Under the assumption
that the support is displaceable, the conjecture was proved by
Schwarz, \cite{sc}, for closed symplectically aspherical manifolds and
by Frauenfelder and Schlenk, \cite{FS}, for manifolds that are convex
at infinity. (Recall that a manifold is called convex at infinity if
it is isomorphic at infinity to the symplectization of a compact
contact manifold.) The question is still open for many ``natural''
symplectic manifolds such as cotangent bundles.
We establish this conjecture for Hamiltonians supported in a
neighborhood of a nowhere coisotropic submanifold under certain
assumptions on the ambient manifold. More precisely, we prove
\begin{Theorem}
\labell{thm:cc}
Assume that $M$ is a closed, nowhere coisotropic submanifold of a
symplectically aspherical manifold $(W,\omega)$. Let $H$ be a non-zero
time-dependent Hamiltonian, supported in a sufficiently small
neighborhood of $M$.
\begin{itemize}
\item If $W$ is geometrically bounded and wide and $H \geq 0$, then
$H$ has simple (contractible) periodic orbits with positive action and
arbitrarily large period, provided that the time-one map $\varphi_H$
has isolated fixed points with positive action.
\item If $W$ is closed, then $H$ has simple (contractible) periodic
orbits with non-zero action and arbitrarily large period, provided
that the time-one map $\varphi_H$ has isolated fixed points with
non-zero action.
\end{itemize}
\end{Theorem}
We say that $W$ is \emph{wide} if the manifold admits arbitrarily
large, compactly supported, autonomous Hamiltonians $F$ such that all
non-trivial contractible periodic orbits of $F$ have periods greater
than one; see Section \ref{subsec-wide} for a discussion of this
concept. This condition is satisfied for all examples of open
geometrically bounded manifolds known to us.
When $W$ is closed or convex, this theorem can be proved using the
displacement principle, Theorem \ref{thm:displacement}, and the action
selectors introduced in \cite{sc} or \cite{FS} respectively.
Moreover, in these cases it suffices to assume that $W$ is
weakly-exact rather than symplectically aspherical. However,
the constructions of the action selectors for closed or convex manifolds
do not extend to open manifolds which are merely geometrically
bounded. Hence, the proof of Theorem \ref{thm:cc} for geometrically
bounded manifolds requires developing a new version of the theory of
action selectors; see Section \ref{subsec-actionselector}.
An immediate consequence of Theorem \ref{thm:cc} is the following
corollary.
\begin{Corollary}
\labell{cor:cc}
Let $(W,\omega)$ be geometrically bounded, symplectically aspherical
and wide, and let $M$ be a closed and nowhere coisotropic submanifold
of $W$. Then, for every non-zero time-dependent Hamiltonian $H \geq
0$ supported in a sufficiently small neighborhood of $M$, the time-one
map $\varphi_H$ has infinitely many simple periodic points
corresponding to contractible periodic orbits of $H$ with positive
action. (A similar statement for closed manifolds has been proved in
\cite{sc}.)
\end{Corollary}
\begin{Remark}
If $W$ is assumed to be convex, the condition that $H \geq 0$ can be
removed both in Theorem \ref{thm:cc} and in Corollary \ref{cor:cc},
provided that the orbits are required only to have non-zero, rather
than positive, action; see \cite{FS}. Moreover, in Theorem
\ref{thm:cc}, the assumption that $W$ is geometrically bounded and
wide can be replaced by the assumption that $W$ admits an exhaustion
$W_1 \subset W_2 \subset \ldots$ by open sets such that each $W_k$ is
symplectomorphic to an open subset of a geometrically bounded and wide
manifold, perhaps depending on $k$.
\end{Remark}
\subsection{The Almost Existence Theorem}
Combining the displacement principle and the results from
\cite{felix}, we prove the following almost existence theorem for
periodic orbits in a neighborhood of a closed nowhere coisotropic
submanifold; see Section \ref{pf:ae}.
\begin{Theorem}
\labell{thm:ae}
Assume that $M$ is a closed, nowhere coisotropic submanifold of a
symplectic manifold $(P,\omega)$ which is geometrically bounded and
strongly semi-positive. Then the almost existence theorem holds near
$M$: there exists a sufficiently small neighborhood $U$ of $M$ in $P$
such that for any smooth proper Hamiltonian $H \colon U \to {\mathbb R}$, the
level sets $H^{-1} (c)$ carry contractible-in-$P$ periodic orbits of
the Hamiltonian flow of $H$ for almost all $c$ in the range of $H$.
\end{Theorem}
Here $(P^{2n}, \omega)$ is said to be strongly semi-positive if
$c_1(A)\geq0$ for every $A \in \pi_2(P)$ such that $\omega(A)>0$ and
$c_1(A) \geq 2-n$. The condition that $P$ is geometrically bounded
(e.g. convex) is a way to have sufficient control of the geometry of
$P$ at infinity; see Section \ref{prelim} for the definition and
examples.
\begin{Remark}
The displacement results of \cite{felix} rely heavily on \cite{LalMc1,
McDSl}. In Section \ref{proof} we will give a simple proof of this
theorem for symplectically aspherical manifolds $(P,\omega)$ which are
either closed or geometrically bounded and wide.
\end{Remark}
As a particular case, Theorem \ref{thm:ae} implies the almost
existence of periodic orbits in a neighborhood of a closed symplectic
submanifold, provided that $P$ is strongly semi-positive and
geometrically bounded. Note in this connection that almost existence
in a neighborhood of a symplectic submanifold satisfying certain
additional hypotheses was proved by Kerman, \cite{ke3}. On the other
hand, almost existence in a neighborhood of a non-Lagrangian
submanifold of middle dimension was established by Schlenk,
\cite{felix}. Kerman's theorem holds when the ambient manifold $P$ is
symplectically aspherical while Schlenk's requires $P$ to be only
strongly semi-positive. Furthermore, G. Lu, \cite{lu2}, has proved the
almost existence theorem for neighborhoods of symplectic submanifolds
in any symplectic manifold by showing that the contractible
Hofer-Zehnder capacity of such a neighborhood is finite using a deep
and difficult result due to Liu and Tian, \cite{LT2}.
The almost existence theorem is closely related to the existence
problem for periodic orbits of a charged particle in a magnetic field,
also known as the magnetic problem, and to the generalized
Moser-Weinstein theorem; see
\cite{gi:survey,gg3,gk1,gk2,ke1,Mo:orbits}. To be more precise, let
$M$ be a closed Riemannian manifold and let $\eta$ be a closed
two-form (magnetic field) on $M$. Equip $T^*M$ with the twisted
symplectic structure $\omega=\omega_0+\pi^*\eta$, where $\omega_0$ is
the standard symplectic form on $T^*M$ and $\pi\colon T^*M\to M$ is
the natural projection. It is known that $(T^*M,\,\omega)$, a
\emph{twisted cotangent bundle}, is geometrically bounded for any
$\eta$; see \cite{al,cgk,lu}. Finally, let $H$ be the standard kinetic
energy Hamiltonian on $T^*M$. The Hamiltonian flow of $H$ on $W$,
called a \emph{twisted geodesic flow}, is of interest because it
describes, for example, the motion of a charge on $M$ in the magnetic
field $\eta$.
In this setting, as a particular case of Theorem \ref{thm:ae}, we
obtain the existence of contractible twisted geodesics on almost all
low energy levels, provided that the magnetic field is nowhere zero --
a result complementing numerous other theorems on the existence of
twisted geodesics; see, e.g.,
\cite{cgk,gi:survey,gg3,gk1,gk2,ke1,ke3,lu,mac1,pol2,felix}. Note that
the assumption that $\eta$ is nowhere zero ensures that $M$ is nowhere
coisotropic.
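Indeed, along the zero section $M\subset T^*M$ the form $\omega$
restricts to $\eta$, and a middle-dimensional subspace is coisotropic
if and only if it is Lagrangian. Hence $T_xM$ is coisotropic exactly
at the points $x$ where $\eta_x=0$.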
\subsection{Organization of the paper}
In Section \ref{prelim} we set the conventions and notation and recall
relevant results concerning filtered Floer homology and homotopy
maps. The goal of Section \ref{action} is two-fold. We first introduce
and discuss the notion of a wide symplectic manifold. Then we
construct an action selector for wide manifolds which are
geometrically bounded and symplectically aspherical. Here we also
state and prove the properties of this selector. In Section
\ref{proof} we prove the main results of this paper.
\subsection*{Acknowledgments.} The author is deeply grateful to Viktor
Ginzburg for many useful discussions and his numerous valuable remarks
and suggestions. The author also thanks Yasha Eliashberg, Sam Lisi,
Dusa McDuff, Leonid Polterovich, Tony Rieser, and Aleksey Zinger for
helpful discussions and their suggestions. Most of this work was
completed during the author's stay at the Stony Brook Mathematics
Department; she would like to thank the department for its warm
hospitality.
\section{Preliminaries}
\labell{prelim}
In this section we set up our conventions and notation and recall the
definition of Floer homology. Here we also define the filtered Floer
homology and examine its dependence on the homotopy of Hamiltonians to
the extent needed in this paper.
We will assume that the manifold $P$ is open, for our proofs will
exclusively focus on the case of open manifolds.
\subsection{Floer homology}
Let $(P,\omega)$ be an open symplectic manifold. In order for the
Floer homology to be defined, we need to impose some additional
conditions on the manifold. To this end, we will always assume that
$P$ is \emph{geometrically bounded}. This assumption gives us
sufficient control of the geometry of $P$ at infinity which is
necessary in the case of open manifolds. Examples of such manifolds
include symplectic manifolds that are convex at infinity (e.g. compact
symplectic manifolds, ${\mathbb R}^{2n}$, cotangent bundles) as well as twisted
cotangent bundles, which, in general, fail to be convex at
infinity. For the sake of completeness we recall the definition.
\begin{Definition}
\labell{def:gb}
A symplectic manifold $(P,\omega)$ is said to be \emph{geometrically
bounded} if $P$ admits an almost complex structure $J$ and a complete
Riemannian metric $g$ such that
\begin{itemize}
\item $J$ is uniformly $\omega$-tame, i.e.,
for some positive constants $c_1$ and $c_2$ we have
$$
\omega(X, JX)\geq c_1\| X\|^2 \quad\text{and}\quad
|\omega(X, Y)|\leq c_2\| X\|\,\|Y\|
$$
for all tangent vectors $X$ and $Y$ to $P$;
\item the sectional curvature of $(P,g)$ is bounded from above and
the injectivity radius of $(P,g)$ is bounded away from zero.
\end{itemize}
\end{Definition}
We refer the reader to \cite{al, cgk, lu} for a discussion of
geometrically bounded manifolds. In particular, though we have not
yet recalled the definition of Floer homology, let us note that the
compactness theorem for the moduli spaces of Floer's trajectories for
open geometrically bounded manifolds holds; this is a consequence of
Sikorav's version of the Gromov compactness theorem; see \cite{al}.
Furthermore, assume that $(P,\omega)$ is \emph{symplectically
aspherical}, i.e.,
$$
\omega|_{\pi_2(P)}=0 \quad\text{and}\quad c_1(TP)|_{\pi_2(P)}=0.
$$
We will indicate when $P$ need not be symplectically aspherical, as is
the case in Theorem \ref{thm:ae}.
Among manifolds which are symplectically aspherical and geometrically
bounded are ${\mathbb R}^{2n}$, symplectic tori, cotangent bundles and twisted
cotangent bundles when the form on the base is weakly exact. Under
these hypotheses, the filtered ${\mathbb Z}$-graded Floer homology of a
compactly supported Hamiltonian on $P$ is defined as follows.
Recall that for a time-dependent Hamiltonian $H\colon S^1 \times P\to
{\mathbb R}$, the action functional on the space of smooth contractible loops
$\Lambda P$ is defined as
\begin{equation}
\labell{eq:action}
A_H(x) = - \int_{D^2} \bar{x}^* \omega +
\int_{S^1}H(t,x)\,dt,
\end{equation}
where $x\colon S^1 \to P$ is a contractible loop and $\bar{x}\colon
D^2\to P$ is a map of a disk, bounded by $x$, and $S^1 = {\mathbb R} /
{\mathbb Z}$. Since $P$ is symplectically aspherical $A_H(x)$ is
well-defined. The Hamiltonian vector field $X_H$ is defined by the
equation $ i_{X_H} \omega = -dH $. Let $\varphi^t_H$ denote the
time-dependent flow of $X_H$ and, in particular,
$\varphi_H=\varphi^1_H$ denote the time-one flow.
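For instance, with this sign convention, on $({\mathbb R}^{2n},\,
\omega_0=\sum_i dp_i\wedge dq_i)$ the equation $i_{X_H}\omega_0=-dH$
gives
$$
X_H=\sum_i \frac{\partial H}{\partial p_i}\,\frac{\partial}{\partial q_i}
-\frac{\partial H}{\partial q_i}\,\frac{\partial}{\partial p_i},
$$
i.e., the classical Hamilton equations $\dot{q}_i=\partial H/\partial
p_i$ and $\dot{p}_i=-\partial H/\partial q_i$.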
By the least action principle, the critical points of $A_H$ are
exactly contractible one-periodic orbits of the Hamiltonian flow of
$H$. We denote by ${\mathcal P}_H$ the collection of such orbits and let
${\mathcal P}_H^{(a,\,b)} \subset {\mathcal P}_H$ stand for the collection of orbits
with action in the interval $(a,\,b)$. The action spectrum ${\mathcal S}(H)$ of
$H$ is the set of critical values of $A_H$. In other words, ${\mathcal S}(H)=\{
A_H(x)\mid x \in {\mathcal P}_H\}$. This is a zero measure set; see, e.g.,
\cite{hz:book,sc}.
Throughout this paper we will assume that $H$ is compactly supported
and set $\operatorname{supp} H=\bigcup_{t\in S^1}\operatorname{supp} H_t$. In this case, ${\mathcal S}(H)$
is closed and hence nowhere dense.
Let $J=J_t$ be a time-dependent almost complex structure on $P$. A Floer
anti-gradient trajectory $u$ is a map $u\colon {\mathbb R}\times S^1\to P$
satisfying the equation
\begin{equation}
\label{eq:floer}
\frac{\partial u}{\partial s}+ J_t(u) \frac{\partial u}{\partial t}=-\nabla H_t(u).
\end{equation}
Here the gradient is taken with respect to the time-dependent
Riemannian metric $\omega(\cdot,J_t\cdot)$. Denote by $u(s)$ the
curve $u(s,\cdot)\in \Lambda P$.
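Note that when $H\equiv 0$, equation \eqref{eq:floer} reduces to the
Cauchy--Riemann equation $\partial u/\partial s+J_t(u)\,\partial
u/\partial t=0$; thus Floer trajectories can be viewed as Hamiltonian
perturbations of $J$-holomorphic cylinders.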
The energy of $u$ is defined as
\begin{equation}
\label{eq:energy}
E(u)=\int_{-\infty}^\infty \left\|\frac{\partial u}{\partial s}\right\|_{L^2(S^1)}^2\,ds
=\int_{-\infty}^\infty \int_{S^1}\left\|\frac{\partial u}{\partial t}-J\nabla H (u)
\right\|^2 \,dt\,ds.
\end{equation}
We say that $u$ is asymptotic to $x^\pm\in {\mathcal P}_H$ as $s\to\pm \infty$,
or connecting $x^-$ and $x^+$, if $\lim_{s\to\pm\infty} u(s)=x^\pm$ in
$\Lambda P$. In this case
$$
A_H(x^-)-A_H(x^+)=E(u).
$$
We denote the space of Floer trajectories connecting $x^-$ and $x^+$,
with the topology of uniform $C^\infty$-convergence on compact sets,
by ${\mathcal M}_H(x^-,x^+,J)$. This space carries a natural ${\mathbb R}$-action
$(\tau\cdot u)(t,s)=u(t,s+\tau)$ and we denote by
$\hat{{\mathcal M}}_H(x^-,x^+,J)$ the quotient ${\mathcal M}_H(x^-,x^+,J)/{\mathbb R}$.
Recall that $x \in {\mathcal P}_H$ is said to be non-degenerate if
$d\varphi_H\colon T_{x(0)}P\to T_{x(0)}P$ does not have one as an
eigenvalue. In this case, the so-called Conley--Zehnder index
$\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)\in{\mathbb Z}$ is defined; see, e.g., \cite{sa,SZ}. Here we
normalize $\operatorname{\mu_{\scriptscriptstyle{CZ}}}$ so that $\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)=n$ when $x$ is a non-degenerate
maximum of an autonomous Hamiltonian with a small Hessian. Assume that
all periodic orbits with actions in the interval
$[A_H(x^+),\,A_H(x^-)]$, including $x^\pm$, are non-degenerate. Then,
for a generic $J$, suitable transversality conditions are satisfied
and ${\mathcal M}_H(x^-,x^+,J)$ is a smooth manifold of dimension
$\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x^-)-\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x^+)$; see, e.g., \cite{fh,SZ} and references
therein.
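With this normalization, a non-degenerate critical point $x$ of a
$C^2$-small autonomous Hamiltonian, viewed as a constant one-periodic
orbit, has $\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)=\operatorname{ind}(x)-n$, where $\operatorname{ind}(x)$ is the Morse
index. This accounts for the degree shift by $n$ between Floer and
Morse homology used in Section \ref{action}.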
\subsection{Filtered Floer homology}
In this section we briefly outline the construction of filtered Floer
homology following closely \cite{gi:coiso}; see also
\cite{bps,cgk,fh,gg3,sc}.
Throughout the discussion of the filtered Floer homology
$\operatorname{HF}^{(a,\,b)}(P)$, we assume that all intervals are in the positive
range of actions, i.e., $a>0$ for any interval $(a,\,b)$. This
condition is clearly necessary, for $H$ is assumed to be compactly
supported and thus it always has trivial degenerate periodic orbits if
$P$ is open.
\subsubsection{Filtered Floer homology: definitions}
Let $H$ be a compactly supported Hamiltonian on an open symplectic
manifold $P$ which is symplectically aspherical and geometrically
bounded. Assume that all contractible one-periodic orbits of $H$ with
positive action are non-degenerate. This is a generic condition.
Consider an interval $(a,\,b)$, with $a>0$, such that $a$ and $b$ are
outside ${\mathcal S}(H)$. Then the collection ${\mathcal P}^{(a,\,b)}_H$ is finite.
Assume furthermore that $J$ is regular, i.e., the necessary
transversality conditions are satisfied for moduli spaces of Floer
trajectories connecting orbits from ${\mathcal P}^{(a,\,b)}_H$. This is again a
generic property as can be readily seen by applying the argument from
\cite{fh,FHS,SZ}.
Let $\operatorname{CF}_k^{(a,\,b)}(H)$ be the vector space over ${\mathbb Z}_2$ generated by
$x\in {\mathcal P}^{(a,\,b)}_H$ with $\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)=k$. Define
$$
\partial \colon \operatorname{CF}_k^{(a,\,b)}(H)\to \operatorname{CF}_{k-1}^{(a,\,b)}(H)
$$
by
$$
\partial x=\sum_y \#\big(\hat{{\mathcal M}}_H(x,y,J)\big)\cdot y,
$$
where the summation extends over all $y\in {\mathcal P}^{(a,\,b)}_H$ with
$\operatorname{\mu_{\scriptscriptstyle{CZ}}}(y)=\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)-1$ and $\#\big(\hat{{\mathcal M}}_H(x,y,J)\big)$ is the
number of points, modulo 2, in $\hat{{\mathcal M}}_H(x,y,J)$. (Recall that in
this case $\hat{{\mathcal M}}_H(x,y,J)$ is a finite set by the compactness
theorem.) Then, as is well known, $\partial^2=0$. The resulting complex
$\operatorname{CF}^{(a,\,b)}(H)$ is the filtered Floer complex for $(a,\,b)$. Its
homology $\operatorname{HF}^{(a,\,b)}(H)$ is called the filtered Floer homology.
This is essentially the standard definition of Floer homology with
critical points having action outside $(a,\,b)$ being ignored. In
general, $\operatorname{HF}^{(a,\,b)}(H)$ depends on the Hamiltonian $H$, but not on
$J$; see, e.g., \cite{gi:coiso}.
\begin{Remark}
It is clear that the same construction, with suitable modifications,
works for closed manifolds. In this case,
$\operatorname{HF}(H)=\operatorname{HF}^{(-\infty,\infty)}(H)$ is the ordinary Floer homology.
Moreover, $\operatorname{HF}_*(H)=H_{*+n}(P;\,{\mathbb Z}_2)$.
\end{Remark}
Let $a<b<c$. Assume that all of the above assumptions are satisfied for
all three intervals $(a,\,c)$, $(a,\,b)$, and $(b,\,c)$. Then clearly
$\operatorname{CF}^{(a,\,b)}(H)$ is a subcomplex of $\operatorname{CF}^{(a,\,c)}(H)$, and
$\operatorname{CF}^{(b,\,c)}(H)$ is naturally isomorphic to the quotient complex
$\operatorname{CF}^{(a,\,c)}(H)/\operatorname{CF}^{(a,\,b)}(H)$. As a result, we have the long exact
sequence
\begin{equation}
\label{eq:seq}
\ldots\to \operatorname{HF}^{(a,\,b)}(H)\to\operatorname{HF}^{(a,\,c)}(H)\to \operatorname{HF}^{(b,\,c)}(H)\to\ldots.
\end{equation}
In the construction of the action selector for open manifolds given in
Section~\ref{sel-defn}, we will work with filtered Floer homology for
the interval $(0,\,b)$ even though $0$ is necessarily a critical value
of the action functional. This homology is defined as
\begin{equation}
\label{eq:zero-infty}
\operatorname{HF}^{(0,\,b)}(H)=\varprojlim_{\epsilon\to 0+}\operatorname{HF}^{(\epsilon,\,b)}(H),
\end{equation}
where the inverse limit is taken with respect to the quotient maps and
$\epsilon\to 0+$ in the complement of ${\mathcal S}(H)$. (It is clear that this
definition is equivalent to the original one when $P$ is closed and
$0$ is not in ${\mathcal S}(H)$.)
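Note also that when $0$ is an isolated point of ${\mathcal S}(H)\cup\{0\}$, the
inverse system stabilizes: for $0<\epsilon'<\epsilon$ smaller than
every positive point of ${\mathcal S}(H)$, the quotient map
$\operatorname{HF}^{(\epsilon',\,b)}(H)\to \operatorname{HF}^{(\epsilon,\,b)}(H)$ is an
isomorphism, and hence $\operatorname{HF}^{(0,\,b)}(H)=\operatorname{HF}^{(\epsilon,\,b)}(H)$ for
all sufficiently small $\epsilon>0$.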
\subsubsection{Homotopy}
\label{sec:homotopy}
Let us now examine the dependence of $\operatorname{HF}^{(a,\,b)}(H)$ on $H$.
Consider a homotopy $H^s$ of Hamiltonians from $H^0$ to $H^1$. By
definition, this is a family of Hamiltonians parametrized by $s\in
{\mathbb R}$, and such that $H^s\equiv H^0$ when $s$ is large negative and
$H^s\equiv H^1$ when $s$ is large positive. Furthermore, let $J^s$ be
a family of $t$-dependent almost complex structures such that again
$J^s\equiv J^0$ when $s\ll 0$ and $J^s\equiv J^1$ when $s\gg 0$. (We
will most of the time suppress $J^s$ in the notation and refer to
$H^s$ as the homotopy.)
For $x\in {\mathcal P}^{(a_0,\,b_0)}_{H^0}$ and $y\in
{\mathcal P}^{(a_1,\,b_1)}_{H^1}$ denote by ${\mathcal M}_{H^s}(x,y,J^s)$ the space of
solutions of \eqref{eq:floer} with $H=H^s$ and $J=J^s$.
The regularity property takes the following form for open manifolds:
$(H^s,J^s)$ is said to be regular if the transversality requirements
are met along all homotopy trajectories connecting periodic orbits
with positive action. This is a generic property as can be seen by
arguing as in \cite{fh,FHS,SZ}. (When $P$ is closed, regularity of a
homotopy $(H^s,J^s)$ is understood in the standard sense, i.e., the
standard transversality requirements are met by the homotopy
$(H^s,J^s)$; see \cite{fh,FHS,SZ}.)
When the transversality conditions are satisfied, ${\mathcal M}_{H^s}(x,y,J^s)$
is a smooth manifold of dimension $\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)- \operatorname{\mu_{\scriptscriptstyle{CZ}}}(y)$. In particular,
${\mathcal M}_{H^s}(x,y,J^s)$ is a finite set when $\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)=\operatorname{\mu_{\scriptscriptstyle{CZ}}}(y)$. Define
the homotopy map
$$
\Psi_{H^0H^1}\colon \operatorname{CF}^{(a_0,\,b_0)}(H^0)\to \operatorname{CF}^{(a_1,\,b_1)}(H^1)
$$
by
$$
\Psi_{H^0H^1}( x)=\sum_y \#\big({\mathcal M}_{H^s}(x,y,J^s)\big)\cdot y.
$$
Here the summation is over all orbits $y\in {\mathcal P}^{(a_1,\,b_1)}_{H^1}$ with
$\operatorname{\mu_{\scriptscriptstyle{CZ}}}(y)=\operatorname{\mu_{\scriptscriptstyle{CZ}}}(x)$ and $\#\big({\mathcal M}_{H^s}(x,y,J^s)\big)$
is the number of points, modulo 2, in this moduli space.
The map $\Psi_{H^0H^1}$ depends on the entire homotopy $(H^s,J^s)$ and
in general is \emph{not} a map of complexes. However, $\Psi_{H^0H^1}$
becomes a homomorphism of complexes when $(a_0,\,b_0) = (a_1,\,b_1)$
and the homotopy is monotone decreasing, i.e., $\partial_s H^s\leq 0$
point-wise. Moreover, the induced map in homology is then independent
of the homotopy, within the class of decreasing homotopies, and
commutes with the maps from the exact sequence \eqref{eq:seq}. (The
reader is referred to, e.g., \cite{bps,cgk,fh,sa,SZ,sc,vi:functors},
for the proofs of these facts for both open and closed manifolds.)
There are other instances when the same is true. In particular, this
is the case when the location of the intervals $(a_0,\,b_0)$ and
$(a_1,\,b_1)$ is compatible with the growth of the Hamiltonians in the
homotopy, as streamlined by the following theorem from
\cite{gi:coiso}; see also \cite{sc}. (This theorem holds for both open
and closed manifolds.)
\begin{Theorem}[\cite{gi:coiso}]
\labell{thm:homotopy}
Let $H^s$ be a homotopy such that
$$
\int_{-\infty}^{\infty}\int_{S^1}\max_{P} \partial_s H^s_t \,dt\,ds \leq C,
$$
where $C \in {\mathbb R}$. Then
$$
\Psi_{H^0H^1}\colon \operatorname{CF}^{(a,\,b)}(H^0)\to \operatorname{CF}^{(a+C,\,b+C)}(H^1)
$$
is a homomorphism of complexes for any interval $(a,\,b)$. Hence,
$\Psi_{H^0H^1}$ induces a map in Floer homology, also denoted by
$\Psi_{H^0H^1}$. This map sends the exact sequence \eqref{eq:seq} for
$H^0$ and $(a,\,b,\,c)$ to the exact sequence \eqref{eq:seq} for $H^1$
and $(a+C,\,b+C,\,c+C)$, i.e., on the level of homology
$\Psi_{H^0H^1}$ commutes with all maps in the long exact sequence
\eqref{eq:seq}.
\end{Theorem}
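For instance, for a monotone decreasing homotopy we have $\partial_s
H^s\leq 0$ point-wise, and hence one can take $C=0$ in Theorem
\ref{thm:homotopy}; this recovers the map $\operatorname{HF}^{(a,\,b)}(H^0)\to
\operatorname{HF}^{(a,\,b)}(H^1)$ induced by a monotone decreasing homotopy,
discussed above.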
\section{Action selectors in wide manifolds}
\labell{action}
The proof of Theorem \ref{thm:cc} relies on the theory of action
selectors, which is one of the standard approaches to the problem,
\cite{FS, hz:book, sc, vi2}. This theory is well developed for
weakly-exact closed or convex manifolds; see, e.g., \cite{FGS,FS,sc} and
also \cite{oh3} for the theory in a more general setting. The main
ingredient in these constructions of an action selector is an
identification between Floer homology spaces for different
Hamiltonians. For example, for symplectically aspherical closed or
convex manifolds, the Floer homology of any Hamiltonian is isomorphic
to the homology of the manifold. Then an action selector can be
associated to any homology class. However, these constructions do not
generalize to the case of open manifolds which are merely
geometrically bounded. The main obstacle is that the Floer homology
for Hamiltonians on such manifolds is no longer an invariant, i.e., it
depends on the Hamiltonian. For the homology can be defined only for
action intervals that do not contain zero and hence, in contrast with
the case of closed or convex manifolds, there is, in general, no
relation between the Floer homology for different
Hamiltonians. Nevertheless, to construct an action selector it
suffices to have a class in the Floer homology of a Hamiltonian, which
is in some sense canonical, even though the homology group it belongs
to depends on the Hamiltonian. In this section we construct an action
selector for geometrically bounded wide manifolds. The homology class
for which the selector is defined is essentially the fundamental class
of the manifold modulo infinity. Then this selector, for non-negative
Hamiltonians, has properties similar to those of action selectors
constructed in \cite{FS,sc}.
\subsection{Wide manifolds}
\labell{subsec-wide}
Let us now introduce and discuss the notion of a \emph{wide}
symplectic manifold.
\begin{Definition}
\labell{def:wide1}
A symplectic manifold $(W,\omega)$ is said to be wide if it is open
and if for every constant $C \geq 0$ and for every compact subset $X
\subset W$, there exists a function $K \colon W \to [0,\infty)$ such
that
\begin{itemize}
\item[(W1)] $K$ is compactly supported;
\item[(W2)] $K|_X \geq C$;
\item[(W3)] the Hamiltonian flow of $K$ has no non-trivial contractible
fast periodic orbits.
\end{itemize}
\end{Definition}
We call a non-trivial orbit fast if its period is less than or equal
to one. Otherwise an orbit will be called slow.
\begin{Remark}
The condition (W2) can be replaced by the condition $\mathrm {(W2')}
\colon \max \,K \equiv K|_X \equiv C$. For, as explained in
\cite{gg3, hz:book}, one can cut off a function meeting the
requirement (W2) without creating new fast periodic orbits and produce
another function satisfying the condition $\mathrm{(W2')}$.
\end{Remark}
Examples of wide manifolds include cotangent bundles and Stein
manifolds, or more generally symplectic manifolds that are convex at
infinity. More importantly, twisted cotangent bundles, which are
geometrically bounded but, in general, fail to be convex at infinity,
are wide. Non-compact covering spaces of compact manifolds are also
examples of wide manifolds, \cite{gi:private}. Furthermore, the
product of two wide manifolds is wide and so is the product of a
compact and a wide manifold. On the other hand, a manifold with finite
contractible Hofer-Zehnder capacity cannot be wide, as easily follows
from Definition \ref{def:wide1}; see, e.g., Section \ref{pf:ae} for
the definition of this capacity.
\begin{Remark}
Wideness, although indispensable for our construction, appears to be a
rather mild assumption: the author is not aware of any example of an
open geometrically bounded manifold which is not wide. It is not yet
clear, however, whether every open geometrically bounded manifold is
wide. The relation between geometrical boundedness and the concept of
wideness is interesting but quite complicated, and we will address
this question elsewhere.
\end{Remark}
The property of wideness can also be viewed in terms of the restricted
relative Hofer-Zehnder capacity introduced in \cite{gg3} or as the
property of a manifold to admit a slow proper function. Namely, we
have
\begin{Proposition}
\labell{prop:equivalent}
Let $(W,\,\omega)$ be a symplectic manifold. Then the following
statements are equivalent.
\begin{itemize}
\item[(i)] $(W,\,\omega)$ is wide.
\item[(ii)] $\operatorname{\bar{c}_{HZ}}(W,X)= \infty$ for every compact subset $X \subset W$,
where $\operatorname{\bar{c}_{HZ}}(W,X)$ is the restricted (or contractible) relative
Hofer-Zehnder capacity introduced in \cite{gg3}.
\item[(iii)] $(W,\,\omega)$ admits a non-negative proper function
without non-trivial contractible fast periodic orbits.
\end{itemize}
\end{Proposition}
We omit the proof of Proposition \ref{prop:equivalent}, for it is
essentially straightforward and we will mainly be using Definition
\ref{def:wide1}.
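To indicate the flavor of the argument, let us sketch, for instance,
the implication (iii) $\Rightarrow$ (i). Let $f$ be a function as in
(iii), and let $C\geq 0$ and a compact subset $X\subset W$ be
given. Set $K=\phi\circ f$, where $\phi\colon [0,\infty)\to [0,C]$ is
a smooth function with $\phi\equiv C$ on $[0,\max_X f]$, $\phi\equiv
0$ outside a sufficiently large interval, and $|\phi'|\leq 1$. Then
$K$ is compactly supported, for $f$ is proper, and $K|_X\equiv
C$. Furthermore, $X_K=\phi'(f)\,X_f$, and thus every non-trivial
periodic orbit of $K$ is a reparametrized periodic orbit of $f$ whose
period is greater than or equal to the period of the underlying orbit
of $f$. Hence $K$ has no non-trivial contractible fast periodic
orbits.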
\begin{Remark}
One could also replace the condition $\mathrm{(W3)}$ in Definition
\ref{def:wide1} by the one that the Hamiltonian flow of $K$ has no
non-trivial fast periodic orbits, hence dropping the requirement that
the orbits be contractible. However, this would be a more restrictive
requirement on $W$; for instance, a cylinder of finite area, which is
wide according to Definition \ref{def:wide1}, would then fail to be
wide.
\end{Remark}
Let us now turn to constructing an action selector for geometrically
bounded wide manifolds.
\subsection{An action selector for wide manifolds}
\labell{subsec-actionselector}
We assume that $(W^{2n},\omega)$ is a symplectically aspherical,
geometrically bounded and wide manifold. We will construct an action
selector for non-negative compactly supported Hamiltonians on~$W$.
\subsubsection{The definition}
\labell{sel-defn}
Let $H \colon S^1 \times W \to {\mathbb R}$ be a compactly supported
Hamiltonian such that $H \geq 0$. It is easy to see that, since $W$
is wide, there exists a smooth compactly supported function $F \colon
W \to [0,\,\infty)$ without non-trivial contractible fast periodic
orbits and such that $F \ge H$ point-wise. (This is essentially the
definition of a wide manifold.) Without loss of generality we may
assume that $\operatorname{supp} (F)$ is a smooth connected manifold with boundary
and that $F$ is a Morse function with finitely many critical points
when restricted to the interior of the support. (These requirements
are generic.) From now on we will call such functions \emph{wide} and
we will reserve the notation $F$ for them.
Under these assumptions,
\begin{equation}
\label{eq:floer=morse}
\operatorname{HF}_*^{(0,\,\infty)}(F) \cong \operatorname{HM}_{*+n}^{(0,\,\infty)}(F),
\end{equation}
where $\operatorname{HM}_*^{(0,\,\infty)}(F)$ and $\operatorname{HF}_*^{(0,\,\infty)}(F)$ denote,
respectively, the (filtered) Morse and Floer homology of $F$ for the
interval $(0, \, \infty)$.
For the sake of completeness, let us explain first why this
isomorphism holds. Intuitively, this is obvious. For $F$ has no
non-trivial contractible one-periodic orbits by definition and, hence,
Floer and Morse complexes are the same as vector spaces. Note,
however, that the two differentials may be different. We,
nevertheless, claim that the resulting homology groups are isomorphic.
To this end, let $\epsilon>0$ be small enough so that $\epsilon F$ is
$C^2$-small. Recall that for $C^2$-small functions, when $W$ is
symplectically aspherical, Morse and Floer homology groups are isomorphic. Then
$$
\operatorname{HF}_*^{(0,\,\infty)}(\epsilon F) \cong \operatorname{HM}_{*+n}^{(0,\,\infty)}(\epsilon F).
$$
Consider a monotone decreasing homotopy $F_s$, for $s \in [0,\,1]$,
from $F$ to $\epsilon F$. Choose $\delta >0$ smaller than every
positive critical value of $\epsilon F$. As a result, all positive
action values of $F_s$, for every $s \in [0,\,1]$, are greater than
$\delta$. By the homotopy
invariance of Floer homology, we then have the isomorphism
$$
\operatorname{HF}_*^{(\delta,\,\infty)}(F) \cong \operatorname{HF}_*^{(\delta,\,\infty)}(F_s)
\text{ for all }s\in [0,\,1].
$$
Also note that $\operatorname{HF}_*^{(0,\,\infty)}(F_s) =
\operatorname{HF}_*^{(\delta,\,\infty)}(F_s)$ for all $s \in [0,\,1]$ since zero is an
isolated critical value and $\delta$ is smaller than any critical
value of $F_s$.
Finally, we have
$$
\operatorname{HF}_*^{(0,\,\infty)}(F)
\cong \operatorname{HF}_*^{(0,\,\infty)}(\epsilon F)
\cong \operatorname{HM}_{*+n}^{(0,\,\infty)}(\epsilon F)
\cong \operatorname{HM}_{*+n}^{(0,\,\infty)}(F).
$$
\begin{Remark}
Observe that $\operatorname{HM}_{*+n}^{(0,\,\infty)}(F)$ is just the ordinary
homology of $\operatorname{supp} (F)$ modulo its boundary, i.e.,
$\operatorname{HM}_{*+n}^{(0,\,\infty)}(F)=H_{*+n}(\operatorname{supp} F, \partial(\operatorname{supp} F);
{\mathbb Z}_2)$.
\end{Remark}
Let us now turn to the definition of the action selector. Since $\operatorname{supp}
(F)$ is connected, equation \eqref{eq:floer=morse}, in particular,
implies that
$$
\operatorname{HF}_n^{(0,\,\infty)}(F) \cong \operatorname{HM}_{2n}^{(0,\,\infty)}(F) \cong {{\mathbb Z}}_2.
$$
Denote by $[\max _F]$ the generator of $\operatorname{HF}_n^{(0,\,\infty)}(F) \cong
{\mathbb Z}_2$, which can be thought of as the fundamental class. Consider the
image of this class under the monotonicity map
$$
\Psi_{\scriptscriptstyle{FH}} \colon
\operatorname{HF}_n^{(0,\,\infty)}(F) \to \operatorname{HF}_n^{(0,\,\infty)}(H),
$$
induced by a monotone decreasing homotopy from $F$ to $H$. (This map
is independent of the choice of the homotopy, as discussed in Section
\ref{prelim}.) Using the homotopy invariance of the Floer homology it
is not hard to show that $\Psi_{\scriptscriptstyle{FH}}[\max F]$ is
independent of the choice of $F$; denote this class by $[\max]$.
\begin{Definition}
\labell{defn:sel}
The action selector is defined as
$$\sigma(H)=\inf \, \{a > 0 \mid j_a^H [\max] =0 \},$$
where
$$
j_a^H \colon \operatorname{HF}_n^{(0,\,\infty)}(H) \to \operatorname{HF}_n^{(a,\,\infty)}(H),
$$
is induced by the quotient map.
\end{Definition}
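For example, when $H=F$ is itself a wide function, the monotone
homotopy can be taken to be constant, so that $[\max]=[\max_F]$, and,
by (AS5) below, $\sigma(F)=\max F$.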
\begin{Remark}
Note that this definition makes sense for arbitrary Hamiltonians,
i.e., we need not assume that $H$ is non-negative. However, it is not
clear whether $\sigma$ would then have the properties (AS0)-(AS7)
listed below, which are crucial for applications. Also, when $W$ is
convex, this selector is equal to the one constructed in \cite{FS}.
\end{Remark}
\subsubsection{Properties of the action selector}
\labell{sel-prop}
Let ${\mathcal S}(H)$ denote the action spectrum of $H$ and let $\operatorname{Ham}_c^+(W)$
denote the cone in the group of compactly supported Hamiltonian
diffeomorphisms of $W$ generated by non-negative Hamiltonians. The
action selector $\sigma \colon \operatorname{Ham}_c^+(W) \to [0,\,\infty)$
constructed above has the following properties:
\begin{itemize}
\item[(AS0)] $\sigma(H)$ is a spectral value: $\sigma(H) \in {\mathcal S}(H)$;
\item[(AS1)] $\sigma(H)$ is monotone:
if $0 \le H \leq K$ then $0 \le \sigma(H) \leq \sigma(K)$;
\item[(AS2)] $\sigma(H)$ is non-degenerate:
$0< \sigma(H) < \infty$ if $H \not\equiv 0$;
\item[(AS3)] $\sigma(H)$ is continuous in $H$ with respect to the Hofer
norm, and, in particular, it is $C^0$-continuous;
\item[(AS4)] $\sigma(H) \leq \int_0^1 \max {H_t} \,dt = \| H \|$,
where $||\, \cdot \,||$ denotes the Hofer norm;
\item[(AS5)] if $H$ is autonomous and has no non-trivial contractible
fast periodic orbits, then $\sigma(H) = \max H $;
\item[(AS6)] if $H$ and $K$ generate the same time-one flow
and ${\varphi}_H^t$ and ${\varphi}_K^t$ are homotopic (with fixed end
points) in $\operatorname{Ham}_c ^+(W)$, then $\sigma(H)=\sigma(K)$;
\item[(AS7)] $\sigma(H) \leq e_V$ for any $H$ supported in $V\subset
W$, where $e_V$ denotes the displacement energy of $V$. Thus,
$\sigma(H)$ is \emph{a priori} bounded from above by $e_V<\infty$ if
$V$ is displaceable.
\end{itemize}
\subsubsection{Proofs of the properties of the selector}
\labell{proof-sel-prop}
We will prove these properties in varying degree of detail; some
proofs are very similar to those for the selectors constructed in
\cite{FS,sc}, whereas some proofs require modifications. We will
mainly focus on the new parts and refer to the literature for the
standard ones.
\emph{(AS0)} First recall that ${\mathcal S}(H)$ is compact and nowhere dense.
Assume the contrary: $\sigma(H) \not\in {\mathcal S}(H)$. Then, since ${\mathcal S}(H)$
is compact, for a small enough number $\delta >0$ we have
$[\sigma(H)-\delta, \, \sigma(H)+\delta] \cap {\mathcal S}(H) =
\emptyset$. Hence, by the definition of the selector, there exists a
number $a$ with $ \sigma(H) < a \le \sigma(H)+\delta $ such that
${j_a^H}[\max ] =0$.
Let $c$ be such that $\sigma(H)-\delta \le c < \sigma(H) $. Then we
have the isomorphism
$$\operatorname{HF}_n^{(a,\,\infty)}(H) \cong \operatorname{HF}_n^{(c,\,\infty)}(H),$$ since there
are no critical values of $A_H$ in $[c,\,a]$, and the diagram
\begin{displaymath}
\xymatrix
{
&
\operatorname{HF}_n^{(0,\,\infty)}(H) \ar[dl]_{0=j_a^H} \ar[dr]^{j_c^H} & \\
\operatorname{HF}_n^{(a,\,\infty)}(H) \ar[rr]^{\cong} &
& \operatorname{HF}_n^{(c,\,\infty)}(H)
}
\end{displaymath}
commutes. Thus ${j_c^H}[\max ] =0$. This contradicts the definition of
$ \sigma(H)$. We conclude that $\sigma(H) \in {\mathcal S}(H)$.
\emph{(AS1)} Let $K \ge H \ge 0$ and let $F$ be a wide function such
that $F \ge K \ge H$. Monotonicity is a consequence of the
commutativity of the following diagram:
\begin{displaymath}
\xymatrix
{
\operatorname{HF}_n^{(0,\,\infty)}(F) \ar[r]^{\Psi_{FH}} \ar[dr]_{\Psi_{FK}}
& \ar@{}[dr] \operatorname{HF}_n^{(0,\,\infty)}(H) \ar[r]^{j_a^H}
& \operatorname{HF}_n^{(a,\,\infty)}(H) \\
& \operatorname{HF}_n^{(0,\,\infty)}(K) \ar[r]_{j_a^K}
& \operatorname{HF}_n^{(a,\,\infty)}(K) \ar[u]_{\Psi_{KH}}
}
\end{displaymath}
\emph{(AS2)} Finiteness of the selector follows from the compactness
of ${\mathcal S}(H)$, for $\operatorname{HF}_n^{(a,\,\infty)}(H) = 0$ for any $a >
\sup {\mathcal S}(H)$.
Proving the non-degeneracy, i.e., $\sigma(H)>0$ for any $H \not\equiv
0$, requires more work. First observe that we can find a $C^2$-small
\emph{space-time} bump function $f\not\equiv 0$ such that $0 \le f \le
H$ for all $t\in S^1$. This is simply because $H \geq 0$ and $H_t
\not\equiv 0$ for some $t$. Hence, by the monotonicity of the selector,
it suffices to show that $\sigma(f)> 0$.
More precisely, let $f(t,x)= f_{\scriptscriptstyle{S^1}}(t) \cdot
f_{\scriptscriptstyle{W}} (x) \colon S^1 \times W \to [0,\,\infty)$ be a space-time bump
function satisfying $0 \le f \le H$ for all $t\in S^1$. Here
$ f_{\scriptscriptstyle{W}} (x)\colon W \to [0,\,\infty)$ and $f_{\scriptscriptstyle{S^1}}(t)
\colon S^1 \to [0,\,\infty)$ are both bump functions in the usual
sense, and $ f_{\scriptscriptstyle{W}} $ is autonomous and $C^2$-small. The time-one flow of
$f$ differs from the time-one flow of $ f_{\scriptscriptstyle{W}} $ only by a positive factor
equal to the integral of $f_{\scriptscriptstyle{S^1}}$ over the
circle. Let us assume, for the sake of simplicity, that $ \int_{S^1}
f_{\scriptscriptstyle{S^1}}(t) \, dt =1$. This can be achieved by
choosing $ f_{\scriptscriptstyle{W}} $ sufficiently small so that $f$ still fits underneath
$H$. Then the action spectra of $f$ and $ f_{\scriptscriptstyle{W}} $ are the same.
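Indeed, the only critical points of $A_f$ are the constant loops at
the critical points $x$ of $ f_{\scriptscriptstyle{W}} $, and for such
a loop
$$
A_f(x)=\int_{S^1}f(t,x)\,dt
= f_{\scriptscriptstyle{W}} (x)\int_{S^1}f_{\scriptscriptstyle{S^1}}(t)\,dt
= f_{\scriptscriptstyle{W}} (x).
$$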
We claim that $\operatorname{HF}_*^{(a,\,b)}(f) \cong \operatorname{HF}_*^{(a,\,b)}( f_{\scriptscriptstyle{W}} )$ for all
positive intervals of action $(a,\,b) \subseteq (0,\,\infty)$ such
that $a$ and $b$ are not in ${\mathcal S}(f)={\mathcal S}( f_{\scriptscriptstyle{W}} )$. To see this, let $K_s =
s f_{\scriptscriptstyle{W}} + (1-s) f$ be the linear homotopy from $f$ to $ f_{\scriptscriptstyle{W}} $ for $s \in
[0,\,1]$. It is easy to see that all Hamiltonians in this homotopy
have the same time-one flow as that of $ f_{\scriptscriptstyle{W}} $ and, hence, the only
critical points of $A_{K_s}$ are constant one-periodic orbits, i.e.,
the critical points of $ f_{\scriptscriptstyle{W}} $. Thus the action spectrum ${\mathcal S}(K_s)$ for
any $s$ consists of two action values: zero and $\max f_{\scriptscriptstyle{W}} $. This
implies that for $a,\,b \not\in {\mathcal S}(K_s)$ no periodic orbit with
action outside the range $(a,\,b)$ will enter or exit this interval
during the course of the homotopy. In this case the Floer homology
groups are isomorphic for all $K_s$; see, e.g., \cite{bps, gi:coiso,
vi:functors}. In particular, $\operatorname{HF}_*^{(a,\,b)}(f) \cong
\operatorname{HF}_*^{(a,\,b)}( f_{\scriptscriptstyle{W}} )$. (Note that this isomorphism is not induced by
the homotopy map since $K_s$ is not a monotone homotopy.)
We next show that $\sigma( f_{\scriptscriptstyle{W}} )=\max f_{\scriptscriptstyle{W}} >0$. To this end, let $F$ be a
wide function satisfying $F \ge f_{\scriptscriptstyle{W}} $ and recall that
$\operatorname{HF}_*^{(0,\,\infty)} (F) \cong \operatorname{HM}_{*+n}^{(0,\,\infty)}
(F)$. Moreover, since $ f_{\scriptscriptstyle{W}} $ is a $C^2$-small bump function, Morse and
Floer homology groups are isomorphic: $\operatorname{HF}_*^{(0,\,\infty)}( f_{\scriptscriptstyle{W}} ) \cong
\operatorname{HM}_{*+n}^{(0,\,\infty)}( f_{\scriptscriptstyle{W}} )$. Then the diagram
\begin{displaymath}
\xymatrix
{
{{\mathbb Z}}_2 \cong \operatorname{HF}_n^{(0,\,\infty)}(F)
\ar[r]^{\cong}
\ar[d]_{\Psi_{F f_{\scriptscriptstyle{W}} }} &
\operatorname{HM}_{2n}^{(0,\,\infty)}(F) \cong {{\mathbb Z}}_2
\ar[d]^{\Psi_{F f_{\scriptscriptstyle{W}} }} \\
\operatorname{HF}_n^{(0,\,\infty)}( f_{\scriptscriptstyle{W}} )
\ar[r]^{\cong}
\ar[d]_{j_a^{ f_{\scriptscriptstyle{W}} }} &
\operatorname{HM}_{2n}^{(0,\,\infty)}( f_{\scriptscriptstyle{W}} )
\ar[d]^{j_a^{ f_{\scriptscriptstyle{W}} }} \\
\operatorname{HF}_n^{(a,\,\infty)}( f_{\scriptscriptstyle{W}} )
\ar[r]^{\cong} &
\operatorname{HM}_{2n}^{(a,\,\infty)}( f_{\scriptscriptstyle{W}} )
}
\end{displaymath}
is commutative. This can easily be seen by factoring the horizontal
isomorphisms through isomorphisms induced by monotone homotopies and
using the fact that all such diagrams commute when the functions
involved are $C^2$-small.
Let us focus on the right-hand side of the diagram. By Morse theory,
the map
\begin{displaymath}
\xymatrix
{
{{\mathbb Z}}_2 \cong \operatorname{HM}_{2n}^{(0,\,\infty)}(F)
\ar[r]^{\Psi_{F f_{\scriptscriptstyle{W}} }}
&
\operatorname{HM}_{2n}^{(0,\,\infty)}( f_{\scriptscriptstyle{W}} ) \cong {{\mathbb Z}}_2
}
\end{displaymath}
is non-zero, and sends $[\max _F]$ to $[\max _{ f_{\scriptscriptstyle{W}} }]$. Moreover,
$j_a^{ f_{\scriptscriptstyle{W}} }[\max _{ f_{\scriptscriptstyle{W}} }]\neq 0$ for any positive $a < \max f_{\scriptscriptstyle{W}} $
and $j_a^{ f_{\scriptscriptstyle{W}} }[\max _{ f_{\scriptscriptstyle{W}} }]=0$ for any $a \ge \max f_{\scriptscriptstyle{W}} $. Commutativity
of the diagram then implies that $\sigma( f_{\scriptscriptstyle{W}} )=\max f_{\scriptscriptstyle{W}} >0$.
As the last step observe that $\sigma(f)=\sigma( f_{\scriptscriptstyle{W}} )=\max f_{\scriptscriptstyle{W}} > 0$,
which finishes the proof of non-degeneracy. To see this, note that
the diagram
\begin{displaymath}
\xymatrix
{
\operatorname{HF}_n^{(0,\,\infty)}(F) \ar[r]^{\Psi_{Ff}}
\ar[dr]_{\Psi_{F f_{\scriptscriptstyle{W}} }}
& \ar@{}[dr]
\operatorname{HF}_n^{(0,\,\infty)}(f) \ar[r]^{j_a^f} \ar@{=}[d]
& \operatorname{HF}_n^{(a,\,\infty)}(f) \ar@{=}[d] \\
& \operatorname{HF}_n^{(0,\,\infty)}( f_{\scriptscriptstyle{W}} )
\ar[r]_{j_a^{ f_{\scriptscriptstyle{W}} }}
& \operatorname{HF}_n^{(a,\,\infty)}( f_{\scriptscriptstyle{W}} )
}
\end{displaymath}
is also commutative, where $F$ is a wide function satisfying $F\ge f$
and $F \ge f_{\scriptscriptstyle{W}} $.
\begin{Remark}
An important consequence of non-degeneracy of $\sigma$ is that
$\operatorname{HF}_n^{(0,\,\infty)}(H) \neq 0$ for any non-negative Hamiltonian $H
\not\equiv 0$. To see this, note that the \emph{non-zero} map
$\operatorname{HF}_n^{(0,\,\infty)}(F) \to \operatorname{HF}_n^{(0,\,\infty)}( f_{\scriptscriptstyle{W}} )$ can be
factored through $\operatorname{HF}_n^{(0,\,\infty)}(H)$, where $F$ and $ f_{\scriptscriptstyle{W}} $ are as
in the proof above. This fact is used in \cite{gi:coiso}.
\end{Remark}
\emph{(AS3)} Recall that this property asserts the continuity of the
selector with respect to the Hofer norm. Namely, we claim that
$$
|\sigma(H)-\sigma(K)| \le \| H-K \| \text{ for non-negative } H \text{
and } K.
$$
Note first that it suffices to prove continuity for non-degenerate
Hamiltonians. Since we are assuming that $W$ is open, this means that
all one-periodic orbits with positive action are non-degenerate.
Keeping the notation from Section~\ref{sec:homotopy}, let $K=H^0$ and
$H=H^1$, and consider a linear homotopy $H^s$ from $H^0$ to $H^1$,
i.e., $H^s=(1-\phi(s)) H^0 +\phi(s) H^1$, where $\phi \colon {\mathbb R} \to
[0,1]$ is a smooth monotone increasing function equal to zero near
$-\infty$ and equal to one near $\infty$. Then,
$$
\int_{-\infty}^{\infty}\int_{S^1} \max_W \,
{\partial}_s H_t^s\,dt\,ds
=\int_{S^1} \max_W \,(H^1_t-H^0_t) \,dt.
$$
Let, as customary, $e^{\scriptscriptstyle{+}}=
e^{\scriptscriptstyle{+}}(H^1-H^0)=\int_{S^1}\max_W \,(H^1_t-H^0_t)
\,dt$. (Recall that $\| H \|= e^{\scriptscriptstyle{+}}(H) -
e^{\scriptscriptstyle{-}}(H)$, where
$e^{\scriptscriptstyle{-}}(H)=\int_{S^1}\min_W \,H_t \,dt$.) Hence, by
Theorem \ref{thm:homotopy},
for $a \not\in {\mathcal S}(H^s)$, we have the
monotonicity maps $\Psi_{H^0H^1}$ for two intervals:
$
\operatorname{HF}_*^{(a,\,\infty)} (H^0) \to
\operatorname{HF}_*^{(a + e^{\scriptscriptstyle{+}},\,\infty )} (H^1)
$
and
$
\operatorname{HF}_*^{(0,\,\infty)} (H^0) \to
\operatorname{HF}_*^{(e^{\scriptscriptstyle{+}},\,\infty )} (H^1).
$
Now the diagram
\begin{displaymath}
\xymatrix
{
\operatorname{HF}_n^{(0,\,\infty)}(F) \ar[d]_{{\Psi}_{FH^0}} \ar[r]^{{\Psi}_{FH^1}}
&
\operatorname{HF}_n^{(0,\,\infty)}(H^1) \ar[d]
\\
\operatorname{HF}_n^{(0,\,\infty)}(H^0) \ar[d]_{j_a^{H^0}} \ar[r]^{\Psi_{H^0H^1}}
&
\operatorname{HF}_n^{(e^{\scriptscriptstyle{+}},\,\infty)}(H^1)
\ar[d]
\\
\operatorname{HF}_n^{(a,\,\infty)}(H^0) \ar[r]^{\Psi_{H^0H^1}}
&
\operatorname{HF}_n^{(a+e^{\scriptscriptstyle{+}},\,\infty)}(H^1),
}
\end{displaymath}
where $F$ is a wide function satisfying $F \ge H^0$ and $F \ge H^1$,
is commutative; see \cite{gi:coiso}. Here the vertical maps on the
right-hand side of the diagram are just the maps induced by taking
quotient complexes, and their composition is the map
$j_{a+e^{\scriptscriptstyle{+}}}^{H^1}$. Consequently, we have
$$
\sigma(H^1) \le \sigma(H^0) + e^{\scriptscriptstyle{+}}
= \sigma(H^0) + e^{\scriptscriptstyle{+}}(H^1-H^0).
$$
Moreover, exchanging the roles of $H^0$ and $H^1$ we get
$$
\sigma(H^0) \le \sigma(H^1) + e^{\scriptscriptstyle{+}}(H^0-H^1) =
\sigma(H^1) - e^{\scriptscriptstyle{-}}(H^1-H^0).
$$
Thus, we have
$$
e^{\scriptscriptstyle{-}}(H^1-H^0) \le \sigma(H^1) - \sigma(H^0)
\le e^{\scriptscriptstyle{+}}(H^1-H^0).
$$
$C^0$-continuity of the selector follows immediately. In order to
prove the continuity in Hofer's norm, note first that
$e^{\scriptscriptstyle{+}}(H) \ge 0$ and $e^{\scriptscriptstyle{-}}(H)
\le 0$ for any compactly supported Hamiltonian $H$. (This is also true
for any Hamiltonian on a closed manifold.) Consequently, we have
\begin{eqnarray*}
-e^{\scriptscriptstyle{+}}(H^1-H^0)+e^{\scriptscriptstyle{-}}(H^1-H^0)
& \le & e^{\scriptscriptstyle{-}}(H^1-H^0) \\
& \le & \sigma(H^1) - \sigma(H^0) \\
& \le & e^{\scriptscriptstyle{+}}(H^1-H^0) \\
& \le & e^{\scriptscriptstyle{+}}(H^1-H^0) -
e^{\scriptscriptstyle{-}}(H^1-H^0),
\end{eqnarray*}
and hence
$$
|\sigma(H^1) - \sigma(H^0)|
\le e^{\scriptscriptstyle{+}}(H^1-H^0) - e^{\scriptscriptstyle{-}}(H^1-H^0)
= \| H^1 - H^0\|.
$$
This finishes the proof of continuity.
\emph{(AS4)} The assertion readily follows from (AS3).
\emph{(AS5)} We refer the reader to \cite[Lemma 3.5]{gi:weinstein} for
a proof of the property that $\sigma(H) = \max H $ for an autonomous
Hamiltonian $H$ without non-trivial contractible fast periodic orbits.
The proof in \cite{gi:weinstein} is set theoretic in nature and works
in any setting where the selector has the properties (AS0), (AS1),
(AS3) and the claimed property holds for autonomous $C^2$-small
functions. (See the proof of (AS2) above for the fact that $\sigma(H)
= \max H $ when $H$ is a $C^2$-small autonomous function having no
non-trivial contractible fast periodic orbits.)
\emph{(AS6)} It is well-known that the action spectrum of a compactly
supported Hamiltonian on an open symplectically
aspherical manifold depends only on the time-one flow; see, e.g.,
\cite{FS,hz:book}. Thus, if $H$ and $K$ generate the same time-one
flow and ${\varphi}_H^t$ and ${\varphi}_K^t$ are homotopic (with fixed
end points) in $\operatorname{Ham}_c^+(W)$, then the action spectrum stays the same
throughout the homotopy. On the other hand, due to (AS3), $\sigma$
varies continuously in the course of the homotopy. As the action
spectrum is nowhere dense, $\sigma$ must be constant.

\emph{(AS7)} It is a standard fact that an action selector defined on
a displaceable domain in a closed or convex manifold is \emph{a
priori} bounded from above. However, the proofs existing in the literature
rely on the sub-additivity of the action selector and the fact that
the selectors are defined for all Hamiltonians (in particular, not
necessarily non-negative Hamiltonians). Hence, these arguments do not
apply to the action selector introduced here for wide manifolds, and,
for the sake of completeness, we provide a proof of (AS7).
Assume that $V \subset W$ is open and displaceable, and denote by
$e_V$ the displacement energy of $V$; see, e.g., \cite{hz:book, felix,
pol2}. Let $H \colon S^1 \times V \to [0,\,\infty)$ be a compactly
supported Hamiltonian whose support is contained in $V$. Let $K$ be a
compactly supported Hamiltonian such that $\varphi_K$ displaces
$V$. Moreover, without loss of generality, we may assume that $K \ge
0$. For, otherwise, we first shift $K$ up so that $\min K=0$ and then
cut it off away from its original support. The new function is
non-negative, still displaces $V$ and has the same Hofer norm as the
original function.
Recall that, in general, for any two Hamiltonians $H$ and $K$
generating the time-one flows $\varphi_H$ and $\varphi_K$, the
Hamiltonian generating the composition flow $\varphi_H \, \varphi_K$
is given by $H\#K=H(t,x)+K(t,{(\varphi_H^t)}^{-1}(x))$. Since $K \ge
0$ and the selector is monotone, we then have
\begin{equation}
\labell{eq:sel(H)=sel(HK)}
\sigma(H) \le \sigma(H\#K).
\end{equation}
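For the reader's convenience, let us verify the composition formula
used here. Since $\varphi_H^t$ is symplectic, $(\varphi_H^t)_* X_{K_t}
= X_{K_t \circ (\varphi_H^t)^{-1}}$, and hence
$$
\frac{d}{dt}\big(\varphi_H^t \, \varphi_K^t\big)
= \big( X_{H_t} + (\varphi_H^t)_* X_{K_t} \big)
\circ \big(\varphi_H^t \, \varphi_K^t\big)
= X_{(H\#K)_t} \circ \big(\varphi_H^t \, \varphi_K^t\big),
$$
so that $\varphi_H^t \, \varphi_K^t$ is indeed the flow generated by
$H\#K$.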
Since $\varphi_K$ displaces $\operatorname{supp} H$, one-periodic orbits of
$\varphi_H^t \, \varphi_K^t$ are exactly the one-periodic orbits of
$\varphi_K^t$. In fact, we claim that ${\mathcal S}(H\#K)={\mathcal S}(K)$. Observe that
this assertion immediately follows from
$$
{\mathcal S}(\varphi_H^t \, \varphi_K^t)
={\mathcal S}(\varphi_K^t \, \ast \, (\varphi_H^t \, \varphi_K) )
={\mathcal S}(\varphi_K^t),
$$
where $\ast$ denotes the concatenation of $\varphi_K^t$ and
$\varphi_H^t \, \varphi_K $. Here the concatenation $a(t) \ast b(t)$
of paths $a(t)$ and $b(t)$ with domain $[0,\,1]$ is defined by
traversing $a(2t)$ for $0 \le t \le 1/2$ and then traversing $b(2t-1)$
for $1/2 \le t \le 1$.
The first identity above is due to the fact that $\varphi_H^t \,
\varphi_K^t$ and $\varphi_K^t \, \ast \, (\varphi_H^t \, \varphi_K) $
are homotopic with fixed end points. (It is straightforward to write a
specific formula for this homotopy.) The second identity is specific
to our situation. Namely, observe that one-periodic orbits of the
concatenation cannot be in $\operatorname{supp} H$, essentially since $\varphi_K$
displaces this support. But when a point is outside $\operatorname{supp} H$, the
flow $\varphi_H^t$ is the \emph{identity} and, hence, such a point can
correspond to a one-periodic orbit of the concatenation only if it is
fixed by $\varphi_K$. Therefore, one-periodic orbits of the
concatenation are just reparametrizations of one-periodic orbits of
$\varphi_K^t$. As a result, actions acquired in both cases are the
same. This proves the second identity.
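As for the first identity, one explicit choice of such a homotopy is,
for instance,
$$
\Gamma_s^t=\varphi_H^{\max(0,\,(1+s)t-s)} \, \varphi_K^{\min((1+s)t,\,1)},
\quad s\in[0,\,1],
$$
which equals $\varphi_H^t \, \varphi_K^t$ for $s=0$ and the
concatenation $\varphi_K^t \, \ast \, (\varphi_H^t \, \varphi_K)$ for
$s=1$, while the end points $\Gamma_s^0=\operatorname{id}$ and
$\Gamma_s^1=\varphi_H \, \varphi_K$ are independent of $s$.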
We now have ${\mathcal S}(H\#K)={\mathcal S}(K)$. The same, of course, holds when $H$
is replaced by $\lambda H$ where $\lambda \in [0,\,\infty)$, i.e.,
${\mathcal S}(\lambda H\#K)={\mathcal S}(K)$ for any non-negative $\lambda$. But
$\sigma$ is continuous and the action spectrum is nowhere
dense. Therefore, we conclude from $\sigma(\lambda H\#K) \in
{\mathcal S}(\lambda H\#K)={\mathcal S}(K)$ that $\sigma(\lambda H\#K)$ is independent
of $\lambda$. Setting $\lambda =0$ and $\lambda =1$ yields
$\sigma(K)=\sigma(H\#K)$. Thus, also using \eqref{eq:sel(H)=sel(HK)}
and property (AS4), we have $$ \sigma(H) \le \sigma(H\#K) = \sigma(K)
\le \| K \|.$$
Finally, recall that $e_V = \inf_{K} \| K \|$, where the infimum is
taken over all compactly supported Hamiltonians $K$ displacing
$V$. Hence, the selector is \emph{a priori} bounded from above by
$e_V$, which is finite when $V$ is displaceable.
\section{Proofs}
\labell{proof}
In this section we prove the main results of this paper.
\subsection{The Conley Conjecture}
\labell{pf:cc}
We will focus on the case where $W$ is open. For closed manifolds, Theorem
\ref{thm:cc} follows from Theorem \ref{thm:displacement} and the
results from \cite{sc}.
In what follows we denote by ${\mathcal S}^+(\cdot)$ the positive part of the
action spectrum of a Hamiltonian. Theorem \ref{thm:cc} is a
consequence of the following proposition.
\begin{Proposition}
\labell{prop:main}
Let $V$ be an open displaceable subset of a symplectically aspherical
manifold $(W,\omega)$ which is geometrically bounded and wide. Let $G
\geq 0$ be a non-zero Hamiltonian supported in $V$. Then, $\varphi_G$
has infinitely many periodic points with positive action,
corresponding to contractible periodic orbits of $G$. Moreover, assume
that ${\mathcal S}^+(G)$ is separated from zero, i.e., $\inf\,{\mathcal S}^+(G)
> 0$. Then, there exists a sequence of integer periods $T_k \to
\infty$ such that for every $T_k$, the Hamiltonian $G$ has such a
periodic orbit with minimal period $T_k$.
\end{Proposition}
Let us first derive the proof of Theorem~\ref{thm:cc} from
Proposition~\ref{prop:main}.
\begin{proof}[Proof of Theorem \ref{thm:cc}.]
Consider $M \times S^1 \subset W \times T^*S^1$, where $T^*S^1$ is
equipped with the standard symplectic structure, also referred to as
the ``stabilization'' of $M$, \cite{mac1, pol1}. Note that $M \times
S^1$ is again nowhere coisotropic and, moreover, the normal bundle to
$M \times S^1$ admits a non-vanishing section. Theorem
\ref{thm:displacement} now implies that a small neighborhood $V=U
\times (S^1 \times (-\epsilon, \epsilon)) \subset W \times T^*S^1$ of
the product $M \times S^1$ is infinitesimally displaceable. Here $U
\subset W$ is a neighborhood of $M$ in $W$, and both $U$ and
$\epsilon>0$ are sufficiently small.
Let $H \ge 0$ be a Hamiltonian as in the statement of Theorem
\ref{thm:cc}. Let $K \colon T^*S^1 \to [0,1]$ be an autonomous
fiber-wise bump function, depending only on the distance to the zero
section, which is supported in $S^1 \times (-\epsilon, \epsilon)$ and
such that $\max K=K|_{S^1\times 0}=1$. Note that $K$ has no
non-trivial contractible periodic orbits.
Consider the Hamiltonian $G=H \cdot K$ supported in the displaceable
open set $V$. Then $G \geq 0$ and $G \not\equiv 0$. Observe that every
contractible periodic orbit of the vector field $X_{G_t}={H_t} \cdot
X_K+K \cdot X_{H_t}$ with positive action must be of the form
$(u(t),\,v(t)) \in W \times T^*S^1 $, where $u(t)$ is contractible in
$W$ and $v(t)$ is constant with $K(v(t))=1$, i.e., $v(t)$ is a point
on $S^1$. To see this, note first that $v'(t)={H_t}(u(t)) \cdot
X_K(v(t))$ where $X_K$ points in the direction of the angular
coordinate. The assumption that $H\geq 0$ then implies that $v(t) \in
T^*S^1$ can be contractible only when $v(t)$ is constant. Since the
pair $(u(t),\,v(t))$ is contractible, $u(t)$ must also be
contractible. Furthermore, the action on the orbit $(u(t),\,v(t))$ can
be positive only when $v(t)$ is a point on the zero-section.
Coming back to the proof of Theorem \ref{thm:cc}, note that by the
previous observation we have ${\mathcal S}^+(H)={\mathcal S}^+(G)$. Moreover, since
$\varphi_H$ is assumed to have isolated fixed points with positive
action, ${\mathcal S}^+(H)$ is separated from zero. Then, ${\mathcal S}^+(G)$ is also
separated from zero and Proposition \ref{prop:main} applies. Finally,
note that $T_k$-periodic orbits of $\varphi_G$ from the proposition
will correspond to infinitely many simple (contractible) periodic
orbits of $\varphi_H$.
\end{proof}
Let us now prove Proposition \ref{prop:main}.
\begin{proof}[Proof of Proposition \ref{prop:main}.]
Note first that, by property (AS7) and the assumption that $V$ is
displaceable, we have $\sigma(G) \leq e_V < \infty$, where $e_V$ is
the displacement energy of $V$.
Let $G^k$ denote the Hamiltonian generating $\varphi_G^k$. Then, since
$G\geq0$, we have $G^l \leq G^k$ whenever $l<k$. (Here we are using
the explicit formula for the Hamiltonian generating the composition
flow; see, e.g., the proof of (AS7) for this formula.) By the
monotonicity and non-degeneracy of, and the \emph{a priori} bound on,
$\sigma$, we then have the following series of inequalities:
$$
0<\sigma(G)\leq\sigma(G^2)\leq \ldots \leq e_V.
$$
Assume the contrary: $\varphi_G$ has finitely many (simple) periodic
points with positive action. Then, for a sufficiently large prime
number $p$, fixed points of $\varphi_G^p$ must all be $p$-th
iterations of fixed points of $\varphi_G$. Consequently, ${\mathcal S}
^+(\varphi_G^p)=p\, {\mathcal S} ^+(\varphi_G)$ and, in particular,
$\sigma(G^p) \in p\,{\mathcal S} ^+(\varphi_G)$.
Now observe that it suffices to prove the ``moreover'' assertion of
the proposition. For, when ${\mathcal S}^+(G)$ is not separated from zero,
$\varphi_G$ has infinitely many fixed points and hence has infinitely
many periodic points. Thus, assume that $\inf {\mathcal S}^+(G) > \delta > 0 $
for a sufficiently small $\delta > 0$.
Since $\sigma(G^p) >0$ and $\sigma(G^p) \in p\,{\mathcal S} ^+(G)$, we
necessarily have $\sigma(G^p) > p \cdot \delta >0$. Hence,
$\sigma(G^p) \to \infty$ as $p \to \infty$. This contradicts the fact
that the selector is \emph{a priori} bounded from above.
\end{proof}
\begin{Remark}
In fact, we have proved that for any prime number $p > e_V / \delta $
there exists a simple contractible $p$-periodic orbit of $G$.
Furthermore, the number of simple (contractible) periodic orbits of
$\varphi_G$ with period less than or equal to $k \in {\mathbb N}$ is at least a
constant, depending on $\varphi_G$, times $k$. This can be seen by
applying the argument in \cite[Proposition 4.13]{vi2}.
\end{Remark}
\subsection{Almost Existence}
\labell{pf:ae}
We will next prove the almost existence theorem.
\begin{proof}[Proof of Theorem \ref{thm:ae}.]
The assertion follows from Theorem \ref{thm:displacement} and the
results established in \cite{felix}. Namely, similarly to the proof
of Theorem \ref{thm:cc}, a sufficiently small neighborhood of $M
\times S^1 \subset P \times T^*S^1$ is infinitesimally displaceable by
the displacement principle for nowhere coisotropic submanifolds. Let
$V=U \times (S^1 \times (-\epsilon, \epsilon)) \subset P \times
T^*S^1$ be such a neighborhood and let, as before, $e_V$ denote the
displacement energy of $V$.
Let us recall the definition of the contractible Hofer-Zehnder
capacity $\operatorname{c^{\circ}_{HZ}}(U)$ of a domain $U \subset P $. Denote by
${{\mathcal H}}_{\operatorname{HZ}}(U) $ the space of compactly supported smooth non-negative
Hamiltonians $H \colon U \to {\mathbb R}$ which are constant near their maxima
and have no non-trivial \emph{contractible-in-$P$} fast periodic
orbits. The contractible Hofer-Zehnder capacity is then defined to be
$$
\operatorname{c^{\circ}_{HZ}}(U)=\sup_H \, \{ \max H \,|\, H \in {{\mathcal H}}_{\operatorname{HZ}}(U) \}.
$$
Removing the condition that the orbits are contractible in $P$ yields
a smaller capacity, the ordinary Hofer-Zehnder capacity
$\operatorname{c_{HZ}}(U)$. Hence, $\operatorname{c_{HZ}}(U) \leq \operatorname{c^{\circ}_{HZ}}(U)$.
As an immediate consequence of the finiteness of $e_V$ and the
energy-capacity inequality established in \cite[Theorem 1.1]{felix},
we obtain the estimate
$$
\operatorname{c_{HZ}}(U) \leq \operatorname{c^{\circ}_{HZ}}(U) \leq 4 \, e_V < \infty.
$$
In particular, $\operatorname{c_{HZ}}(U) < \infty $ and the almost existence theorem
follows from the finiteness of $\operatorname{c_{HZ}}(U)$ by the standard arguments;
see, e.g., \cite{FGS, hz:book, gg3}.
\end{proof}
\begin{Remark}
As we have already mentioned, the displacement energy-capacity
inequality of \cite{felix} relies heavily on the work of
Lalonde--McDuff and McDuff--Slimowitz,
\cite{LalMc1,McDSl}. Let us give a simple proof of Theorem \ref{thm:ae}
for symplectically aspherical wide manifolds $(W,\omega)$ using the
selector we have constructed in Section \ref{sel-defn}.
Recall that using an action selector one can define an invariant, the
homological capacity $\operatorname{c_{hom}}(V)$, of a domain $V \subset W \times
T^*S^1$ as follows:
$$
\operatorname{c_{hom}}(V)=\sup \{\sigma(H)\,|\, H \colon S^1 \times V \to {\mathbb R},
\text{ where } H\geq 0 \text{ is compactly supported} \}.
$$
By definition $\operatorname{c_{hom}}(V)< \infty$, provided that the selector is
\emph{a priori} bounded from above; for instance, when $V$ is
displaceable. In our case, this is guaranteed by property
(AS7). Furthermore, property (AS5) implies that $\operatorname{c^{\circ}_{HZ}}(V) \leq
\operatorname{c_{hom}}(V)$. Hence, we obtain $\operatorname{c^{\circ}_{HZ}}(V) < \infty$ whenever $V$ is
displaceable, and, consequently, the almost existence theorem holds
for $V$. An argument similar to the one in the proof of Theorem
\ref{thm:cc} finishes the proof.
It is clear that this proof also works for closed manifolds, albeit
using the selector constructed in \cite{sc}.
\end{Remark}
\subsection{Displacement Principle}
\labell{pf:displacement}
The proof of Theorem \ref{thm:displacement} is essentially identical
to, if not slightly simpler than, the proof of this statement in the
middle-dimensional case due to Laudenbach and Sikorav,
\cite{LauSi}. Therefore, we will only outline the proof of this
theorem and refer the reader to \cite{LauSi} for the details of the
argument.
Note that the bundle $TM^{\omega}$ is isomorphic to the normal bundle
to $M$ and hence admits a non-vanishing section. To prove Theorem
\ref{thm:displacement}, it suffices to find a non-vanishing section
$v$ of $TM^{\omega}$ such that $dK(v)>0$ along $M$ for some function
$K$ defined near $M$. (The Hamiltonian vector field $X_K$ of $K$ would
then be nowhere tangent to $M$.) On the other hand, for a fixed $v$,
the existence of such a function is guaranteed by a result due to
Sullivan, \cite{su}, whenever $v$ is non-recurrent, i.e., no
trajectory of $v$ is contained entirely in $M$. Then the proof of
Theorem \ref{thm:displacement}, similarly to the argument in
\cite{LauSi}, is based on constructing a non-recurrent section of
$TM^{\omega}$.
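Let us also spell out the parenthetical claim above: if $X_K$ were
tangent to $M$ at some point, then, since $v \in TM^{\omega}$, we
would have $dK(v)=\pm\,\omega(X_K,v)=0$ at that point (with either
sign convention for Hamiltonian vector fields), contradicting the
requirement that $dK(v)>0$ along $M$.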
Let $\xi$ be a non-vanishing section of $TM^{\omega}$. \emph{A priori}
this section may be somewhere tangent to $M$. The idea is to turn
$\xi$ into a non-recurrent vector field by killing the recurrence. To
this end, let $R \subset M$ denote the set of all trajectories of
$\xi$ contained in $M$. Pick finitely many (mutually) disjoint
balls $B_i \subset M$ in such a way that every trajectory of
$\xi$ in $M$ intersects the interior of at least one $B_i$. This is
possible since $M$ is compact. Denote by $B$ the (disjoint) union of
$B_i$'s.
Next let us modify $\xi$ inside $B$ to make $R=\emptyset$. Observe
that, if the balls are chosen small enough, inside each $B_i$ the
vector field $\xi$ is almost tangent to $M$ by continuity. Thus, let
us choose a \emph{normal} vector $\zeta_i \in TM^{\omega}$ within each
$B_i$ and add these to $\xi$ using cut-off functions supported in
$B_i$'s. The resulting vector field $v$ has the desired property.
(The real situation is slightly more complicated, for actually $\xi$
need not be tangent to $M$ everywhere in $B_i$ and adding $\zeta_i$ to
$\xi$ may force $v$ to vanish in some $B_i$. However, Thom's jet
transversality theorem guarantees that $\zeta_i$'s can be chosen so
that they are not parallel to $\xi$ in each $B_i$.) This finishes the
construction. Let us point out that the essence of assuming $M$ to be
nowhere coisotropic is now more transparent: in order for the $\zeta_i$'s
to exist, some ``normal space'' in $TM^{\omega}$ is needed; for
instance, this would be impossible if $TM^{\omega} \subset TM$ at a
point in $R$.
Now it is easy to see that $v$ is non-recurrent, just as in
\cite{LauSi}.
In recent decades, $\gamma$-ray astrophysics has made great advances, enabled by improvements in high-energy instrumentation. The most recent and most accurate $\gamma$-ray source catalog is the \textit{Fermi} Large Area Telescope (LAT) Second Source Catalog \citep[2FGL,][]{2FGL}, based on the first 24 months of data from LAT. Thanks to its silicon-strip pair-conversion detector and modern analysis techniques, LAT has drastically reduced the positional errors of the sources with respect to previous studies, like those performed by the Energetic Gamma-Ray Experiment Telescope (EGRET) on board the Compton Gamma-Ray Observatory \citep{CGRO}.
As the {\it Fermi}-LAT $\gamma$-ray observations of the sky continue, the task of finding clear counterparts to all the detected sources becomes increasingly challenging, mostly due to the large positional uncertainty of the faintest sources. Since a strong connection between $\gamma$-ray and radio emission has been clearly demonstrated \citep{Ghirlanda2010,Mahony2010,Ackermann2011a}, it is natural to exploit radio surveys for finding new associations.
Several association methods have been proposed to match the detected $\gamma$-ray sources with source catalogs at lower frequencies \citep{U4,Masetti2013}, in order to assign a proper counterpart to unidentified sources where possible. Of the 1873 sources in the 2FGL catalog, 575 have unknown physical nature. Among the \textit{Fermi} sources of known type, blazars are the largest population, representing more than 80\%. It is, therefore, fair to presume that a significant fraction of the unidentified $\gamma$-ray sources (UGSs) can be unrecognized blazar-like sources. Finding them is the aim of our work. \par
Blazars are compact radio sources with flat (i.e., spectral index $\alpha< 0.5$, where $\rm{S}_\nu \sim \nu^{-\alpha}$) radio spectra that steepen towards the infrared-optical bands; their overall spectral energy distribution shows two broadband components: the low-energy one peaks in the infrared (IR) to X-ray frequency range, while the high-energy one extends up to the MeV-to-TeV range. Their emission features high and variable polarization, apparent superluminal motions of radio jet components, and high apparent luminosity, coupled with rapid flux variability over the whole electromagnetic spectrum. Blazars come in two classes: flat-spectrum radio quasars and BL Lac objects, which we label here as BZQs and BZBs respectively, following the Roma-BZCAT \citep{RomaBZCAT} nomenclature. Blazar emission is interpreted as originating in radio-loud active galaxies, with one of the relativistic jets emerging from the super-massive black hole in the galaxy nucleus pointing almost directly to the observer \citep{Giommi2012}. \par
This paper is the latest of a series that focuses on the nature of UGSs: \citet{U1} (hereafter Paper I) investigates candidate UGS counterparts, following the approach described in \citet[][Paper II]{U2}; \citet[][Paper III]{U3} adds the low-frequency radio properties of blazars as a further feature for the search of blazar-like sources; \citet[][Paper IV]{U4} involves the X-ray emission as a distinctive feature; and \citet[][Paper V]{U5} proposes a renewed IR approach, based on a 2-dimensional kernel density estimation (KDE) technique, all for the same purpose.
In this work, we apply the method proposed in {\citetalias{U3}}\ to search for blazar-like candidates within the $\gamma$-ray positional uncertainty regions of the UGSs listed in the 2FGL and to select counterparts among them. This method is based on a multifrequency procedure, relying primarily on the flatness of the radio spectrum. The innovative point of this method is that it is based on low-frequency (i.e., $<1$ GHz) measurements, which is unprecedented in the investigation of the emission of blazars and of their association with UGSs. In {\citetalias{U3}}, we combined the observations from the Westerbork Northern Sky Survey (WENSS) at 325\,MHz with those of the NRAO Very Large Array Sky Survey (NVSS) at 1.4\,GHz.
In this work, we apply the same approach to the region covered by the counterpart of WENSS in the southern sky: the Westerbork In the Southern Hemisphere survey \citep[WISH,][]{De Breuck2002}.
Thanks to the combined results of this new search and of {\citetalias{U3}}, we build a reasonably sized population of new low-frequency selected blazar candidates. We study the spectral properties of individual candidates and explore the flux density distribution of the whole sample in comparison to the $\gamma$-ray blazars already listed in the 2nd \textit{Fermi} LAT Catalog of Active Galactic Nuclei \citep[2LAC,][]{2LAC}. We further refine our search by looking for IR, optical, UV and X-ray counterparts within the catalogs available.
The paper is organized as follows: in Sect.\ 2 we describe our method; in Sect.\ 3 we report and characterize our list of candidates; in Sect.\ 4 we discuss the spectral properties of all the sources found with the low-frequency method and make some projections regarding expectations from new low-frequency radio telescopes and surveys.
\section{Method}
\subsection{Blazar characteristics in the low-frequency radio band}
The characteristics of blazars (in particular $\gamma$-ray emitting blazars) in this range of the electromagnetic spectrum are discussed in {\citetalias{U3}}; we briefly summarize those results here for the sake of clarity.
Based on the cross-correlation between the 2FGL catalog, the Roma-BZCAT v4.1 (the most comprehensive blazar catalog to date), and the WENSS, we defined a sample labeled Low radio frequency Blazars (LB) and a subsample labeled Low radio frequency $\gamma$-ray Blazars (LGB).
Since the Roma-BZCAT catalog is based on the NVSS survey, we computed the low-frequency radio spectral index values for these samples as:
\begin{equation}
\centering
\label{eq:index}
\alpha_{\nu}^{1400}=\log{ \left( \frac{S_{1400}}{S_{\nu}} \right )}/\log{\left( \frac{\nu}{1400} \right)}
\end{equation}
where $\nu$ is the low radio frequency measured in MHz (i.e., 325 MHz for WENSS and 352 MHz for WISH), $S_{1400}$ is the flux density measured at 1400 MHz in the NVSS survey, and $S_\nu$ is the flux density measured at the frequency $\nu$. Both flux densities are measured in mJy. \par
The indices of about 80\% of the sources from both the LB and LGB samples are smaller than 0.5 and 99\% have indices below 1.0 \citep{U3}.
The flatness of the radio spectrum can be seen as a consequence of the dominance, even at low radio frequencies, of the emission from the inner jet departing from the core over that radiated by the larger structures of the blazar. \par
In particular, for the $\gamma$-ray blazar sample, we consider as \textit{A class} candidates the radio sources characterized by $-1.0 \leq \alpha_{352}^{1400}\leq 0.55$ and as \textit{B class} those with $0.55< \alpha_{352}^{1400}\leq 0.65$. These two classes have been defined so as to encompass, respectively, 80\% (\textit{A class}) and 90\% (\textit{B class}) of the $\gamma$-ray blazar radio spectral indices. In fact, the flatness of the radio spectrum above 1\,GHz is a well-known feature and has already been used to discern $\gamma$-ray blazar associations in the past \citep{CGRABS}. The low-frequency radio observations allow us to confirm this flatness down to 325\,MHz.\par
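For illustration, the two-point index of Eq.~(\ref{eq:index}) and the corresponding class assignment can be computed with a minimal Python sketch such as the following (the flux densities are hypothetical and the function names are ours):
\begin{verbatim}
import numpy as np

def spectral_index(s_low, nu_low_mhz, s_1400):
    # Eq. (1): alpha = log(S_1400/S_nu) / log(nu/1400),
    # for S_nu ~ nu^-alpha; fluxes in mJy, nu in MHz
    return np.log10(s_1400 / s_low) / np.log10(nu_low_mhz / 1400.0)

def classify(alpha):
    # class thresholds calibrated on the LGB sample (Sect. 2.1)
    if -1.0 <= alpha <= 0.55:
        return "A"
    if 0.55 < alpha <= 0.65:
        return "B"
    return "unclassified"

# hypothetical WISH/NVSS pair: 120 mJy at 352 MHz, 100 mJy at 1400 MHz
alpha = spectral_index(120.0, 352.0, 100.0)
print(alpha, classify(alpha))   # ~0.13 -> flat spectrum, class A
\end{verbatim}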
\subsection{The WISH survey}
The WISH survey is the natural extension of the WENSS survey to 1.60 sr of the southern sky at 352\,MHz. Both surveys were performed in the 1990s by the Westerbork Synthesis Radio Telescope (WSRT), using the bandwidth synthesis mosaicing technique to combine 8 different bands of 5 MHz. WISH covers the area between $-26^{\circ} < {\rm Dec} < -9^{\circ}$ to a limiting flux density of $\sim 18$ mJy ($5 \sigma$), the same as that of the WENSS. Due to the low elevation of the observations, the survey has a lower resolution in declination ($54^{\prime\prime} \times \csc(\delta)$) than in right ascension ($54^{\prime\prime}$), resulting in a source localization poorer than that of WENSS by a factor of $\sim 2$ on average. Aside from this consideration, the WISH shares with the WENSS the same features upon which this method was calibrated, except for a negligible difference in frequency ($\Delta \nu = 27$ MHz). For this reason, we are confident in applying the same association procedure that we used for WENSS. \par
\subsection{Association procedure}
For each UGS in the WISH footprint, we search for low-frequency sources in a circular region of radius equal to the semi-major axis of the 95\% confidence level positional uncertainty ellipse of the UGS itself.
For each WISH source found, we look for a corresponding NVSS source in a circular region of radius equal to 8.5\arcsec\ and we then calculate the radio spectral index $\alpha_{352}^{1400}$.
For this study, we only consider WISH sources classified as single component sources and featuring a NVSS association.
In addition, we made local maps of the search regions for each UGS, overlaying on the WISH background the brightness contours of the NVSS map, the \textit{Fermi} positional uncertainty ellipse, and any possible blazar-like \textit{WISE} detection up to 3.3\arcsec\ from the position of the matching NVSS source (see Fig.~\ref{fig:Map2}). This gave us, in addition to a qualitative comparison of the relative positions of the various possible candidates and the UGS, a clue about their potential non-blazar nature if complex structures are visible. Also, the angular separation between a candidate and the center of the {\it Fermi}\ ellipse was taken into account in cases of multiple matches, with the nearest source preferred. \par
For associations between WISH and NVSS and between NVSS and \textit{WISE}, the search radii (of 8.5\arcsec\ and 3.3\arcsec, respectively) were selected based on a statistical study, as described in {\citetalias{U3}}.
\subsection{Multi-frequency data}
Finally, we looked for additional multifrequency data for our new blazar candidates and those found in {\citetalias{U3}}. In particular, we searched for matches with sources in the Australia Telescope 20 GHz survey \citep[AT20G,][]{AT20G} for the WISH candidates, and in the Green Bank 4.85 GHz northern sky survey \citep[GB6,][]{GB6} for the WENSS candidates. The AT20G survey, performed between 2005 and 2006 with the Australia Telescope Compact Array (ATCA), contains sources with $S_{20.0}\geq40$ mJy; the GB6, performed between 1986 and 1987 with the NRAO seven-beam receiver, includes sources with $S_{4.85}\geq18$ mJy. For these surveys, source association is automatically provided by the NASA/IPAC Extragalactic Database (NED). We also refer to \citet{U4} for the analysis of the X-ray emission observed by the {\it Swift} X-ray Telescope. \par
\section{Results}
\subsection{UGSs in the WISH footprint}
Of the 575 UGSs in the 2FGL catalog, we discard all those that feature a `$c$' analysis flag, to rule out potentially confused or spurious detections due to the strong interstellar emission \citep{2FGL}. The resulting list of UGSs in the WISH footprint contains 27 $\gamma$-ray sources. Of these, 17 have at least one WISH and NVSS match (Table~\ref{tab:table}).\par
The total number of candidates is 31. Seven of these candidates have a radio spectral index in the range $-1.0 \leq \alpha_{352}^{1400} \leq 0.55$ and are considered {\it class A} candidates, while six are in the range $0.55 < \alpha_{352}^{1400} \leq 0.65$ and are considered {\it class B} candidates.
Sixteen out of the 31 candidates in the WISH footprint feature a {\it WISE}\ detection in at least two IR bands; three of them are {\it A class} candidates and two are {\it B class}. However, we do not consider a {\it WISE}\ detection to be necessary for selection. IR data (see Table~\ref{tab:WISE}) are taken from the \textit{AllWISE} November 2013 release \citep{ALLWISE}. Flat-spectrum radio quasars at high redshift or high synchrotron peaked BZBs could be too faint in the IR to be detected in one or more {\it WISE}\ bands; moreover, the relative astrometry of the WISH and NVSS surveys might be poorer in this region \citep{U5} with respect to that of WENSS and NVSS.
All considered, we propose blazar-like associations for eight UGSs (i.e., five \textit{A class} and two \textit{B class} candidates, plus one special case, WNB 2255.7$-$1833). In particular, combining with {\citetalias{U3}}, we increase the numbers of {\it class A} and {\it class B} associations to 19 and 8, respectively.
\par
\subsection{Multiwavelength observations and radio flatness of low-frequency selected blazar candidates}
In this work and in \citetalias{U3}, we evaluate the flatness of the radio spectrum according to Eq.~\ref{eq:index} on the basis of just two non-simultaneous flux density measurements, at 1.4\,GHz (NVSS) and at low frequency (352\,MHz for the WISH or 325\,MHz for the WENSS). For this reason, it is important to find other radio flux density data to better investigate the spectral and variability properties of the candidates. We searched the literature for other radio surveys performed in the WENSS and WISH footprints; we consider here all the candidates found in both \citetalias{U3} and the present work.\par
In the WENSS footprint, we found that six \textit{A class} and two \textit{B class} candidates are listed in the GB6 survey at 5\,GHz. We report them in Table \ref{tab:regr1} along with the radio spectral index, obtained with a weighted linear regression on the measurements at the three frequencies. For all these sources, the 5 GHz flux density is in good agreement with the extrapolation of the low-frequency spectrum, confirming that the radio spectrum is flat and flux density variability is not dramatic. A sample spectrum is shown in Fig.~\ref{fig:regr1} (left panel).\par
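Such a weighted linear regression can be implemented in a few lines; the following Python sketch (our own helper, not part of the original pipeline; the input values are hypothetical) fits $\log S$ versus $\log \nu$ with weights derived from the flux density errors:
\begin{verbatim}
import numpy as np

def powerlaw_index(nu_mhz, s_mjy, s_err):
    # Weighted least-squares fit of log10(S) vs log10(nu);
    # returns (alpha, alpha_err) for S_nu ~ nu^-alpha.
    x = np.log10(np.asarray(nu_mhz, dtype=float))
    y = np.log10(np.asarray(s_mjy, dtype=float))
    # 1-sigma errors propagated to log10(S): sigma_y = s_err/(S ln 10)
    w = (np.asarray(s_mjy) * np.log(10.0) / np.asarray(s_err))**2
    W, Wx, Wy = w.sum(), (w * x).sum(), (w * y).sum()
    Wxx, Wxy = (w * x * x).sum(), (w * x * y).sum()
    delta = W * Wxx - Wx**2
    slope = (W * Wxy - Wx * Wy) / delta
    return -slope, np.sqrt(W / delta)

# hypothetical source with WENSS, NVSS, and GB6 measurements [mJy]
print(powerlaw_index([325., 1400., 4850.],
                     [210., 180., 150.], [15., 6., 10.]))
\end{verbatim}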
In the WISH footprint, we find that one \textit{B class} candidate (WNB 1251.8$-$2148, alias NVSS 125429$-$220419) is also listed in the AT20G. The data and the radio spectral index regression are reported in Table \ref{tab:regr2} (see also Fig.~\ref{fig:regr1}, right panel). In this case, the extrapolation of the low-frequency power law clearly fails to match the high-frequency data, suggesting prominent variability or a rather complex spectral shape, with a strongly inverted component above a few GHz; either way, the source behavior is consistent with the BZB class.\par
\section{Discussion and conclusions}
\subsection{Comparison with Paper III}
As we already noted, the WISH and WENSS surveys are very similar but, a posteriori, we can draw some useful conclusions regarding the impact of their different features on our study.
The poorer resolution in declination of WISH ($\sim2\times$ on average) mainly affects the spatial association and not the parameters on which the counterpart is selected. Since spatial association is calibrated with a higher resolution survey like WENSS, applying it to the WISH region and catalog results in a more conservative approach. The fact that we found almost the same ratio (i.e. $\sim1.8$) of candidates per UGS in both WENSS and WISH (58 WENSS candidates for 32 UGSs and 31 WISH candidates for 17 UGSs) indicates that, on average, spatial association is not compromised. \par
Similarly, the resolution discrepancy cannot be blamed for any impact on the quality of the associations proposed among the candidates, given the similar rate of $\gamma$-ray blazar-like associations in the two surveys. In fact, in \citetalias{U3} we propose 21 (including one special association that is neither {\it class A} nor {\it class B}; see {\citetalias{U3}}) new $\gamma$-ray blazars out of 65 UGSs in the WENSS region (i.e., \ $32\%$) and 8 out of 27 in the WISH region (i.e., \ $30\%$)\footnote{The small difference between these two fractions can be explained, in addition to simple statistics, on the grounds of the slightly different starting lists, since in {\citetalias{U3}}\ we considered only $\gamma$-ray sources without {\it any} analysis flag, while here we excluded only the sources with the {\it c} $\gamma$-ray analysis flag in the 2FGL \citep{2FGL}.}. \par
\subsection{Comparison with other methods}
Other methods have been suggested to recognize low-energy counterparts to UGSs, or at least to provide a statistically significant classification of these sources. For instance, \citet{Ackermann2012} have developed a statistical approach to classify UGSs in the first catalog of {\it Fermi}\ sources \citep[1FGL,][]{Abdo2010}. Six of the 2FGL UGSs in the WISH footprint are associated with 1FGL sources analyzed by \citet{Ackermann2012}, five of which are AGN-like and one of which is pulsar-like. Within the former sample, there are two sources for which we propose an association with a blazar counterpart on the basis of the low-frequency spectrum: 2FGL J2017.5$-$1618 and 2FGL J2358.4$-$1811. Interestingly, for the single UGS for which \citet{Ackermann2012} propose a pulsar classification (2FGL J1544.5$-$1126), our method finds a WENSS-NVSS match with a rather steep spectral index ($\alpha=0.74\pm0.03$), disfavoring a blazar scenario.
We further compared our results with the proposed associations found in the other papers of this series. In Figure~\ref{f.wgs} we show the comparison between the distributions in the IR $[3.4]-[4.6]-[12]\, \mu$m color-color plane provided by {\it WISE}\ for the $\gamma$-ray emitting blazars and for the sources selected in this work. The overall distribution of the whole set of WISH-NVSS matches (black, red, and green dots) is considerably more scattered than that of the $\gamma$-ray blazar population (orange dots). However, when we only consider the low-frequency selected blazar candidates, all of the most prominent outliers are excluded and the remaining five sources (black and red dots) are in much better agreement with the IR colors of $\gamma$-ray blazars. In two cases, the $A$-class candidates NVSS 120900$-$231335 and NVSS 222830$-$163643, the agreement is very good. For the latter source, associated with 2FGL\, J2228.6$-$1633, our method also provides the same counterpart candidate selected on the basis of the kernel density estimation technique applied to the IR colors of the WISE counterparts of X-ray \citep{U4} and radio \citep{U5} sources.
\subsection{Radio flux density analysis}
The $\gamma$-ray blazar candidates selected in this work and in {\citetalias{U3}}\ have, by selection, the same radio spectral properties as confirmed Roma-BZCAT blazars and 2FGL blazar associations. It is natural to wonder why these sources have not already been associated with the $\gamma$-ray sources, e.g., in the second catalog of AGNs detected by {\it Fermi}\ \citep{2LAC}. The sources could have been excluded from the 2LAC because they do not formally pass the threshold for being considered high-confidence associations (a test that does not take into account the spectral index); typically, this could happen for low flux density radio sources, which have a larger spatial density. It is thus likely that, together with a general similarity to the already known blazars, our candidates also have some peculiar characteristics.
For this reason, we show in Fig.~\ref{fig:hist} the distribution of the radio flux density at 1.4\,GHz for all the blazars in the 2LAC and for the candidates selected here and in {\citetalias{U3}}. The two distributions are clearly different, as confirmed by a Kolmogorov–Smirnov test, which yields a probability of $2.7\times10^{-15}$ that the two samples are drawn from the same population. In particular, the 2LAC blazar flux density distribution is shifted to much larger values. This strongly suggests that our method is very efficient at selecting faint blazars. These sources are potentially of great interest: if they are of the BZB type, they could belong to the extreme and elusive class of ultra-high synchrotron peaked sources; one prominent example could be WNB 2225.8$-$1652, which has an inverted low-frequency radio spectral index of $\alpha=-0.19\pm0.10$ and is our proposed association for the UGS 2FGL\, J2228.6$-$1633, characterized by a $\gamma$-ray spectrum as hard as $\Gamma=2.07\pm0.16$. On the other hand, some of our candidates could also be faint BZQs, in which case their low flux densities could stem from their high redshifts \citep[like WN 1500.0+4815 with an estimated redshift of 2.78, see][]{U3}, which would also make them of scientific value. No WISH sources have a measured redshift.
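For reference, such a two-sample comparison can be reproduced with standard tools; a minimal Python sketch (with placeholder arrays standing in for the real flux density samples) is:
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

# placeholder 1.4 GHz flux densities [mJy]: 2LAC blazars vs. candidates
rng = np.random.default_rng(1)
s_2lac = rng.lognormal(mean=5.5, sigma=1.0, size=800)
s_cand = rng.lognormal(mean=3.5, sigma=0.8, size=30)

stat, p_value = ks_2samp(s_2lac, s_cand)
print(stat, p_value)  # a tiny p-value rejects a common parent population
\end{verbatim}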
\subsection{Summary and outlook}
We have searched for counterparts of the 27 UGSs in the 1.6 sr footprint of the WISH survey, following the method described in {\citetalias{U3}}\ and based on the flat low-frequency spectrum characteristic of blazars. We have found blazar-like associations for 8 UGSs, which, together with the 23 sources selected in the WENSS footprint with the same method, provide a sample of 30 new $\gamma$-ray blazar associations. This sample extends the distribution of the radio flux density of $\gamma$-ray blazars to lower values, allowing us to study otherwise elusive AGNs.
The application of our method thus shows promising results in terms of the number of counterparts proposed and their physical features. In particular, the possibility of using low-frequency radio data to find UGS counterparts is even more important in the light of the imminent start of numerous studies in this frequency range, like the LOw Frequency ARray \citep[LOFAR,][]{van Haarlem2013}, the Murchison Widefield Array \citep[MWA,][]{Tingay2013}, the Long Wavelength Array \citep[LWA,][]{Ellingson2009}, and eventually the Square Kilometer Array \citep[SKA, e.g.][]{Dewdney2010}. These facilities will allow our method to be extended using even deeper and simultaneous datasets, while dedicated targeting of the candidates with optical spectroscopy and Very Long Baseline Interferometry observations will confirm the nature of the proposed associations. \par
\acknowledgements
We are grateful to S. Digel for providing us useful suggestions that improved the presentation of our results.
M.\ Giroletti acknowledges financial contribution from grant PRIN-INAF-2011. The work is supported by the NASA grants NNX12AO97G and NNX13AP20G.
R.\ D'Abrusco gratefully acknowledges the financial support of the US Virtual Astronomical Observatory, which is sponsored by the National Science Foundation and the National Aeronautics and Space Administration.
The work by G.\ Tosti is supported by the ASI/INAF contract I/005/12/0.
This research has made use of data obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC) provided by NASA's Goddard Space Flight Center; the SIMBAD database operated at CDS, Strasbourg, France; the NASA/IPAC Extragalactic Database (NED) operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Part of this work is based on the NVSS (NRAO VLA Sky Survey); The National Radio Astronomy Observatory is operated by Associated Universities, Inc., under contract with the National Science Foundation. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
In the framework of hierarchical formation theories, it is believed
that the mass clustering develops due to the gravitational instability
caused by the intrinsic, tiny Gaussian density perturbations that existed in the early
universe (see, e.g., Frenk et al. 1996 for a review). Since the
initial conditions and input physics are assumed to be scale-free,
dark halos are expected to have evolved
self-similarly in time, which can be described by the scaling laws over many
fundamental parameters (Navarro
et al. 1995 and references therein), as has been advocated by
numerical simulations in which spherical collapse
and adiabatic gas dynamics are assumed (e.g., Navarro et al. 1996; Eke et al. 1998).
On the observational aspect, tight correlations between X-ray luminosity,
gas temperature, gas entropy, X-ray isophotal radius, and virial mass of galaxy clusters have been
discovered successively with {\em Einstein}, {\em ASCA}, and
{\em ROSAT} (e.g., Mushotzky 1984; Mohr \& Evrard 1997; Horner et al. 1999).
These correlations, however, often show notable deviations from theoretical
predictions. For example, in the early 1990s, a significant scatter in the measured
luminosity-temperature relation was reported by, e.g., Edge \& Stewart (1991) and Fabian
et al. (1994b), who showed that the cool-core galaxy clusters are more luminous at a given
temperature. The observed mass-temperature relation, on the other hand, shows a lower
($\sim 40\%$)
normalization and a steeper slope with respect to the prediction of adiabatic models (e.g.,
Evrard et al. 1996; Xu et al. 2001). Moreover, an excess in the gas entropy was found
in the central regions ($< 0.05R_{200}$) of many low temperature systems (Ponman et al. 1999, 2003),
which forms a plateau of extra energy ($\sim 1-3$ keV per particle; Tozzi \& Norman 2001)
on the entropy-temperature curve.
These deviations and mismatches, which have been confirmed with {\em Chandra}\/\ and {\em XMM-Newton}\/\ observations
recently (e.g., Arnaud et al. 2005; Kotov \& Vikhlinin 2006), clearly indicate that our
understanding of the thermal history of galaxy clusters
is far from complete, and the roles played by non-gravitational heating processes
need to be evaluated properly (e.g., Kaiser 1991; Evrard \& Henry
1991; Kauffmann et al. 1993), which will also help solve the cooling flow problem
(Fabian 1994a; Makishima et al. 2001; Peterson et al. 2003).
As of today, a number of heating sources have been proposed, which include
merger shocks, cosmic rays, supernovae, thermal conduction, turbulent dissipation
and diffusion, and AGN feedbacks (e.g., McNamara \& Nulsen 2007; Markevitch \& Vikhlinin 2007).
However, none of the proposed candidates can simultaneously halt
the gas cooling and break the self-similarities on both galactic
and cluster scales. For example, recent observations (e.g., Croston et al. 2005; Jetha et al. 2007)
showed that AGN activity,
the most representative heating source of today, tends to deposit most of its energy
into the vast intergalactic space, rather than within the host galaxy ($<0.05 R_{500}$).
Clearly, detailed mapping of gas temperature gradients in the ICM is necessary as a crucial
observational constraint in order to correctly investigate the gas heating history.
Two-dimensional gas temperature variations have been observed in many galaxy clusters with
{\em Chandra}\/\ and {\em XMM-Newton}. By analyzing the
temperature map generated with the high-quality {\em Chandra}\/\ data, Fabian et al. (2000)
identified a pair of cool gas shells that coincide with two radio bubbles, whose radii are
$\sim 6$ and 8 $h_{70}^{-1}$ kpc, respectively, in the innermost region of the Perseus Cluster.
The authors suggested that the cool gas shells were formed as the buoyant gas detached from
the terminals of AGN jets and inflated in pressure equilibrium with the surrounding gas.
A similar phenomenon was also observed in, e.g., the Centaurus cluster (Fabian et al. 2005), Abell 2052 (Blanton
et al. 2003), and Hydra A (Nulsen et al. 2002). Also, in
clusters that show clear merger signatures, such as 1E 0657-56 (Govoni et al. 2004), Abell
3667 (Vikhlinin et al. 2001), Abell 665, and Abell 2163 (Markevitch \& Vikhlinin 2001), conspicuous
temperature variations were copiously found, which are presumably caused
by the rapid adiabatic expansion of the shock-heated gas.
So far, however, in most of
the existing works, little has been done to measure the morphologies of the temperature
substructures and to investigate their origins in a quantitative way. One of the few exceptions was
presented by Shibata et al. (2001), who calculated the two-point correlation function of the {\em ASCA}\/\
hardness ratio map of the Virgo Cluster, and found that there is a characteristic scale of
$\sim 300$ $h_{70}^{-1}$ kpc for the two-dimensional temperature variations. The authors associated this
characteristic scale with the size of the gas blobs arising during the infall of sub-groups.
By analyzing the two-dimensional temperature distribution with the {\em XMM-Newton}\/\ data, Schuecker et al.
(2004) reported the presence of a turnover at $\sim 100$ $h_{70}^{-1}$ kpc on the pressure
fluctuation spectrum of the Coma Cluster, and attributed it to the existence of Kolmogorov/Oboukhov-type
turbulent flows. As of today, due to the lack of more such quantitative studies, it is not clear
whether a characteristic scale of $\sim 100$ $h_{70}^{-1}$ kpc indeed prevails in
galaxy clusters and groups, and to what extent it can be used to constrain the heating models
in the ICM.
In this work we analyze the {\em Chandra} archive data to search for possible
characteristic scales of temperature variations in a sample of nine intermediate-redshift clusters.
In \S 2 \& 3, we describe the sample selection criteria and data
analysis, respectively. In \S 4, we discuss the physical implications of the results.
In \S 5, we summarize our work. We assume $H_0=70$ km s$^{-1}$
Mpc$^{-1}$, a flat universe for which $\Omega_0=0.3$ and
$\Omega_\Lambda=0.7$, and adopt the solar abundance standards of Grevesse \&
Sauval (1998).
\section{SAMPLE AND DATA PREPARATION}
In order to balance the angular resolution against the detector coverage of the targets,
we construct our sample with nine intermediate-redshift ($z \simeq 0.1$) galaxy clusters,
so that the central 400 $h_{70}^{-1}$ kpc regions of the clusters can be fully, or nearly
fully covered by the S3 CCD of the {\em Chandra}\/\ advanced CCD imaging spectrometer (ACIS)
instrument. All the selected clusters have a $0.7 - 8.0$ keV flux greater than 5.0 $\times
10^{-12}$ erg $\rm s^{-1}$ $\rm cm^{-2}$. We list some basic properties of these
clusters in Table 1, which are arranged in the following order: source
names (col. [1]), names of the cD galaxies (col. [2]), redshifts (col.
[3]), richness classes (col. [4]), right ascension and declination
coordinates (J2000) of the cluster optical centroids (col. [5] \& [6]), and notes (col. [7]).
On the optical and infrared images extracted from the DSS\footnote{http://archive.stsci.edu/dss/} (Fig. 1) and 2MASS\footnote{http://www.ipac.caltech.edu/2mass/}
archives, we do not find any strong evidence for
pronounced signatures caused by major mergers on cluster scales, such as
off-center optical and/or infrared substructures.
A478, A1068, A1650, A2244, and A3112
possess a central galaxy that has been classified as a cD, whose size is
$> 3$ times that of any other member galaxy (Rood \& Sastry 1971; Takizawa et al. 2003).
The remaining four clusters are dominated by a pair or a clump of brightest
member galaxies (Struble \& Rood 1987). In addition, Schombert
et al. (1989) reported the existence of a low-velocity
(the radial velocity difference is $\Delta v_{\rm r} \approx$ 50 km $\rm s^{-1}$)
companion galaxy dwelling deep inside the envelope of the cD galaxy
of A2244.
Strong radio emission associated with AGN activity
has been detected in four clusters. In A478, two
symmetric radio lobes are identified at 1.4 GHz, which extend about
$3^{\prime\prime}$ towards northeast and southwest, respectively
(Sun et al. 2003). In A1068, two compact radio sources are resolved, which
are consistent with the locations of the cD galaxy and SDSS J104043.45+395705.3
(a member galaxy located at about $13^{\prime\prime}$ southwest of the
cluster center), respectively (McNamara et al. 2004).
A2204 and A3112 are each reported to possess an extended radio
halo that roughly traces the spatial distribution of the X-ray gas (Sanders
et al. 2005; Takizawa et al. 2003, respectively). In the remaining five clusters, neither radio
lobes nor luminous radio halos have been found in the central region in previous works,
suggesting that they have not experienced strong AGN
outbursts during the past few tens of Myr, if the lifetime of such radio
sources can be estimated by $t_{\rm sync} = \frac{9 m_{\rm e}^3 c^5}{4 e^4 \bar{B}^2 \gamma_{\rm e}}$
and $\bar{B} \sim 10$ $\mu$G (Takahashi \& Yamashita 2003; Donahue et al. 2005).
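(For illustration: electrons radiating near 1.4 GHz in a $\bar{B} \sim 10$ $\mu$G field have
$\gamma_{\rm e} \sim 6 \times 10^{3}$, for which the above expression yields
$t_{\rm sync} \sim 1.3 \times 10^{15}$ s, i.e., $\approx 40$ Myr.)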
As a unique case in our sample, A2556 harbors an off-center extended radio
source at about $40^{\prime\prime}$ northeast of the center, which is often speculated
to be the relic of a merger-driven shock (e.g., Govoni et al. 2004).
All the X-ray data analyzed in this work were acquired with the ACIS S3 CCD (Table 2),
with the focal plane temperature set to $-120\,^{\circ}{\rm C}$.
Using the CIAO v4.0 software and CALDB v3.5.0, we removed the bad pixels and columns,
as well as events with {\em ASCA}\/\ grades 1, 5, and 7, and then executed the gain, CTI, and
astrometry corrections. In order to identify possible strong background flares,
lightcurves were extracted from regions sufficiently far away, i.e., $\geq 4$$^{\prime}$\ from the X-ray
peaks. Time intervals during which the count rate
exceeds the average quiescent value by 20 percent were excluded. The S1
chip data were used to crosscheck the results when available.
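Schematically, this flare screening amounts to a simple cut on the binned off-source light curve; a minimal Python sketch of this step (the file name and the use of the median as the quiescent estimator are illustrative assumptions of ours) is:
\begin{verbatim}
import numpy as np

# count rates of a binned, off-source light curve (hypothetical file)
rate = np.loadtxt("lc_offsource.txt")
quiescent = np.median(rate)       # robust estimate of the quiescent rate
good = rate <= 1.2 * quiescent    # drop bins exceeding it by > 20 percent
print(f"kept {good.sum()} of {rate.size} time bins")
\end{verbatim}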
\section{X-RAY IMAGING AND SPECTRAL ANALYSIS}
\subsection{X-Ray Images and Surface Brightness Profiles}
Figure 1 shows the intensity contours of the X-ray emission in $0.3-10$ keV
of our sample clusters, which overlay the corresponding optical DSS images.
In all cases, the X-ray morphology appears to be
smooth and symmetric on $> 100$ $h_{70}^{-1}$ kpc scales, showing
no remarkable twists, fronts, or edges that indicate recent major
mergers. On smaller scales, however,
there exist substructures, such as X-ray cavities coinciding with
the AGN lobes (Sun et al. 2003), stripes, filaments,
and arc-like sharp fronts (Wise et al. 2004; Sanders et al. 2005).
For each cluster, we examine the vignetting-corrected surface
brightness profile (SBP) extracted in $0.7-8.0$ keV, after
filtering all the point sources that are detected above the 3$\sigma$
level of the local background by using both the celldetect and
wavdetect tools with their default settings (Fig. 2). By applying the $\chi^2$-test, we find
that, except for the SBP of A2204, which can be well described with
a single-$\beta$ model $S(r) = S(0) (1+(r/r_c)^2)^{-3\beta + 1/2}$,
all the obtained SBPs are consistent with the empirical
two-$\beta$ model $S(r) = S_1(0) (1+(r/r_{c1})^2)^{-3\beta_1 + 1/2} +
S_2(0) (1+(r/r_{c2})^2)^{-3\beta_2 + 1/2} $ (e.g., Ikebe et al. 1996), because
in the central $10 - 20$ $h_{70}^{-1}$ kpc
(A1650, A2556, and A3112) or $20 - 40$ $h_{70}^{-1}$ kpc (A478,
A1068, A1201, A2104, and A2244) there exists an emission excess
beyond the $\beta$ model that best describes
the SBPs of the outer regions. Such an excess is often seen in
clusters that host a central dominating galaxy, which is usually
classified as a cD (Makishima et al. 2001).
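For concreteness, the $\chi^2$ comparison between the single- and two-$\beta$ models can be sketched in Python as follows (synthetic data stand in for a real SBP, and all parameter values are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def beta_model(r, s0, rc, beta):
    # S(r) = S(0) * (1 + (r/rc)^2)^(-3*beta + 1/2)
    return s0 * (1.0 + (r / rc)**2)**(-3.0 * beta + 0.5)

def two_beta_model(r, s1, rc1, b1, s2, rc2, b2):
    # empirical sum of two beta components
    return beta_model(r, s1, rc1, b1) + beta_model(r, s2, rc2, b2)

# synthetic profile: radii [kpc], brightness, and 5% errors
r = np.linspace(5.0, 400.0, 60)
rng = np.random.default_rng(0)
model = two_beta_model(r, 10.0, 20.0, 0.7, 2.0, 150.0, 0.65)
sbp = model * (1.0 + 0.05 * rng.standard_normal(r.size))
err = 0.05 * model

popt, _ = curve_fit(two_beta_model, r, sbp, sigma=err,
                    p0=(10.0, 20.0, 0.7, 2.0, 150.0, 0.65))
chi2 = np.sum(((sbp - two_beta_model(r, *popt)) / err)**2)
print(popt, chi2 / (r.size - 6))  # reduced chi-square
\end{verbatim}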
\subsection{Azimuthally-averaged Spectral Analysis}
\subsubsection{Background}
In the spectral analysis that follows, we utilized the {\em Chandra}\/\
blank-sky templates for the S3 CCD as the background. The templates
were tailored to match the actual pointings, and the background spectra
were extracted and processed identically to the source spectra (\S 3.2.2).
After rescaling each background spectrum by normalizing its high
energy end to the corresponding observed spectrum, we further enhance the
accuracy of the background by adding an unabsorbed APEC model with
fixed temperature (0.2 keV) and metal abundance (1 $Z_\odot$; Snowden et
al. 1998) to account for variations in the Galactic X-ray background
(GXB; Vikhlinin et al. 2005). We note that the GXB component is significant
only in A2204, where it has a flux of 2.0 $\times 10^{-14}$ erg $\rm s^{-1}$
$\rm cm^{-2}$ $\rm arcmin^{-2}$ in $0.3-1.0$ keV and is nearly uniform over
the S3 chip. This was also reported by Sanders et al. (2009), who indicated
that it is probably due to the excess Galactic foreground.
\subsubsection{Models and Results}
Here we calculate the azimuthally averaged temperature profiles,
which will be used to create the reference temperature maps applied
in wavelet detection (\S 3.3.2) and calculation of excess energies
in the temperature substructures (\S 4).
We divided the inner 400 $h_{70}^{-1}$ kpc of each cluster into
concentric annuli, masked all the detected point sources
($\simeq 15$ point sources per cluster), and
extracted the spectrum from each annulus. The corresponding spectral
redistribution matrix files (RMFs) and ancillary response files (ARFs)
were generated using the CIAO tools mkacisrmf and mkwarf, respectively. The spectra
were grouped with $> 20$ counts per bin to allow the use of
$\chi^2$ statistics. We used the XSPEC 12.4.0 package to fit the spectra,
by modeling the gas emission
with an absorbed APEC component and employing the PROJCT model
to correct for the projection effect. The
absorption column density $N_{\rm{H}}$ was fixed to the Galactic value (Dickey \&
Lockman 1990) in all cases, except for A478, for which $N_{\rm{H}}$ was
set to be about twice the Galactic value (Sanderson et al. 2005; Table 3);
since the absorption excess in A478 exhibits a spatial extension beyond the
cluster's core region, Pointecouteau et al. (2004) argued that it is likely to be associated with a local absorption
substructure in our Galaxy. In order to calculate the error ranges of
the model parameters, we executed the XSPEC command steppar and
carried out multiple iterations at every step, to ensure that the
actual minimum $\chi^2$ is found. The best-fit gas temperature and
abundance profiles are illustrated in Figure 3, in which multiple sets of annuli
are used for each cluster to crosscheck the results. We note that the inclusion of
an additional gas component could only slightly improve the fits for
the central regions of A478, A1650, A2204, A2244, and A3112, which is
insignificant as indicated by the $F$-test.
A significant temperature decline by a
factor of $> 2$ is observed in the central regions of
A478, A1068, A2204, A2556, and A3112, four of which (except A2556)
possess radio halos and/or lobes in the center (\S 2). In the remaining four
clusters without apparent radio emission, the
inward temperature drop is somewhat ambiguous, as the difference between
the core temperature and mean cluster temperature, which refer
to the temperatures measured in the central 100 $h_{70}^{-1}$ kpc
($\sim 0.1r_{500}$) and $100-400$ $h_{70}^{-1}$ kpc regions (Fig. 3),
respectively, is within the 3$\sigma$ error ranges. If we apply the
classification criteria of Sanderson et al. (2006), the latter four clusters are
non-cooling-core clusters, whose central regions
have not yet fully condensed. These results reflect the
trend for the cooling core clusters to host central radio sources
(Burns 1990). As shown in Figure 3, we also adopt Eqs.$(4)-(6)$ of Vikhlinin et
al. (2006) to describe the obtained radial temperature profiles
in a smooth form, which will be used in \S 3.3.2 to calculate the
reference temperature maps in order to examine the significances of
detected temperature substructures, and in \S 4 to calculate the
excess energies in these substructures.
In A478, A1068, A1650, A2204, A2556, and A3112, the
abundance increases inwards significantly (68\% confidence level)
from $\approx 0.2$ $Z_\odot$ in the outermost annuli to $> 0.6$ $Z_\odot$
in the innermost regions ($\lesssim 30$ $h_{70}^{-1}$ kpc).
The emission measure-weighted gas temperature and abundance of each
cluster are also calculated (Table 3); they are
in good agreement with previous results obtained with {\em Chandra}\/\ and
{\em XMM-Newton}\/\ (Sun et al. 2003; Takahashi \& Yamashita 2003; Takizawa et al.
2003; Pointecouteau et al. 2004; Wise et al. 2004; Donahue et al. 2005;
Sanders et al. 2005; Sanderson et al. 2005).
\subsection{Two-Dimensional Analysis}
\subsubsection{Projected Temperature Maps}
Following the methods of, e.g., O'Sullivan et al. (2005), Maughan et
al. (2006), and Kanov et al. (2006), we define a set of knots ${\bf
r_i}$ ($>5000$ knots per cluster) in the central 300 $h_{70}^{-1}$ kpc
region, which are randomly distributed with a separation of $\Delta_{ij}$ $<
5^{\prime\prime}$ between any two adjacent knots $i$ and $j$. To each knot we assign
an adaptively sized circular cell, which is centered on
the knot and contains more than 800 counts in $0.7-8$ keV after the
background is subtracted. The typical cell radius ranges from $\lesssim 5$$^{\prime\prime}$\
at the center to about 10$^{\prime\prime}$\ at $r
\geq 150$ $h_{70}^{-1}$ kpc, which is always larger than the associated $\Delta_{ij}$.
We calculate the projected temperature of each cell $T_{\rm c}({\bf r_i})$,
which is assigned to the corresponding knot,
by fitting the spectrum extracted from the cell with
an absorbed APEC model, using the absorption column given in Table 3;
adding another thermal component can neither improve the fit
significantly, nor constrain the parameters more tightly. In
the fittings, the source spectrum is grouped to have $> 20$ photons
in each energy bin. In addition to the standard $\chi^2$ statistics,
we also have applied the maximum likelihood statistics, and find
that the best-fit parameters obtained with the two statistics
agree nicely with each other within the $1\sigma$ error ranges. The
calculated $1\sigma$ errors of gas temperature are typically $<10$\% in the
central 100 $h_{70}^{-1}$ kpc, which increase progressively to $\lesssim$ 15\%
(A478, A1650, A2244, and A3112) or $\lesssim$ 20\% (the remaining five
clusters) in $100-300$ $h_{70}^{-1}$ kpc.
For any given position ${\bf r}$, we define a scale $s({\bf r})$, so that there
are at least 800 counts contained in a circular region centered at ${\bf r}$,
whose radius is $s({\bf r})$. We calculate the
projected temperature at ${\bf r}$ by integrating over all the knots ${\bf r_i}$
located in the circular region,
\begin{equation} \label{eq:tmap}
T({\bf r}) = \frac{\sum_{\bf r_i} G_{\bf r_i}(R_{{\bf r},{\bf r_i}})\,T_{\rm c}({\bf r_i})}{\sum_{\bf r_i} G_{\bf r_i}(R_{{\bf r},{\bf r_i}})}, \quad \mbox{when } R_{{\bf r},{\bf r_i}} < s({\bf r}),
\end{equation}
where $R_{{\bf r},{\bf r_i}}$ is the distance from ${\bf r}$ to ${\bf r_i}$, and $G_{\bf r_i}$ is the Gaussian kernel whose scale
parameter $\sigma$ is fixed at $s({\bf r_i})$. Since $s({\bf r})$ is essentially proportional
to the square root of the local counts, the obtained temperature map $T({\bf r})$ (Fig. 4) is
less affected by the statistical uncertainties caused by surface brightness
fluctuations. In addition, angular resolutions of $\sim 10$$^{\prime\prime}$\ are guaranteed by
the use of compact Gaussian kernel. Maps for the upper
and lower limits of $T({\bf r})$ are calculated in a similar way.
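For concreteness, the kernel interpolation of Eq.~(1) can be sketched in
Python/NumPy as follows; the knot positions, cell temperatures $T_{\rm c}({\bf r_i})$,
and scales $s({\bf r_i})$ are assumed to be precomputed as described above, and the
brute-force pixel loop is purely illustrative rather than an optimized implementation.
\begin{verbatim}
import numpy as np

def temperature_map(knots, T_c, s_knot, shape, s_of_r):
    # knots : (N, 2) knot positions r_i (pixel coordinates)
    # T_c   : (N,) cell temperatures assigned to the knots
    # s_knot: (N,) kernel scales s(r_i), used as the Gaussian sigma
    # s_of_r: callable returning the local integration radius s(r)
    T_map = np.zeros(shape)
    for iy in range(shape[0]):
        for ix in range(shape[1]):
            r = np.array([ix, iy])
            d = np.hypot(*(knots - r).T)   # distances R_{r, r_i}
            sel = d < s_of_r(r)            # knots inside the local circle
            if not sel.any():
                continue
            G = np.exp(-0.5 * (d[sel] / s_knot[sel]) ** 2)
            T_map[iy, ix] = np.sum(G * T_c[sel]) / np.sum(G)
    return T_map
\end{verbatim}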
\subsubsection{Temperature Substructures}
\noindent{\bf Wavelet Transform}
\indent A visual inspection of the obtained temperature maps (Fig. 4)
indicates that there exist significant substructures, which typically
have linear sizes of $\sim 100$ $h_{70}^{-1}$ kpc, in all of the
sample clusters. In order to describe these substructures in a quantitative way, we
analyze the statistical properties of the temperature maps by applying the wavelet transform,
a powerful tool in image processing that identifies and disentangles small
embedded features as a function of scale (e.g., Vikhlinin et al. 1997;
Starck \& Pierre 1998). In an analogous study, Bourdin \& Mazzotta (2008)
applied the wavelet transform to achieve an optimized spatial binning
of temperature maps. We employ a non-orthogonal version of discrete
wavelet transform, the \`{a} Trous wavelet (Shensa 1992; Starck et
al. 1995; Vikhlinin et al. 1998), which
defines a straightforward decomposition rule. To be specific, at position
${\bf r}=(x,y)$ and scale level $i$ ($i=1,2,...$), the \`{a} Trous wavelet
$w_i(x,y)$ equals the signal difference between two smoothed maps
$c_{i-1}(x,y) - c_i(x,y)$, where $c_{0}(x,y)$ is the obtained temperature
map $T(x,y)$, and $c_{i}(x,y)$ connects with $c_{i-1}(x,y)$ via
$c_{i}(x,y) = \sum_{m,n} h(m,n) c_{i-1}(x+2^{i-1}m,y+2^{i-1}n)$,
in which $h(m,n)$ is the convolution kernel, and $(m,n)$ is its grid.
Here $h(m,n)$ is chosen to be
a compact Gaussian, $h(m,n) = \frac{1}{2\pi}\,e^{-(m^2+n^2)/2}$. Each temperature
map is thus decomposed into a set of subimages $w_i(x,y)$ on nine scales
($2^i$ pixels, $i=1,2,\ldots,9$).
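A minimal Python sketch of this decomposition rule, with the kernel sampled on a
$5\times5$ grid (an assumption made here for illustration) and dilated by
$2^{i-1}$ at scale $i$, reads:
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve

def a_trous(image, n_scales=9):
    m = np.arange(-2, 3)
    mm, nn = np.meshgrid(m, m)
    h = np.exp(-(mm**2 + nn**2) / 2.0)
    h /= h.sum()                        # normalized Gaussian kernel h(m,n)
    c_prev = image.astype(float)
    planes = []
    for i in range(1, n_scales + 1):
        step = 2 ** (i - 1)             # dilate the kernel grid by 2^(i-1)
        k = np.zeros((4 * step + 1, 4 * step + 1))
        k[::step, ::step] = h
        c_i = convolve(c_prev, k, mode='reflect')
        planes.append(c_prev - c_i)     # wavelet plane w_i = c_{i-1} - c_i
        c_prev = c_i
    return planes, c_prev               # w_1..w_9 and the smooth residual
\end{verbatim}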
Based on the obtained $w_i(x,y)$ maps, we calculate the significant
wavelet coefficient $W_{i}(x,y)$ to detect the temperature
substructures on different scales. For each cluster, we create a set
of random knots ${\bf r_i}$ as used in \S 3.3.1, and assign a
randomized temperature $T_{\rm c}'({\bf r_i})$ to each knot, which
is constrained by the measured temperature profile $T(r)$ and its
error ranges (Fig. 3). By applying Eq.(1) to the $T_{\rm c}'({\bf
r_i})$ map, we obtain a reference temperature map $T_{\rm ref}({\bf
r})$, in which the effects of the radial temperature gradients and
knot tessellation are both involved. A series of such reference
temperature maps are created by repeating this random process, and
each reference map is decomposed into subimages $w_{i,\rm ref}(x,y)$
on nine scales by performing the same \`{a} Trous wavelet transform
as described above. On each scale $i$, we determine the significant
coefficient as $W_{i}(x,y) \equiv w_i(x,y)$ when $w_i(x,y) \geq 3
\sigma_{i,\rm ref}$, and $W_{i}(x,y) \equiv 0$ when $w_i(x,y) < 3
\sigma_{i,\rm ref}$ (e.g., Slezak et al. 1994; Starck et al. 2003),
where $\sigma_{i,\rm ref}$ is the standard deviation of the
simulated $w_{i,\rm ref}(x,y)$ maps so that 3$\sigma_{i,\rm ref}$ is
equivalent to a confidence interval of $1.3 \times 10^{-3}$ for
Gaussian noise. A similar method was used to determine the actual
significances of diffuse sources on {\em ROSAT}\/\ images (e.g., Rosati et
al. 1995; Grebenev et al. 1995; Damiani et al. 1997).
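The hard thresholding against the Monte-Carlo reference maps then amounts to the
following simple operation (a sketch; the reference planes are assumed to be the
decompositions of the randomized maps described above):
\begin{verbatim}
import numpy as np

def significant_coefficients(w_planes, ref_planes_list, nsigma=3.0):
    W = []
    for i, w in enumerate(w_planes):
        # pool the i-th plane of every randomized reference map
        ref = np.stack([planes[i] for planes in ref_planes_list])
        sigma_ref = ref.std()
        W.append(np.where(w >= nsigma * sigma_ref, w, 0.0))
    return W
\end{verbatim}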
Since the \`{a} Trous transform is essentially a non-orthogonal
algorithm, the obtained coefficient $w_i(x,y)$, and thus $W_i(x,y)$, may be
systematically biased from the exact solution (Starck \&
Pierre 1998). This bias can be corrected by using an iterative method (Van
Cittert 1931; Biviano et al. 1996; Starck et al. 2003; Bourdin et al. 2004), the iteration
routine of which is defined as
\begin{equation} \label{eq:vancittert}
W_i^{n}(x,y) = \left \{
\begin{array}{ll}
W_i^0(x,y) + W_i^{n-1}(x,y) - PW_i^{n-1}(x,y) & \quad W_i^{0}(x,y)\neq0, \\
W_i^{n-1}(x,y) & \quad W_i^{0}(x,y)=0,
\end{array}
\right .
\end{equation}
where $W_i^n(x,y)$ is the corrected detection on scale $i$ after $n$
iteration(s) ($W_i^0(x,y) \equiv W_i(x,y)$ is the initial run),
and $P$ is a non-linear synthesized operator of inverse wavelet
transform, wavelet transform, and filtering operation using
$3\sigma_{i,\rm ref}$ as the threshold. Typically the calculation
converges within five iterations, which yields a corrected $W_i(x,y)$
map as the unbiased reconstruction of the temperature substructures
on scale $i$ (Fig. 4). We define the wavelet spectrum as $\left <\mid W_i(x,y) \mid^2\right >$
(Torrence \& Compo 1998), and plot it in Figure 5 as a function of scale
for each cluster.
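A minimal sketch of this correction, writing the operator $P$ explicitly as a
filtered re-decomposition of the reconstruction, reads (here the inverse transform
is approximated by the plain sum of the planes, with the smooth residual ignored):
\begin{verbatim}
import numpy as np

def van_cittert(W0, sigma_ref, decompose, n_iter=5, nsigma=3.0):
    # decompose: image -> list of wavelet planes (the a trous transform);
    # W0: initial significant planes W_i^0; sigma_ref: per-scale noise levels
    W = [w.copy() for w in W0]
    for _ in range(n_iter):
        # P = filter(decompose(reconstruct(W)))
        planes = decompose(sum(W))
        PW = [np.where(p >= nsigma * s, p, 0.0)
              for p, s in zip(planes, sigma_ref)]
        W = [np.where(w0 != 0, w0 + w - pw, w)   # Eq. (2)
             for w0, w, pw in zip(W0, W, PW)]
    return W
\end{verbatim}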
As a crosscheck, we also employ the
two-dimensional B-spline function (e.g., Bourdin \& Mazzotta 2008) as the scaling function $\phi(x,y)$
to perform a compact \`{a} Trous transform in the Fourier domain
$(u,v)$, so that the Fourier transform of the wavelet coefficient $w_i(x,y)$
is directly calculated as $\hat{w}_{i}(u,v) =
(1-\hat{h}(2^{i-1}u,2^{i-1}v))\hat{c}_{i-1}(u,v)$, where the filter
$\hat{h}(u,v)=\hat{\phi}(2u,2v)/\hat{\phi}(u,v)$ when $|u|,|v| \leq
1/2$ (in this paper the Fourier transform of any function $f$ is written as
$\hat{f}$). By carrying out the inverse Fourier transform to $\hat{w}_{i}(u,v)$,
${w}_{i}(x,y)$ maps are obtained, and then used to calculate the corrected
$W_i(x,y)$ maps following the procedure described above. The corresponding
wavelet spectra are illustrated in Figure 5 for comparison, which agree
nicely with those obtained with the Gaussian wavelet. We find that
the most common scales of the projected temperature substructures are
in the range of about $50 - 200$ $h_{70}^{-1}$ kpc, and about
70\% of these substructures are located at 100 -- 200 $h_{70}^{-1}$ kpc
from the cluster center, exhibiting irregular shapes and spatial
distributions (Fig. 4).
\noindent{\bf Power Spectrum}
\indent As a straightforward approach, we have also studied the power spectra of the two-dimensional gas
temperature distributions. Adopting the method of Schuecker et
al. (2004), we determine the flat fields $\bar{T}({\bf r})$
by applying a low-pass wavelet filter with a scale of $2^8$ pixels
on the original temperature maps,
and calculate the normalized temperature fluctuation distributions as
$\delta T ({\bf r}) = T({\bf r})/\bar{T}({\bf r})-1$.
By carrying out the Fourier transform of $\delta T ({\bf r})$
\begin{equation}
\delta ({\bf k})={1\over S}\int_{S} \delta T({\bf r})e^{i{\bf r}\cdot {\bf k}}d{\bf r},
\end{equation}
where $S$ is the projected area, we obtain the power spectrum $P(k)$,
\begin{equation}
P(k) = \left <\mid{\delta ({\bf k})}\mid^2 \right >.
\end{equation}
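In discrete form, Eqs.~(3) and (4) correspond to an FFT of the fluctuation map
followed by azimuthal averaging in annular $k$ bins, e.g.:
\begin{verbatim}
import numpy as np

def power_spectrum(T_map, T_flat, n_bins=30):
    delta = T_map / T_flat - 1.0
    dft = np.fft.fft2(delta) / delta.size       # discrete form of Eq. (3)
    power = np.abs(dft) ** 2
    ky, kx = np.meshgrid(np.fft.fftfreq(delta.shape[0]),
                         np.fft.fftfreq(delta.shape[1]), indexing='ij')
    k = np.hypot(kx, ky)
    edges = np.linspace(0.0, k.max(), n_bins + 1)
    idx = np.digitize(k.ravel(), edges)
    # azimuthal average <|delta(k)|^2> in annular bins, Eq. (4)
    P_k = np.array([power.ravel()[idx == b].mean() if np.any(idx == b)
                    else np.nan for b in range(1, n_bins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), P_k
\end{verbatim}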
Following the method of Schuecker et al. (2004), we carry out
Monte-Carlo simulations for each cluster to estimate the temperature
fluctuation at any given knot ${\bf r_i}$ based on the temperature
error range measured at the same knot (\S 3.3.1). After a randomized
value $T_{\rm err}({\bf r_i})$ is assigned to each knot, a random
temperature error map $T_{\rm err}({\bf r})$ is created by applying
Eq.(1) to the $T_{\rm err}({\bf r_i})$ map, in which the uncertainties
introduced by data statistics, spectral model fitting, and cell
tessellation are all involved. Then, by combining $T({\bf r})$ and
$T_{\rm err}({\bf r})$ maps we create a random temperature map
$T_{\rm ran}({\bf r})$. The $1\sigma$ error bars shown in Figure 6 are
calculated from the variances of power spectra obtained from ten such
$T_{\rm ran} ({\bf r})$ maps for each cluster. In general,
the power spectra $P(k)$ show a significant peak in $50 - 200$ $h_{70}^{-1}$
kpc, and drop quickly beyond $200-250$ $h_{70}^{-1}$ kpc. This strongly supports
our conclusion derived with the wavelet analysis (Fig. 5).
\section{DISCUSSION}
We present robust evidence for the widespread presence of hot gas clumps, whose characteristic scales are $\sim 100$
$h_{70}^{-1}$ kpc, in the central regions of
a sample of nine intermediate-redshift galaxy clusters. These substructures are not caused
by uncertainties in background subtraction, response calibration,
and spectral model fittings, nor are they related to background
flares, the effects of which have been largely corrected and/or fixed. The possibility of
the temperature substructures arising from the non-thermal emission of low-mass X-ray
binaries (LMXBs) can also be eliminated, since the ICM is only sparsely filled with galaxies. Our results confirm the previous findings in the Coma Cluster
(Schuecker et al. 2004) and the Virgo Cluster (Shibata et al. 2001).
We estimate the excess thermal energy $E_{\rm excess}$ in each
high temperature substructure (Table 4), where the gas temperature is typically
higher than environment by $\Delta T_{\rm X} \approx 2-3$ keV (Fig. 4), by defining the thermal
energy as the product of gas pressure $P$ and volume $V$,
and assuming a spherical or elliptical geometry for the volume of the hot
gas clump. We obtain
$E_{\rm excess} = \delta P \times V = n_{\rm e}k_{\rm B}\Delta T_{\rm X}V$
$\sim10^{58-60}$ erg for each clump, where $n_{\rm e}$ is the electron density
determined in the deprojected spectral analysis (\S 3.2.2). To account for
azimuthal variations in the electron density, we include a 10\% systematic
error (Sanders et al. 2004) in the calculations. As the
projection effect is not corrected for $\Delta T_{\rm X}$, the obtained
excess energy may have been underestimated by a factor of about two. The origin
of this energy excess is discussed as follows.
\subsection{Inhomogeneous Cooling, Conduction, and Turbulence}
\indent The detected temperature substructures may have been formed
due to the spatially inhomogeneous cooling in the ICM. Since for a unit volume,
the radiative cooling rate is $R_{\rm rad} = {n_{\rm e}}^2 \Lambda_{\rm rad}(T,Z)$,
where $\Lambda_{\rm rad}$ is the cooling function calculated by
including both continuum and line emissions, the radiative cooling
time of the substructures can be estimated as
\begin{equation} \label{eq:tcool}
t_{\rm cool} \simeq \frac{n_{\rm e}k_{\rm B}T_{\rm X}}{n_{\rm e}^2
\Lambda_{\rm rad}}.
\end{equation}
Typically, the calculated cooling time is $\sim 10^{9-10}$ yrs, and
is thus nearly an order of magnitude longer than the characteristic
time for thermal conduction alone to smear out the temperature
substructures (Table 4), which is given by
\begin{equation} \label{eq:ttherm}
t_{\rm cond} \simeq \frac{\delta h}{q} \delta r = \frac{\frac{5}{2}
n_{\rm e}k_{\rm B} \delta T_{\rm X}}{f{\rm \kappa_{\rm cond}}\frac{{\rm d}{T_{\rm X}}}{{\rm d}r}}{\rm \delta}r,
\end{equation}
where $\delta h$ is the enthalpy excess per unit volume in the hot
substructure, $q$ is the conductive
flux, ${\rm \delta}r$ is the characteristic scale length of the
substructure, $f$ = 0.2 is the factor to assess the
suppression caused by magnetic fields (e.g., Narayan \& Medvedev 2001),
and $\kappa_{\rm cond}$ is the thermal conductivity of hydrogen plasma,
\begin{equation} \label{eq:conduc}
\kappa_{\rm cond} \simeq 5.0 \times 10^{-7} T_{\rm X}^{\frac{5}{2}}~[{\rm erg} \mbox{ }{\rm cm^{-1}}\mbox{ }{\rm s^{-1}}\mbox{ }{\rm K^{-1}}]
\end{equation}
for a Coulomb gas (Spitzer 1962). This indicates that, unless the
conduction process is extremely hindered (i.e., when $f \ll 1$; Maron et
al. 2004; Markevitch et al. 2003), the effect of the radiative cooling
is less important in the formation and evolution of temperature
substructures. Hence the possibility of the substructures being formed by
inhomogeneous cooling can be excluded.
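As an order-of-magnitude check, Eqs.~(5)$-$(7) can be evaluated numerically; the
density, temperature, gradient, and cooling-function values below are illustrative
assumptions, not the measured profiles of any sample cluster.
\begin{verbatim}
KEV, KPC, K_B, YR = 1.602e-9, 3.086e21, 1.381e-16, 3.156e7

def t_cool(n_e, kT_keV, Lam):                # Eq. (5)
    return n_e * kT_keV * KEV / (n_e**2 * Lam)

def t_cond(n_e, dT_keV, kT_keV, grad_keV_kpc, dr_kpc, f=0.2):
    T_K = kT_keV * KEV / K_B
    kappa = 5.0e-7 * T_K**2.5                # Spitzer conductivity, Eq. (7)
    dh = 2.5 * n_e * dT_keV * KEV            # enthalpy excess per cm^3
    q = f * kappa * grad_keV_kpc * KEV / K_B / KPC   # conductive flux
    return dh / q * dr_kpc * KPC             # Eq. (6)

# illustrative: n_e ~ 0.01 cm^-3, kT ~ 5 keV, Lambda ~ 1e-23 erg cm^3 s^-1
print(t_cool(0.01, 5.0, 1e-23) / YR)         # ~2.5e9 yr
print(t_cond(0.01, 2.5, 5.0, 0.03, 100.0) / YR)
\end{verbatim}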
The destruction of a temperature substructure via conduction can be
accelerated by turbulence (Dennis \& Chandran
2005), whose effect can be evaluated by introducing a modified
conductivity (Cho et al. 2003; Voigt \& Fabian 2004)
\begin{equation} \label{eq:turb}
\kappa'_{\rm cond} = \alpha n_{\rm e}k_{\rm B}l\Bigg(\frac {5 k_{\rm B}T}{3 \mu m_{\rm p}}\Bigg)^{1/2},
\end{equation}
where $\alpha$ is the ratio of the turbulence velocity $v_{\rm turb}$
to the local adiabatic sound speed $c_{\rm s}$ ($\alpha = v_{\rm turb}/c_{\rm s}$), and $l$ is the turbulence length
scale. To calculate $\kappa'_{\rm cond}$, the turbulence velocity $v_{\rm turb}$ can be estimated as follows.
As there is evidence that heating continuously compensates a large
fraction of radiative cooling since z$\sim 0.4$ (Bauer et al. 2005; McNamara \& Nulsen 2007),
we may assume an approximate energy balance between heating and cooling
for the cluster (e.g., Kim \& Narayan 2003; Zakamska \& Narayan
2003; Dennis \& Chandran 2005) as
\begin{equation} \label{eq:balance}
R \simeq H + {\rm \Gamma} + Q,
\end{equation}
where $R$ is the radiative loss, and $H$, $\rm \Gamma$ and $Q$ are the
heating rates of thermal conduction, viscous
dissipation and turbulent diffusion, respectively. Since
\begin{equation} \label{eq:thermalconduction}
H = \nabla \cdot (\kappa_{\rm cond} f \nabla T),
\end{equation}
\begin{equation} \label{eq:turbulentdiss}
{\rm \Gamma} = \frac {c_{\rm diss} \rho v_{\rm turb}^3}{l},
\end{equation}
and
\begin{equation} \label{eq:turbulentdiff}
Q = \nabla \cdot (D \rho T \nabla s),
\end{equation}
where $\rho$ is the gas mass density, $s$ is the specific entropy,
$c_{\rm diss} \approx 0.42$ is a dimensionless constant (Dennis \&
Chandran 2005 and references therein), and $D \approx 10^{29}$ $\rm cm^{2}$ $\rm s^{-1}$
is the diffusion coefficient (Rebusco et
al. 2006). By substituting Eqs.$(10)-(12)$ into Eq.(9), we have
\begin{equation} \label{eq:turbulentvelocity}
v_{\rm turb}=\{[{n_{\rm e}}^2 \Lambda_{\rm rad} - (\frac {d\Psi}{dr} +
\frac {2\Psi}{r}) - D (\frac{d\Theta}{dr} + \frac
{2\Theta}{r})]\frac{l}{c_{\rm diss} \rho}\}^{\frac{1}{3}},
\end{equation}
if we define $\Psi = f \kappa_{\rm cond} \frac {dT}{dr}$ and $\Theta = \rho T
\frac {ds}{dr}$ (Dennis \& Chandran 2005). Using the observed
deprojected gas density and temperature profiles (Fig. 3), we
calculate $v_{\rm turb}$ and show the results as a function of
turbulence scale $l$ in Figure 7. If we follow
Schuecker et al. (2004) to adopt the lengths indicated by the turnover
points on the wavelet spectra (Fig. 5) as the turbulence scale, we
find that $v_{\rm turb}$ commonly ranges from 200 to 400 $\rm km$ $\rm
s^{-1}$, or $\alpha \sim 0.2 - 0.4$, for our sample. By using Eqs.(6)
and (8), we find that the
inclusion of turbulence reduces the characteristic time $t_{\rm cond}$ to
the modified characteristic time $t'_{\rm cond}$ by a factor
of about 3 (Table 4),
which sets an upper limit ($\sim 10^{8}$ yrs) on the duty cycles of the dominant heating sources.
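The evaluation of Eq.~(13) on the deprojected profiles can be sketched as follows;
the entropy is written as $s \propto \ln(T^{3/2}/n_{\rm e})$, and the
electron-to-gas density conversion factor is an assumed simplification.
\begin{verbatim}
import numpy as np

def v_turb(r, n_e, kT_keV, Lam, l, f=0.2, c_diss=0.42, D=1e29):
    # r [cm], n_e [cm^-3], kT_keV [keV]: deprojected radial profiles
    k_B, m_p, mu = 1.381e-16, 1.673e-24, 0.6
    T = kT_keV * 1.602e-9 / k_B
    rho = mu * m_p * n_e / 0.52            # assumes n_e/n ~ 0.52
    Psi = f * 5.0e-7 * T**2.5 * np.gradient(T, r)
    # specific entropy up to a constant (only its gradient is used)
    s = k_B / (mu * m_p) * np.log(T**1.5 / n_e)
    Theta = rho * T * np.gradient(s, r)
    R_rad = n_e**2 * Lam
    H = np.gradient(Psi, r) + 2 * Psi / r
    Q = D * (np.gradient(Theta, r) + 2 * Theta / r)
    excess = np.clip(R_rad - H - Q, 0.0, None)  # bracket of Eq. (13)
    return (excess * l / (c_diss * rho)) ** (1.0 / 3.0)
\end{verbatim}
For central densities of order $10^{-2}$ $\rm cm^{-3}$, $\Lambda_{\rm rad} \sim 10^{-23}$
erg $\rm cm^{3}$ $\rm s^{-1}$, and $l \sim 100$ $h_{70}^{-1}$ kpc, this returns a few
hundred $\rm km$ $\rm s^{-1}$, consistent with the range quoted above.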
\subsection{AGN Feedback}
\indent The calculated $E_{\rm excess}$ (Table 4) suggests that the hot clumps
may be dying buoyant bubbles or their remnants, into which the central
AGN has injected energy via shocks and turbulence (e.g., McNamara
\& Nulsen 2007). Such AGN-induced hot gas clumps are
predicted in numerical simulations (e.g., Quilis et al. 2001; Dalla
Vecchia et al. 2004; Vernaleo \& Reynolds 2006) and theoretical calculations (e.g., Pizzolato \&
Soker 2006), whose scales are expected to be about 100 kpc by the
end of the buoyancy process. By examining the temperature maps (Fig. 4),
we note that some hot clumps appear in pairs on both sides
of the central galaxies ($\#$1 and $\#$2 in A1201; $\#$1 and $\#$2 in A2204; $\#$1 and $\#$3 in A3112),
which suggests possible AGN activity taking place at the cluster cores.
However, can an AGN-heated temperature substructure survive
destruction by conduction and turbulence? To answer this question, we estimate the characteristic rising
time of a bubble as $t_{\rm buoy} = r_{\rm b} / v_{\rm buoy}$, where $r_{\rm b}$ is
the travel distance measured from the cluster center to the location of a hot clump,
and $v_{\rm buoy}$ is the travel velocity.
In terms of the Keplerian velocity $v_{\rm K}(r_{\rm b})$, a
large bubble is expected to travel through the
ambient gas at
\begin{equation} \label{eq:buoyvelocity}
v_{\rm buoy}(r_{\rm b}) \simeq \sqrt{\frac{8R_{\rm b}}{3Cr_{\rm b}}}v_{\rm K}(r_{\rm b}),
\end{equation}
where $C \simeq 0.75$ is the drag coefficient and $R_{\rm b}$ is
the bubble radius in kpc (Churazov et al. 2001). This yields $t_{\rm buoy}$
smaller or very close to $t'_{\rm cond}$ for $\approx
95$\% of the substructures (Table 4), indicating that such substructures can
survive in most cases. Even for the clumps with $t_{\rm buoy} >
t'_{\rm cond}$, the AGN heating scenario may still stand, if the
distance traveled by the bubble is shorter (e.g., Gardini 2007). It
should be noted that no hot gas clump is found to be associated with any
known radio source in any of the sample clusters. This, however, does not
contradict the AGN heating scenario, because the lack of radio
sources can be explained by the fact that the relativistic electrons
may have lost most of their energy as the bubbles rose, as inferred
from $t_{\rm sync} \lesssim t_{\rm buoy}$ ($t_{\rm sync} \sim
10^{7-8}$ yrs; \S 2).
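The comparison of $t_{\rm buoy}$ with $t'_{\rm cond}$ is straightforward to
reproduce from Eq.~(14); the bubble radius and Keplerian velocity used below are
illustrative values only.
\begin{verbatim}
import numpy as np

def t_buoy(r_b_kpc, R_b_kpc, v_K_kms, C=0.75):   # Eq. (14)
    v_buoy = np.sqrt(8.0 * R_b_kpc / (3.0 * C * r_b_kpc)) * v_K_kms
    return r_b_kpc * 3.086e16 / v_buoy            # seconds

# e.g., a 20 kpc bubble that has risen to 100 kpc with v_K ~ 700 km/s
print(t_buoy(100.0, 20.0, 700.0) / 3.156e7)       # ~1.7e8 yr
\end{verbatim}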
\subsection{Other Possible Mechanisms}
Numerical and theoretical works show that shocks caused by
supersonic galaxy motion in the ICM can generate observable
temperature substructures via gas compression and friction (e.g.,
El-Zant et al. 2004), and even low-speed galaxy infall may also be a
candidate heating source due to the efficient energy conversion via magnetohydrodynamic turbulence
and magnetic reconnection (Makishima et al. 2001; reviewed by Markevitch \& Vikhlinin
2007). The energy dissipation rate in the central region of
a cluster is as large as $10^{44}$ erg $\rm s^{-1}$ (Fujita et al. 2004),
which assembles $\sim 10^{59}$ erg during the merger process and is thus
sufficient to account for the excess energy in the observed temperature substructures.
Nevertheless, numerical studies have provided no convincing evidence that
the non-filamentary morphologies of the observed temperature substructures can
commonly appear in the merger process (e.g., Poole et al. 2006).
We also have estimated the contribution of supernova feedback to the ICM
heating over the substructure lifetime ($t'_{\rm cond}$) using the observed
rate of type Ia supernovae (SNe Ia; Sharon et al. 2007), which implies that
about $(0.8-6.1) \times 10^7$ SN Ia
explosions have occurred in the central 400 $h_{70}^{-1}$ kpc region
of each sample cluster. Since the average dynamical energy injection into the
environment is $4\times10^{50}$ erg per SN Ia event (Spitzer 1978),
and $\sim 10\%$ of this energy is assumed to have been used to heat the gas
(e.g., Thornton et al. 1998), the total SN Ia heating is found to be $<
3\times 10^{57}$ erg per cluster per $10^8$ yrs. The contribution of type II supernovae
(SNe II) will
not significantly increase the total supernova heating, since even in the IR-luminous
cluster A1068, where the ratio of SN
II rate to SN Ia rate is $\approx 3$ (Wise et al. 2004), the SN II
heating is still lower than $8\times 10^{57}$ erg. Therefore, we conclude that the supernova
heating is insufficient to create the observed high temperature substructures.
High temperature substructures may also occur as a result of the
inverse-Compton (IC) scattering of cosmic microwave background (CMB) photons by the relativistic
electrons in the ICM, since in a {\em Chandra}\/\ ACIS spectrum it is difficult to distinguish between
such an IC component and a high temperature thermal component (Petrosian et al. 2008). If in
the ICM $1$\% of the electrons are relativistic ($\gamma \sim 10^{4}$; Eilek 2003),
we will have an IC energy conversion rate of
$b ( \gamma ) = \frac{4}{3} \frac{ \sigma_{\rm T} }{ m_{\rm e} c } \gamma^2 U_{\rm CMB} \simeq 6 \times 10^{44}$
erg $\rm s^{-1}$, where $\sigma_{\rm T}$ is the Thomson cross section
and $U_{\rm CMB}$ is the CMB energy density at the location of the
cluster (Sarazin 1999), which implies that the flux of the IC component is
sufficiently high to bias the temperature measurements by about $1-3$ keV.
However, a tight correlation between the high temperature
substructures and radio synchrotron sources is required in this scenario,
which is actually not observed, possibly due to the lack of relativistic electrons.
Therefore, the IC mechanism for the formation of the temperature substructures
can be eliminated.
\section{SUMMARY}
We reveal the widespread presence of temperature substructures on
$\sim 100$ $h_{70}^{-1}$ kpc scales in the central regions of nine
intermediate-redshift galaxy clusters. By comparing
the characteristic destruction times of the temperature substructures
with the gas cooling times and the rising times of AGN-induced bubbles,
we conclude that AGN outbursts may have played a crucial role in
forming the observed temperature substructures. Our results agree
with those found earlier in the Virgo and Coma Clusters
(Shibata et al. 2001; Schuecker et al. 2004).
\section*{Acknowledgments}
We thank Kazuo Makishima, Kazuhiro Nakazawa, Peter Schuecker,
and Herv$\acute{\rm e}$ Bourdin for their helpful suggestions and
comments. This work was supported by the National Science Foundation of
China (Grant No. 10673008 and 10878001), the Ministry
of Science and Technology of China (Grant No. 2009CB824900/2009CB24904),
and the Ministry of Education of China (the NCET Program).
\section{Acknowledgment}
I am indebted to Boris Kopeliovich, Irina
Potashnikova and Ivan Schmidt,
Universidad Federico Santa Mar\'ia, Valpara\'iso, Chile, and to Massimiliano Alvioli
and Chiara
Benedetta Mezzetti, INFN, Perugia, Italy, for a fruitful collaboration.
\bibliographystyle{aipproc}
\section*{Supplementary Information: Accessing eigenstate spin-glass order from reduced density matrices}
In this supplementary document we provide additional data and describe the required error analysis for the real time evolution using the time
evolving block decimation algorithm.
\section{Real time evolution using time evolving block decimation~(TEBD) technique}
We use the standard TEBD technique~\cite{SCHOLLWOCK201196} to perform unitary real time evolution starting from a given initial state,
\begin{equation}
\label{time_evolution}
| \psi(t)\rangle = U(t) | \psi_{in}(0)\rangle = e^{-iHt} | \psi_{in}(0)\rangle,
\end{equation}
where $H$ is the Hamiltonian of the system considered. The TEBD algorithm relies on the Suzuki-Trotter decomposition of the time-evolution operator
$U(t)$, Eq.~\ref{time_evolution}. In this context one decomposes the time-evolution operator into $N$ small time steps, where $N$ is
a large enough number such that the time interval $dt=\frac{t}{N}$ is small compared to the physical time scale of the system.
\begin{equation}
\label{decomposition}
U(t) = U(dt=t/N)^N.
\end{equation}
Due to the local nature of the Hamiltonian, i.e., only nearest-neighbor interaction, the Hamiltonian can be conveniently decomposed into sum
of many local terms $h_i$ with support only on lattice sites $i$ and $i+1$.
Therefore $U(dt)$ can be approximated by an $n$th-order Trotter decomposition~\cite{Suzuki_1976}. In our
calculations we use a second-order Suzuki-Trotter decomposition, where the evolution operator can be broken down into a product of terms acting on even and odd bonds,
\begin{equation}
\label{2nd order st}
U(dt) = \prod_{\substack{i \\ even}}U_i \Le( \frac{dt}{2}\Ri)\prod_{\substack{i \\ odd}} U_i(dt) \prod_{\substack{i \\
even}} U_i \Le(\frac{dt}{2}\Ri) + O(dt^3).
\end{equation}
where $U_i(dt)$ are the infinitesimal time-evolution operators $\exp(-ih_{i,i+1} dt)$ on the bonds $i$, which can be even or odd.
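To make the second-order decomposition above concrete, the following Python sketch
builds the gate set for one time step from the two-site bond Hamiltonians (assumed
here to be given as dense $4\times4$ matrices); applying these gates to the
matrix-product state, followed by the truncation discussed below, constitutes one
TEBD step, which this sketch omits.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def trotter_gates(h_bonds, dt):
    # h_bonds: list of 4x4 two-site Hamiltonians h_{i,i+1}, bond i = 0..L-2
    U_half = [expm(-1j * h * dt / 2.0) for h in h_bonds]
    U_full = [expm(-1j * h * dt) for h in h_bonds]
    even = [U_half[i] for i in range(0, len(h_bonds), 2)]
    odd  = [U_full[i] for i in range(1, len(h_bonds), 2)]
    return even, odd, even   # apply even(dt/2), odd(dt), even(dt/2)
\end{verbatim}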
\subsection{Error sources and analysis}
\begin{figure}[thb!]
\centering
\includegraphics[width=0.4\columnwidth]{sup_EA_quench_x_ini_J_error_analysis_Jx_0,25_site_4}
\includegraphics[width=0.4\columnwidth]{sup_EA_quench_z_ini_J_error_analysis_Jx_0,25_site_4}
\caption{Error analysis for different sets of parameters $(\varepsilon, dt)$ for the non-MBL SG, $J/h = 1.0$, and MBL-SG, $J/h = 3.0$, phases. Left
panel: a chain of length $L = 80$ with all initial spins randomly pointed in the $x$-direction. Right panel: a chain of length $L = 48$ with all initial spins
randomly pointed in the $z$-direction.}
\label{fig:error_analysis: x-dir_err}
\end{figure}
There are two main sources of error that one needs to take into account while analysing the time-dependent data.
\paragraph{\textbf{Trotter error:}}~Due to the finite order of the Trotter decomposition, an error accumulates during the time evolution. For an $n$th-order
Trotter decomposition, the error in one time step $dt$ is of order $dt^{n+1}$. To reach a given time $t$, one has to perform $t/dt$ time steps, such
that in the worst case the error grows linearly in time $t$ and the resulting error is of order $(dt)^nt$. In our computation the error also
scales linearly with the system size $L$, since each time step requires $2 \times L/2$ Trotter gate operations for a given $dt$. Therefore the overall error is
of order $(dt)^nLt$.
\paragraph{ \textbf{Truncation error:}}~This error arises from the truncation of the local matrices, which is required because the bond dimension keeps growing during the TEBD run.
The truncation error $\epsilon$ at each time step is small, but it accumulates as $O(Lt/dt)$ over a total time $t$. This is due to the fact that
the truncated wave function after each time step $dt$ has a norm less than $1$ and needs to be renormalized by a factor of $(1-\epsilon)^{-1} > 1$.
Truncation errors accumulate roughly exponentially with an exponent of $\epsilon Lt/dt$, and the calculations eventually break down in the
long-time limit due to error propagation. Therefore it is necessary to carefully check the error at each step and keep a sufficient
number of states to control it.
\paragraph{\textbf{Optimal choice of TEBD parameters:}} In order to reach long simulation times $t$ one has to find optimal control parameters,
which are the time step $dt$ and the number of truncated (kept) states $\chi_{max}$. We implemented the TEBD algorithm in such a way that we
discard states below a certain threshold, $\varepsilon$.
Therefore the control parameters are the time step $dt$ and the truncation error threshold $\varepsilon$. The total error would increase at
larger
$dt$ due to the Trotter error, and at smaller $dt$ due to the truncation error.
It is reasonable to choose rather small values of $dt$ for small times, in order to minimize the Trotter error, and a
somewhat coarser time interval for large times, in order to push the simulation to as late a time as possible~\cite{PhysRevE.71.036102}.
We choose two small values for $dt$,
\begin{equation}
h\,dt \in \{0.01, 0.005\},
\end{equation}
and for $\varepsilon$ we consider different truncation thresholds,
\begin{equation}
\varepsilon \in \{10^{-7}, 10^{-9}, 10^{-11}\}.
\end{equation}
In the following we perform the error analysis for two different initial states: first for the results that we have shown in the main text
of the paper, and second for an initial state that breaks the $Z_2$ symmetry of the system in the $z$-direction, i.e., an initial state where all spins
point randomly in the $z$-direction.
Figure~\ref{fig:error_analysis: x-dir_err}, left panel, shows the error analysis for the initial states, which are product states randomly directed in
the $x$-direction.
Note that we average over $1000$ realizations for this simulation. For each set of TEBD parameters, i.e., $\varepsilon$ and $dt$, the initial states
remain unchanged, which gives us confidence that the results shown here and in the main text are well converged.
Figure~\ref{fig:error_analysis: x-dir_err}, right panel, shows the evolution of the $\ESG$ for different TEBD parameters, starting from an initial
state randomly pointed in the $z$-direction. As can be seen, for all parameter choices of the TEBD algorithm the dynamics remains
unaffected. Here the discarded weight remains very small, of order $\sim 10^{-16}$.
\subsection{Integrable model: dynamics in the vicinity of the transition}
\begin{figure}[thb!]
\centering
\includegraphics[width=0.7\columnwidth]{EA_quench_x_ini_spec_sub_0_4}
\caption{Eigenstate spin-glass order parameter $\ESG$ across the transition in the integrable model, where the coupling along the
$x$-direction is taken to be zero~($J^x=0$, see main text for further details of the model). For weak couplings $\ESG$ decays with
time, while in the other regime it increases.}
\label{fig:intg_transition}
\end{figure}
In Fig.~\ref{fig:intg_transition} we show data for $\ESG$ in the vicinity of the transition for the integrable case of the model studied in the main
text~($J^x=0$), where the transition is known
to be at $J/h=1$~\cite{Pfeuty70}. From this plot one can again identify two phases separated by a crossing point, with which one might identify the
location of the MBL-SG transition in the asymptotic long-time limit. From our finite-time data, however, we find the crossing point at $J/h\approx
1.2$, which overestimates the transition by $20\%$. As in the main text, we attribute this discrepancy to the slow dynamics that can occur in these
models, which effectively implies that one would need to reach even longer times to see a drift towards the known transition value.
\section{Initial state with broken $Z_2$ symmetry}
\begin{figure}[h]
\centering
\includegraphics[width=0.55\columnwidth]{f2_ReducedDynamics_Z}
\caption{{\bf Quench dynamics} (a)~Shows the dynamical evolution of $\ESG$ starting from an initial
state where all spins initially align along the $z$-direction. Because of the initially broken $\mathbb{Z}_2$ symmetry at $t=0$, the $\ESG$
is non-zero as seen in both plots. For
weak coupling $J/h=1.0$ the $\ESG$ decays at long times. (b)~For strong coupling $J/h=3.0$ the $\ESG$ remains finite instead. The shaded region
indicates the statistical error in
the data.}
\label{fig:z-dir}
\end{figure}
In this section we show additional data for quench dynamics when the initial state breaks the Ising $\mathbb{Z}_2$ symmetry by choosing product states
aligned along the $z$-direction. As seen in
Fig.~\ref{fig:z-dir} we observe the same long-time dynamics as for the initial condition studied in the main text. For weak couplings the $\ESG$ value
goes to zero with increasing system size at long times. For large couplings instead, $\ESG$ remains finite at
long times for all system sizes.
\section{Long-range correlated Ising Model}
\begin{figure}[h]
\centering
\includegraphics[width=0.55\columnwidth]{EA_param_LongRange}\
\includegraphics[width=0.55\columnwidth]{ESG_finitesize}
\caption{{\bf Eigenstate order} Upper panel: Shows both the $\EA$ and $\ESG$ for the long-range model~\eqref{eq:lrange} for excited states
taken from the middle of the spectrum. Above $W/h_z\gtrsim2.5$ the model appears to have a nonzero $\EA$, suggesting MBL-SG order. This
behavior is captured by the $\ESG$ as well. Lower panel shows the $L$ dependence of both order parameters
for a few different values of the disorder strength, as indicated in the label. Here $\alpha=2$ is chosen. As in the main text, the lines are a guide to the
eye and are not supposed to provide a quantitative extrapolation.}
\label{fig:longrange}
\end{figure}
In this section we consider a different model and show that also in the presence of long-range interactions the eigenstate order
parameter shows the essential behavior of MBL-SG. The model we consider is the following:
\eq{
\mc{H} = \f{1}{N(\alpha)}\sum_{i<j} \frac{\Omega_i \Omega_j}{|i-j|^{\alpha}} \sigma_i^x \sigma_j^x + \sum_i h_i \sigma_i^z,
\label{eq:lrange}
}
where $N(\alpha) = \sum_{i<j} \f{1}{|i-j|^\alpha}$ is the Kac normalization. $\Omega_i$ is drawn from a uniform distribution $[0, W]$, while the
field is distributed randomly in $[-h_z, h_z]$.
As in the main text, here we also calculate both the Edwards-Anderson parameter $\EA$ and the eigenstate order parameter $\ESG$. It has been
theoretically proposed that this model can be realized in current trapped-ion experiments~\cite{Hauke2015,Grass2016}.
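For small chains, the Hamiltonian of Eq.~\eqref{eq:lrange} can be built as a dense
matrix for exact diagonalization; the following sketch assumes open boundary
conditions and draws the couplings and fields as described above.
\begin{verbatim}
import numpy as np

def long_range_ising(L, W, h_z, alpha=2.0, seed=None):
    rng = np.random.default_rng(seed)
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])

    def op(single, site):              # embed a single-site operator
        out = np.array([[1.]])
        for k in range(L):
            out = np.kron(out, single if k == site else np.eye(2))
        return out

    Omega = rng.uniform(0., W, L)
    h = rng.uniform(-h_z, h_z, L)
    N = sum(1. / abs(i - j)**alpha     # Kac normalization N(alpha)
            for i in range(L) for j in range(i + 1, L))
    H = sum(h[i] * op(sz, i) for i in range(L))
    for i in range(L):
        for j in range(i + 1, L):
            J_ij = Omega[i] * Omega[j] / (abs(i - j)**alpha * N)
            H = H + J_ij * op(sx, i) @ op(sx, j)
    return H
\end{verbatim}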
Figure~\ref{fig:longrange} shows the parameter dependence of both $\EA$ and $\ESG$, along with the finite-size dependence. The calculation is
performed
for excited states taken from the middle of the spectrum for $\alpha=2.0$. As one can see, the behavior of $\EA$ is well reproduced by
the $\ESG$; however, again due to strong finite-size effects the critical disorder value for the MBL-paramagnet to MBL-SG transition cannot be reliably
detected. We have
used a similar averaging procedure as for all the data in the main text.
\end{document}
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{I}{nstance} segmentation lays a foundation in biomedical image analysis such as cell migration study~\cite{lienkamp2012vertebrate} and cell nuclei detection~\cite{gurcan2009histopathological}.
It aims to not only group together pixels in different semantic categories to form object instances, but also distinguish individual objects of the same category.
Instance segmentation is generally quite challenging because (1) objects of the same class can get crowded tightly together and have obscure boundaries, (2) small local pixel-level errors can disturb instance-level correctness in the neighborhood,
and (3) the number of instances in an input image is unknown during prediction.
Deep learning (DL) based semantic segmentation methods are able to obtain effective object segmentation in various biomedical image datasets~\cite{chen2016dcan,graham2019mild,Mishra2022,ronneberger2015u}. However, their performances on instance segmentation may deteriorate considerably because they mainly produce probability maps to indicate pixels for various possible semantic classes but focus less on how to effectively group pixels together to form object instances.
Many recent biomedical image instance segmentation methods sought to incorporate instance-level features into semantic segmentation architectures. There are two main types of such strategies. The strategies of the first type encode pixel-wise correlations across instances~\cite{chen2019instance,voigtlaender2019feelvos,kulikov2020instance,payer2019segmenting}. With similarity or distance loss functions, pixel embeddings from the same instance are pushed together while pixel embeddings from different instances are pushed further away.
The strategies of the second type aim to utilize instance morphological properties to distinguish different instances~\cite{eschweiler2019cnn, stringer2021cellpose,pena2020j}. Some of the best known methods are watershed-based~\cite{meyer1994topographic}, directly applying watershed as a post-processing~\cite{eschweiler2019cnn,lux2019dic} or watershed-inspired deep learning methods such as distance maps~\cite{naylor2018segmentation,graham2019hover,scherr2021improving} and gradient maps~\cite{stringer2021cellpose},
which show great performances on a wide range of biomedical image datasets.
Although much effort has been dedicated to developing instance segmentation methods for biomedical images, it is still quite challenging to distinguish instances in hard cases (\textit{e.g.}, densely packed cells). In temporal instance segmentation settings (e.g., videos), previous studies~\cite{arbelle2019microscopy,payer2018instance,akram2016joint} exhibited the potential of exploiting temporal consistency of objects to alleviate the severity of this challenge. But, exploiting temporal consistency does not always show advantages compared to general deep learning segmentation methods, as suggested in the public leader board of the temporal cell segmentation challenge benchmark~\cite{cellseg}. Early work applied matching between segmentation masks based on hierarchical structures~\cite{akram2016joint} or segmentation sets~\cite{schiegg2015graphical}. Instance candidates were generated mostly using heuristic rules~\cite{turetken2016network,magnusson2014global} or handcrafted features~\cite{akram2016joint,schiegg2015graphical},
and thus they could not directly benefit from DL semantic segmentation pipelines. Recent temporal approaches considered pair-wise correlations of the same instances across multiple image frames utilizing context-aware methods such as GRU~\cite{payer2019segmenting} and LSTM~\cite{arbelle2019microscopy}. But temporal correlation is preserved only at the level of individual pixels via pixel-pixel correlation, while clustering/grouping of pixels is still required in order to obtain segmentation masks. Consequently, instance-level temporal consistency between segmentation masks across different image frames is not guaranteed to be preserved.
In this paper, we propose a new framework, called hierarchical earth mover's distance (H-EMD), which is capable of simultaneously incorporating semantic segmentation and temporal/spatial consistency among instance segmentation masks for instance segmentation in biomedical 2D+time videos and 3D images.
We observe that probability maps from common DL semantic segmentation models allow us to directly generate instance candidates using different threshold values. These instance candidates carry instance-level features and fit well with temporal/spatial consistency. To select the correct instance candidates, we first distinguish the easy-to-identify instances, which are typically isolated objects with simple topological structures. From the viewpoint of probability maps, such instances correspond to connected components with a single local maximum. Moreover, unselected (hard-to-identify) instances (\textit{e.g.}, multiple crowded cells) can be extracted by matching with the already identified instances.
Specifically, H-EMD consists of two main stages. (1) \textit{Instance candidate generation}: Given a probability map $\mathcal{P}$ produced by a DL semantic segmentation model (\textit{e.g.}, a fully convolutional network (FCN)), we apply different threshold values of $\mathcal{P}$ to generate many sets of instance candidates. We show that the generated instance candidates form a well-behaved hierarchical structure, called instance candidate forest (ICF). (2) \textit{Instance candidate selection}: Utilizing ICF, we aim to identify the correct instance candidates. First, easily segmented instances corresponding to the root nodes of those special trees in ICF each of which is just a single path (i.e., no branches) are identified and selected. Then, we develop an iterative matching and selection algorithm to identify instances from the unselected nodes in ICF. In each iteration, we first conduct matching with the selected instances to reduce redundant matching errors. Through a similar matching scheme, the selected but unmatched instances are later propagated to adjacent image frames to select more instances from the remaining instance candidates. After several iterations, the remaining unselected instance candidates go through a padding process. Finally, the padding results combined with all the selected instance candidates form the final output instances.
In the second stage of H-EMD, we formulate a key instance candidate selection problem from ICF as an optimization problem based on the earth mover’s distance (EMD)\cite{rubner1998metric}, and solve it by integer linear programming (ILP).
In summary, our contributions in this work are as follows.
\begin{itemize}
\item We introduce a novel framework, H-EMD, which is able to utilize both semantic segmentation features and temporal/spatial consistency among segmentation masks.
\item We present a general method to produce comprehensive instance candidates directly from probability maps generated by DL semantic segmentation models.
\item We identify easy instances with high confidence in the instance candidates. We further present an iterative matching and selection method to propagate the selected
instance candidates to match with the unselected hard-to-identify candidates.
\item We formulate a key instance selection problem from the candidate forest as an optimization problem based on the earth mover’s distance (EMD)\cite{rubner1998metric}, and solve it by integer linear programming (ILP).
\item We conduct experiments on four public and two in-house temporal instance segmentation datasets (i.e., 2D+time videos). Built on top of common DL semantic segmentation models, our H-EMD is highly competitive with state-of-the-art segmentation methods.
We also submitted the H-EMD results to Cell Segmentation Benchmark on these public datasets. Compared to all the submissions, H-EMD ranks top-3 in three out of the four datasets.
\item We also conduct experiments on two 3D instance segmentation datasets, and demonstrate that H-EMD is able to boost performance using spatial consistency on 3D instance segmentation tasks.
\end{itemize}
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\textwidth]{framework.pdf}
\caption{An overview of our proposed
H-EMD instance segmentation framework based on semantic segmentation, which consists of two main stages. (a) \textit{Instance Candidate Generation}: Given a probability map produced by a pixel-wise DL semantic segmentation model, we apply many possible threshold values given by the probability map to generate different masks. Taking all the connected components resulted from the masks, we produce instance candidates which are organized in a forest structure (for a simple illustration, only one tree is shown in this figure). (b) \textit{Instance Candidate Selection}: Given the instance candidate forest, we incorporate instance consistency information (\textit{i.e.}, the selected instances from neighboring frames) to select an optimal instance candidate subset. The selected instances are taken as part of the final instance segmentation results.}
\label{overview}
\end{figure*}
\section{Related Work}
\subsubsection{Semantic Segmentation for Instance Segmentation}
Deep learning based semantic segmentation models are one of the mainstream approaches for instance segmentation in biomedical images. For input images, an FCN (\textit{e.g.}, U-Net) produces probability maps that assign a semantic label to each image pixel. Pixels of the same semantic class in the probability maps are grouped into connected components as output instances~\cite{ronneberger2015u,chen2016dcan,zhou2019cia,kang2019nuclei}.
Semantic segmentation attains good performance for instance segmentation, and has considerable adaptability in various instance scenarios (\textit{e.g.}, crowded irregular-shape cells).
However, semantic segmentation utilizes only pixel semantic labels and requires an additional empirical pixel grouping step (\textit{e.g.,} 0.5-thresholding) in the inference stage. Semantic segmentation architectures are often not well suited to directly capturing instance-level features/properties.
Thus, more effective methods are needed for incorporating instance information to better identify instances.
\subsubsection{Incorporating Instance Information}
Many methods have been proposed to incorporate instance-level information with semantic segmentation architectures. There are two main types of strategies. The strategies of the first type encode pixel-wise correlation in an instance fashion to determine individual instances, which are also known as pixel embedding methods~\cite{payer2019segmenting,chen2019instance,kulikov2020instance,zhao2021faster,payer2018instance}. Specifically, pixels for the same instance are pushed together while pixels of different instances are pushed further away. Instead of predicting a probability value for a semantic class, an embedding vector is typically generated for each pixel. Payer \textit{et al.}~\cite{payer2019segmenting} proposed a convolutional gated recurrent unit (ConvGRU) network to predict an embedding for each instance in a sequence of images. Kulikov \textit{et al.}~\cite{kulikov2020instance} presented a method to explicitly assign embedding to pixels of the same instance by feeding instance properties to pre-defined harmonic functions.
Note that pixel embedding methods still require a pixel clustering step to obtain final instance segmentation results. For example, mean-shift clustering algorithms~\cite{comaniciu2002mean,zhao2021faster} have been widely used to obtain instance masks from the embedding space.
The strategies of the second type seek to utilize morphological instance information to identify instances. One of the best known methods is watershed~\cite{meyer1994topographic,couprie2005quasi}, which takes a topographic map and multiple markers as input to separate adjacent regions in an image. The number of markers determines the number of regions. Watershed has been widely used as a post-processing method to generate instances~\cite{eschweiler2019cnn,iglovikov2018ternausnetv2,lux2019dic}. For example, Eschweiler \textit{et al.}~\cite{eschweiler2019cnn} proposed to generate markers from the predicted cell centroids and topological map from the predicted membranes; the final cell segmentation results were generated by applying marker-controlled watershed~\cite{beare2006watershed}. Another line of work aimed to incorporate instance-level topological properties by transforming them into training labels and corresponding object functions~\cite{bai2017deep, naylor2018segmentation}. In distance map methods~\cite{naylor2018segmentation,schmidt2018cell,scherr2021improving}, the Euclidean distances between each instance pixel to the nearest instance boundary are utilized as target labels. Graham \textit{et al.}~\cite{graham2019hover} proposed to use both horizontal and vertical distances between each instance pixel to the corresponding instance mass centers.
Recently, Cellpose~\cite{stringer2021cellpose} utilized horizontal and vertical gradients of center distances to determine instances; it provided a pre-trained model which was pre-trained on a large dataset. Thus, Cellpose can be applied out of the box (without re-training).
Different from all the previous work, we explore instance-level information contained in the probability maps. Given a probability map generated by a general DL semantic segmentation model, we apply many possible threshold values and directly obtain different sets of instance candidates.
\subsubsection{Consistency in Sequential Instance Segmentation}
In 2D+time videos, the structures and distributions of dynamic instances between consecutive frames are often similar or consistent. For example, the same cell does not change position, size, or shape drastically from one frame to the next. The consistency in 2D image videos is called temporal consistency. Such consistency also exists in the 2D slices of a 3D image as spatial consistency. For example, in a 3D cardiovascular image, cardiovascular tissues show gradual and continuous changes in shape/color across adjacent 2D slices. Consistency is preserved well with a higher frame rate or less movements and morphological changes, while with a low frame rate or large movements and morphological changes (\textit{e.g.,} cell mitosis) consistency properties may not be conserved well.
Previous work~\cite{romera2016recurrent,arbelle2019microscopy,payer2019segmenting} has shown that consistency can be exploited to improve instance segmentation in sequential image settings.
There are mainly two ways to incorporate temporal consistency, conducting instance-level matching or pixel-wise correlation with recurrent neural networks (RNNs). In instance-level matching methods, instance segmentation is usually jointly attained with instance matching. Typically, instance candidates are first generated using handcrafted features and an optimal matching model is applied to obtain both instance segmentation and instance matching results. In~\cite{schiegg2015graphical}, instance candidates were generated from over-segmented super pixel images, and a factor graph based matching model was applied to select candidate instances. In~\cite{turetken2016network}, instance candidates were generated by a simple classifier trained on handcrafted features, and the following matching task was formulated as a constrained network flow problem.
Recently, deep learning methods were explored to utilize pixel correlations between adjacent image frames. In~\cite{arbelle2019microscopy}, UNet and LSTM were combined into UNet-LSTM to exploit pixel correlations between neighboring image frames. An embedding recurrent network~\cite{payer2019segmenting} was proposed to consider pixel correlations within the same frame and across multiple frames.
Different from the known RNN based methods which consider pixel-wise correlations, our method can better preserve consistency through instance-level matching. Compared to the previous matching based methods, our generated instance candidates come from the probability maps, which take advantage of the rich features extracted by DL semantic segmentation models. Besides, our method is generic and easy, free from complicated variables and data-specific hyper-parameters.
\subsubsection{Segmentation Trees} Tree-like structures are commonly used in segmentation methods, with each tree node representing one possible instance candidate. Tree-like structures are typically generated by traditional methods without DL networks~\cite{silberman2014instance,funke2018candidate,fehri2019bayesian}. For example, a tree can be generated from super-pixels~\cite{silberman2014instance, funke2018candidate}; after the leaf nodes of initial over-segmented candidates are obtained, a tree structure is built by iteratively merging similar super-pixels, until a pre-specified stopping criterion is met. Segmentation trees~\cite{akram2014segmentation, akram2016joint} can also be generated from binary segmentation masks with a set of filter banks. A related segmentation tree is called component tree~\cite{souza2016overview,jones1999connected} which is created based directly on intensities with a set of filters of different threshold values. Different from the previous segmentation trees, the instance candidate forest in our method is built directly on top of probability maps generated by DL semantic segmentation models.
\begin{table*}
\centering
\caption{The notation of the main variables used in Section~\ref{method1}.}
{
\begin{tabular}{ l|l }\hline
Variables & Description \\ \hline
$X=(x_1, x_2, \ldots, x_W)$ & A sequence of input raw images \\\hline
$P=(p_1, p_2, \ldots, p_W)$ & A sequence of probability maps generated by a semantic segmentation model for a sequence $X$ of images \\ \hline
$V^w = \{v_1^w, v_2^w,\ldots, v_H^w \}$ & The list of all distinct probability values of a probability map $p_w$, in increasing order\\ \hline
$I^{v_h^w}$ & The instance candidate set induced by a probability value $v_{h}^{w}$ for a probability map $p_w$ \\ \hline
$\mathcal{F}^w = (I^w, \mathcal{H}^w)$ & The instance candidate forest formed by the instance candidate set $I^w$ and the parent function $\mathcal{H}^w$ for a probability map $p_w$ \\ \hline
$\mathcal{R}^w$ & The root set of the instance candidate forest $\mathcal{F}^w$ \\ \hline
$\mathcal{L}^w$ & The leaf set of the instance candidate forest $\mathcal{F}^w$ \\ \hline
$\mathcal{N}^w = \{\mathcal{N}_q^w\}^z_{q=1}$ ($z = |\mathcal{L}^w|$) & All the leaf-to-root paths (\textit{i.e.}, sequential node lists from each leaf node to its root node) in the instance candidate forest $\mathcal{F}^w$ \\ \hline
$T_n^w$ & The instance candidate tree containing a node $n$ in the instance candidate forest $\mathcal{F}^w$ \\ \hline
$S_t = \{S_t^1, S_t^2,\ldots,S_t^W\}$ & The instance candidate selection state at iteration $t$ for all the images in $X$ \\ \hline
$M_{r,d}$ & The pairwise matching score between a reference instance $r$ and a target instance $d$ \\ \hline
$f_{r,d}$ & A matching flow (integer variable) between a reference instance $r$ and a target instance $d$, $f_{r,d} \in \{ 0, 1 \}$ \\ \hline
\end{tabular}
\label{variable}
}
\end{table*}
\section{Methodology}
\label{method1}
In this section, we present our proposed hierarchical earth mover's distance (H-EMD) framework for instance segmentation tasks in 2D+time video and 3D image settings. Table \ref{variable} summarizes the main variables used.
For an input sequence of 2D images (\textit{i.e.}, a 2D+time video or a 3D image stack), $X=(x_1, x_2, \ldots, x_W)$, the corresponding probability maps $P=(p_1, p_2, \ldots, p_W)$ can be generated by a pixel-wise DL semantic segmentation model inferring individually on each image $x_w$. Our H-EMD method performs two main stages to identify instances in the entire image sequence $X$ (Fig.~\ref{overview} gives an overview of these two stages).
(1) \textit{Instance Candidate Generation}:
For each probability map $p_w$, we use different threshold values to generate many possible instance candidates. These instance candidates form a forest structure $\mathcal{F}^w$, called instance candidate forest (ICF). We obtain a list of ICFs, $\mathcal{F} = (\mathcal{F}^1, \mathcal{F}^2, \ldots, \mathcal{F}^W)$.
(2) \textit{Instance Candidate Selection}:
We incorporate temporal or spatial instance consistency to select a subset of instance candidates from the ICF list $\mathcal{F}$. Specifically, we propose an iterative matching and selection method (Fig.~\ref{case:tracking} shows an example of this iterative selection process). We use a ``state" $S_t$ to represent the instance candidate selection status at iteration $t$ of the iterative matching and selection process. Initially, we select all the root nodes in $\mathcal{F}$ which have only one leaf node in their corresponding instance candidate trees. The selected instance candidates thus form the initial state, $S_0$.
Given a state $S_t$ ($t=0,1,\ldots$) in iteration $t$, the next state $S_{t+1}$ is obtained by instance candidate matching: each selected instance candidate in $S_t$ can potentially be matched
with unselected instance candidates in its neighboring image frames. Newly
matched (unselected) instance candidates are then selected and added to $S_t$ to form $S_{t+1}$. After $T$ iterations
(for a specified $T$), we terminate the iterative process. Finally, we apply a padding method to select from the remaining unselected candidates. All the selected instance candidates form the final output.
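To make the matching-based selection concrete, a simplified version of the underlying integer program can be sketched as follows: binary flows $f_{r,d}$ between selected reference instances $r$ and unselected candidates $d$ maximize the total matching score $\sum_{r,d}M_{r,d}f_{r,d}$, subject to each reference being matched at most once and candidates on the same leaf-to-root path being mutually exclusive. This is an illustrative stand-in for the full EMD-based ILP formulated later, not the exact model.
\begin{verbatim}
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def select_by_matching(M, paths):
    # M: (R, D) matching scores between R references and D candidates
    # paths: index lists; candidates on one leaf-to-root path exclude
    # each other
    R, D = M.shape
    c = -M.ravel()                      # milp minimizes; negate scores
    A_ref = np.zeros((R, R * D))
    for r in range(R):
        A_ref[r, r * D:(r + 1) * D] = 1 # each reference matched <= once
    A_path = np.zeros((len(paths), R * D))
    for p, path in enumerate(paths):
        for d in path:
            A_path[p, d::D] = 1         # all flows into candidates on p
    cons = [LinearConstraint(A_ref, 0, 1), LinearConstraint(A_path, 0, 1)]
    res = milp(c, constraints=cons, integrality=np.ones(R * D),
               bounds=Bounds(0, 1))
    f = np.round(res.x).reshape(R, D)
    return sorted({d for _, d in zip(*np.nonzero(f))})
\end{verbatim}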
\begin{figure*}
\centering
\includegraphics[width = 0.68\textwidth]{tree_candidates.pdf}
\caption{A visual example of some generated instance candidates (with a UNet-LSTM backbone). For a simple illustration, it shows only the instance candidates generated with two threshold values (th), 0.50 and 0.80.
Instance candidate region boundaries are marked in white color. Yellow arrows point to the instance candidate regions where splitting occurs when the threshold value increases from 0.50 to 0.80.}
\label{fig:candidates}
\end{figure*}
\subsection{Instance Candidate Generation}
\label{method1.1}
In this stage, for every input image $x_w \in X$ with a foreground class probability map $p_w$ whose values range from 0 to 1,
we aim to generate all the possible instance candidates, \textit{i.e.}, connected components, on $x_w$. This process is somewhat similar to the generation of component trees with sequential gray-levels~\cite{carlinet2014comparative, souza2016overview}. Specifically, we take all the probability values of $p_w$ (rounded to 2 decimal places),
remove the duplicate ones,
and sort them into a list $V^w = \{v_1^w, v_2^w, \ldots,v_H^w \}$ in increasing order.
To remove noisy instance candidates, we use only threshold values that are not smaller than a pre-defined value $\tau$, that is, $V^w = \{v_h^w \ | \ v_h^w \geq \tau \}$, where $\tau$ is empirically set to $0.50$.
We then use each value $v_h^w \in V^w$ as a threshold value to determine the connected components in $x_w$ whose pixels' probability values are all $\geq v_h^w$. We say that $v_h^w$ {\it induces an instance candidate set} $I^{v_h^w}$, which is a collection of mutually disjoint regions (connected components) in $x_w$. All such instance candidate sets form the instance candidates for $x_w$, $I^w$. Fig.~\ref{fig:candidates} shows a visual example of the generated instance candidates.
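For concreteness, this generation step can be sketched in a few lines of Python (a minimal sketch only; the use of NumPy/SciPy and all function and variable names are our illustrative choices, not a prescribed implementation):
\begin{verbatim}
import numpy as np
from scipy import ndimage

def generate_candidates(p_w, tau=0.50):
    # Unique probability values (np.unique returns them sorted),
    # rounded to 2 decimal places and filtered by tau.
    vals = [v for v in np.unique(np.round(p_w, 2)) if v >= tau]
    candidates = {}
    for v in vals:
        # Connected components whose pixels all have
        # probability >= v: the candidate set induced by v.
        labeled, num = ndimage.label(p_w >= v)
        candidates[v] = [labeled == k for k in range(1, num + 1)]
    return candidates  # threshold -> list of boolean masks
\end{verbatim}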
Next, we show that the instance candidates $I^w$ in $x_w$ thus generated have some useful properties.
First, we discuss the \textit{Containment Relationship} among the instance candidates.
\subsubsection{Containment Relationship}
For any two consecutive
values $v_i^w$ and $v_{i + 1}^w$ ($v_{i}^w < v_{i + 1}^w$) in the sorted list $V^w$ and their induced candidate sets $I^{v_{i}^w}$ and $I^{v_{i+1}^w}$, every candidate region $I_{j}^{v_{i+1}^w} \in I^{v_{i+1}^w}$
is a sub-region of one and only one candidate region $I_{k}^{v_{i}^w} \in I^{v_{i}^w}$, \textit{i.e.}, $I_{j}^{v_{i+1}^w} \subseteq I_{k}^{v_{i}^w}$.
We shall prove the containment relationship below, which hinges on the mutual exclusion-inclusion property of the instance candidate regions.
\subsubsection{The Mutual Exclusion-inclusion Property} For any two different instance candidate regions (\textit{i.e.}, candidate instances) $R'$ and $R''$ in $I^w$, either $R'$ and $R''$ do not intersect (exclusion), or one is entirely contained in the other (inclusion). Note that for any threshold value $v_{h}^w \in V^w$ with its induced candidate set $I^{v_{h}^w}$,
if a pixel $p$'s probability value is $\geq v_{h}^w$, then $p$ belongs to one and only one candidate region (a connected component) $I_{j}^{v_{h}^w} \in I^{v_{h}^w}$. Further, for any two different candidate regions $R'$ and $R''$, clearly either $R'$ and $R''$ do not intersect (overlap), or they do intersect. If $R'\cap R''=\emptyset$, then they are mutually exclusive. If they intersect, then let $v'$ and $v''$ be their induced threshold values, respectively. If $v'=v''$ and $R'$ and $R''$ overlap, then $R'\cup R''$ is one connected component, and thus $R'$ and $R''$ are not two different candidate regions, a contradiction. Hence, assume $v'<v''$.
For any pixel $p \in R''$, $p$'s probability value is $\geq v''> v'$. Thus, $p$ is also contained in a candidate region induced by $v'$. Hence, $R''$ is a sub-region of a candidate region induced by $v'$. Since $R'$ and $R''$ overlap and the candidate regions induced by $v'$ are mutually disjoint, $R''\subseteq R'$ (inclusion).
Based on the containment relationship, for any candidate $I_{j}^{v_{i+1}^w}\in I^{v_{i+1}^w}$, we can find its unique parent candidate $I_{k}^{v_{i}^w}\in I^{v_{i}^w}$. Let $\mathcal{H}^w$ be the function that maps each candidate $I_{j}^{v_{i+1}^w}\in I^{v_{i+1}^w}$ to its parent candidate $I_{k}^{v_{i}^w}\in I^{v_{i}^w}$ (if $I_{j}^{v_{i+1}^w}$ is a root node, then $\mathcal{H}^w$ returns NIL). In summary, all the instance candidates $I^w$ together form a forest $\mathcal{F}^w = (I^w, \mathcal{H}^w)$, where $I^w$ can also be viewed as the node set of all the candidate regions and $\mathcal{H}^w$ is the parent function for each node in $I^w$. Denote the root node set of the forest $\mathcal{F}^w$ as $\mathcal{R}^w$ and the leaf node set as $\mathcal{L}^w$. Denote the set of all the leaf-to-root paths in $\mathcal{F}^w$ as $\mathcal{N}^w = \{\mathcal{N}_q^w\}^z_{q=1}$ ($z = |\mathcal{L}^w|$), which includes all the paths from a leaf to its root in $\mathcal{F}^w$.
The instance candidate forest $\mathcal{F}^w$ is composed of instance candidate trees (ICTs). Each node $n$ belongs to one and only one tree $T_n^w$ in $\mathcal{F}^w$ with the root node $r_n^w \in \mathcal{R}^w$, and $T_n^w$ is associated with a leaf-to-root path set $\mathcal{N}(T_n^w) \subseteq \mathcal{N}^w$ and a leaf set $\mathcal{L}(T_n^w) \subseteq \mathcal{L}^w$.
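Continuing the sketch above (names again illustrative), the containment relationship gives a direct forest construction: each candidate at threshold level $v_{i+1}$ is linked to the unique candidate at level $v_i$ that contains it:
\begin{verbatim}
def build_icf(candidates):
    vals = sorted(candidates.keys())
    nodes, parent = [], []   # parent[i] is None for roots
    prev_level = []          # node indices at the previous level
    for v in vals:
        cur_level = []
        for mask in candidates[v]:
            idx = len(nodes)
            nodes.append(mask)
            par = None
            for j in prev_level:
                # By the containment relationship, mask is a
                # sub-region of exactly one previous-level node.
                if not (mask & ~nodes[j]).any():
                    par = j
                    break
            parent.append(par)
            cur_level.append(idx)
        prev_level = cur_level
    return nodes, parent
\end{verbatim}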
\subsection{Instance Candidate Selection}
\begin{figure*}[h]
\centering
\includegraphics[width=0.77\textwidth]{case.pdf}
\caption{An example for illustrating the instance candidate selection stage on three consecutive image frames (\textit{e.g.}, frames $w-1$, $w$, and $w+1$). Suppose instance candidates are already generated from the probability maps. First, high-confidence segmentation instance candidates corresponding to the tree roots with only a single path are selected to form the initial state. Then, selected instance candidates are propagated to the neighboring frames to match with and select more (previously unselected) candidates. Finally, the remaining unselected candidates are selected by a padding method.
All the selected instance candidates are taken as the final instance segmentation results.}
\label{case:tracking}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width = 0.42\textwidth]{H-EMD.pdf}
\caption{Illustrating our proposed H-EMD matching model for video instance segmentation. The probability map of the frame $w$ gives rise to an instance candidate forest (ICF). The ICF of frame $w$ is matched with the considered (already selected) instances in the frame $w-1$ to determine the optimal instance candidates in the frame $w$ as part of the final instances. For a simple illustration, only a subset of instances/candidates is shown.}
\label{fig:HEMD}
\end{figure}
\label{method1.2}
In the instance candidate selection stage, we aim to select the correct instance candidates from the generated instance candidate forest list $\mathcal{F}$ for the entire image sequence $X$, for which we develop an iterative
matching and selection method on top of $\mathcal{F}$.
Specifically, we first obtain an initial ``state'' from each forest $\mathcal{F}^w\in \mathcal{F}$ by identifying and selecting the easy-to-identify cases. The reason for doing so is that the easy-to-identify instance candidate regions are quite stable and robust based on the probability maps, and hence their selection is of high confidence. Next, the selected instance candidates can potentially be matched with (as in tracking) unselected instance candidates to select more instance candidates. The matching process is performed for several iterations, and the state, which contains the selected instance candidates and is used in the matching process, is iteratively updated. Finally, we select the remaining unselected instance candidates using a padding method. The padding-selected candidates and the selected instance candidates in the state form the final results.
Hence, this selection stage is conducted in three phases: building the initial selected instance candidate state, iterative matching and selection, and remaining instance candidate selection by padding. We maintain a ``state'' to contain the selected instance candidates (which are part of the final output), both initially and in the iterations. In each iteration, the state is used by the matching process to help select more candidates, and the newly selected candidates are added to the state. Below we present these three phases in detail.
\subsubsection{Building the Initial Selected Instance Candidate State}
\label{method1.2.2}
For each ICF $\mathcal{F}^w\in \mathcal{F}$ for the image frame $x_w$ in $X$, all the root nodes with only one leaf node in their corresponding instance candidate trees (ICTs) are selected and put into the initial selected instance candidate state $S_0^w$. All these root instance candidates are easy to identify in the probability map $p_w$, with a single local maximum probability value
in each such root candidate region. Then, all the nodes in the ICTs that contain any of these selected root nodes are discarded from the instance candidates of $\mathcal{F}^w$ (because their candidate regions overlap with their root region and thus cannot appear in the final output together with their root region). Let $\mathcal{F}^w_0$ be the forest with the remaining instance candidates for $x_w$. The initial state construction is performed on the entire ICF list $\mathcal{F}$, and hence the initially selected instance candidates can appear across the image frames in $X$. Thus, we have:
\begin{equation}
S_0^w = \{ r \ | \ r \in \mathcal{R}^w \text{ and } |\mathcal{N}(T_r^w)| =1 \},
\end{equation}
\begin{equation}
\mathcal{F}_0^w = \mathcal{F}^w - \{n \ | \ n \in T_r^w, r \in S_0^w \},
\end{equation}
\begin{equation}
S_0 = \{S_0^1,S_0^2,\ldots,S_0^W\}.
\end{equation}
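A minimal sketch of this initial selection, using the node/parent forest representation from the sketches above (illustrative names only):
\begin{verbatim}
def initial_state(nodes, parent):
    n = len(nodes)
    children = [[] for _ in range(n)]
    for i, p in enumerate(parent):
        if p is not None:
            children[p].append(i)

    def leaves(i):   # number of leaves under node i
        if not children[i]:
            return 1
        return sum(leaves(c) for c in children[i])

    def subtree(i):  # all node indices in the tree rooted at i
        out = [i]
        for c in children[i]:
            out += subtree(c)
        return out

    roots = [i for i, p in enumerate(parent) if p is None]
    S0 = [r for r in roots if leaves(r) == 1]
    # Discard the whole tree of each selected root.
    removed = {x for r in S0 for x in subtree(r)}
    remaining = [i for i in range(n) if i not in removed]
    return S0, remaining
\end{verbatim}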
\subsubsection{Iterative Matching and Selection}
\label{method:itera}
The matching and selection process is conducted in $T$ iterations (for a specified $T$; we choose $T= 10$ experimentally). In each iteration, we perform three major steps: EMD matching, H-EMD matching, and state updating. We describe these three steps in detail below.
\subsubsection*{EMD Matching}
Given the instance candidate selection state $S_t=\{S_t^1,S_t^2,\ldots,S_t^W\}$ and the forests in $\mathcal{F}_t=\{\mathcal{F}_t^1,\mathcal{F}_t^2$, $\ldots,\mathcal{F}_t^W\}$, in iteration $t+1$ ($t = 0,1, \ldots, T-1$),
our goal is to use the selected instance candidates in each $S_t^w$ to match with and select the unselected instance candidates in forests $\mathcal{F}_t^{w-1}$ and $\mathcal{F}_t^{w+1}$ of the neighboring image frames (if any). Yet, in order to do that, we need to first perform a matching between every two consecutive $S_t^w$ and $S_t^{w+1}$. The reason for doing so is that we need to first identify the (high-confidence) instance consistency between every $S_t^w$ and $S_t^{w+1}$, that is, the robust matching pairs between the already selected instance candidates in $S_t^w$ and in $S_t^{w+1}$. Note that for correctness, a selected instance candidate (say) in $S_t^w$ that is involved in a matched pair with a selected instance candidate in $S_t^{w+1}$ is already ``occupied'' and thus should not be used to further match with any unselected instance candidates in $\mathcal{F}_t^{w+1}$.
To efficiently perform this step, note that all the selected instance candidate regions in $S_t^w$ and in $S_t^{w+1}$ are mutually disjoint connected components and do not form a non-trivial hierarchical structure. Thus, an earth mover's distance (EMD) based matching model as in \cite{rubner1998metric,chen2016hybrid,Chen-Iris2016} is sufficient for computing an optimal matching between $S_t^w$ and $S_t^{w+1}$.
This EMD based matching model is defined as:
\begin{equation}\label{object_function_1}
\textbf{EMD}(S_t^w, S_t^{w+1}) = \max_{f} \sum_{r \in S_t^{w}} \sum_{d \in S_t^{w+1}} M_{r,d} f_{r,d}
\end{equation}
\begin{equation}\label{constraint_2}
\sum_{r \in S_t^w} f_{r,d} \leq 1, \forall d \in S_t^{w+1},
\end{equation}
\begin{equation}\label{constraint_3}
\sum_{d\in S_t^{w+1}} f_{r,d} \leq 1, \forall r \in S_t^w,
\end{equation}
\begin{equation} \label{constraint_1}
f_{r,d} \in \{0,1\}, \text{ if } \ \frac{2\cdot {\rm abs}(|r|-|d|)}{|r|+|d|} < \delta,
\end{equation}
\begin{equation}
M_{r,d} = \textbf{IoU}(r, d), \forall r \in S_t^w \ {\rm and \ } \forall d \in S_t^{w+1}.
\end{equation}
We utilize the Intersection over Union (IoU) as the matching score $M_{r,d}$ between each pair of considered instance candidates across two frames. Only the candidate pairs with an object size change less than $\delta$ (we set $\delta= 0.35$ experimentally) are taken as valid matching pairs (see Eq.~(\ref{constraint_1})). Eq.~(\ref{constraint_1}) enables us to deal with cell mitosis. Cell mitosis often occurs with considerable shape/size changes (e.g., larger than $\delta$). Thus, if a splitting event occurs (say) from frame $w-1$ to frame $w$, the (already) divided cells in frame $w$ are unaffected by the matching result on frames $w-1$ and $w$, while they can be matched (or captured) by the corresponding divided cells in frame $w+1$.
We solve this EMD matching problem by integer linear programming (ILP) to obtain the optimal matching result (flow) $f_{r, d}$. Then, the selected instance candidates in $S_t^w$ and in $S_t^{w+1}$ thus matched are removed from involvement in the next matching step (H-EMD), as:
\begin{equation}
S_t^{w \to w+1} = S_t^w - \{r \in S_t^w \ | \ \exists d \in S_t^{w+1} \text{ s.t. } f_{r,d} = 1 \},
\end{equation}
\begin{equation}
S_t^{w+1 \to w} = S_t^{w+1} - \{d \in S_t^{w+1} \ | \ \exists r \in S_t^{w} \text{ s.t. } f_{r,d} = 1 \}.
\end{equation}
Note that $S_t^w$ and $S_t^{w+1}$, containing the selected instance candidates for frames $x_w$ and $x_{w+1}$ so far, remain unchanged.
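As one concrete (but not prescribed) way to set up and solve this 0-1 ILP, the sketch below uses the PuLP modeling library on boolean region masks; the library choice and all names are our own assumptions:
\begin{verbatim}
import numpy as np
import pulp  # one possible ILP modeling library

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def emd_match(regs_a, regs_b, delta=0.35):
    prob = pulp.LpProblem("EMD", pulp.LpMaximize)
    f = {}
    for i, a in enumerate(regs_a):
        for j, b in enumerate(regs_b):
            # 0-1 flow variables only for pairs whose
            # relative size change is below delta.
            if 2 * abs(int(a.sum()) - int(b.sum())) \
                    / (a.sum() + b.sum()) < delta:
                f[i, j] = pulp.LpVariable(
                    "f_%d_%d" % (i, j), cat="Binary")
    # Objective: total IoU of the matched pairs.
    prob += pulp.lpSum(iou(regs_a[i], regs_b[j]) * v
                       for (i, j), v in f.items())
    for i in range(len(regs_a)):  # each r matched at most once
        prob += pulp.lpSum(v for (r, d), v in f.items()
                           if r == i) <= 1
    for j in range(len(regs_b)):  # each d matched at most once
        prob += pulp.lpSum(v for (r, d), v in f.items()
                           if d == j) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(i, j) for (i, j), v in f.items() if v.value() == 1]
\end{verbatim}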
\subsubsection*{H-EMD Matching}
Next, we present a new hierarchical earth mover's distance (H-EMD) matching model to use the selected instance candidates which are unmatched by the above EMD matching step to match with and select unselected instance candidates in the neighboring frames. Consider the unmatched selected instance candidates in $S_t^{w \to w+1}$ and the unselected instance candidates in $\mathcal{F}_t^{w+1}$ for the frame $x_{w+1}$. This matching case is denoted as ${w \to w+1}$. The reverse case is also considered symmetrically and denoted as ${w+1 \to w}$.
We define the case of \textbf{H-EMD}$(S_t^{w \to w+1}, \mathcal{F}_t^{w+1})$ below (the reverse matching case is \textbf{H-EMD}$(S_t^{w+1 \to w}, \mathcal{F}_t^{w})$ symmetrically).
\begin{equation}\label{object_function_2}
\textbf{H-EMD}(S_t^{w \to w+1}, \mathcal{F}_t^{w+1}) = \max_{f} \sum_{r \in S_t^{w \to w+1}} \sum_{d \in \mathcal{F}_t^{w+1}} M_{r,d} f_{r,d}
\end{equation}
\begin{equation}\label{constraint_5}
\sum_{d\in \mathcal{F}_t^{w+1}} f_{r,d} \leq 1, \forall r \in S_t^{w \to w+1},
\end{equation}
\begin{equation}\label{constraint_6}
\sum_{r\in S_t^{w \to w+1}} \sum_{n_{q} \in \mathcal{N}_{q}} f_{r,n_{q}} \leq 1, \forall \mathcal{N}_q \in \mathcal{N}(\mathcal{F}_t^{w+1}),
\end{equation}
\begin{equation} \label{constraint_7}
f_{r,d} \in \{0,1\}, \text{ if } \ \frac{{\rm abs}(|r|-|d|)}{|r|} < \delta,
\end{equation}
\begin{equation}
M_{r, d} = \textbf{IoU}(r, d), \forall r \in S_t^{w \to w+1} \ {\rm and \ } \forall d \in \mathcal{F}_t^{w+1},
\end{equation}
where $\mathcal{N}(\mathcal{F}_t^{w+1})$ denotes the set of all the leaf-to-root paths in $\mathcal{F}_t^{w+1}$. Eq.~(\ref{constraint_6}) incorporates mutual exclusion into H-EMD: by the mutual exclusion requirement for the output instances (also used in previous work, e.g., \cite{silberman2014instance}), one pixel can belong to at most one output instance.
Specifically, let $\mathcal{N}_{q}$ denote the node list of a leaf-to-root path in $\mathcal{F}_t^{w+1}$. If an instance candidate (a node) in $\mathcal{N}_{q}$ is selected, then any other candidates in $\mathcal{N}_{q}$ cannot be selected anymore.
Further, note that Eq.~(\ref{constraint_6}) also plays the same role for H-EMD that Eq.~(\ref{constraint_2}) plays for EMD, that is, it ensures that each node $n_{q} \in \mathcal{F}_t^{w+1}$ can be matched with at most one node $r\in S_t^{w \to w+1}$.
We solve the H-EMD matching problem by integer linear programming (ILP) to yield the optimal matching results.
Fig.~\ref{fig:HEMD} gives an illustration of the H-EMD matching model.
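The H-EMD matching model admits an analogous ILP sketch (again assuming PuLP, and reusing \texttt{iou()} from the EMD sketch above); the structural difference is the leaf-to-root path constraint, where \texttt{paths} holds one node-index list per leaf-to-root path of the target forest:
\begin{verbatim}
def hemd_match(sel, forest, paths, delta=0.35):
    # sel: masks of unmatched selected candidates (frame w);
    # forest: masks of unselected candidates (frame w+1);
    # paths: node-index lists, one per leaf-to-root path.
    prob = pulp.LpProblem("HEMD", pulp.LpMaximize)
    f = {}
    for i, r in enumerate(sel):
        for j, d in enumerate(forest):
            if abs(int(r.sum()) - int(d.sum())) / r.sum() < delta:
                f[i, j] = pulp.LpVariable(
                    "f_%d_%d" % (i, j), cat="Binary")
    prob += pulp.lpSum(iou(sel[i], forest[j]) * v
                       for (i, j), v in f.items())
    for i in range(len(sel)):  # each r used at most once
        prob += pulp.lpSum(v for (r, d), v in f.items()
                           if r == i) <= 1
    for path in paths:
        # Mutual exclusion: at most one matched candidate
        # on each leaf-to-root path.
        prob += pulp.lpSum(v for (r, d), v in f.items()
                           if d in path) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(i, j) for (i, j), v in f.items() if v.value() == 1]
\end{verbatim}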
\subsubsection*{State Update}
Note that for each frame $x_w$ with $1 < w < W$ (i.e., except the first and last frames), $\mathcal{F}_t^{w}$ can receive matching results from both the (left) frame $x_{w-1}$ and the (right) frame $x_{w+1}$. There may be conflicts between these two directional flows to any of the trees in $\mathcal{F}_t^{w}$. Thus, we apply a combining function $\Phi$ with a left-right-competing rule: For each tree $T^w \in \mathcal{F}_t^{w}$, if the sum of the matching scores from the left frame is larger than or equal to the sum of the matching scores from the right frame, then we use the left matching results as the matching results for $T^w$; otherwise, we use the right matching results as the matching results for $T^w$. That is,
\begin{equation}
\Delta S_t^w =
\begin{cases}
f^{w+1 \to w} & w = 1, \\
f^{w-1 \to w} & w = W, \\
\Phi(f^{w+1 \to w}, f^{w-1 \to w}) & \text{otherwise}.
\end{cases}
\end{equation}
After the instance candidate selection results from each $\mathcal{F}_t^{w}$ are obtained by the above H-EMD matching step in iteration $t+1$, we compute the new state $S_{t+1}^w$ as the union of $S_t^w$ and the newly selected instance candidates $\Delta S_t^w$ from $\mathcal{F}_t^{w}$. Further, for each selected instance candidate node $n_c \in \Delta S_t^w$, all the nodes in any leaf-to-root path containing $n_c$ in $\mathcal{F}_t^{w}$ are removed from the remaining unselected instance candidates of $\mathcal{F}_t^w$ (due to mutual exclusion). That is,
\begin{equation}
S_{t+1}^w = S_t^w \cup \Delta S_t^w,
\end{equation}
\begin{equation}
\mathcal{F}_{t+1}^w = \mathcal{F}_{t}^w - \{n \ | \ n \in \mathcal{N}_{n_c}^w, \ \mathcal{N}_{n_c}^w \in \mathcal{N}(\mathcal{F}_t^w,n_c), n_c \in \Delta S_t^w \},
\end{equation}
where $\mathcal{N}_{n_c}^w$ denotes a leaf-to-root path containing a node $n_c\in \Delta S_t^w$ in the set $\mathcal{N}(\mathcal{F}_t^w,n_c)$ of all such paths in the forest $\mathcal{F}_t^w$.
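A minimal sketch of the left-right-competing rule $\Phi$ (illustrative names; \texttt{score} maps a matched pair to its matching score, and \texttt{tree\_of} maps a node index to its tree):
\begin{verbatim}
def combine(left, right, score, tree_of):
    # left/right: matched pairs (r, d) coming from frames
    # w-1 and w+1, respectively; keep, per tree, the
    # direction with the larger total matching score.
    selected = []
    trees = {tree_of[d] for _, d in left + right}
    for t in trees:
        ls = sum(score[p] for p in left if tree_of[p[1]] == t)
        rs = sum(score[p] for p in right if tree_of[p[1]] == t)
        wins = left if ls >= rs else right
        selected += [d for _, d in wins if tree_of[d] == t]
    return selected
\end{verbatim}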
After performing $T$ iterations of the above matching and selection process, we finally complete our instance candidate selection by applying the ``padding'' process below.
{\bf Remarks}: (i) The above matching and selection process is repeated for multiple ($T$) iterations in two directions independently and locally (both for frames $w-1$ and $w$, and for frames $w$ and $w+1$) in order to allow matching results to be propagated not only to the consecutive frames but also to nearby non-consecutive frames. For example, matching results on earlier (or later) frames can be propagated gradually through iterations to multiple nearby later (or earlier) frames to help select unselected instance candidates on such later (or earlier) frames.
(ii) Observant readers may have noticed that in the EMD matching step, some redundant matching may be performed, which is unnecessary. For example, in iteration $t+1$, the EMD matching step may obtain some matched pairs between the selected instance candidates in $S_t^w$ and in $S_t^{w+1}$ that were already obtained by the EMD matching in earlier iterations. The reason for this is that we keep all the selected instance candidates for the frame $x_w$ in the state $S_t^w$, which is used for the EMD matching step. Note that the EMD matching step is needed to ensure correctness because newly selected instance candidates will be added in order to produce new states iteratively, regardless of whether such redundant matching occurs in the EMD matching step. While we could remove those selected instance candidates in each $S_t^w$ that are involved in any matched pairs obtained by the EMD matching in any iteration, doing so involves a tedious case analysis: $S_t^w$ is in general involved with both $S_t^{w-1}$ and $S_t^{w+1}$ in the EMD matching, and $S_t^{w-1}$ may cause a different subset of the selected instance candidates in $S_t^w$ to be removed from $S_t^w$ than those caused by $S_t^{w+1}$, thus giving rise to two versions of $S_t^w$ before the EMD matching step. For simplicity of our presentation, we choose to present our matching and selection process in the current form, tolerating some redundant matching in the EMD matching step. We should mention that experimentally, we found that such redundant matching incurs only negligible additional computation time.
\subsubsection{Remaining Instance Candidate Selection by Padding}
\label{final-selection}
Finally, we consider the remaining unselected instance candidates in each forest $\mathcal{F}_T^w$, as there may still be instance candidates in $\mathcal{F}_T^w$ that are not selected by the iterative matching and selection process. We can put these remaining unselected instance candidates into two cases. (a) Unmatchable case: The candidates do not show instance consistency in two consecutive frames. For example, the object size changes drastically compared to other corresponding candidates in the two consecutive frames, or the candidates appear in one frame but disappear in the other frame. (b) Matchable case: The candidates do show instance consistency, but none of the potentially matchable candidates in the neighboring frames is picked to match by the above matching process. In both these cases, the matching has no effect on such unselected candidates. Thus, we solve these cases in a single-frame segmentation manner, called padding (that is, no longer utilizing instance consistency across consecutive frames because up to this point, it could not help select those remaining unselected candidates).
To select from the remaining unselected instance candidates in $\mathcal{F}_T^{w}$ for each individual frame $x_w$, one can apply any common post-processing method.
We found experimentally that selecting all the remaining unselected root nodes yields very good results (these root nodes belong to ICT trees in $\mathcal{F}_T^{w}$ with one or more leaf-to-root paths). As shown in Eq.~(\ref{equ:result2}), the final instance set $S_{F}^w$ for each frame $x_w$ is formed by the union of the unselected root instance candidate regions $\mathcal{R}_T^w$ in $\mathcal{F}_T^{w}$ and the already selected instances in the final state $S_T^w$ by the above matching and selection process, as
\begin{equation} \label{equ:result2}
S_F^w = S_T^w \cup \mathcal{R}_T^w.
\end{equation}
$S_{F}=\{S_F^1,S_F^2, \ldots, S_F^W\}$ is taken as the final instance segmentation results for the input image sequence $X$.
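A minimal sketch of this padding step (with the node/parent representation of the earlier sketches; in the remaining forest, a node is a root if its parent is absent or was removed):
\begin{verbatim}
def pad_and_finalize(state, parent, remaining):
    rem = set(remaining)
    # Roots of the remaining forest: nodes whose parent
    # is None or has been removed from the forest.
    roots = [i for i in remaining
             if parent[i] is None or parent[i] not in rem]
    return list(state) + roots  # final instances for the frame
\end{verbatim}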
\begin{table*}[t]
\centering
\scriptsize
\caption[dd]{Instance segmentation results on the six video datasets. A {\bf bold} score represents the best performance on the corresponding dataset, and an \underline{underline} denotes the best post-processing performance with a specific backbone.
``*'' indicates that our result is statistically significantly better than the second-best post-processing result (with $p$-value $<$ 0.05).}
\setlength{\tabcolsep}{1mm}
{
\begin{tabular}{ c| c |c c |c c |cc|cc|cc|cc }\hline
\multicolumn{2}{c|}{}& \multicolumn{2}{c|}{Fluo-N2DL-HeLa} & \multicolumn{2}{c|}{PhC-C2DL-PSC} &
\multicolumn{2}{c|}{PhC-C2DH-U373} &
\multicolumn{2}{c|}{Fluo-N2DH-SIM+} &
\multicolumn{2}{c|}{P.~aeruginosa} & \multicolumn{2}{c}{M.~xanthus} \\\cline{3-14}
\multicolumn{2}{c|}{} & F1 $(\%)$ $\uparrow$ & AJI $(\%)$ $\uparrow$ & F1 $(\%)$ $\uparrow$ & AJI $(\%)$ $\uparrow$ & F1 $(\%)$ $\uparrow$ & AJI $(\%)$ $\uparrow$ & F1 $(\%)$ $\uparrow$ & AJI $(\%)$ $\uparrow$ & F1 $(\%)$ $\uparrow$ & AJI $(\%)$ $\uparrow$ & F1 $(\%)$ $\uparrow$ & AJI $(\%)$ $\uparrow$
\\ \hline
\multicolumn{2}{c|}{KTH-SE~\cite{ulman2017objective}}
& 96.3 & 90.0 & 93.8 &72.9
& -- & -- & 97.9 & \textbf{87.8}
& -- & -- & -- & -- \\
\multicolumn{2}{c|}{Cellpose~\cite{stringer2021cellpose}}
&93.9 &79.9 & 92.8 &72.4
& -- &-- & 73.3 & 64.6
& -- & -- & -- & -- \\
\multicolumn{2}{c|}{Hybrid~\cite{chen2016hybrid}}
& -- & -- & -- & --
& -- & -- & -- & --
& -- & --& 79.2 & 61.1 \\
\multicolumn{2}{c|}{Embedding~\cite{chen2019instance}}
& -- & -- & -- & --
& -- & -- & 89.8$\pm$0.8 & 73.2$\pm$1.2
& -- & -- & -- & -- \\
\multicolumn{2}{c|}{Mask R-CNN\cite{he2017mask}}
&90.7$\pm$1.2 & 77.0$\pm$2.0 &89.7$\pm$0.8 & 71.0$\pm$1.1
&67.6$\pm$ 0.9 & 62.5$\pm$2.0 & 90.4$\pm$1.3 & 76.9$\pm$1.5
& 69.5$\pm$0.6 & 51.5$\pm$0.5 & 62.3$\pm$0.9 & 39.6$\pm$0.7 \\
\multicolumn{2}{c|}{StarDist~\cite{schmidt2018cell}
} &\textbf{96.7$\pm$0.1} &91.1$\pm$0.2 &96.4$\pm$0.1 & 82.0$\pm$0.4 & 93.4$\pm$0.3 &82.2$\pm$1.0
& 96.2$\pm$0.2 & 79.1$\pm$0.4
& 93.1$\pm$0.4 & 68.9$\pm$0.7 &85.4$\pm$0.5 & 62.0$\pm$0.4 \\
\multicolumn{2}{c|}{KIT-Sch-GE~\cite{scherr2021improving}}
& 96.6$\pm$0.1 & \textbf{92.6$\pm$0.2} & 95.7$\pm$0.3 &80.4$\pm$1.0
&93.2$\pm$0.8 & 79.6$\pm$0.8 &95.7$\pm$0.9 & 81.5$\pm$1.8
& 90.1$\pm$1.1 & 65.5$\pm$0.9 &92.8$\pm$0.1 & 60.2$\pm$0.9 \\
\multicolumn{2}{c|}{nnU-Net~\cite{isensee2021nnu}}
& 93.1$\pm$0.4 & 83.7$\pm$0.8 & 91.4$\pm$0.1 & 77.7$\pm$0.1
&92.3$\pm$0.1 &86.3$\pm$0.4 & 97.9$\pm$0.1 & 83.9$\pm$0.3
&93.5$\pm$0.4 & \textbf{83.7$\pm$0.6} & 95.6$\pm$0.1 & 84.5$\pm$0.4 \\\hline \hline
\multirow{6}{*}{\tabincell{c}{
U-Net\\~\cite{ronneberger2015u}}
}
& 0.5-Th
& 93.4$\pm$0.2 & 84.8$\pm$0.4 & 93.8$\pm$0.5 & 82.7$\pm$0.2
& 92.4$\pm$0.6 & 88.5$\pm$0.5
& 92.9$\pm$0.6 & 75.5$\pm$1.5
& 92.7$\pm$0.3 & 80.1$\pm$0.6
& 94.3$\pm$0.3 & 84.8$\pm$0.9 \\
& Otsu & 93.2$\pm$0.2 & 84.4$\pm$0.4 & 93.1$\pm$0.1 & 82.0$\pm$0.1
& 91.3$\pm$1.4 & 89.0$\pm$0.4 & 92.6$\pm$0.6 & 74.9$\pm$1.5
& 92.8$\pm$0.3 & 81.0$\pm$0.6 & 94.2$\pm$0.3 & 84.2$\pm$0.8 \\
& Watershed
& 95.7$\pm$0.1 & 91.5$\pm$0.2 & 95.9$\pm$0.1 & 86.7$\pm$0.1
& 91.6$\pm$0.6 & \underline{\textbf{89.4$\pm$0.8}} & \underline{97.6$\pm$0.1} & 83.8$\pm$0.5
& 93.4$\pm$0.8 & 82.0$\pm$0.9 & 95.5$\pm$0.1 & 87.5$\pm$0.5 \\
& DenseCRF
& 91.4$\pm$0.1 & 80.0$\pm$0.2 & 92.2$\pm$0.2 & 80.5$\pm$0.2
& 92.5$\pm$0.3 & 89.0$\pm$0.4 & 91.6$\pm$0.5 & 73.1$\pm$0.2
& 93.2$\pm$0.2 & 80.0$\pm$0.5 & 94.1$\pm$0.3 & 83.9$\pm$1.0 \\
& MaxValue
& 93.3$\pm$0.2 & 84.5$\pm$0.4 & 93.1$\pm$0.1 & 81.8$\pm$0.1
& 91.6$\pm$0.7 & 88.9$\pm$0.4 & 91.4$\pm$0.6 & 72.1$\pm$1.6
& 92.2$\pm$0.2 & 80.0$\pm$0.9 & 93.9$\pm$0.4 & 83.7$\pm$1.0\\
& H-EMD
& \underline{96.4$\pm$0.1$^*$} & \underline{92.4$\pm$0.2$^*$} & \underline{96.3$\pm$0.3$^*$} & \underline{\textbf{87.2$\pm$0.1$^*$}}
& \underline{\textbf{93.5$\pm$0.2$^*$}} & \underline{\textbf{89.4$\pm$0.4}} &\underline{97.6$\pm$0.1} & \underline{84.3$\pm$0.5}
& \underline{\textbf{94.4$\pm$0.3$^*$}} & \underline{82.1$\pm$0.4} &
\underline{\textbf{97.1$\pm$0.6$^*$}} & \underline{\textbf{89.3$\pm$0.4$^*$}} \\\hline
\multirow{6}{*}{\tabincell{c}{DCAN\\~\cite{chen2016dcan}}}
&0.5-Th & 92.3$\pm$0.6 & 82.8$\pm$0.8 & 92.6$\pm$0.2 & 80.9$\pm$0.3
& 89.6$\pm$1.4 & 87.6$\pm$0.6 & 92.6$\pm$0.8 & 76.2$\pm$0.9
&92.3$\pm$0.9 & 79.6$\pm$0.4 & 93.3$\pm$0.3 & 82.2$\pm$0.4 \\
& Otsu & 92.1$\pm$0.5 & 82.5$\pm$0.7 & 92.0$\pm$0.1 & 80.0$\pm$0.2
& 89.4$\pm$1.0 & 88.1$\pm$0.6 & 92.5$\pm$0.6 & 75.6$\pm$0.8
&92.2$\pm$1.0 & 80.4$\pm$0.4 &93.0$\pm$0.3 & 81.9$\pm$0.5 \\
& Watershed
& 95.5$\pm$0.4 & 91.4$\pm$0.6 & 95.8$\pm$0.1 & 86.5$\pm$0.1
& 91.2$\pm$0.4 & 88.4$\pm$0.5 & 97.7$\pm$0.3 & 83.7$\pm$0.8
&92.5$\pm$0.3 & 79.9$\pm$0.6 &95.6$\pm$0.2 &87.4$\pm$0.3 \\
& DenseCRF
& 91.2$\pm$0.4 & 80.0$\pm$0.6
& 91.1$\pm$0.2 & 78.6$\pm$0.3 & 91.0$\pm$0.6 & 88.1$\pm$0.6 & 91.7$\pm$0.6 & 74.0$\pm$0.7
& 93.1$\pm$0.3 & 79.7$\pm$0.4 &93.1$\pm$0.3 &81.4$\pm$0.6 \\
& MaxValue
& 92.6$\pm$0.6 & 83.5$\pm$1.0 & 92.0$\pm$0.3 & 79.9$\pm$0.5 & 89.0$\pm$1.3 & 88.0$\pm$0.6 & 91.5$\pm$0.6 & 73.3$\pm$0.9
&91.6$\pm$1.1 &79.7$\pm$0.5 &93.2$\pm$0.3 &82.1$\pm$0.5 \\
& H-EMD
& \underline{96.1$\pm$0.2$^*$} & \underline{91.6$\pm$0.2} & \underline{\textbf{96.6$\pm$0.4$^*$}} & \underline{\textbf{87.2$\pm$0.1$^*$}}
& \underline{93.3$\pm$0.3$^*$} & \underline{88.7$\pm$0.4} & \underline{98.0$\pm$ 0.1} & \underline{85.2$\pm$0.6$^*$}
& \underline{94.2$\pm$0.3$^*$} & \underline{80.9$\pm$0.5$^*$}
& \underline{96.7$\pm$0.2$^*$} & \underline{88.4$\pm$0.4$^*$} \\\hline
\multirow{6}{*}{\tabincell{c}{UNet-\\LSTM\\ ~\cite{arbelle2019microscopy} }}
&0.5-Th
&92.5$\pm$0.2 & 83.4$\pm$0.4 & 91.5$\pm$0.3 & 79.6$\pm$0.5
& 88.4$\pm$0.9 & 86.4$\pm$0.3 & 96.2$\pm$0.5 & 82.4$\pm$1.0
& 90.9$\pm$0.3 & 80.5$\pm$0.8 & 93.3$\pm$0.3 & 81.2$\pm$0.6 \\
&Otsu & 92.4$\pm$0.2 & 83.4$\pm$0.3 & 91.1$\pm$0.3 & 78.9$\pm$0.4
& 87.6$\pm$0.8 & 86.9$\pm$0.3 & 95.9$\pm$0.4 & 81.8$\pm$0.8
&90.9$\pm$0.2 & 80.5$\pm$0.7 & 93.2$\pm$0.3 & 81.0$\pm$0.6\\
& Watershed & 94.6$\pm$0.3 & 89.1$\pm$0.6 &95.7$\pm$0.1 & 85.3$\pm$0.2
& 89.6$\pm$0.7 & \underline{88.1$\pm$0.4} & 98.3$\pm$0.2 & 86.0$\pm$0.7
& 90.3$\pm$0.1 & 80.2$\pm$0.2 &94.9$\pm$0.2 &85.4$\pm$0.2 \\
& DenseCRF & 92.2$\pm$0.2 & 82.5$\pm$0.3 & 91.7$\pm$0.3 & 79.4$\pm$0.4
& 89.2$\pm$0.8 & 87.0$\pm$0.3 &95.5$\pm$0.4 & 81.3$\pm$1.1
& 91.3$\pm$0.1 & 81.6$\pm$0.4 &93.2$\pm$0.4 &80.5$\pm$0.9\\
&MaxValue &92.4$\pm$0.2 &83.4$\pm$0.3 &91.1$\pm$0.2 & 79.3$\pm$0.4
& 87.1$\pm$0.9 & 86.8$\pm$0.3 & 95.7$\pm$0.3 & 81.3$\pm$0.8
&90.9$\pm$0.2 & 80.5$\pm$0.7 & 93.3$\pm$0.3 & 81.1$\pm$0.5 \\
& H-EMD
& \underline{95.7$\pm$0.1$^*$} & \underline{90.2$\pm$0.3$^*$} & \underline{96.1$\pm$0.1$^*$} & \underline{85.4$\pm$0.2}
& \underline{92.6$\pm$0.4$^*$} & 87.3$\pm$0.4 &\underline{\textbf{98.7$\pm$0.6}} & \underline{86.6$\pm$0.8}
& \underline{92.6$\pm$0.3$^*$} & \underline{82.5$\pm$0.1$^*$} & \underline{96.7$\pm$0.2$^*$} & \underline{87.1$\pm$0.3$^*$} \\
\hline
\end{tabular}
\label{tab:public_video}
}
\end{table*}
\begin{table*}
\caption[dd]{Cell segmentation benchmarking: Evaluation of the submitted methods published by the Cell Tracking Challenge organizers.}
\begin{subtable}{0.7\textwidth}
\centering
\begin{tabular}{c c |c|c|c|c} \hline
& & Fluo-N2DL-HeLa & PhC-C2DL-PSC & PhC-C2DH-U373 & Fluo-N2DH-SIM+ \\
\hline
\multirow{5}{*}{OP$_{CSB}$} & $1^{st}$ & \cellcolor[HTML]{C2FF9D}0.957 & \cellcolor[HTML]{D59DFF}0.859 & \cellcolor[HTML]{AED6F1}0.959 & \cellcolor[HTML]{ABB2B9}0.905 \\
& $2^{nd}$ &\cellcolor[HTML]{B9770E}0.957 &\cellcolor[HTML]{FA8072}0.849 ($2^{nd}$)& \cellcolor[HTML]{FA8072}0.958 ($2^{nd}$) & \cellcolor[HTML]{FA8072}0.899 ($2^{nd}$) \\
& $3^{rd}$ &\cellcolor[HTML]{D98880}0.954 &\cellcolor[HTML]{7FB3D5} 0.847 & \cellcolor[HTML]{C2FF9D}0.956 & \cellcolor[HTML]{B9770E}0.898 \\
& & \cellcolor[HTML]{FA8072}0.946 ($6^{th}$)
& & \\
\hline
\multirow{5}{*}{SEG} & $1^{st}$ & \cellcolor[HTML]{C2FF9D}0.923 & \cellcolor[HTML]{D59DFF}0.743 & \cellcolor[HTML]{FA8072}0.929 ($1^{st}$) & \cellcolor[HTML]{ABB2B9}0.832 \\
& $2^{nd}$ & \cellcolor[HTML]{D98880}0.923 & \cellcolor[HTML]{7FB3D5} 0.733 & \cellcolor[HTML]{AED6F1}0.927 & \cellcolor[HTML]{FA8072}0.827 ($2^{nd}$) \\
& $3^{rd}$ & \cellcolor[HTML]{B9770E}0.922 &\cellcolor[HTML]{FA8072}0.728 (${3^{rd}}$) & \cellcolor[HTML]{7FB3D5}0.927 & \cellcolor[HTML]{B9770E}0.826 \\
& & \cellcolor[HTML]{FA8072}0.903 ($8^{th}$) & &
& \\ \hline
\multirow{5}{*}{DET} & $1^{st}$ & \cellcolor[HTML]{D59DFF}0.994 &\cellcolor[HTML]{D59DFF}0.975 & \cellcolor[HTML]{AED6F1}0.991 & \cellcolor[HTML]{C2FF9D}0.983 \\
& $2^{nd}$ & \cellcolor[HTML]{D6DBDF}0.992 &\cellcolor[HTML]{76D7C4} 0.972 & \cellcolor[HTML]{C2FF9D}0.990 & \cellcolor[HTML]{F7DC6F}0.981 \\
& $3^{rd}$ &\cellcolor[HTML]{EBDEF0}0.992 &\cellcolor[HTML]{FA8072}0.971 ($3^{rd}$) & \cellcolor[HTML]{FA8072}0.988 ($3^{rd}$) &
\cellcolor[HTML]{FFF59D} 0.979 \\
& & \cellcolor[HTML]{FA8072}0.990 ($7^{th}$) & &
& \cellcolor[HTML]{FA8072}0.972 ($8^{th}$) \\
\hline
\end{tabular}
\end{subtable}
\hfill
\begin{subtable}{0.25\textwidth}
\centering
\fbox{\begin{tabular}{ll}
\cellcolor[HTML]{FA8072} & Our H-EMD \\
\cellcolor[HTML]{FFF59D} & TUG-AT~\cite{payer2019segmenting} \\
\cellcolor[HTML]{AED6F1}& CALT-US~\cite{pena2020j}\\
\cellcolor[HTML]{C2FF9D}& BGU-IL~\cite{arbelle2019microscopy}\\
\cellcolor[HTML]{D59DFF}& KIT-Sch-GE~\cite{scherr2021improving}\\
\cellcolor[HTML]{ABB2B9}& DKFZ-GE~\cite{isensee2021nnu}\\
\cellcolor[HTML]{D98880}& MU-Ba-US~\cite{al2018multi}\\
\cellcolor[HTML]{7FB3D5}& UNSW-AU~\cite{zhu2021automatic}\\
\cellcolor[HTML]{76D7C4}& UVA-NL~\cite{lux2019dic}\\
\cellcolor[HTML]{F7DC6F}& FR-Ro-GE~\cite{ronneberger2015u}\\
\cellcolor[HTML]{EBDEF0}& RWTH-GE~\cite{eschweiler2019cnn}\\
\cellcolor[HTML]{B9770E}& BRF-GE~\cite{koerber2022mia}\\
\cellcolor[HTML]{D6DBDF} & KTH-SE~\cite{ulman2017objective}
\end{tabular}}
\end{subtable}
\label{tab:leaderboard}
\end{table*}
\begin{table}
\scriptsize
\centering
\caption[dd]{Instance segmentation results on the two 3D datasets. A {\bf bold} score represents the best performance on the corresponding dataset, and an \underline{underline} denotes the best post-processing performance with a specific backbone.
``*'' indicates that our result is statistically significantly better than the second-best post-processing result (with $p$-value $<$ 0.05).}
\setlength{\tabcolsep}{1mm}
{
\begin{tabular}{ c| c |c c |c c }\hline
\multicolumn{2}{c|}{} & \multicolumn{2}{c|}{Fluo-N3DH-CHO} & \multicolumn{2}{c}{Fungus} \\\cline{3-6}
\multicolumn{2}{c|}{} & F1 $(\%)$ $\uparrow$ & AJI $(\%)$ $\uparrow$ & F1 $(\%)$ $\uparrow$ & AJI $(\%)$ $\uparrow$
\\ \hline
\multicolumn{2}{c|}{KTH-SE~\cite{ulman2017objective}} & 83.4 & 75.1 & -- & -- \\
\multicolumn{2}{c|}{SPDA~\cite{zhang2019MILD}} & -- & -- & 86.9 & 77.2 \\
\multicolumn{2}{c|}{Cellpose~\cite{stringer2021cellpose}} & 76.3 & 66.3 & -- &-- \\
\multicolumn{2}{c|}{PixelEmbedding~\cite{chen2019instance}} & 89.0$\pm$1.0 & 74.9$\pm$1.6 & -- & -- \\
\multicolumn{2}{c|}{Mask R-CNN~\cite{he2017mask}}
&89.4$\pm$0.6 & 77.4$\pm$1.7 & 67.3$\pm$1.1 & 53.1$\pm$1.1 \\
\multicolumn{2}{c|}{StarDist~\cite{schmidt2018cell}} &94.9$\pm$0.4 & 83.7$\pm$0.6 & 84.6$\pm$0.2 & 75.0$\pm$0.3 \\
\multicolumn{2}{c|}{KIT-Sch-GE~\cite{scherr2021improving}} &96.2$\pm$0.2 & 83.8$\pm$0.4 &84.8$\pm$0.2 & 73.9$\pm$0.3 \\
\multicolumn{2}{c|}{nnU-Net~\cite{isensee2021nnu}} &94.8$\pm$0.1 & 84.0$\pm$0.1 & 84.5$\pm$0.2 & 70.7$\pm$0.1 \\ \hline
\multirow{6}{*}{U-Net~\cite{ronneberger2015u}}
& 0.5-Th & 95.5$\pm$0.3 & 85.4$\pm$0.4 & 87.3$\pm$0.1 & 77.3$\pm$0.7 \\
& Otsu & 95.6$\pm$0.2 & 85.2$\pm$0.4 & 87.3$\pm$0.1 & 77.0$\pm$0.8 \\
& Watershed & 96.2 $\pm$0.2 & 85.4$\pm$0.3 & 87.3$\pm$0.1 & 77.4$\pm$0.1 \\
& DenseCRF & 96.0$\pm$0.3 & 85.4$\pm$0.3 & 86.8$\pm$0.1 & 76.8$\pm$0.6 \\
& Max Value & 95.4$\pm$0.3 & 85.4$\pm$0.3 & 87.3$\pm$0.1 & 75.7$\pm$0.9 \\
& H-EMD & \underline{\textbf{97.0$\pm$0.1$^*$}} & \underline{\textbf{86.4$\pm$0.3$^*$}} & \underline{\textbf{87.4$\pm$0.1}} & \underline{\textbf{78.4$\pm$0.1$^*$}} \\\hline
\multirow{6}{*}{DCAN~\cite{chen2016dcan}}
& 0.5-Th & 94.7$\pm$0.6 & 84.8$\pm$0.7 & 87.1$\pm$0.4 & 76.8$\pm$0.5 \\
& Otsu & 94.8$\pm$0.6 & 84.6$\pm$0.7 & 87.0$\pm$0.3 & 76.7$\pm$0.6 \\
& Watershed & 95.5$\pm$0.6 & 85.1$\pm$0.6 &87.0$\pm$0.3 &77.2$\pm$0.2 \\
& DenseCRF & 95.7$\pm$0.5 & 84.7$\pm$0.7 & 86.5$\pm$0.3 & 76.5$\pm$0.6 \\
& Max Value & 94.7$\pm$0.5 & 84.8$\pm$0.6 &87.0$\pm$0.3 &76.1$\pm$0.5 \\
& H-EMD & \underline{96.9$\pm$0.5$^*$} & \underline{85.9$\pm$0.5$^*$} & \underline{87.3$\pm$0.3} & \underline{78.3$\pm$0.2$^*$} \\\hline
\multirow{6}{*}{\tabincell{c}{UNet-LSTM\\ ~\cite{arbelle2019microscopy} }}
&0.5-Th &90.5$\pm$1.0 & 81.8$\pm$0.6 & 85.6$\pm$0.1 & 72.6$\pm$0.8 \\
&Otsu & 90.5$\pm$0.9 & 81.7$\pm$0.5 & 85.5$\pm$0.1 & 72.5$\pm$0.7 \\
& Watershed & 90.5$\pm$0.8 & 78.5$\pm$0.6 & \underline{86.0$\pm$0.3} & 75.0$\pm$0.2 \\
& DenseCRF & 92.5$\pm$0.6 & 81.9$\pm$0.5 & 85.1$\pm$0.1 & 72.3$\pm$0.8 \\
&Max Value & 89.9$\pm$0.9 & 81.7$\pm$0.5 & 85.5$\pm$0.1 & 72.4$\pm$0.7 \\
& H-EMD
& \underline{92.7$\pm$0.4} & \underline{82.7$\pm$0.8}
& 85.8$\pm$0.3 & \underline{76.2$\pm$0.2$^*$} \\ \hline
\end{tabular}
\label{tab:3D}
}
\end{table}
\begin{figure*}
\centering
\includegraphics[width = 0.90\textwidth]{VideoResult.pdf}
\caption{Visual examples of instance segmentation results on the six video datasets using the U-Net backbone. Yellow arrows point to some instance errors corrected by our H-EMD method, and green arrows point to better instance boundaries attained by H-EMD. GT = ground truth; ST = silver truth; PM = probability map; WS = Watershed.}
\label{result}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width = 0.91\textwidth]{result.pdf}
\caption{Visual examples of instance segmentation results on some slices of the two 3D stack datasets using the U-Net backbone. Yellow arrows point to some instance segmentation errors corrected by H-EMD. GT = ground truth; ST = silver truth; PM = probability map; WS = watershed.}
\label{fig:3Dresult}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{threshold.pdf}
\caption{Illustrating the influence of the pre-specified threshold value $\tau$ on the instance segmentation results in F1 and AJI on three video datasets. U373 = PhC-C2DH-U373, P = P.~aeruginosa, and M = M.~xanthus.}
\label{fig:thres}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}{\columnwidth}
\includegraphics[width= 0.88\textwidth]{F1.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{\columnwidth}
\includegraphics[width = 0.85\textwidth]{AJI.pdf}
\end{subfigure}
\caption{Illustrating the influence of the number of matching-and-selection iterations on the instance segmentation results in F1 (left) and AJI (right) on the six video datasets.}
\label{fig:iterations}
\end{figure*}
\begin{table*}
\scriptsize
\centering
\caption[dd]{SEG and DET based evaluation on the four public video datasets (using ground truth for the Fluo-N2DH-SIM+ dataset and gold truth for the other three datasets).}
\setlength{\tabcolsep}{1mm}
{
\begin{tabular}{c| c|c c|c c|c c|c c} \hline
\multicolumn{2}{c|}{} &
\multicolumn{2}{c|}{Fluo-N2DL-HeLa} & \multicolumn{2}{c|}{PhC-C2DL-PSC} &
\multicolumn{2}{c|}{PhC-C2DH-U373} &
\multicolumn{2}{c}{Fluo-N2DH-SIM+} \\\cline{3-10}
\multicolumn{2}{c|}{} & SEG $\uparrow$ & DET $\uparrow$ & SEG $\uparrow$ & DET $\uparrow$ & SEG $\uparrow$ & DET $\uparrow$ & SEG $\uparrow$ & DET $\uparrow$ \\ \hline
\multicolumn{2}{c|}{KTH-SE~\cite{ulman2017objective}}
& 0.848 & 0.983 & 0.680 & 0.967
& -- & -- & \textbf{0.865} & 0.989 \\
\multicolumn{2}{c|}{Cellpose~\cite{stringer2021cellpose}} &
0.800 & 0.970 & 0.677 & 0.930
& -- &-- & 0.679 & 0.814 \\
\multicolumn{2}{c|}{Embedding~\cite{chen2019instance}}
& -- & -- & -- & --
& -- & -- & 0.671$\pm$0.008 & 0.850$\pm$0.007 \\
\multicolumn{2}{c|}{Mask R-CNN\cite{he2017mask}}
& 0.714$\pm$0.018 & 0.877$\pm$0.008 & 0.611$\pm$0.006 & 0.802$\pm$0.020
& 0.552$\pm$0.020 & 0.663$\pm$0.260 & 0.721$\pm$0.120 & 0.903$\pm$0.006 \\
\multicolumn{2}{c|}{StarDist~\cite{schmidt2018cell}
}
& 0.870$\pm$0.002 & 0.984$\pm$0.001 & 0.741$\pm$0.004 & \textbf{0.976$\pm$0.001}
& 0.806$\pm$0.005 & 0.939$\pm$0.005 & 0.774$\pm$0.004 & 0.986$\pm$0.001 \\
\multicolumn{2}{c|}{KIT-Sch-GE~\cite{scherr2021improving}}
& \textbf{0.883$\pm$0.003} & \textbf{0.987$\pm$0.001} & 0.721$\pm$0.008 & 0.975$\pm$0.003
& 0.802$\pm$0.005 & 0.953$\pm$0.004 & 0.799$\pm$0.019 & 0.966$\pm$0.015 \\
\multicolumn{2}{c|}{nnU-Net~\cite{isensee2021nnu}}
& 0.743$\pm$0.013 & 0.879$\pm$0.018 & 0.693$\pm$0.005 & 0.919$\pm$0.001
& 0.840$\pm$0.003 & 0.954$\pm$0.001 & 0.826$\pm$0.003 & 0.995$\pm$0.001\\\hline \hline
\multirow{6}{*}{\tabincell{c}{
U-Net~\cite{ronneberger2015u}}
}
& 0.5-Th
& 0.763$\pm$0.007 & 0.956$\pm$0.002 & 0.748$\pm$0.002 & 0.954$\pm$0.001
& 0.852$\pm$0.001 & 0.951$\pm$0.001
& 0.784$\pm$0.009 & 0.967$\pm$0.003 \\
& Otsu & 0.758$\pm$0.007 & 0.955$\pm$0.001 &0.744$\pm$0.003 & 0.950$\pm$0.001 & 0.852$\pm$0.001 &0.950$\pm$0.002 & 0.780$\pm$0.009 & 0.966$\pm$0.003\\
& Watershed
& 0.872$\pm$0.002 & 0.980$\pm$0.001 & 0.761$\pm$0.004 & 0.972$\pm$0.001
& 0.841$\pm$0.005 & \textbf{0.957$\pm$0.002} & 0.826$\pm$0.005 & 0.994$\pm$0.001\\
& DenseCRF &0.701$\pm$0.004 & 0.941$\pm$0.001 &0.725$\pm$0.005 & 0.942$\pm$0.001 & 0.852$\pm$0.001 & 0.952$\pm$0.001 & 0.772$\pm$0.009 & 0.959$\pm$0.003
\\
& MaxValue & 0.760$\pm$0.007 & 0.955$\pm$0.001 & 0.746$\pm$0.001 & 0.951$\pm$0.001& 0.852$\pm$0.001 & 0.951$\pm$0.001 & 0.760$\pm$0.009 & 0.961$\pm$0.003
\\
& H-EMD
& 0.872$\pm$0.004 & 0.982$\pm$0.001 & \textbf{0.763$\pm$0.001} & 0.971$\pm$0.005
& \textbf{0.853$\pm$0.001} & 0.955$\pm$0.001 & 0.829$\pm$0.004 & \textbf{0.996$\pm$0.001} \\\hline
\end{tabular}
\label{tab:cell_eva}
}
\end{table*}
\begin{table}
\scriptsize
\centering
\caption[dd]{Ablation study of the EMD matching.}
\setlength{\tabcolsep}{1mm}
{
\begin{tabular}{ c |c c |c c }\hline
& \multicolumn{2}{c|}{F1 $(\%)$ $\uparrow$} & \multicolumn{2}{c}{AJI $(\%)$} \\\cline{2-5}
& w/ EMD & w/o EMD & w/ EMD & w/o EMD
\\ \hline
PhC-C2DH-U373 &\textbf{93.5$\pm$0.2} &\textbf{93.5$\pm$0.2} & \textbf{89.4$\pm$0.4} & \textbf{89.4$\pm$0.4} \\ \hline
Fluo-N2DH-SIM+ &\textbf{97.6$\pm$0.1} & \textbf{97.6$\pm$0.1} &\textbf{84.3$\pm$0.5} & \textbf{84.3$\pm$0.5} \\ \hline
P. aeruginosa & \textbf{94.4$\pm$0.3} & 94.3$\pm$0.4& \textbf{82.1$\pm$0.4} & 82.0$\pm$0.4 \\ \hline
M. xanthus & \textbf{97.1$\pm$0.6} &97.0$\pm$0.6 & \textbf{89.3$\pm$0.4} & 89.2$\pm$0.5\\ \hline
\end{tabular}
\label{tab:EMD}
}
\end{table}
\begin{table}[t]
\scriptsize
\centering
\caption[dd]{Comparison of our H-EMD matching based selection method and the NMS selection method.}
\setlength{\tabcolsep}{1mm}
{
\begin{tabular}{ c |c c |c c }\hline
& \multicolumn{2}{c|}{F1 $(\%)$ $\uparrow$} & \multicolumn{2}{c}{AJI $(\%)$} \\\cline{2-5}
& H-EMD & NMS & H-EMD & NMS
\\ \hline
PhC-C2DH-U373 &\textbf{93.5$\pm$0.2} &93.4$\pm$0.1 & \textbf{89.4$\pm$0.4} & 89.0$\pm$0.3 \\ \hline
Fluo-N2DH-SIM+ &\textbf{97.6$\pm$0.1} & 96.0$\pm$0.2 &\textbf{84.3$\pm$0.5} & 80.9$\pm$0.9\\ \hline
P. aeruginosa & \textbf{94.4$\pm$0.3} & 94.0$\pm$0.4& \textbf{82.1$\pm$0.4} & 81.5$\pm$0.5 \\ \hline
M. xanthus & \textbf{97.1$\pm$0.6} &96.5$\pm$0.2 & \textbf{89.3$\pm$0.4} & 88.4$\pm$0.4\\ \hline
\end{tabular}
\label{tab:NMS}
}
\end{table}
\section{Experiments}
\label{par:exp}
In the experiments, we apply H-EMD directly to the probability maps generated by three representative DL semantic segmentation models on six biomedical video datasets and two 3D datasets. Compared to commonly-used post-processing methods, our H-EMD yields improved instance segmentation results across all three DL semantic segmentation models. Compared to other state-of-the-art segmentation methods, our H-EMD also shows superiority. Further, we submitted our results on the public video datasets to the open Cell Segmentation Benchmark from the Cell Tracking Challenge~\cite{mavska2014benchmark}; the benchmark results show that our method is highly competitive compared to all the other submissions. Finally, we conduct additional experiments, which demonstrate the effectiveness of the key components in our H-EMD.
\subsubsection{Datasets}
We evaluate our H-EMD on six 2D+time cell video datasets, including two in-house datasets (P.~aeruginosa~\cite{chen2016segmentation} and M.~xanthus~\cite{chen2016hybrid}) and four public datasets from the Cell Tracking Challenge~\cite{mavska2014benchmark} (Fluo-N2DL-HeLa, PhC-C2DL-PSC, PhC-C2DH-U373, and Fluo-N2DH-SIM+), and two 3D datasets, including one in-house Fungus~\cite{Fredericksen2017,liang2019cascade,zhang2019MILD} and one public Fluo-N3DH-CHO from the Cell Tracking Challenge.
For the datasets from the Cell Tracking Challenge, each one contains two training sequences and two challenge sequences (with no reference annotation provided). Except for the Cell Segmentation Benchmark submission experiment (see Section~\ref{exp:benchmark}), which uses both training sequences as training data and performs evaluation on the two challenge sequences, all the other experiments use one training sequence as the training set and perform inference on the other training sequence. More specifically, Fluo-N2DH-SIM+ uses the 2nd video for training and the 1st video for testing, and vice versa for all the other datasets.
For the in-house datasets, instance segmentation annotations were manually labeled by experts. For the public datasets from the Cell Tracking Challenge, three types of instance segmentation annotations are provided for the training sequences: ground truth, gold truth, and silver truth. Ground truth provides the ``exact true'' annotations for simulated datasets. Gold truth contains human-made reference annotations for a subset of the object instances. Silver truth consists of computer-aided annotations generated by fusing previous submission results~\cite{akbas2018automatic}. For the simulated Fluo-N2DH-SIM+ dataset, we use ground truth. For the other public datasets, we use silver truth unless otherwise stated. More details on these datasets are given below.
\textbf{P.~aeruginosa.} The in-house P.~aeruginosa dataset~\cite{chen2016segmentation} contains two videos of 2D fluorescence microscopy images for segmenting dynamic \textit{P.~aeruginosa} cells. One video of 100 frames is used for training (400 $\times$ 400 pixels per frame), and the other video of 40 frames is for testing (513 $\times$ 513 pixels per frame).
The videos contain high-density cells (see Fig.~\ref{result}) which are difficult to segment.
\textbf{M.~xanthus.} The in-house M.~xanthus dataset~\cite{chen2016hybrid} contains two videos of 2D electron microscopy images for segmenting dynamic \textit{M.~xanthus} cells, in which each frame has 317 $\times$ 472 pixels. One video of 30 frames is used for training, and the other video of 43 frames is for testing.
\textbf{Fluo-N2DL-HeLa.} The public Fluo-N2DL-HeLa dataset~\cite{mavska2014benchmark} contains fluorescence microscopy videos of dynamic HeLa cells.
\textbf{PhC-C2DL-PSC.} The public PhC-C2DL-PSC dataset~\cite{mavska2014benchmark} contains phase contrast microscopy videos of dynamic pancreatic stem cells.
\textbf{PhC-C2DH-U373.} The public PhC-C2DH-U373 dataset \cite{mavska2014benchmark} contains phase contrast microscopy videos of dynamic Glioblastoma-astrocytoma U373 cells.
\textbf{Fluo-N2DH-SIM+.} The public Fluo-N2DH-SIM+ dataset \cite{mavska2014benchmark} contains simulated fluorescence-stained cells.
\textbf{Fluo-N3DH-CHO.} The public Fluo-N3DH-CHO dataset \cite{mavska2014benchmark} contains 3D fluorescence microscopy images of Chinese Hamster Ovarian (CHO) nuclei.
\textbf{Fungus.} The in-house Fungus dataset~\cite{Fredericksen2017,liang2019cascade,zhang2019MILD} contains four 3D electron microscopy images for segmenting fungus cells captured from body tissues of ants, whose 2D slices are 853 $\times$ 877 pixels each. 16 slices in one stack are used as training data, and the other 3 stacks are used as test data. For each test stack, 16 slices are labeled (48 slices in total) for evaluation.
\subsubsection{Comparison Methods}
We apply our H-EMD on top of common DL semantic segmentation models to compute instance segmentation results from the generated probability maps. Thus, we compare our H-EMD with two types of known methods: (1) DL semantic segmentation models with other post-processing methods; (2) state-of-the-art instance segmentation methods that do not necessarily output probability maps.
We choose the following three commonly-used DL semantic segmentation models for biomedical image segmentation to generate probability maps. All these semantic segmentation models predict three-class semantic segmentation: foreground, boundary, and background. Foreground-class probability maps are used as input to our H-EMD method.
\begin{itemize}
\item U-Net~\cite{ronneberger2015u}: A U-shaped architecture that yields accurate dense predictions for biomedical image segmentation.
\item DCAN~\cite{chen2016dcan}: A model proposed for gland segmentation which introduces the contour class to separate instances.
\item UNet-LSTM~\cite{arbelle2019microscopy}: A model integrating Convolutional Long Short-Term Memory with U-Net for temporal instance segmentation tasks.
\end{itemize}
We compare our H-EMD with five common post-processing methods that identify instances from probability maps.
\begin{itemize}
\item 0.5-Th~\cite{zhou2019cia}: The probability maps are binarized using a fixed threshold value of 0.50, and connected components are taken as the final instances~\cite{chen2016dcan,zhou2019cia}.
\item Otsu~\cite{otsu1979threshold}: Given an image, the threshold value is automatically determined by minimizing the intra-class intensity variance. Then pixels are binarized and connected components are taken as the final instances.
\item MaxValue~\cite{arbelle2019microscopy}:
Each pixel is assigned to the class (background, boundary, or foreground) with the maximum probability. Then the connected foreground pixels are grouped into instances.
\item Watershed~\cite{meyer1994topographic}: First, the probability maps are binarized using the 0.50 threshold value. Then a distance map is computed for each binary image. To reduce over-segmentation, the distance map is smoothed. Finally, the smoothed distance maps are fed to the Watershed algorithm to produce instance results.
\item DenseCRF~\cite{kamnitsas2017efficient}: Both the raw images and their probability maps are fed to the DenseCRF model. Pixels with similar features (\textit{e.g.}, color and probability) are assigned to the same semantic class (\textit{e.g.,} foreground or background). Connected components are taken as the final instance segmentation results.
\end{itemize}
We also compare our H-EMD with other state-of-the-art instance segmentation methods.
\begin{itemize}
\item Hybrid~\cite{chen2016hybrid}: It was proposed for the M.~xanthus dataset, which uses active contour to obtain cell segmentation and utilizes video tracking to further improve segmentation.
\item KTH-SE~\cite{ulman2017objective}: It ranks 1st (in the OP$_{CSB}$ and DET metrics) in the Cell Segmentation Benchmark on the Fluo-N3DH-CHO dataset. It uses a bandpass filtering based segmentation algorithm~\cite{mavska2014benchmark} to segment cells and applies the Viterbi tracking algorithm~\cite{magnusson2014global} to correct potential segmentation errors.
\item SPDA~\cite{zhang2019MILD}: It was the best-known method for the Fungus dataset. It proposed the superpixel augmentation approach for training a deep learning model to improve biomedical image segmentation.
\item PixelEmbedding~\cite{chen2019instance}: It predicts pixel embeddings such that pixels of neighboring instances have large cosine distances. A seed growing algorithm is applied to group pixels together to form the final instances. We conduct experiments on the datasets using the model implemented by the authors of~\cite{chen2019instance}, which show that PixelEmbedding yields acceptable results only on the Fluo-N3DH-CHO and Fluo-N2DH-SIM+ datasets.
\item Mask R-CNN~\cite{he2017mask}: A top-down segmentation approach. It first detects instances and then segments instance masks.
\item StarDist~\cite{schmidt2018cell}: It encodes cell instances using star-convex polygons. For each pixel, it predicts an $n$-dimensional vector which indicates the distance to the instance boundaries along a set of $n$ predefined radial directions with equidistant angles. The final instances are obtained by non-maximum suppression (NMS).
\item Cellpose~\cite{stringer2021cellpose}: It is a generalist cell segmentation model for various types of datasets without re-training. The model predicts instance gradient maps and then groups pixels via a post-processing method to generate cells.
\item KIT-Sch-GE~\cite{scherr2021improving}: It ranks 1st (in all the metrics) in the Cell Segmentation Benchmark on the PhC-C2DL-PSC dataset. The proposed model predicts cell distance maps. Watershed is then applied to the distance maps to obtain the final instance segmentation results.
\item nnU-Net~\cite{isensee2021nnu}: It ranks 1st (in the OP$_{CSB}$ and SEG metrics) in the Cell Segmentation Benchmark on the Fluo-N2DH-SIM+ dataset. It is an automatic self-configured UNet-based model including pre-processing, network architecture, training, and post-processing for biomedical image segmentation.
\end{itemize}
We evaluate the performances of these methods using two widely-used metrics for image instance segmentation~\cite{zhou2019cia}:
\begin{itemize}
\item Average Jaccard Index (AJI): An instance segmentation metric which considers an aggregated intersection over an aggregated union for all ground truth and segmented instances. Let $G = \{g_1, g_2, \ldots, g_n\}$ denote the set of ground truth instances, $S = \{s_1, s_2,\ldots,s_m\}$ denote the set of segmented instances, and $N$ denote the set of segmented instances that have no intersection with any ground truth instances. $AJI=\frac{\sum_{i=1}^{n}|g_i\cap s_j|}{\sum_{i=1}^{n}|g_i\cup s_j|+\sum_{s_k\in N}|s_k|}$, where $j=\underset{k}{\arg\max} \frac{|g_i\cap s_k|}{|g_i \cup s_k|}$.
\item F1-score: An instance detection metric. $\text{F1}=\frac{2\times Precision\times Recall}{Precision+Recall}$, where $Precision=\frac{\text{TP}}{\text{TP}+\text{FP}}$ and $Recall=\frac{\text{TP}}{\text{TP}+\text{FN}}$. A predicted instance is a true positive (TP) if it matches with a ground truth instance. Two instances match if and only if they have IoU $\geq$ 0.5. False positives (FP) are predicted instances that are not true positives, while false negatives (FN) are ground truth instances that are not matched with any true positive instances.
\end{itemize}
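For reference, the AJI definition above can be implemented directly as follows (a minimal sketch, not the evaluation code used in our experiments; it assumes the ground truth and segmented instances are given as non-empty lists of boolean masks):
\begin{verbatim}
import numpy as np

def aji(gt, seg):
    inter_sum, union_sum = 0, 0
    for g in gt:
        # Match g to the segmented instance with the largest IoU.
        best, best_iou = None, -1.0
        for s in seg:
            i = np.logical_and(g, s).sum()
            u = np.logical_or(g, s).sum()
            if u and i / u > best_iou:
                best_iou, best = i / u, s
        inter_sum += np.logical_and(g, best).sum()
        union_sum += np.logical_or(g, best).sum()
    # Penalize segmented instances intersecting no ground truth.
    for s in seg:
        if all(not np.logical_and(g, s).any() for g in gt):
            union_sum += s.sum()
    return inter_sum / union_sum
\end{verbatim}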
For Hybrid, KTH-SE, and SPDA, we are able to use the two metrics on the datasets which these methods were designed for and applied to. For Cellpose, we are able to apply the pre-trained model to all the datasets with these two metrics. Cellpose yields reasonable results only on the datasets which do not appear to have a large domain gap with its training data. For the other methods, we conduct experiments on all the datasets using the two metrics (each experiment is run five times to attain the mean and standard deviation). For PixelEmbedding, KIT-Sch-GE, and nnU-Net, we use the default parameters in the applications. For Mask R-CNN, we modify the original implementation by using smaller anchor boxes in order to make it more suitable for biomedical cell segmentation. For StarDist, on datasets with elongated cells, we slightly increase the number of rays used in the cell representation to achieve the best possible performance.
\subsubsection{Main Experimental Results}
\subsubsection*{Video Results}
Table~\ref{tab:public_video} shows instance segmentation comparison results on the six video datasets: Fluo-N2DL-HeLa, PhC-C2DL-PSC, PhC-C2DH-U373, Fluo-N2DH-SIM+, P.~aeruginosa, and M.~xanthus.
First of all, working with the three common DL semantic segmentation models, our H-EMD consistently improves the F1 score and AJI on most datasets, and compared to all the other post-processing methods, it obtains performance improvements on all six datasets.
More specifically, compared to the basic post-processing method (0.5-Threshold), H-EMD improves the F1 score by up to $4\%$ and AJI by up to $6.3\%$ on the PhC-C2DL-PSC dataset with the DCAN backbone. Compared to the best results of the other post-processing methods, H-EMD still improves the F1 score by up to $1.8\%$ and AJI by up to $1.7\%$ on the M.~xanthus dataset with the UNet-LSTM backbone.
The improvements over these post-processing methods indicate that considerable instance segmentation errors can come from the post-processing step on probability maps, and H-EMD can reduce such instance segmentation errors noticeably. We also notice that even though the UNet-LSTM model has already utilized temporal information in the videos, H-EMD can further explore temporal instance information to improve instance segmentation effectively.
In addition, incorporated with the three common DL semantic segmentation models, our H-EMD outperforms the state-of-the-art (SOTA) instance segmentation methods (\textit{e.g.}, with a U-Net backbone) on most of the datasets. In comparison, note that each of the SOTA methods obtains the best results on at most one dataset.
Fig.~\ref{result} showcases some visual results on video instance segmentation. One can see that for instances that are over-segmented or under-segmented, H-EMD can attain correct instance-level segmentation results. Furthermore, H-EMD obtains more accurate instance boundaries.
\subsubsection*{Cell Segmentation Benchmark}
\label{exp:benchmark}
We submitted our experimental results on four public 2D+time video datasets (Fluo-N2DL-HeLa, PhC-C2DL-PSC, PhC-C2DH-U373, and Fluo-N2DH-SIM+) of various dynamic cells to the Cell Segmentation Benchmark of the Cell Tracking Challenge. In the setting of this challenge, for each dataset, the two training videos provided by the challenge organizers are used as the training set, and the two challenge videos are used as the test set.
The challenge organizers conduct inference using the submitted methods on the challenge videos, perform evaluation, and rank the segmentation results of all the submitted methods.
The challenge uses SEG, DET, and OP$_{CSB}$ as evaluation metrics. SEG shows how well the segmented cell regions match the actual cell or nucleus boundaries (evaluated using cell segmentation masks).
DET shows how accurately each target object is detected (evaluated using cell markers, without necessarily considering cell boundaries). OP$_{CSB}$ is the average of the DET and SEG measures, allowing direct comparisons of all the submitted methods. We use DCAN as the backbone with H-EMD on all the four datasets.
Table~\ref{tab:leaderboard} shows the comparison results of the four datasets on the leader board. One can see that, in general, different methods favor different datasets; that is, no method dominates on all four datasets. Yet, our H-EMD achieves runner-up performance on three of the four datasets (PhC-C2DL-PSC, PhC-C2DH-U373, and Fluo-N2DH-SIM+) in the ranking metric OP$_{CSB}$. In the SEG metric, H-EMD attains the best performance on the PhC-C2DH-U373 dataset, ranks 2nd on the Fluo-N2DH-SIM+ dataset, and ranks 3rd on the PhC-C2DL-PSC dataset.
In summary, the compelling benchmarking performances validate the effectiveness of our H-EMD on the 2D+time video datasets.
\subsubsection*{3D Results}
We also conduct experiments on two 3D image stack datasets, Fluo-N3DH-CHO and Fungus, treating each stack as a sequence of 2D slices. To explore spatial instance consistency, we conduct all the experiments in the same setting as for the 2D+time video datasets and compare with the same methods as in Table~\ref{tab:public_video}. Table~\ref{tab:3D} shows the instance segmentation comparison results on the two 3D datasets. One can see that H-EMD consistently boosts instance segmentation performances obtained from probability maps.
Compared to the best scores of the known post-processing methods on the DCAN model, H-EMD improves the F1 score by $1.2\%$ and AJI by $0.8\%$ on the Fluo-N3DH-CHO dataset, and H-EMD improves the F1 score by $0.2\%$ and AJI by $1.1\%$ on the Fungus dataset.
Compared to the other state-of-the-art instance segmentation methods,
H-EMD also shows considerable superiority working with various DL semantic segmentation models.
On the Fluo-N3DH-CHO dataset, compared to the other best methods (e.g., KIT-Sch-GE), H-EMD improves the F1 score by $0.8\%$ and AJI by $2.6\%$ with the U-Net backbone. On the Fungus dataset, compared to the other best methods (e.g., KIT-Sch-GE), H-EMD improves the F1 score by $2.6\%$ and AJI by $4.5\%$ with the U-Net backbone.
Fig.~\ref{fig:3Dresult} shows some visual 2D slice examples of instance segmentation results on the two 3D datasets. Compared to the watershed method, which tends to produce over-segmentation (as well as under-segmented instances), our H-EMD yields better instance results.
\subsubsection*{Significance} We highlight several advantages of our H-EMD method as follows.
\begin{itemize}
\item DL semantic segmentation based: Our method generates instance candidates directly from DL-generated probability maps, thus exploiting the generalizability of FCN pixel-wise classification models and attaining state-of-the-art performances on various datasets.
\item Incorporating temporal/spatial instance consistency: H-EMD provides a new way to effectively incorporate temporal/spatial instance consistency into instance candidate selection. Our proposed matching model is relatively simple and does not rely on data-specific parameters.
\item Matching as an auxiliary task: Most previous work treated segmentation jointly with matching and, given possible instance candidates, designed complete but complicated tracking schemes. However, tracking is not strictly needed for every instance candidate. In our model, matching acts only as an auxiliary task for instance segmentation, so the temporal/spatial consistency property is not overused.
\end{itemize}
\subsubsection{Additional Experiments}
\subsubsection*{Gold Truth (GoT) Evaluation}
In our main experiments, we use silver truth (ST), which provides computer-generated full instance mask labels, to evaluate the public datasets (except for the simulated dataset Fluo-N2DH-SIM+, which comes with full ground truth). Here, we further evaluate instance segmentation results using the sparse human-annotated gold truth (GoT). Since GoT provides very sparse labels, many predicted instances may not have corresponding ground truth instances and thus could be mistakenly counted as false positives (FP). Hence, we use the SEG and DET metrics from the Cell Segmentation Benchmark, which give little or no penalty to such FP cases. Note that the DET metric imposes no requirements on the accuracy of instance boundaries. Table~\ref{tab:cell_eva} shows the instance segmentation comparison results. First, our method consistently obtains better SEG performances than the most competitive post-processing methods, which suggests that it yields better instance segmentation masks. Second, compared to the other methods, our method consistently attains good results (top 3 on almost all the datasets in all the metrics). Other SOTA methods may yield the best results on one dataset but give relatively worse results on the others.
\subsubsection*{Influence of the Pre-specified Value $\tau$}
In our instance candidate generation stage, to reduce noisy instance candidates, we use only threshold values not smaller than a pre-specified value $\tau$ to generate instance candidates (see Section~\ref{method1.1}; a simplified sketch of this candidate generation is given at the end of this subsection). To examine the influence of the $\tau$ value on the final instance segmentation results, we vary $\tau$ from 0.10 to 0.90 with a step of 0.05 and evaluate the corresponding instance segmentation results. We conduct experiments on three video datasets (PhC-C2DH-U373, P.~aeruginosa, and M.~xanthus) with the U-Net backbone (see Fig.~\ref{fig:thres}). First, one can see that different $\tau$ values indeed influence the final instance segmentation results, and $\tau=0.50$ is in general a good choice.
Second, we find that on most of these datasets, the performance of our method does not change much in a relatively large range of $\tau$ (e.g., $\tau=[0.40, 0.60]$), which demonstrates the stability of our method.
Third, we notice that the P.~aeruginosa dataset incurs a large performance decay as $\tau$ increases beyond 0.45, while PhC-C2DH-U373 shows relatively small changes. This indicates that different datasets can have different instance candidate attributes. The P.~aeruginosa instance candidates span more distinct levels in the instance candidate forest (ICF) than those of the PhC-C2DH-U373 dataset, and thus it is more challenging to identify correct instances from such probability maps. In such cases, it is harder for common thresholding methods to attain correct instances, while our method can flexibly and adaptively utilize instance-dependent threshold values.
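As a rough illustration of the candidate-generation step discussed above, the following sketch (our simplification; the actual implementation details may differ) enumerates connected components at every threshold value not smaller than $\tau$, which is what induces the nesting levels of the ICF.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def generate_candidates(prob_map, tau=0.5, step=0.05):
    """Enumerate instance candidates from a probability map: each
    threshold t >= tau contributes the connected components of
    (prob_map >= t). A component at a higher threshold nests inside
    exactly one component at any lower threshold, inducing a forest."""
    candidates = []
    for t in np.arange(tau, 1.0 + 1e-9, step):
        labels, n = ndimage.label(prob_map >= t)
        for k in range(1, n + 1):
            candidates.append((float(t), labels == k))
    return candidates
\end{verbatim}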
\subsubsection*{Influence of the EMD Instance-instance Matching}
In the iterative matching and selection phase of our instance candidate selection stage, we first apply an EMD matching model to identify instance-instance matched pairs that should not be involved in the next H-EMD matching and selection of instance candidates (see Section~\ref{method:itera}). Note that the EMD instance-instance matching is performed to ensure correctness (e.g., preventing one selected instance candidate in $S_t^w$ from forming multiple matched pairs with different selected instance candidates in $S_t^{w+1}\cup \mathcal{F}_t^{w+1}$). Thus, the EMD matching may also affect the performance of the final instance candidate selection.
We examine the contribution of the EMD matching by removing it from the iterative matching and selection phase. Table~\ref{tab:EMD} shows the comparison results. One can see that without the EMD matching, the instance segmentation performances on the P.~aeruginosa and M.~xanthus datasets decrease slightly (by $\sim$0.1\%), which demonstrates that the EMD matching indeed removes some redundant matches that would cause errors in the subsequent H-EMD matching.
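To illustrate the role of this step, the following simplified stand-in (our sketch, not the paper's exact EMD-based ILP model) computes a one-to-one IoU-maximizing assignment between two lists of binary masks, which already enforces that no candidate participates in multiple matched pairs.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_instances(masks_a, masks_b, min_iou=0.5):
    """One-to-one matching between two lists of binary masks by
    maximizing total IoU, so no mask can appear in two pairs."""
    iou = np.zeros((len(masks_a), len(masks_b)))
    for i, a in enumerate(masks_a):
        for j, b in enumerate(masks_b):
            union = np.logical_or(a, b).sum()
            if union:
                iou[i, j] = np.logical_and(a, b).sum() / union
    rows, cols = linear_sum_assignment(-iou)   # maximize total IoU
    return [(i, j) for i, j in zip(rows, cols) if iou[i, j] >= min_iou]
\end{verbatim}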
\subsubsection*{Effect of the H-EMD Matching Model}
In the instance candidate selection stage, our H-EMD matching model selects instance candidates from the ICF. To evaluate its effect, we compare it with a commonly-used selection method: the Non-Maximum Suppression (NMS) algorithm. NMS is widely used in many instance segmentation frameworks~\cite{he2017mask,wang2020solo}; it selects objects from given proposals based on the corresponding objectness scores using a greedy strategy (sketched below). Table~\ref{tab:NMS} shows the comparison results with the U-Net backbone. H-EMD consistently outperforms NMS in accuracy, demonstrating the effectiveness of our H-EMD matching model, which is an optimization approach with guaranteed optimal matching solutions. Note that in our experiments, the number of variables in the ILP model is relatively small, which leads to execution times comparable to the NMS method.
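For reference, a generic greedy NMS over mask candidates can be sketched as follows (our simplification, with an illustrative mask_iou helper; unlike the ILP, greedy selection carries no optimality guarantee on the total matching score).
\begin{verbatim}
import numpy as np

def mask_iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def nms_select(masks, scores, iou_thresh=0.5):
    """Greedy NMS: visit candidates in decreasing score order and
    keep each one that does not overlap an already-kept candidate."""
    kept = []
    for k in np.argsort(scores)[::-1]:
        if all(mask_iou(masks[k], masks[j]) < iou_thresh for j in kept):
            kept.append(k)
    return kept
\end{verbatim}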
\subsubsection*{Influence of the Number of Matching-and-selection Iterations}
In our instance candidate selection stage, we employ an iterative matching and selection process to select instance candidates. We examine the influence of the number of matching iterations on the final instance segmentation results. Fig.~\ref{fig:iterations} shows the comparison results. One can see that our iterative matching process converges quickly on most of the datasets; in particular, the performances generally do not change much after the 4th iteration. We further notice that the first two iterations give large performance gains, and the improvement decays as the iteration number increases. This shows that our matching method can effectively incorporate instance consistency to select accurate instance candidates. As the iteration number increases, fewer remaining instance candidates can be matched, and thus smaller gains are obtained. In general, we set the number of matching iterations to $T=10$.
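For intuition, the iterative phase can be schematized as follows. This is a much-simplified sketch (reusing the match_instances helper from the earlier snippet, and treating each frame's candidates as a plain list of masks rather than an ICF), not the actual selection procedure.
\begin{verbatim}
def iterative_selection(candidates, selected, T=10):
    """candidates[t], selected[t]: lists of binary masks in frame t.
    Selected instances repeatedly claim matching candidates in the
    neighboring frames; newly selected masks propagate further in
    the next iteration."""
    for _ in range(T):
        newly = 0
        for t in range(len(candidates)):
            for nb in (t - 1, t + 1):
                if not (0 <= nb < len(candidates)):
                    continue
                for _, j in match_instances(selected[t], candidates[nb]):
                    m = candidates[nb][j]
                    if not any(m is s for s in selected[nb]):
                        selected[nb].append(m)
                        newly += 1
        if newly == 0:
            break   # typically converges within a few iterations
    return selected
\end{verbatim}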
\section{Conclusions}
In this paper, we proposed a novel framework, H-EMD, for instance segmentation in biomedical 2D+time videos and 3D images. H-EMD builds a forest structure of possible instance candidates from DL semantic-segmentation-generated probability maps. To effectively select instance candidates, H-EMD first selects easy-to-identify instance candidates, and then propagates the selected instance candidates to match with other candidates in neighboring frames in an iterative matching process. Evaluated on six 2D+time video datasets and two 3D datasets, our H-EMD can consistently improve instance segmentation performances
compared to widely-used post-processing methods on probability maps. Incorporated with common DL semantic segmentation models, H-EMD is also highly competitive with state-of-the-art instance segmentation methods. Experimental results validated the effectiveness of H-EMD in incorporating instance consistency on top of DL semantic segmentation models.
Our future work will study three main issues. First, in the current 3D applications, we view 3D images in a 2D+depth manner, which limits direct exploration of whole 3D instance structures (possibly losing some contextual information of 3D objects). Thus, incorporating full 3D information into 3D instance segmentation with H-EMD is an interesting future research target. Second, the current implementation of our method takes a few hours (1--2 hours in general) to process the image stacks in a test set, while the other methods take only a few minutes (less than 10 minutes in general). In future studies, speeding up the H-EMD implementation using parallel computation and optimized code would be a useful goal. Third, in the instance candidate selection stage, our current matching method uses a forward-backward matching mechanism; hence, a whole video or image sequence must be acquired before processing, which does not accommodate online tracking and segmentation applications. A version of H-EMD for online applications should be developed in future studies.
\bibliographystyle{IEEEtran}